# Provably Efficient Algorithm for Best Scoring Rule Identification in Online Principal-Agent Information Acquisition

**Decision:** Accept (poster)

## Review 1

Summary: This paper studies a principal-agent problem in which the principal aims to learn the optimal scoring rule. Notably, it proposes pure exploration algorithms in both the fixed-budget and fixed-confidence settings.
Claims And Evidence: See below
Methods And Evaluation Criteria: See below
Theoretical Claims: See below
Experimental Designs Or Analyses: See below
Supplementary Material: See below
Relation To Broader Scientific Literature: See below
Essential References Not Discussed: See below
Other Strengths And Weaknesses: See below
Other Comments Or Suggestions: The problem of learning optimal pricing/scoring rules in principal-agent models is interesting and has been widely considered in the literature, as suggested by the related work section of this paper.
My main concern about this work is the (over)complexity of the setting. I indeed find it quite overcomplicated, while similar ideas and messages can be conveyed with much simpler problems. Notably, I fear that Assumption 1, along with the fact that the agent plays before observing the state of nature (which leads to Equation 5), makes this problem nearly equivalent to much simpler settings considered in the literature, such as the mentioned work of Scheid et al. (2024). While the setting should be more intricate here, Assumption 1 drastically simplifies it, ending up with something close to Scheid et al. (2024), but in a pure exploration setting. If that is indeed the case, I would have preferred an adaptation of this simpler setting to pure exploration.
This overcomplexity also makes the paper unnecessarily unclear: there are a lot of technical notations and considerations, and it is hard to read through. I really think that the use of a simpler but morally similar setting would clearly help in that direction.
Also, the agent is here assumed to be myopic, i.e., they only react optimally to the current scoring rule. An agent could be nastier in reality, manipulating the principal's learning so as to maximize its own reward. Moreover, the agents are even assumed to be truthful, based on the existence of proper scoring rules.
While Myerson guarantees the existence of an optimal proper rule, how easy is it to transform any scoring rule into a proper one? This would probably require some knowledge of the game environment, that the principal does not have at the beginning of the game here (+ a potentially intensive computation).
----------
Here are more minor remarks/comments:
- it is not clear what exactly the principal knows. Notably, for the UCB-LP formulation, it seems that knowledge of the outcome probability $p$ is necessary
- what is the value of $B_S$ in Lemma 1? I guess it cannot be made arbitrarily small
- How computationally intense is the solving of the LP programming in UCB-LP?
- line 310: if $k_t=k_t^*$, this could also mean the principal is overpaying, which is not something we are happy about
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for your valuable feedback, which has greatly contributed to improving our work.
**Q1:** My main concern about this work is about the (over)complexity of the setting ... help in that direction.
**A1:** Thank you for your insightful comment. We understand the concern about complexity, and we’d like to clarify why our framework is intentionally more involved than works like Scheid (2024).
The key difference from Scheid (2024) lies in the presence of an underlying state in our setting. The principal in our setting seeks to acquire high-quality information about this hidden state through the agent’s report, as the report quality and the underlying state-generating process have a direct impact on the principal’s profit. To achieve this, the principal designs a scoring rule that incentivizes the agent to conduct a good investigation (action) and report truthfully. This setup closely mirrors real-world scenarios—such as a manager hiring a domain expert to assess a situation—where **the profit of the principal depends on the true state and the quality of the report, not just the agent’s action**.
In contrast, the setting of Scheid (2024) does not involve an underlying state—the principal’s objective is to induce a desirable action via a payment rule. While this is suitable for scenarios where **the principal’s profit depends directly on the agent’s action**, it is less appropriate for richer interactions such as the manager–expert example introduced earlier.
Another advantage of our setting is its generality—it subsumes several important principal–agent problems as special cases, including the contract design problem (see the discussion in Chen (2023), Appendix B).
**Q2:** Also, the agent is here assumed to be myopic... Moreover, the agents are even assumed to be truthful...
**A2:** Thank you for your comment. Assuming the agent is myopic is a standard modeling choice in famous online principal–agent problems (e.g., Zhu 2022; Chen 2023; Scheid 2024). This is mainly because, in practice, the agent often lacks sufficient information about the principal’s future policy, making it difficult for them to plan long-term strategies that maximize their own profit. We agree that relaxing the myopic assumption and studying strategic, far-sighted agents is an interesting direction for future work.
Additionally, we would like to clarify that our framework focuses solely on proper scoring rules. By the revelation principle, this restriction will not result in any loss of optimality in terms of the principal’s expected profit [Chen (2023), Cacciamani et al. (2023)]. Therefore, our goal is to identify an optimal rule within the set of proper scoring rules that maximizes the principal’s expected profit. Thus we do not need to transform arbitrary scoring rules into proper ones.
**Q3:** It is not clear what is the ... $p$ is necessary.
**A3:** Thank you very much for your question. The principal is assumed to know its own utility function $u$. We respectfully note that in the UCB-LP, there is no parameter named $p$. If your question refers to $q$, which denotes the distribution over the agent's beliefs, then indeed the principal does not know it. Instead, the principal must gradually learn this belief distribution over time based on interactions with the agent (see Section 4.2).
**Q4:** What is the value of $B_S$ in Lemma 1?
**A4:** The constant $B_S$ is a bounding parameter that caps the payment of the scoring rule $S$. Specifically, we assume $\| S \|_{\infty} < B_S$.
**Q5:** How computationally intense is the solving of the LP programming in UCB-LP?
**A5:** Thank you for your question. Our UCB-LP involves parameters and constraints that are polynomial in the number of arms $K$ and the number of beliefs $M$. Therefore, the LP can be solved in polynomial time w.r.t. $K$ and $M$. We also note that a similar LP was solved by Cacciamani (2023) (see their Eq (2)), who provided a discussion of its computational complexity in Corollary 3.2, further supporting the tractability of our approach.
**Q6:** Line 310 ... are happy about.
**A6:** Thank you very much for your question. Our paper primarily focuses on the pure exploration setting, which is analogous to the pure exploration problem in the bandit literature [Lattimore (2020)]. In this context, we typically do not consider the regret incurred during the exploration phase. That said, there has been published work in the bandit literature on minimizing regret while identifying the best arm. We view this (minimizing regret and identifying the best scoring rule) as a valuable direction for future research.
**Reference:**
Chen (2023). Learning to incentivize information acquisition.
Cacciamani (2023). Online information acquisition: Hiring multiple agents.
Lattimore (2020). Bandit algorithms.
Zhu (2022). The sample complexity of online contract design.
Scheid (2024). Incentivized learning in principal-agent bandit games.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answer. Given the different reviews and the authors' answers, I am sure the clarity of the paper will be improved in its revised version and thus decide to raise my score. However, I am still unsure of the relevance of this **underlying state** setting: how does it really make learning harder/different?
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the positive feedback. We will revise the paper accordingly to improve its clarity. Below, we provide some intuition on why the presence of an underlying state increases the difficulty of the learning problem.
In the setting of Scheid et al. (2024), the principal's objective is relatively straightforward: to incentivize the agent to take a good action, as the agent's action directly determines the principal's utility. In contrast, our setting introduces an added layer of complexity due to the presence of an underlying state that is unobservable to the principal but directly impacts the agent's utility. The principal must rely on the agent's report to derive information about the state. Consequently, maximizing utility in our setting requires addressing two intertwined challenges: the principal must incentivize the agent to (1) take the optimal action (e.g., exert high effort in research so that it can derive high-quality information about the state), and (2) truthfully report its belief about the underlying state.
Consequently, while the payment rule in Scheid et al. (2024) can be designed solely based on the agent’s action, the scoring rule in our setting instead depends on the agent’s report. This introduces another layer of difficulty: the scoring rule must not only incentivize the agent to take the optimal action, but also to truthfully report its belief. Hence, identifying the optimal scoring rule in our setting demands a more nuanced and sophisticated analysis than that in Scheid et al. (2024).
Once again, we sincerely thank the reviewers for their valuable feedback and constructive questions and suggestions.

## Review 2

Summary: The paper studies how to learn scoring rules within the principal-agent problem for information acquisition. In particular, the authors study how to identify approximately optimal scoring rules with high probability through interaction with the environment. They design two algorithms depending on whether the setting is with fixed confidence or fixed budget. The first algorithm guarantees both an instance-dependent and an instance-independent bound with $O(1/\epsilon^2)$ samples, where $\epsilon$ is the required accuracy. This improves upon the $O(1/\epsilon^3)$ bound of Chen et al. (2023). The second algorithm fixes the number of samples (budget) and identifies approximately optimal scoring rules.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: I didn't check the appendix.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper studies problems related to best arm identification for information acquisition. Previous works focus on the related problem of regret minimization.
Essential References Not Discussed: None, although a more comprehensive comparison with Cacciamani et al. (2023) could be beneficial. Indeed, since it generalizes the problem studied in the paper, their techniques can in principle be applied to the paper setting.
Other Strengths And Weaknesses: The problem of learning how to incentivize information acquisition is interesting. The paper improves upon the guarantees of Chen et al. (2023).
The authors discuss extensively the relation between their work and Chen et al. However, it remains unclear how novel the presented algorithm is, since its structure is very similar to Chen et al.'s. It would be interesting to better highlight the algorithmic contributions (beyond setting parameters differently), and to explain why the algorithm of Chen et al. fails to achieve optimal bounds.
I feel that a more extensive comparison with Cacciamani et al. is also required. Indeed, their work can be applied to your problem. Since the algorithm of Cacciamani et al. has an explore-then-commit flavor and achieves $T^{2/3}$ regret, I would not be surprised if it directly implies a $1/\epsilon^2$ sample complexity bound.
Other Comments Or Suggestions: At line 173 in the "fixed budget" paragraph it is unclear which are the inputs of the problem (I guess T) and which are the outputs (I guess $\epsilon$ and $\delta$).
Questions For Authors: Why are the algorithms of Chen et al. and Cacciamani et al. suboptimal for sample complexity problems?
Why is the dependency on $1/\epsilon^2$ required also in instance-dependent bounds?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely thank you for your insightful comments and questions, which have helped us improve the quality of our work.
**Q1:** Why are the algorithms of Chen et al. and Cacciamani et al. suboptimal for sample complexity problems?
**A1:** Thank you for the question.
The primary reason for the suboptimal sample complexity in Chen (2023), aside from choosing an inappropriate parameter, is the adoption of a suboptimal termination criterion. Specifically, Chen (2023) employs the breaking rule originally introduced by Jin (2018), which terminates the algorithm after $\tilde{O}(\varepsilon^{-6} K^6 C_{\mathcal{O}}^3)$ iterations and selects the scoring rule identified in the final round as the best estimate. This termination strategy is inherently inefficient because it intuitively demands a large number of samples to ensure the accuracy of the identified scoring rule. In contrast, our proposed breaking rule (line 15 in Algorithm 1) and corresponding decision rule (line 16 in Algorithm 1) enable a more efficient evaluation of whether a scoring rule satisfies the necessary conditions. Consequently, our approach achieves significantly improved sample complexity.
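For intuition, the kind of confidence-based stopping check that breaking rules of this sort refine can be sketched as follows. This is a generic Hoeffding-style construction for $(\epsilon, \delta)$-best-arm identification, not the exact rule in our Algorithm 1; the confidence radius and constants are illustrative.

```python
import math

def should_stop(counts, means, delta, eps):
    """Generic (eps, delta) stopping check: halt once every suboptimal arm's
    upper confidence bound falls within eps of the empirical best arm's
    lower confidence bound. The radius is an illustrative Hoeffding-style
    bound with a union bound over arms and rounds."""
    K = len(counts)
    rad = [math.sqrt(math.log(4 * K * n * n / delta) / (2 * n)) for n in counts]
    best = max(range(K), key=lambda i: means[i])
    lcb_best = means[best] - rad[best]
    return all(means[i] + rad[i] <= lcb_best + eps
               for i in range(K) if i != best)
```

Under a check like this, arms with large gaps are resolved after few samples, which is where instance-dependent $1/\Delta_i^2$ terms enter, instead of running for a fixed worst-case horizon as in the Jin (2018)-style rule.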
Regarding Cacciamani (2023): we believe their ETC algorithm can achieve a sample complexity bound of $K/\epsilon^2$ by carefully selecting the exploration phase length based on $\epsilon$. However, obtaining an instance-dependent sample complexity bound from ETC-based methods is challenging due to the necessity of setting the exploration phase length in advance, without prior knowledge of the actual instance-specific reward gaps or problem complexity. To our knowledge, there are currently no ETC-based fixed-confidence best arm identification algorithms for MAB problems that achieve instance-dependent complexity results.
In summary, our algorithm achieves near-optimal instance-dependent sample complexity due to: (1) the selection of appropriate parameters, (2) the design of suitable breaking and decision rules, and (3) the adoption of an effective algorithm structure (UCB).
**Q2:** At line 173 in the "fixed budget" paragraph it is unclear which are the inputs of the problem (I guess $T$) and which are the outputs (I guess $\epsilon$ and $\delta$).
**A2:** Thank you for your question. In the fixed budget setting, $T$ and $\epsilon$ are the inputs, while $\tilde{\delta}$ is the output (the smaller the $\tilde{\delta}$, the better the algorithm's performance). We will clarify this in the revised version of our paper.
**Q3:** Why is the dependency on $1/\epsilon^2$ required also in instance-dependent bounds?
**A3:** Thank you for this insightful question.
To find the near-optimal scoring rule, we must identify both the best arm $k^*$ and the set of scoring rules that can trigger the best arm, denoted by $\mathcal{V}_{k^*}$. Intuitively, the dependency on $1/\Delta_i^2$ arises from identifying the best arm, whereas the dependency on $1/\epsilon^2$ emerges because, even after identifying the best arm, additional samples are required to estimate the set $\mathcal{V}_{k^*}$ at an $\epsilon$-accurate level. Only then can we determine a scoring rule that meets the required accuracy.
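For intuition, this $1/\epsilon^2$ factor is the familiar price of $\epsilon$-accurate mean estimation: for rewards bounded in $[0,1]$, Hoeffding's inequality gives

```latex
\Pr\big(|\hat{\mu}_n - \mu| \ge \epsilon\big) \le 2e^{-2n\epsilon^2} \le \delta
\quad \Longleftrightarrow \quad
n \ge \frac{1}{2\epsilon^2}\log\frac{2}{\delta},
```

so estimating any boundary quantity of $\mathcal{V}_{k^*}$ to accuracy $\epsilon$ already costs on the order of $1/\epsilon^2$ samples, independently of the reward gaps.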
**Reference:**
Chen, S., Wu, J., Wu, Y., & Yang, Z. (2023, July). Learning to incentivize information acquisition: Proper scoring rules meet principal-agent model. In International Conference on Machine Learning (pp. 5194-5218). PMLR.
Cacciamani, F., Castiglioni, M., & Gatti, N. (2023). Online information acquisition: Hiring multiple agents. arXiv preprint arXiv:2307.06210.

## Review 3

Summary: The paper addresses the Best Scoring Rule Identification problem, proposing two algorithms: OIAFC (Online Information Acquisition Fixed Confidence) and OIAFB (Online Information Acquisition Fixed Budget). These algorithms aim to identify the optimal scoring rule through online learning. The key contributions are the introduction of a novel method to balance exploration and exploitation, the use of conservative scoring rules, and the design of efficient stopping and decision rules. The paper provides theoretical upper bounds for the sample complexity of these algorithms, showing that they can identify an estimated optimal scoring rule with high confidence, achieving instance-dependent and instance-independent bounds. The algorithms are evaluated in both fixed-confidence and fixed-budget settings, with an emphasis on improving sample efficiency compared to previous methods.
Claims And Evidence: The claims made in the submission are generally supported by theoretical evidence. However, a few claims could benefit from more empirical or practical validation:
- Sample Complexity Bounds: While the paper presents theoretical upper bounds for sample complexity (both instance-dependent and independent), these bounds are derived under certain assumptions (e.g., the agent's behavior and the confidence levels). These claims could be seen as problematic if not empirically validated, especially since sample complexity is highly dependent on specific real-world conditions that are not fully addressed in the paper.
- The claim that forced exploration via binary search is efficient is supported theoretically, but the practical impact on performance (e.g., in terms of time or sample efficiency) is not clearly demonstrated. Empirical evidence or simulation results would help verify the actual effectiveness of this approach in practice.
- While the paper compares its results to prior work (like Chen et al., 2023), the paper primarily focuses on theoretical performance without empirical validation or concrete case studies. This leaves the claim that the proposed method is more efficient somewhat unsubstantiated outside the theoretical context.
Methods And Evaluation Criteria: The OIAFC and OIAFB algorithms are well-suited to the problem as they are designed to identify the best scoring rule efficiently under both fixed-confidence and fixed-budget settings. These algorithms take into account the reward gap and problem complexity, which are critical for addressing the exploration-exploitation trade-off inherent in the multi-armed bandit framework. The use of conservative scoring rules, forced exploration, and binary search strategies are appropriate for ensuring that the best arm is identified with minimal sample complexity.
The evaluation criteria, specifically the sample complexity bounds (both instance-dependent and independent), are appropriate for measuring the efficiency of the proposed methods. The focus on deriving upper bounds for sample complexity is standard in MAB research, allowing for a theoretical assessment of the algorithm's performance. Additionally, the use of (ϵ, δ)-optimality as a measure of success aligns with common practices in bandit theory.
While the theoretical evaluation is solid, the absence of empirical results or benchmarks (e.g., performance on real or simulated datasets) makes it harder to assess the practical utility of the methods in real-world applications. Including empirical validation against standard MAB benchmarks would strengthen the claim that these methods are practically applicable.
Theoretical Claims: I followed the theoretical claims at a high level but did not check the details or the appendix. Based on the main submission, the proofs referenced in the lemmas and Theorem 1 seem logically consistent with standard practices in multi-armed bandit theory, particularly around sample complexity bounds and exploration strategies, though I admit I did not follow every step of the mathematical proofs.
Experimental Designs Or Analyses: no experimental designs in the paper.
Supplementary Material: No
Relation To Broader Scientific Literature: The key contributions of the paper—specifically the OIAFC and OIAFB algorithms—build on existing work in the Multi-Armed Bandit (MAB) literature, particularly in the context of best arm identification and optimal scoring rule problems. The approach extends prior work on bandit algorithms by introducing a more nuanced trade-off between exploration and exploitation, which improves the sample complexity bounds.
The paper distinguishes itself by introducing new parameters that fine-tune the balance between normal and forced exploration, contributing to a more efficient sample complexity, especially in instance-dependent settings. This contrasts with prior methods like those in Chen et al. (2023), which use simpler parameterization and result in higher sample complexity. Additionally, the comparison to Jin et al. (2018) highlights the inefficiencies of their naive breaking rule, further demonstrating the novelty and practical improvements of the proposed methods. In summary, the paper builds on established MAB and best arm identification frameworks but introduces more refined techniques for better sample complexity, offering an improvement over previous approaches.
Essential References Not Discussed: The paper cites important works like Chen et al. (2023) and Jin et al. (2018), below lists additional papers that might provide related works in the broader best-arm identification and exploration-exploitation trade-off literature. Specifically:
- the work of Auer et al. (2002) on UCB (Upper Confidence Bound) algorithms is foundational and could help provide context for understanding the theoretical underpinnings of the proposed algorithms.
- The paper mentions fixed-budget and fixed-confidence settings, Kalyanakrishnan et al. (2012) might also be relevant to provide a more comprehensive understanding of multi-armed bandit algorithms under these constraints.
- While the paper introduces a more efficient exploration strategy, Lattimore and Szepesvári (2018) might also be relevant.
Other Strengths And Weaknesses: Strengths:
- The paper introduces a novel algorithm (OIAFC) for best scoring rule identification, combining existing bandit methods with a conservative scoring rule to improve exploration efficiency. This approach is an interesting extension of prior work.
- The theoretical contributions are impactful, offering a refined understanding of sample complexity in the best arm identification problem, with applications to optimal scoring rule determination.
- The writing is generally clear, especially in explaining the algorithm’s workings and key definitions.
Weaknesses:
While the theoretical analysis is strong, there is little practical evaluation or comparison with state-of-the-art algorithms, which weakens the paper's applicability in real-world scenarios.
Other Comments Or Suggestions: NA
Questions For Authors: 1. Can you provide more detailed explanations or examples of how the reward gap and problem complexity are calculated in practical scenarios?
2. Can you elaborate on how the selection of the parameters alpha and beta affects the performance of the algorithm, especially in terms of sample complexity and exploration vs. exploitation balance? also the theoretical results mention upper bounds for the sample complexity, but it’s unclear how these bounds behave in practice with varying values of M, K, and the other parameters.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We sincerely thank you for your comments and questions.
**About the references:**
Thank you for pointing out these relevant references. We will include the three suggested papers in our revised manuscript. We greatly appreciate your helpful suggestion.
**About the Experiments:**
Thank you for your suggestion. In response, we have conducted additional experiments to validate the effectiveness of our proposed algorithm OIAFC, and we present several of the results here.
**Experiment 1:**
We set $B_S = 1$, the number of actions $K = 3$, the number of beliefs $M = 3$, and the number of states = 2.
We vary the minimum reward gap between the best arm and the sub-optimal arm from 0.1 to 0.5 by adjusting $q_1,...,q_K$ and $c_1,...,c_K$. We set $\delta = 0.05$ and $\epsilon = 0.1$. The results are presented in Table 1.
**Experiment 2:**
We fix the minimum reward gap at 0.2 and vary both the number of arms and the number of beliefs simultaneously from $\{2, 4, 6, 8, 10\}$, i.e., the number of arms equals the number of beliefs in each setting.
All other settings are the same as in Experiment 1. The results are presented in Table 2.
Each data point in the tables is averaged over 10 independent runs.
**Table 1:**
| Reward gap | Samples |
|------------|-----------|
| 0.5 | 8909.4 |
| 0.4 | 13972.5 |
| 0.3 | 46331.3 |
| 0.2 | 69542.0 |
| 0.1 | 117963.8 |
**Table 2:**
| Arm and belief number | Samples |
|-----------------------|-----------|
| 2 | 54732.6 |
| 4 | 77321.5 |
| 6 | 89125.4 |
| 8 | 10052.9 |
| 10 | 113592.4 |
As shown in Tables 1 and 2, the sample complexity increases as the reward gap decreases and as the number of arms and beliefs grows. These trends are consistent with theoretical expectations, since smaller gaps and larger action spaces make accurate identification more challenging.
**Q1**:Can you provide more detailed explanations or examples of how the reward gap and problem complexity are calculated in practical scenarios?
**A1**: Thank you for your question. We would like to clarify that our algorithm does not require knowledge of the reward gap or the instance-dependent problem complexity. These quantities are unknown to the algorithm and are not used as inputs; their role is purely theoretical, as they appear only in the analysis of the sample complexity upper bounds. This is analogous to the setting of pure exploration bandit problems.
**Q2:** Can you elaborate on how the selection of the parameters alpha and beta affects the performance of the algorithm, especially in terms of sample complexity and exploration vs. exploitation balance?
**A2:** Thank you very much for your thoughtful question. Since our work focuses on the pure exploration setting, our algorithm is not concerned with the exploration–exploitation trade-off as in the regret minimization setting. The parameter $\alpha$ in our algorithm is used to balance the trade-off between exploration to identify the best scoring rule and exploration to estimate the feasible region $\mathcal{V}_k$ as shown in line 10 of Algorithm 1. The parameter $\beta$ is used in the breaking rule in line 15 to determine whether the optimal scoring rule has been attained with high probability.
Specifically, if $\alpha$ is relatively large, the algorithm is more inclined to prompt the agent to choose the action we wish it to take, which reduces the number of binary searches required. For example, in the extreme case where $\alpha_k^t = 1$ for all $t$, the OIAFC algorithm would directly present the action-informed oracle of arm $k$, and the agent would respond with $k_t = k$. However, this could potentially lead to an increased simple regret $h(S^*) - h(S_t)$, requiring more regular rounds to estimate a suitable $\hat{S}^*$ that satisfies the feasibility condition. Conversely, if we choose $\alpha_k^t = 0$ for all $t$, due to the absence of an accurate estimate of the feasible region $\mathcal{V}_k$, the agent may not return the desired arm. In this case, the algorithm needs to perform more binary searches to estimate $\mathcal{V}_k$.
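The binary-search subroutine referenced above can be sketched generically. As an illustration (not the exact subroutine of our algorithm), assume a monotone response: the agent switches to the target action once the payment scale exceeds an unknown threshold, and `agent_responds` is a hypothetical probe of one interaction round.

```python
def estimate_threshold(agent_responds, lo=0.0, hi=1.0, tol=1e-3):
    """Binary search for the smallest payment scale at which the agent
    plays the target arm, assuming a monotone response in the scale.
    `agent_responds(s)` returns True if the agent plays the target arm
    when offered scale s."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if agent_responds(mid):
            hi = mid  # agent already responds at mid: threshold <= mid
        else:
            lo = mid  # agent does not respond: threshold > mid
    return 0.5 * (lo + hi)
```

Each probe costs one interaction round, which is why a larger $\alpha$ (fewer binary searches) trades off against the extra regular rounds discussed above.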
As for $\beta$, setting it too high may cause the algorithm to terminate prematurely with a small number of samples, but the estimated best scoring rule may not satisfy the required condition. On the other hand, if $\beta$ is set too low, the algorithm becomes overly conservative and continues sampling until the reward gap between the estimated and true best scoring rule is much smaller than the target threshold $\epsilon$, which may lead to significantly higher sample complexity than necessary.

## Review 4

Summary: This work considers optimal scoring rules for principal/agent problems in online settings, in two variants: with a fixed budget, or fixed confidence, where the principal is trying to make investment decisions based on the knowledge/actions of the agent. The utility of the principal is driven by the quality of the knowledge/actions of the agent.
Prior work on scoring rules usually focuses on offline settings, generating an optimal fixed scoring rule according to a metric like minimizing regret on the principal's part. Chen et al. [2023] had studied the online setting with a fixed confidence target, with a large instance-independent sample complexity bound.
This work adds instance-dependent parameters to the algorithm and the analysis, which improves both actual performance and sample complexity bounds.
Claims And Evidence: Yes, the claims made in the submission are supported by clear theoretical evidence and proofs.
Methods And Evaluation Criteria: Yes, the theoretical methods make sense for the problem at hand.
Theoretical Claims: I did not verify proof correctness.
Experimental Designs Or Analyses: N/A
Supplementary Material: I did not review the supplementary material in detail.
Relation To Broader Scientific Literature: This work improves significantly on the literature for online best scoring rule identification, with improved performance and sample complexity bounds that brings bounds closer to online bandit sample complexity.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: The paper is well written and consistently raises the higher-level conceptual impacts and meanings of different aspects, starting from succinctly covering the reason these bounds should exist: the large gap between performance in non-incentive-based online bandit problems and the online scoring rule from Chen et al. 2023.
Other Comments Or Suggestions: n/a
Questions For Authors: none
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for recognizing the novelty and importance of our work. We truly appreciate your encouraging feedback.
---

# Fast, Accurate Manifold Denoising by Tunneling Riemannian Optimization

**Decision:** Accept (poster)

## Review 1

Summary: The paper considers the problem of constructing efficient denoisers that map noisy samples from a manifold to the manifold. An online learning approach is used to construct a graph for optimization, and a mixed-order method is used to aid optimization in order to achieve good performance. Theoretical analyses are provided, and experiments are performed to demonstrate the effectiveness of the proposed approach.
## update after rebuttal
The authors provided more evidence that further strengthens the paper. I keep my original score, weak accept, and did not increase it as I am not that familiar with the task in general.
Claims And Evidence: The claims are supported by mathematical proofs and empirical experiments.
Methods And Evaluation Criteria: The method is shown to outperform the nearest neighbor approach, which is a natural baseline, and I find the proposed method satisfying. My minor complaint is that in the abstract the paper is motivated using diffusion models, to some degree drawing the expectation that a neural approach will be employed, yet the proposed method is largely graph based.
Theoretical Claims: The claims on Riemannian optimization are generally standard. I did not check the claims in theoretical analysis section.
Experimental Designs Or Analyses: The presented experiments are generally well-motivated, and I find the experimental settings sound and valid.
Supplementary Material: I did not properly review the supplementary material.
Relation To Broader Scientific Literature: The paper considers combining graph-based traversal with Riemannian optimization, which are both established methods.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: In terms of strengths, much theoretical analysis is provided to justify the proposed method. It is interesting that the proposed method outperforms the nearest neighbor approach.
Other Comments Or Suggestions: Page 2, second column, line 97: "Literature on Reimannian Optimization": should be "Literature on Riemannian optimization"
Page 3, first column, line 147-153 and also later: "The projection problem (4)", "The optimization problem (4)", but the equation is not given a label
Page 5, caption of Figure 4: "point q is a local minimizer of q over M", I think the second q should be replaced by the objective
Questions For Authors: The paper proposes using both first order and zeroth order Riemannian optimization. It can be interesting to provide an ablation study on the effect of removing one of them.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer’s positive feedback and insightful suggestions! We are delighted that the reviewer found our proposed method satisfactory and valued both the supporting experimental results (particularly the strong performance compared to the natural baseline of nearest neighbor search) and the accompanying theoretical analysis.
Below, we address the reviewer’s questions regarding the connection between our method and diffusion models, and we present an ablation study of the mixed-order method, as suggested. Additionally, we conducted further experiments demonstrating that our method also outperforms AutoEncoders, a widely
used generic learning baseline. All updated experimental results are available at [Figures.pdf](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf) (https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf).
**Ablation Study:** We thank the reviewer for this excellent idea! We conducted the suggested ablation study on the mixed-order method by selectively removing the first-order or zero-order optimization components one at a time. The results in [Figure 3](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf) confirm that mixed-order optimization has significant advantages over both first- and zeroth-order optimization for efficient, high-accuracy denoising.
In [Figure 3](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf), we observe that the first-order optimization incurs the lowest computational cost. As expected, it also yields the highest error, due to getting stuck at local minima. In the high-accuracy regime, our mixed-order optimization achieves a more favorable complexity–accuracy tradeoff compared to zero-order optimization. This advantage arises because zero-order methods are less efficient when operating over dense graphs. Conversely, in the low-accuracy regime where the underlying graph is sparse, zero-order optimization slightly outperforms the mixed-order method. This is expected, as in sparse graphs the mixed-order method essentially relies only on its zero-order steps, behaving similarly to pure zero-order optimization.
**Connection to Diffusion Models:** As the reviewer pointed out, diffusion models use generic learning architectures. To further validate our approach, we conducted additional experiments comparing our method with AutoEncoders -- a natural generic learning baseline designed to exploit low-dimensional structure in data. As illustrated in [Figure 4](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf), our mixed-order optimization method -- designed to exploit both the optimization structure of the denoising problem and the manifold structure of the data -- achieves a substantially better accuracy-complexity trade-off compared to the autoencoder. This further highlights the efficiency and effectiveness of our approach. We will add this comparison to the final version of the manuscript. Below we describe the details of this experiment.
To perform an extensive comparison with autoencoders as a baseline, we create eleven different networks with various depths and widths and symmetric encoders/decoders. The widest autoencoder layer has width equal to the data dimensionality, and the smallest bottleneck equals 2. The shallowest autoencoder (which appears as the point in the lower-right corner of [Figure 4](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)) has one layer in the encoder and one in the decoder. The deepest autoencoder has 10 layers in the encoder and 10 in the decoder. As we can see in [Figure 4](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf), while autoencoders can achieve slightly higher accuracy, our method exhibits significantly better efficiency-accuracy tradeoffs -- by orders of magnitude. In our work, we notice that varying the $R(i)$ parameter in our mixed-order method improves the accuracy of manifold traversal, and we plan to explore this direction further in future work.
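For concreteness, a symmetric width schedule like the one described above could be generated with a small helper such as the following. This is our own hypothetical sketch: the function `symmetric_autoencoder_widths`, the linear interpolation between data dimensionality and bottleneck, and the 192-dimensional example are illustrative assumptions, not the exact construction used in the experiments.

```python
def symmetric_autoencoder_widths(data_dim, depth, bottleneck=2):
    """Interpolate encoder layer widths from the data dimensionality
    down to the bottleneck; the decoder mirrors the encoder."""
    step = (data_dim - bottleneck) / depth
    encoder = [round(data_dim - k * step) for k in range(depth + 1)]
    return encoder, encoder[::-1]

# e.g. a depth-3 encoder for 192-dimensional inputs with a 2-d bottleneck
enc, dec = symmetric_autoencoder_widths(data_dim=192, depth=3)
print(enc, dec)  # [192, 129, 65, 2] [2, 65, 129, 192]
```

Varying `depth` from 1 to 10 would then produce a family of symmetric autoencoders of the kind the rebuttal describes.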
Since the denoiser is a critical building block of diffusion models and most test-time compute is being spent on denoising, improving its test-time efficiency can significantly accelerate the overall process. As part of our immediate next work, we plan to integrate the proposed method into diffusion models to further explore this direction.
We also appreciate the reviewer’s careful attention to detail, which has helped us improve the clarity and presentation of the paper. We will fix the typos in the final manuscript.
**Closing Comment:** We sincerely thank the reviewer for their valuable feedback and thoughtful suggestions. We hope our rebuttal has effectively addressed the comments and questions. We would be glad to further discuss or clarify any additional questions. | Summary: This paper addresses the problem of efficiently denoising new noisy data sampled from an unknown manifold M, relying only on noisy samples. To this end, a framework for test-time efficient manifold denoising was proposed. In the theoretical analyses, the optimality of the proposed methods was elucidated. In the experimental analyses, complexity-performance tradeoffs compared to baseline methods were examined on scientific and imagery data.
Update after rebuttal
I checked all responses and comments from authors and reviewers. Some of my questions were addressed in the rebuttal. Therefore, I keep my score.
Claims And Evidence: The main problems addressed in the paper are identified by “Learned denoisers play a fundamental role in various signal generation (e.g., diffusion models) and reconstruction (e.g., compressed sensing) architectures, whose success derives from their ability to leverage low-dimensional structure in data.”
In the theoretical analyses, convergence properties of the proposed methods, such as convergence regions and rates, were analyzed. However, these properties were not associated with those of the methods (e.g. diffusion and CS models) considered in the main problem definition.
In the experimental analyses, the proposed methods were examined on synthetic manifolds and High-Dimensional Scientific Data. However, further analyses on several additional tasks in comparison with the aforementioned models should be given.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem or application at hand, but limited.
Theoretical Claims: Theoretical analyses are clear, and I enjoyed the proofs. However, further comparison with the other baselines and related models would improve the novelty and merit of the results.
Experimental Designs Or Analyses: As mentioned in the previous comments, the proposed methods are evaluated in limited settings. The experimental analyses should be improved by comparing the proposed methods with additional baselines and related work using larger models on larger scale datasets.
Supplementary Material: I checked the mathematical and experimental analyses.
Relation To Broader Scientific Literature: As mentioned above, the paper is well written in general. However, theoretical and experimental analyses and results are limited, and should be improved in comparison with the other denoising methods.
Essential References Not Discussed: Most of the major prior work were considered in the literature review.
Other Strengths And Weaknesses: The paper is well written and the mathematical results are clear. However, the explanation of experimental setups and results should be improved. Moreover, the authors should describe how they define and compute projections and solve the optimization problems given in the algorithms in the experimental analyses. Since these details are omitted, implementation of the algorithms is not trivial and reproduction of the results can be challenging.
Other Comments Or Suggestions: Please see the above comments.
Questions For Authors: Please see the above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our writing and theoretical analysis. At the core of our work is a novel, efficient and accurate algorithm for manifold denoising, which reframes learning-to-denoise as learning-to-optimize. Our key technical innovations include an accurate and efficient mixed-order method, and a scalable online learning algorithm that learns to optimize from only noisy data. This design enables our method to scale effectively to large, high-dimensional datasets, and to practical scenarios where the manifold is unknown and only noisy samples are available.
Below, we present additional experiments and explanations. We also conducted an ablation study ([Figure 3](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)), which shows the superiority of mixed-order over both first- and zeroth-order methods. All new experimental results are available at https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf.
**Connection to Larger Architectures:** Thank you for your insightful comment. Denoising serves as a critical building block for diffusion-based signal generation and signal recovery (compressed sensing (CS)). Our theory demonstrates that mixed-order optimization solves the manifold denoising problem efficiently and accurately — with a number of operations that scales well in the ambient dimension, and a mean-squared error which is close to statistically optimal. These properties have implications for the efficiency and performance of architectures that use denoising as a building block: the overall number of operations is bounded by the number of diffusion/CS outer-loop steps times the number of operations used by the denoiser. Moreover, in denoising-based CS reconstruction, there is a direct relationship between the MSE of the denoiser and the MSE of CS reconstruction [Metzler et al, 2016]. By reframing the denoising problem (inner loop) as the optimization problem, which is within the outer loop of tasks such as signal reconstruction, we can consolidate these two loops into a single optimization framework, which could yield significant improvements in computation. We leave this direction to future work.
**Additional Experimental Analyses:** We sincerely thank the reviewer for suggestions. We have conducted additional experiments on real-world RGB images to further demonstrate the denoising capability of our method. In the scenario where only a single noisy image is available, the training error curve shows a clear decreasing trend [Figures 5, 6, and 7](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf). In the case with large-scale data, we see that the method is able to learn meaningful structure of real-world image patches ([Figure 8](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)). In our next work, we will integrate the proposed method into larger architectures, such as CS and diffusion models.
**Additional Baseline:** We thank the reviewer for the helpful and constructive feedback. As an additional baseline, we have added nonlinear autoencoders, which are generic learning architectures that capture low-dimensionality. Our mixed-order method — designed to exploit the optimization structure of denoising problems and the manifold structure of data — achieves a substantially better accuracy-complexity trade-off compared to autoencoders ([Figure 4](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)), further highlighting the efficiency of our approach. We will add this comparison to the final manuscript. For more details, we kindly refer the reviewer to our response (paragraph [Connection to Diffusion Models]) to Reviewer eTvP due to space limitations.
**Large-Scale Data Denoising:** We appreciate the reviewer’s feedback regarding the exploration of larger-scale data. We have applied our method to real-world large-scale image data and performed denoising at the patch level. We use images from ImageNet with additive Gaussian noise on 1,310,000 patches. **The results on large-scale data further demonstrate our method's denoising ability ([Figure 8](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)).**
**Improved Documentation of Experimental Setup:** We sincerely thank the reviewer for highlighting the importance of clarifying the experimental setup and algorithmic details. Reproducibility is instrumental to progressing science. We will give more details in the supplementary material in the revised manuscript, e.g., computation of incremental PCA for efficient tangent space estimation, definitions of projections, and other details. We will also release all code and data to ensure the reproducibility of our work.
**Closing Comment:**
We sincerely thank the reviewer for their time and thoughtful feedback. We hope our responses effectively addressed and clarified the thoughtful questions posed by the reviewer. | Summary: This paper proposes a new framework for efficient denoising of noisy data sampled from an unknown manifold, which treats "learning to denoise" as "learning to optimize." The key innovations include: 1) online learning that learns to optimize clean signals using only noisy data and improve the optimizer on the fly; 2) mixed-order methods ensure that the learned optimizers reach global optimality, balancing efficiency and denoising performance. The authors provide theoretical analysis as well as conducting experiments on synthetic and real data.
Claims And Evidence: Yes, I list the details in the [Other Strengths And Weaknesses] section.
Methods And Evaluation Criteria: Yes, I list the details in the [Other Strengths And Weaknesses] section.
Theoretical Claims: Yes, I checked part of the theoretical results.
Experimental Designs Or Analyses: Yes, I checked the experimental designs and results analysis.
Supplementary Material: I briefly reviewed some of them but did not dive into very details.
Relation To Broader Scientific Literature: I list the details in the [Other Strengths And Weaknesses] section.
Essential References Not Discussed: The authors have discussed the most important and related works.
Other Strengths And Weaknesses: Strengths
1) The paper introduces a novel method that translates the denoising process into an optimization problem over a manifold, leveraging Riemannian optimization techniques. It provides a new perspective on denoising.
2) The proposed method is efficient. The approach is computationally efficient, particularly in high-dimensional settings, and scales well with both the size of the dataset and the dimensionality.
3) The introduction of mixed-order traversal which combines first-order and zero-order steps is interesting. It ensures robust convergence towards global minima, enhancing the reliability of the optimization process.
4) The idea of online learning is also novel. Its ability to build necessary structures on-the-fly as data points are encountered is a pioneering aspect, allowing for real-time adaptability and efficiency.
5) In addition, the authors not only provided the theoretical analyses with near-optimal denoising guarantees, but also conducted experiments on both synthetic manifolds as well as scientific and imagery data.
Weaknesses
1) I am wondering: if, due to some privacy policy, one had collected only very few noisy samples (e.g., in an extreme case, a single noisy datum), would this method still work? Since the number of noisy samples is very small, the online learning strategy may not work as expected.
2) At the other extreme, for large-scale, complex image data with different kinds of noise, say the noise from this benchmark (https://github.com/hendrycks/robustness), how would this method work?
Other Comments Or Suggestions: It would be very nice to see some visualization results showing the noisy data and the denoised results.
Questions For Authors: I list some of my questions in the [Other Strengths And Weaknesses] section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and thoughtful questions! We're glad the core idea of learning-to-optimize for denoising, along with our mixed-order method and scalable online learning approach, was well received. We also appreciate the recognition of our experimental and theoretical contributions. Below, we address the reviewer’s questions with additional results. Additionally, we present an ablation study ([Figure 3](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)), which further highlights that mixed-order optimization has significant advantages over both first- and zeroth-order optimization for efficient, high-accuracy denoising. All updated results are available at [Figures.pdf](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf).
**Single Noisy Data Denoising:** We thank the reviewer for this excellent question! The scenario where only a single noisy sample is available is indeed both interesting and challenging. To investigate this setting, we conducted an additional experiment focused on denoising a single natural image from the DIV2K dataset [Agustsson and Timofte, 2017] — a high-resolution dataset of natural RGB images with diverse contents. The result showed that the training error steadily decreased ([Figures 5, 6, 7](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)). Details follow below.
As suggested, we applied our method to a single training image, performing patch-level denoising under the common assumption that patches lie on a low-dimensional manifold. We choose a patch size of $8 \times 8 \times 3$ with a stride of 8, which results in about 50,000 patches for a single randomly selected image from the DIV2K dataset. [Figures 5, 6, 7](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf) also show denoised images and patches.
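The patch-level setup above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `extract_patches` and the example image size are our assumptions (DIV2K images are roughly 2040 x 1536 pixels, which with non-overlapping 8 x 8 x 3 patches yields on the order of 50,000 patches).

```python
import numpy as np

def extract_patches(img, patch=8, stride=8):
    """Slide a square window over an H x W x C image and return the
    flattened patches as rows of a 2-D array."""
    H, W, _ = img.shape
    rows = [img[i:i + patch, j:j + patch, :].reshape(-1)
            for i in range(0, H - patch + 1, stride)
            for j in range(0, W - patch + 1, stride)]
    return np.stack(rows)

img = np.zeros((2040, 1536, 3))  # a DIV2K-sized RGB image (assumed size)
patches = extract_patches(img)
print(patches.shape)  # (48960, 192): ~50,000 patches of dimension 8*8*3
```

Each row is then treated as a noisy point near the patch manifold, and the denoised patches are reassembled into the image.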
**Large-Scale Data Denoising:** We appreciate the reviewer’s feedback and suggestions regarding the exploration of large-scale, complex image data. Following the reviewer’s suggestion, we have applied our method to real-world image data and performed denoising at the patch level. We use images from ImageNet with additive Gaussian noise on 1,310,000 patches. The results demonstrate clear denoising progress, evidenced by steadily decreasing training error curves, as shown in [Figure 8](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf).
Traditionally, image degradations, such as additive noise, blur, zoom, saturation, and others, are handled via iterative reconstruction methods. These methods alternate between two steps: (1) enforcing consistency with the observed image via a degradation model, and (2) applying a prior on the clean image, typically through a proximal operator. In approaches like Plug-and-Play [Venkatakrishnan et al., 2013], this proximal step is replaced by a learned denoiser, allowing for more flexible and data-driven regularization. Our method can be directly applied as a denoiser for Gaussian noise. However, for more complex degradations, such as motion blur, a single-step L2 projection onto the clean-signal manifold is insufficient. Just as our method can be integrated into diffusion models and compressed sensing reconstruction pipelines, it can also be incorporated into iterative algorithms for signal restoration under more complex degradations. When the degradation model is known, frameworks like Plug-and-Play can be used to combine our denoising method with degradation-specific reconstruction steps.
Our method remains applicable to other types of challenging degradations when embedded within an iterative reconstruction framework that explicitly accounts for the degradation process, such as Plug-and-Play.
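The Plug-and-Play pattern referenced above can be sketched as follows, assuming a linear degradation model y = A x + noise. The `denoise` argument stands in for any denoiser, such as the one discussed in the rebuttal; the ADMM form and parameter choices here are an illustrative sketch, not the exact algorithm of [Venkatakrishnan et al., 2013].

```python
import numpy as np

def plug_and_play(y, A, denoise, rho=1.0, iters=50):
    """Alternate (1) a data-consistency step for y = A x + noise with
    (2) a plugged-in denoiser acting as the image prior (ADMM form)."""
    n = A.shape[1]
    x, v, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Precompute the least-squares solve used in the consistency step.
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    for _ in range(iters):
        x = M @ (Aty + rho * (v - u))  # (1) stay consistent with measurements
        v = denoise(x + u)             # (2) prior step: any denoiser plugs in here
        u = u + x - v                  # dual update
    return v

# Sanity check: with A = I and an identity "denoiser", the output recovers y.
y = np.array([1.0, -2.0, 3.0])
out = plug_and_play(y, np.eye(3), denoise=lambda z: z)
print(np.allclose(out, y))  # True
```

Swapping the identity lambda for a learned denoiser is the step the rebuttal proposes, while `A` would encode the known degradation (e.g. a blur operator).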
**Visualizations of Denoised Data:** We thank the reviewer for this helpful suggestion. In response, we have added visualizations that display the noisy inputs, the corresponding denoised outputs, and the ground truth signals. These examples — now presented in [Figure 2](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf) (noisy gravitational wave (GW), denoised GW, and ground truth GW) and [Figures 5, 6, 7] (https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf) (ground truth image, noisy image, and denoised image) — offer qualitative evidence of the effectiveness of our method and clearly demonstrate its ability to recover clean signals from noisy observations.
**Closing Comment:** We sincerely thank the reviewer for their thoughtful feedback and constructive suggestions. We're glad that the core ideas and contributions of our work resonated with you, and we truly appreciate your comments, which helped us further improve our work. | Summary: This paper studies the interesting manifold denoising problem (ref. 1) with the focus “learning-to-optimize” and proposes test-time efficient denoising algorithm. Also mixed-order optimization is proposed to help achieve near optimal results, given first-order gradient only optimization is more efficient.
Strengths: the high-level idea of combining both first order (gradient) and zero order (zero-order neighbor search) steps is interesting, and the construction of manifold traversal networks is good to see. Also, theoretical analysis (section 7 and appendix) results show the proposed algorithm can achieve certain error accuracy (i.e., linked with geodesic curvature, intrinsic dimension) with upper-bounded computational complexity.
Experimental results on synthetic data are presented in section 8 and illustrate the trend of training stage denoiser error converging to theoretical lower bound. Test time efficiency and the comparisons with nearest neighbor search are included to show the proposed algorithm can achieve same level of accuracy with lower computational cost.
References:
1. Hein, M. and Maier, M. Manifold denoising. Advances in neural information processing systems, 19, 2006
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I looked Theorem 7.1, and some more from appendix section A, though not very carefully for examination.
Experimental Designs Or Analyses: Section 8 included both (1) training error converge trend, and (2) test time accuracy vs. complexity tradeoff, and figure 5 for visualization of graph construction results for manifold traversal networks.
Supplementary Material: Yes, I read some from supplementary A for more details behind the theoretical analysis.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: This work cited a number of papers, from traditional denoising methods to more recent deep neural network learning-based papers. For manifold denoising, ref. 1 below is included, though there are more directly related works, e.g., more recent denoising works like ref. 2 or others; it would be good to include discussions of more closely related work.
Ref. 1: Hein, M. and Maier, M. Manifold denoising. Advances in neural information processing systems, 19, 2006
Ref. 2: D. Gong, F. Sha, and G. Medioni. Locally linear denoising on image manifolds. AISTATS, Proc. Mach. Learn. Res., 9:265–272, 2010
Other Strengths And Weaknesses: Limitations:
1. Experimental results, unless I missed, seems only included synthetic data for the test time accuracy evaluation and comparisons with alternative methods. It will be informative to see results from real world data, and some real-world data indeed with lower intrinsic dimension.
2. Section 8 only included "nearest neighbor search" as the alternative baseline, it should be helpful to include other (more recent) works, to support the proposed algorithm in this paper.
Other Comments Or Suggestions: 1. Landmark, first mentioned in section 4 (page 4), feel should be helpful to add some illustration here, i.e., high level the concept and usage of landmark in this work, consider it is one important part of the proposed denoising framework.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s valuable feedback to improve our work. We are pleased that the reviewer finds the manifold denoising problem interesting, as well as our idea of rethinking learning-to-denoise as learning-to-optimize, and our accurate and efficient mixed-order method. We are also glad that the reviewer appreciates our theoretical analysis, which demonstrates near-optimal denoising performance and establishes an upper bound on the computational complexity, both of which are tied to the manifold’s geometric properties. Below, we respond to the reviewer’s questions with additional experiments. All updated results are available at [Figures.pdf](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf).
**Additional References:** We thank the reviewer for suggesting the inclusion of additional related works. We will incorporate [Hein and Maier, 2006], [Gong et al, 2010], [Hu et al, 2021], [Puchkin and Spokoiny, 2022] in the final manuscript. We discuss the two suggested references below.
[Hein and Maier, 2006] presents a manifold denoising algorithm based on graph-based diffusion process. [Gong et al, 2010] approximates the manifold by linear subspaces, and denoises based on the local linear approximation. Both works and ours denoise based on only noisy data without prior knowledge of the manifold. However, they are inefficient at test time: denoising previously unseen data requires a linear scan to locate its nearest neighbors. At the core, these methods rely on nearest neighbor search, while our approach achieves significantly better accuracy-complexity trade-offs at test-time.
**Real-World Data:** We appreciate the reviewer’s thoughtful suggestion. We would like to emphasize that using synthetic gravitational waves (GWs) is standard practice in GW astrophysics, as it enables systematic evaluation of methods’ performance/limitations in a setting where only a few hundred confirmed detections exist.
Following the suggestion, we have also applied our method to real-world images; patches are assumed to lie near a low-dimensional manifold, and denoising is performed at the patch level. We perform experiments on large-scale real-world data from ImageNet with additive Gaussian noise on 1,310,000 patches. **These experiments on large-scale data further demonstrate our method's denoising ability beyond synthetic settings** ([Figure 8](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)).
We tested our method under extreme scarcity, when only one noisy real-world image is available as training data. The results show clear denoising progress, with steadily decreasing training error curves ([Figures 5, 6, 7](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)). We will include training curves in the main paper and additional visuals in the Appendix of the final manuscript.
**Additional Baseline Comparison:** We thank the reviewer for helpful feedback. We chose nearest-neighbor search as our baseline because it serves as the foundation for state-of-the-art provable denoising methods [Yao et al, 2023], [Sober and Levin, 2020]. Our proposed mixed-order method achieves orders-of-magnitude improvements in computation while maintaining theoretical guarantees.
As suggested, we have added nonlinear Autoencoders as a baseline, which are generic learning architectures designed to leverage low-dimensionality. The result shows that our mixed-order optimization method — designed to exploit both the optimization structure of the denoising problem and the manifold structure of data — achieves a substantially better accuracy-complexity trade-off compared to the Autoencoder ([Figure 4](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf)). We will add this comparison to the final manuscript. For more details, we kindly refer the reviewer to our response (**[Connection to Diffusion Models]** paragraph) to Reviewer eTvP due to space limitations.
**Additional Landmark Elaboration:** We thank the reviewer for emphasizing the need to clarify the landmark concept. We will include the following illustrations in the final manuscript. [Figure 1](https://anonymous.4open.science/r/Figures_Manifold_Traversal-8D25/Figures.pdf) provides an illustration of landmarks. Intuitively, they serve as a discrete approximation of the unknown manifold. To enable optimization over the manifold, we need a structured domain, which is formed by the landmarks and connecting edges, facilitating optimization. Importantly, all these components — including the landmarks, edges, and other related quantities — are learned directly from the noisy data, using the proposed online learning algorithm.
**Closing Comments:** We sincerely thank the reviewer for the valuable feedback, which helped us improve the work. We hope our responses address your concerns and are happy to provide further clarification if needed. | null | null | null | null | null | null |
EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of Optimization Formulations | Accept (poster) | Summary: This paper addresses the challenge of checking equivalence among modeling formulations. The authors introduce quasi-Karp as an equivalence criterion for determining model equivalence. The primary concept of this criterion involves verifying the existence of "equivalence mapping" (quasi-Karp) between models. The authors leverage GPT-based methods to automatically identify the existence of these equivalencies. The dataset used in experiments is derived from the NL4OPT dataset, further crafted by introducing equivalent and non-equivalent transformations. Experimental results indicate that Equivamap achieves 100% accuracy on this dataset.
Claims And Evidence: Not sufficiently clear and convincing.
The experiments are restricted to a dataset derived from the NL4OPT dataset, crafted by introducing equivalent and non-equivalent transformations. Broader experimentation or testing on diverse datasets would enhance result validity and generalizability.
Further, the experiments only show that
Methods And Evaluation Criteria: No, not enough.
NL4OPT is a relatively simple dataset and there are more complex, and closer-to-reality ones, like MAMO (ComplexLP) and IndustryOR. The generalizability of EquivaMap to more challenging problems remains uncertain.
Theoretical Claims: No proof is involved.
The definition of quasi-Karp equivalence is sound and reasonable. If the mapping defined in quasi-Karp equivalence exists, then the two optimization models are equivalent. This claim is true by definition, without the need for separate proof.
Experimental Designs Or Analyses: See **Claims And Evidence** and **Methods And Evaluation Criteria**.
Supplementary Material: Not yet.
Relation To Broader Scientific Literature: The ideas of the equivalent transformations (used in the construction of the dataset) are from the optimization area (MILP).
Essential References Not Discussed: Not found yet.
Other Strengths And Weaknesses: Other Strengths:
- This work tries to address an important challenge in modeling equivalence checking, where previous works have not provided sufficiently satisfying solutions.
Other Weakness:
- The method faces scalability issues. As the dimensionality of the problem increases, the input prompt length could significantly grow.
- The method also faces reliability issues. The reliance on LLMs introduces a risk of hallucinations, where the model might generate plausible but incorrect equivalence mappings. Such inaccuracies can be challenging to detect, potentially impacting the reliability of Equivamap in more complex or realistic scenarios.
Other Comments Or Suggestions: NA
Questions For Authors: In practice, when using the proposed method to benchmark LLMs for optimization modeling, can the proposed method provide better accuracies than the existing method? Specifically on more complex datasets like MAMO (ComplexLP) and IndustryOR.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed feedback and for acknowledging the importance of our work:
> "This work tries to address an important challenge in modeling equivalence checking, where previous works have not provided sufficiently satisfying solutions."
and for recognizing the soundness of our definition:
> "The definition of quasi-Karp equivalence is sound and reasonable."
We appreciate the opportunity to clarify aspects that may have been misunderstood. While we recognize the importance of evaluating generalization and scalability, we note that some of the concerns—particularly regarding the **MAMO** and **IndustryOR** datasets—**go beyond what these datasets can currently support**. They **do not offer the mathematical formulations and/or solver-compatible code needed for EquivaMap** to systematically compare different instantiations of the same optimization problem. Below, we address these points and provide additional context.
---
> **C1**: Generalizability to More Complex Datasets (e.g., MAMO (ComplexLP), IndustryOR)
**A1**: We appreciate the reviewer’s interest in evaluating EquivaMap on more datasets. However, datasets like **MAMO (ComplexLP)** and **IndustryOR** are not compatible with our evaluation method. While these datasets provide natural language problem descriptions and the respective optimal objective values, they do not include the corresponding **mathematical formulations** or **solver code** for obtaining such values. Our framework evaluates equivalence between formulations; thus, any viable dataset for assessing it must include the formulations themselves.
Specifically, EquivaMap tests quasi-Karp equivalence by identifying and verifying mappings between **different mathematical formulations** of the same optimization problem. This process requires access to mathematical formulations (or their equivalent solver-accessible code). Since **MAMO (ComplexLP)** and **IndustryOR** provide neither, they cannot be used to assess our method.
Though the original datasets cannot be used to assess EquivaMap, as an initial step to assess our method’s performance on MAMO and IndustryOR, we **manually constructed the mathematical formulations and solver-compatible code** of five instances from each dataset (we made sure to choose instances with different underlying formulations). We then applied the equivalent and non-equivalent transformations (described in Section 4) and evaluated EquivaMap accordingly. In all cases, EquivaMap achieved **100% accuracy**. We will add this discussion to our paper.
We also respectfully disagree that the **NLP4LP** dataset, which contains **over 300 instances**, is too simplistic for evaluating equivalence mapping. The average **description length** of **NLP4LP (hard)** exceeds 900 characters, indicating rich semantic content. It also includes **multi-dimensional parameters**, like MAMO (ComplexLP) and IndustryOR. This information is shown in Table 1 of [1].
[1] AhmadiTeshnizi, A., et al. (2024). Optimus-0.3: Using LLMs to model and solve optimization problems at scale.
> **C2**: Scalability with the increase of the dimensionality of the problem and corresponding input prompt length
**A2**: A key feature of EquivaMap is that it operates over **sets of variables**, rather than individual variables in a large-scale instance.
For example, in a graph coloring problem, although there is one decision variable per node, the formulation defines a **set of variables** (e.g., x[i] for each node i). EquivaMap only needs to reason about how one such variable set maps to another, rather than processing all individual variables (e.g., x[1], x[2], …) one-by-one. This is reflected in our input JSON format, where each set of variables and constraints is represented once, along with its index set.
This enables scalability because the **number of sets of variables** is typically **much smaller than the total number of variables**. In contrast, approaches that rely on analyzing variable-constraint graphs (e.g., in the WL-test) must operate over formulations that account for the total number of variables, leading to significant overhead as the problem size increases.
> **C3**: Reliability on LLMs introduces a risk of hallucinations, where the model might generate plausible but incorrect equivalence mappings
**A3**: EquivaMap is designed to be robust to LLM hallucinations. In EquivaMap, for a given LLM-found mapping, we run it through a **verification step** (L239 - L245 right column) that checks whether the mapped solution preserves optimality. Mappings that fail are discarded. This sampling + verification pipeline **ensures correctness** even when the LLM is imperfect.
This distinguishes EquivaMap from **naive prompting-based approaches** that treat LLM output as inherently reliable. Our method instead treats the LLM as a heuristic mapping generator, embedded within a strict verification loop that ensures only correct mappings are accepted.
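For intuition, the sampling + verification pattern described above can be sketched in a few lines. This is our own minimal illustration under assumed names (`verify_mapping`, the toy problem), not EquivaMap's actual implementation:

```python
# Illustrative sketch of the "heuristic generator + strict verifier" pattern:
# an LLM-proposed mapping is accepted only if it sends the transformed
# problem's optimum to a feasible, optimal solution of the original.
# All names and the toy problem here are hypothetical.

def verify_mapping(mapping, x_opt_transformed, constraints, objective, opt_value, tol=1e-6):
    """Return True only if the mapped solution is feasible and optimal."""
    x = mapping(x_opt_transformed)
    if not all(check(x) for check in constraints):   # feasibility check
        return False
    return abs(objective(x) - opt_value) <= tol      # optimality check

# Toy problem: maximize x1 + x2 s.t. x1 + x2 <= 1, x >= 0 (optimal value 1.0).
# The "transformed" formulation simply swaps the variable order.
good = lambda y: (y[1], y[0])       # correct mapping: x = (y2, y1)
bad = lambda y: (y[0] + 1, y[1])    # hallucinated mapping, discarded by the check
constraints = [lambda x: x[0] + x[1] <= 1 + 1e-9, lambda x: min(x) >= 0]
objective = lambda x: x[0] + x[1]

assert verify_mapping(good, (1.0, 0.0), constraints, objective, 1.0)
assert not verify_mapping(bad, (1.0, 0.0), constraints, objective, 1.0)
```

The design point is that correctness rests on the deterministic verifier, not on the LLM: any number of hallucinated candidates can be sampled and rejected without affecting soundness.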
---
Rebuttal Comment 1.1:
Comment: Thank authors for the reply, but I am not fully convinced by the rebuttal.
## Concern 2: Scalability
I disagree with the authors' argument, as I believe it may mislead readers:
1. **Prompt length will increase**: As shown in Appendix A and the codes, **the prompt clearly includes every individual variable and constraint**. That means the prompt length must grow significantly as the problem size increases.
2. **EquivaMap needs to examine individual variable/constraints**: According to the paper, EquivaMap should be able to determine whether an added cutting plane is valid. In Figure 1, a cutting plane is added based on a clique $\mathcal{K}$. A clique is a subset of nodes in which every two nodes are adjacent. This means that, to verify whether $\mathcal{K}$ is truly a clique, EquivaMap needs to check $k$ variables and $k(k-1)/2$ constraints. This CANNOT be achieved by simply inspecting the description on the sets, "where each set of variables and constraints is represented once, along with its index set" (mentioned in the rebuttal).
## Concern 1: Dataset
I apologize for mistakenly referring to **NLP4OPT** instead of **NLP4LP** in my comment. It was an honest error, and I did not intend to imply that NLP4LP is a simple dataset. However, this brings up two issues:
1. EquivaMap explicitly requires mathematical formulations, which could limit its applicability to datasets where such formulations are not readily available.
2. Although NLP4LP includes challenging instances (e.g., complicated text descriptions), the problem sizes are generally small. Many instances have fewer than 10 variables (based on my initial review of the data). Furthermore, the NLP4LP paper states that "passing all problem information directly to an LLM is not a scalable solution," and they "separate the numerical data from the problem descriptions, providing only metadata to the LLM". In contrast, EquivaMap includes all numerical data in the prompt, further enhancing the concern about its scalability.
## Concern 3: Reliability
Thanks for the clarification. EquivaMap indeed involves a verification step. Upon re-reading the paper, I realize the concern is more complicated than I initially thought:
1. According to Definition 3.5, Quasi-Karp Equivalence requires that **every** optimal solution of $\alpha'$ can be converted to an optimal solution of $\alpha$. However, **EquivaMap only verifies one optimal solution of $\alpha'$**. It is possible that $\alpha'$ has a strictly larger optimal solution set, and thus cannot be regarded as equivalent to $\alpha$. I did not find explanations about this gap in the original paper.
2. Under the current pipeline, I am unsure whether EquivaMap is truly necessary for the following transformations: Add Slack Variables, Replace by Base-10 Representation, Add Valid Inequalities, Replace by Linear Combinations, Random Order, Loose Constraints. In all these cases, the optimal value will remain unchanged. As shown in the experiments, Execution Acc. already achieves 100% accuracy over all these cases.
For instance, consider Add Valid Inequalities. If the added inequality is valid, then $\alpha'$ and $\alpha$ have the same optimal values. If not, they have different ones. EquivaMap does not seem necessary to distinguish between these two cases.
If this is the case, the only situations where EquivaMap is effective over solver evaluation are Substitute Objective Functions and Rescaling. These two cases appear much simpler to handle and do not require mixing with the other complicated but solver-differentiable scenarios.
3. BTW, I am not clear about the Feasibility Transformation. The description in Table 1 says this transformation "Turn both the original and a randomly chosen instance into feasibility problems (replace objectives with 0)". But the example is to turn a feasibility problem into another one. Also, I don't understand why solvers cannot differentiate this case.
4. For large-scale and more complex MILP problems, solvers may not always output an optimal solution, or may take an unreasonably long time to obtain results. In such cases, EquivaMap may fail (if the solver fails).
## Concern 4: Data Availability (New)
I planned to check the original data to justify Concern 3. However, I cannot find any data in the supplementary material (in the anonymous link). However, the abstract claims: Our **code and data** can be found in this repository..."
## Remark
- I notice a significant discrepancy between my score and other reviewers' scores. To facilitate a fair and thorough evaluation of the paper, I re-read the paper, including the Appendix and supplementary materials. After this careful re-examination, I maintain my score.
- My research area involves using LLM to address industry-related optimization problems, which often involve hundreds or more variables. As such, I have a particular concern regarding the scalability of methods in this area, which informs my perspective on the paper.
---
Reply to Comment 1.1.1:
Comment: > C2.1 & 2.2: Scalability
We think there is still a misunderstanding about our inputs and prompt. Our approach **does indeed operate on sets of variables** (whose size is typically **much smaller than individual variables** for large-scale problems). Consider the max. ind. set example. EquivaMap takes as input the same metadata formatting as NLP4LP:
```
{
  "Variables": {
    "Node": {
      "type": "binary",
      "shape": ["NumNodes"]
    }
  },
  "Constraints": {
    "EdgeConstraint": {
      "formulation": "Node[i] + Node[j] <= 1 for all i > j such that Edges[i][j] == 1"
    }
  }
}
```
In this case, when the prompt (as shown in Appendix A) iterates between variables of $\alpha$ and $\alpha'$, it is taking a set of variables (in this case ```Node```) as input, instead of ```Node[1], Node[2],...```. Suppose in formulation $\alpha'$ the variables are ```Node'```; then the mapping the LLM finds will be ```Node[i] = Node'[i] \forall i```, instead of ```Node[1] = Node'[1], Node[2] = Node'[2],...```. Multiple instances in our dataset (No.96, No.132, No.247, etc.) also highlight this distinction. This means that our prompt size is the same for this problem whether the underlying graph has 10 nodes or 10000. However, the length of our prompt does increase with the number of **sets of variables**. This is also true for constraints defined over sets (e.g., all cliques). We agree that this distinction was difficult to see in the current draft and we've added more explanation, including a full example, in a revision.
We also believe there may be a misunderstanding regarding EquivaMap’s role. EquivaMap is not designed to verify whether a given inequality (e.g., a cutting plane based on a clique) is valid (nor do we claim this in the paper).
> C1.1: “...could limit its applicability to datasets…"
EquivaMap is designed specifically for verifying equivalence between two given mathematical formulations. Our focus is not on datasets that lack formulations altogether, but rather on cases where such formulations are available and an equivalence check is needed. We acknowledge that many current benchmarks (e.g., MAMO, IndustryOR) are built around execution accuracy, but this reflects the limitations of available data, not of our method.
> C1.2: "...the NLP4LP paper states… EquivaMap includes all numerical data in the prompt..."
We clarify that EquivaMap uses **the same metadata** as NLP4LP. We separate numerical data from textual descriptions and only provide structured metadata to the LLM, as demonstrated in our example addressing C2.1 & 2.2.
> C3.1: "...EquivaMap only verifies..."
EquivaMap verifies equivalence using a single optimal solution of the transformed problem. This serves as a **necessary condition** for equivalence under our definition. In practice, this is precisely the case we care about finding the mapping for (i.e., solving $\alpha’$ enables us to solve $\alpha$). We will clarify this in the revision.
> C3.2: "...Execution Acc. already achieves 100% accuracy..."
EquivaMap is designed as a **general-purpose evaluator** that does not assume prior knowledge of the transformation type. In real-world scenarios, such transformations are often unknown. Existing approaches can perform well on certain cases; but **without knowing the transformation in advance**, one cannot guarantee their performance. In contrast, EquivaMap achieves 100% accuracy across different transformations **without** relying on such prior knowledge.
> C3.3: "...Feasibility Transformation..."
Feasibility transformations replace the objective function of two completely different optimization formulations with **the same constant**, but this does not mean the problems are equivalent—two problems can have the same objective yet entirely different feasible sets. Execution accuracy cannot detect this via optimal values.
> C3.4: "For large-scale and more complex MILP..."
We agree that if the solver fails to find any solution, EquivaMap cannot verify equivalence. However, this limitation is shared by execution accuracy, which also relies on solver outputs for generating the optimal objective value. In settings where a solver returns a suboptimal solution, EquivaMap can be used to verify the solution’s feasibility and provide an optimality gap compared to the existing optimal value, which cannot be done by execution accuracy.
> C4: "...I cannot find any data..."
Thank you for pointing this out and we apologize for this! We initially included a link to the Hugging Face repository but later removed it upon realizing it was not anonymous. We have now uploaded the data to the anonymous repository.
---
We emphasize that in response to the reviewer’s earlier suggestions, we ran EquivaMap on instances from MAMO(ComplexLP) and IndustryOR which we manually put into our metadata format and included those results in our rebuttal. We respectfully note that this additional effort was not acknowledged in the reviewer’s most recent comments. | Summary: The submission proposed a new method for determining whether two optimization problem formulations are equivalent. The authors introduce Quasi-Karp Equivalence, based on Karp reductions, and propose EquivaMap, a framework that utilizes LLMs to identify mappings between decision variables of different formulations. The paper also introduces EquivaFormulation, the first open-source dataset of equivalent optimization formulations. Experimental results demonstrate that EquivaMap outperforms existing equivalence-checking methods, including execution accuracy, canonical accuracy, and graph-based approaches. The contributions are positioned within the broader context of optimization copilots and automated mathematical modeling.
Claims And Evidence: The authors provide sufficient evidence to support their claims. However, I have some doubts about the scalability of EquivaMap in practical applications (mathematical modeling copilots), as it requires multiple calls to both the LLM and the solver for each problem instance. Furthermore, we typically seek a formulation that is correct for all problem instances, whereas EquivaMap can only demonstrate instance-specific equivalence, not a general equivalence.
Methods And Evaluation Criteria: The methods—embedding an LLM-based “mapping finder” plus a solver-based verification step—are straightforward and well-motivated. However, one drawback of the proposed approach is that it cannot account for partial equivalence between two formulations, which would be beneficial for more refined accuracy metrics. In contrast, methods like canonical accuracy and graph-edit distance do provide a measure of partial similarity, allowing more nuanced evaluations of how closely two formulations align.
Theoretical Claims: Quasi-Karp Equivalence: Inspired by polynomial-time Karp reductions, but the formal proof of correctness or completeness is limited to a set of instance-specific transformations. There is no guarantee that every subtlety in modern MILP formulations (such as symmetries or problem-specific decomposition constraints) can be captured by a single linear mapping.
Experimental Designs Or Analyses: - The new dataset is well-constructed, covering various equivalence-preserving transformations.
- Baseline comparisons include execution accuracy, canonical accuracy, WL-test, and a naive LLM approach.
Supplementary Material: The supplementary material includes additional experimental results, detailed prompts used for LLMs, and dataset details.
Relation To Broader Scientific Literature: The paper is well-situated within the broader research on LLM-based optimization modeling automation. The discussion on optimization copilots aligns with recent advancements in AI for mathematical modeling.
Essential References Not Discussed: The paper cites key relevant works.
Other Strengths And Weaknesses: - The introduction of Quasi-Karp Equivalence is an important conceptual contribution.
- The EquivaFormulation dataset is a valuable resource for future research.
- The work is timely, given the increasing interest in AI-assisted optimization modeling.
Other Comments Or Suggestions: - A breakdown of computational costs (LLM inference time vs. solver time) would provide insights into practical deployment.
Questions For Authors: 1 - The identity mapping in Figure 1 appears overly simple and may not effectively illustrate the complexity of the LLM-based mapping step. Could you provide a more challenging example that better demonstrates the depth of reasoning needed to discover a valid mapping?
2 - Because EquivaMap relies on instance-specific mappings, users would have to run the framework for every single problem instance. In addition, handling stochastic LLM outputs requires multiple runs. Do you see this as a serious limitation for real-world usage, especially in time-sensitive scenarios or in modeling copilots?
3 - EquivaMap verifies equivalence only for specific instances, not universally for the entire family of instances. Have you explored or considered approaches that could ensure (or at least give stronger guarantees of) equivalence across all instances rather than one?
4 - The framework currently restricts the mapping function f to be linear. This effectively limits the kinds of transformations EquivaMap can capture. Have you investigated whether more general classes of transformations (e.g., piecewise-linear or polynomial mappings) are feasible, or do you view the linear assumption as a fundamental design choice? If it is fundamental, what types of practically important reformulations do you expect it might fail to handle?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed feedback and for acknowledging the contribution of our work:
> "The introduction of Quasi-Karp Equivalence is an important conceptual contribution."
>” The EquivaFormulation dataset is a valuable resource for future research.”
>” The work is timely, given the increasing interest in AI-assisted optimization modeling.”
---
Below, we address the questions raised:
> **Q1**: "... Could you provide a more challenging example that better demonstrates the depth of reasoning needed to discover a valid mapping?"
**A1**: Below we present a more challenging transformation as an illustration: a digit-based reformulation where a single bounded integer variable is replaced by its **base-10 representation**. In this case, a bounded integer variable $x \leq 10^6$ is represented by 7 new variables $d_0, d_1, ..., d_6$, each representing one decimal digit of x, along with corresponding bounds and integrality constraints. Thus, the mapping $f$ will map $d_0, d_1, …, d_6$ to $x$, i.e., $x = \sum_{i = 0}^6 10^i d_i$. To successfully identify this mapping, EquivaMap must understand that a single variable $x$ can be **decomposed into multiple variables representing digits**, and recognize the weighted sum structure linking $x$ to its digits. In this case, EquivaMap also achieved **100% accuracy** as demonstrated in our experiments.
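As a sanity check, the digit mapping above can be sketched in a few lines of Python. This is our own illustration (the helper names are assumptions, not the paper's code):

```python
# Minimal sketch of the base-10 reformulation: a bounded integer x <= 10**6
# is decomposed into digit variables d_0, ..., d_6, and the linear mapping f
# reconstructs x = sum_i 10^i * d_i from them.

def to_digits(x, n=7):
    """Digits d_0..d_{n-1} of x, least significant first (each 0 <= d_i <= 9)."""
    return [(x // 10**i) % 10 for i in range(n)]

def from_digits(digits):
    """The mapping f: x = sum_i 10^i * d_i."""
    return sum(10**i * d for i, d in enumerate(digits))

x = 271828
digits = to_digits(x)
assert all(0 <= d <= 9 for d in digits)  # bound constraints on each digit variable
assert from_digits(digits) == x          # f recovers the original variable
```

To certify equivalence, the LLM must discover the weighted-sum structure of `from_digits` from the two formulations alone; the solver-based verification step then confirms that the reconstructed solution is feasible and optimal.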
> **Q2**: "..., users would have to run the framework for every single problem instance. In addition, handling stochastic LLM outputs requires multiple runs. Do you see this as a serious limitation for real-world usage, especially in time-sensitive scenarios or in modeling copilots?"
**A2**: First, we note that **per-instance evaluation is standard across all existing metrics**, including **canonical accuracy**, **execution accuracy**, and the **WL-test**. This is inherent to the nature of automatic equivalence evaluation of optimization formulations, where ground-truth mappings are known and correctness is assessed instance by instance. Importantly, this is an offline task—it’s not meant to be executed in real-time during modeling. In modeling copilot settings, EquivaMap is meant to be used as an evaluator for a proposed copilot as it can evaluate its performance against correct formulations (instead of being the copilot itself).
We would also like to clarify that compared to execution accuracy, EquivaMap does not require additional MILP solving beyond the original problem instances. In particular, during the final verification step, we only evaluate whether the solution induced by the mapped variables satisfies the constraints in the target formulation and matches the known optimal objective value. This check is computationally lightweight and avoids any further calls to optimization solvers. In other words, once the MILPs are solved initially, **no further solving is needed** to verify a candidate mapping.
> **Q3**: "... Have you explored or considered approaches that could ensure (or at least give stronger guarantees of) equivalence across all instances rather than one?"
**A3**: We agree that automatically checking equivalence across an entire family of instances is a powerful and exciting direction! Achieving this would likely require extending our verification process to work on symbolically parameterized formulations, and leveraging formal methods or symbolic solvers to validate that the discovered mapping preserves equivalence under all admissible inputs. We see this as a promising direction for future work.
> **Q4**: "... Have you investigated whether more general classes of transformations (e.g., piecewise-linear or polynomial mappings) are feasible, or do you view the linear assumption as a fundamental design choice? If it is fundamental, what types of practically important reformulations do you expect it might fail to handle?"
**A4**: Thank you for this thoughtful question! To clarify: EquivaMap does not require $f$ to be linear by design—the only theoretical requirement is that the mapping function $f$ needs to be **polynomial-time computable** (L254 - L264, left column), in line with the notion of quasi-Karp equivalence. In practice, we choose $f$ to be linear because it suffices to cover all the equivalence-preserving transformations we included in our dataset. We will clarify this in our paper.
>**C1**: “The methods… partial equivalence … formulations align.”
**A5**: We appreciate the reviewer raising this point and believe that this could be an interesting future line of work. In many cases, partial equivalence between optimization formulations is not a well-defined or semantically meaningful concept. That is, in these settings, a formulation either **preserves the objective and feasible set**, or it does not, and thus does not have a direct connection to partial equivalence. Precisely identifying when partial equivalence makes sense and how it should be defined is of future interest. | Summary: This paper proposes a new method to assess whether two mathematical MILP formulations of a combinatorial optimization problem are equivalent. To this end, it proposes a formal notion of equivalence inspired by standard Karp reductions, creates a new dataset called EquivaFormulation for assessment, and compares the proposed method against 3 common existing approaches and a naive LLM-based approach. It finds that the proposed method, EquivaMap, works perfectly on the dataset, while all other baselines struggle in some, if not all, categories.
Claims And Evidence: The main claim, that EquivaMap is more effective at recognizing problem formulation equivalence, seems sufficiently supported on the specific classes of transformations considered in EquivaFormulation and in Table 2. It's not clear whether EquivaMap will also work equally well in natural and real settings, such as the motivating one involving what the authors refer to as an optimization copilot, or in scenarios where multiple transformations are stacked together. The evaluation does not assess these.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The way the authors formulate and pitch the notion of "Quasi-Karp Equivalence" seems to lack some precision (unless I misunderstood something). Overall, I like this formal way of defining when two problems are equivalent. However, the following is causing confusion:
* While a reduction is naturally a one-way map (e.g., every "simpler" problem in the class NP reduces to an NP-complete problem, but not vice versa), I was naturally expecting *equivalence* to be a symmetric property. I am unclear why the authors didn't define problem formulations alpha and alpha' to be equivalent if alpha quasi-Karp reduces to alpha' AND alpha' quasi-Karp reduces to alpha. In fact, this is precisely how Karp equivalence is defined (two NP-complete problems are equivalent because they both reduce to each other).
* I don't quite see how the 2nd and 3rd bullets are different. Isn't it the case that f is polynomial-time computable if and only if some (well-chosen) A computing f runs in polynomial time?
* In several places, the paper highlights the new notion as being *instance-specific*. However, I really don't see how this is the case, after reading the description in section 3.2 and even after reading the detailed prompt in Appendix A. It seems to me to be a map from one problem formulation to another problem formulation. Just like in standard Karp reductions, it's a map from variables of one formulation to variables of another formulation. What is instance-specific about it? Wouldn't the transformation function be identical for two different instances of, say, a traveling salesman problem? (I don't think this takes away anything from the findings, it's just confusing. E.g., the sentence "our method aims to evaluation the equivalence.... for a given instantiation of the problem" is confusing and doesn't seem to correctly represent what's being done)
* In several places, the paper highlights the new notion is a mapping from *solutions* to solutions, suggesting that a Karp reduction is not. But this isn't quite accurate --- a standard Karp mapping is precisely from an instance of a problem (e.g., network flow) to an instance of another problem (e.g., 3-SAT). It's always a mapping from variables of one problem to variables of the other problem, exactly like the proposed quasi-Karp mapping is. The only relevant difference is that the two problems considered are the same in this case, it's just that one has two distinct MILP formulations for it. But the mapping is still from variables of one to variables of the other.
Experimental Designs Or Analyses: Experiments seem justifiable. Results in Table 2 help clearly see which aspects of transformation (different rows) various methods struggle or excel at.
However, as noted above, this evaluation dataset is synthetic and designed specifically around individual transformations. In practice, I suspect one will find combinations of various transformations from one formulation to another. The current evaluation does not test EquivaMap in such compositional transformation scenarios (or on actual instances of transformations seen in the motivating optimization copilot research).
The "worst case" line in Table 2 is a bit confusing and not very helpful. I would suggest dropping it, possibly replacing it with an "average" line.
Supplementary Material: Yes, the prompts in Appendix A.
Relation To Broader Scientific Literature: To my limited knowledge of the area, the findings seem relevant, well-motivated, and novel. There are some concerns about clarity of the theoretical formalism and coverage of empirical findings, as discussed above.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: * page 6, line 303, LLM inference time being polynomial in the input length: Specific soft-quadratic time and linear space bounds are known for transformers, e.g., The Expressive Power of Transformers with Chain of Thought, ICLR-2024.
* page 2, line 70: the Karp citation is presumably for the seminal paper where Karp shows dozens of problems to be equally NP-complete. That work was done in 1972, not 2010!
* page 2, line 83: similarly, Cook's book was published earlier than 2011, around 2007 I believe. please double-check.
Questions For Authors: Please see some questions in the theoretical and empirical sections above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's recognition of both the conceptual novelty of quasi-Karp equivalence and the empirical strength of EquivaMap on the benchmarked transformations. Below, we address the reviewer’s questions individually.
---
>**C1**: “It's not clear whether EquivaMap will also work equally well in natural and real settings, … or in scenarios where multiple transformations are stacked together.”
**A1**: We appreciate the reviewer’s concern about the performance of EquivaMap on compositional or stacked transformations. To assess this, we designed a new set of experiments where we **combined three transformation types**—Add Slack Variables, Add Valid Inequalities, and Replace by Linear Combinations. In this case, we have 59 LPs + 115 MILPs. We found that EquivaMap achieved **100% accuracy** on these new experiments with compositional transformations. We will add these experimental results along with other stacked transformation experiments to our paper.
>**C2**: "... I am unclear why the authors didn't define problem formulations alpha and alpha' to be equivalent if alpha quasi-Karp reduces to alpha' AND alpha' quasi-Karp reduces to alpha."
**A2**: We appreciate the reviewer’s thoughtful observation! In classical complexity theory, equivalence is defined via a two-way reducibility. However, in our setting, we intentionally use a quasi-Karp (one-way) definition to better reflect the practical considerations of optimization modeling. As discussed in Section 3.2 (page 5), we relax the condition that a no-instance (which, in our setting, corresponds to an infeasible or suboptimal solution)
under one formulation needs to be mapped to a no-instance of the other. This distinction is important in settings where a MILP formulation may exclude some, but not all, optimal solutions to improve efficiency. A common example is the addition of **symmetry-breaking constraints**, which preserve the objective and feasible set semantics, but eliminate functionally equivalent solutions. Because such transformations are often introduced in practice, we believe that a one-way reduction is sufficient and more appropriate for capturing formulation equivalence in applied contexts. Hence the use of the “quasi-” prefix.
>**C3**: "I don't quite see how the 2nd and 3rd bullet are different. Isn't it the case that f is found to be polynomial if and only if A computing f runs in polynomial time?"
**A3**: Thank you for this careful observation! While it’s true that in many cases these two conditions collapse, we intentionally separate them to illustrate the difference: it is theoretically possible for $A$ to output a description of $f$ in polynomial time (e.g., a program that implements $f$), but for $f$ itself to take super-polynomial time to evaluate. For example, $A$ could construct a branch-and-bound solver as $f$—in which case $A$ runs in polynomial time, but $f$ may not.
>**C4**: "In several places, the paper highlights the new notion being instance-specific… What is instance-specific about it? Wouldn't the transformation function be identical to two different instances of, say, a traveling salesman problem?"
**A4**: As the reviewer has mentioned, the mapping $f$ discovered by EquivaMap is **not instance-specific**. We feed the LLM a LaTeX-style formulation, where parameters such as ```TotalPeople``` or ```CapacityLargeUnit``` are left symbolic. The LLM then infers a symbolic mapping **between the two formulations**.
However, the verification step in our algorithm is **instance-specific**: to check whether a proposed mapping preserves optimality, we instantiate the symbolic formulation with **concrete parameter values** and verify that the mapped solution is valid and optimal.
To conclude, **the mapping is over formulations, while the verification check is over instances**.
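This two-level design can be sketched in a few lines. The sketch below is purely illustrative, not the paper's implementation: `verify_mapping`, the constraint callables, and the tolerance are invented names standing in for the instance-specific verification step described above (apply the symbolic mapping to a concrete solution, then check feasibility and optimality in the target formulation).

```python
# Hypothetical sketch of the instance-level verification step: given an
# optimal solution of formulation A and a candidate variable mapping f
# (here, one callable per target variable), instantiate the target
# formulation with concrete values and check feasibility and optimality.

def verify_mapping(solution_a, mapping, constraints_b, objective_b,
                   opt_value_b, tol=1e-6):
    # Map the source solution into the target formulation's variables.
    solution_b = {var: f(solution_a) for var, f in mapping.items()}
    # Feasibility: every constraint must hold on the mapped point.
    if not all(c(solution_b) for c in constraints_b):
        return False
    # Optimality: the mapped objective must match the known optimum.
    return abs(objective_b(solution_b) - opt_value_b) <= tol

# Toy instance: formulation B rescales A's variable x by a factor of 2.
sol_a = {"x": 3.0}
mapping = {"y": lambda s: 2 * s["x"]}
constraints = [lambda s: s["y"] >= 0]
objective = lambda s: s["y"]
print(verify_mapping(sol_a, mapping, constraints, objective, 6.0))  # True
```

Note that the check requires no further solver calls: the optimal value of the target instance is assumed known, so verification reduces to constraint evaluation and one comparison.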
> **C5**: “In several places, the paper highlights the new notion is a mapping from solutions to solutions, … But the mapping is still from variables of one to variables of the other.”
**A5**: We agree with the reviewer's comment and will make it clear in our paper that we are not “suggesting that a Karp reduction is not.”
---
We also sincerely thank the reviewer for the Other Comments or Suggestions section. We will update our citations to address these comments in the future version of the paper. We greatly appreciate your close reading and thoughtful suggestions, which help us improve both the precision and presentation of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanations. They helped me better understand various choices from your perspective. Some, like using "quasi" to indicate that the equivalence is actually a one-sided reduction, still seem suboptimal to me, but I see your reasoning and you might also want to give it one more thought. In any case, it would be very valuable for many readers if you could include these clarifications in the paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the thoughtful feedback and the close reading of the paper. We are revising the draft to incorporate these clarification points and design details so that they are clearer to the broader audience. | Summary: The paper introduces EquivaMap, an LLM-based framework for automatically verifying equivalence between combinatorial optimization formulations (MILP). It defines a new theoretical criterion called quasi-Karp equivalence, enabling robust equivalence checks even under transformations like variable scaling or auxiliary variables. Additionally, the authors create EquivaFormulation, a benchmark dataset of equivalent formulations. Experiments show EquivaMap achieves high accuracy, significantly outperforming baseline methods.
Claims And Evidence: The claims appear reasonably supported by the designed experiments.
Methods And Evaluation Criteria: Yes, both the proposed quasi-Karp equivalence and the designed dataset look reasonable to me.
Theoretical Claims: There isn't any proof. The definitions look good to me.
Experimental Designs Or Analyses: I appreciate the current design of the experiments. The only extra experiment/discussion I would love to see is the running time of the EquivaMap algorithm: since the algorithm heavily relies on a capable solver to establish the robust equivalence mapping between problem formulations, the running time of such solvers and its relationship with the size of the input formulation should be taken into account.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: I envision this paper could be beneficial to the operations research.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is well-written, with clear mathematical definitions and illustrative examples that effectively convey the key ideas.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive and thoughtful review, and for appreciating the design of our experiments. We also appreciate your suggestion to include a runtime discussion of the EquivaMap algorithm.
Compared to execution accuracy, EquivaMap does not require additional MILP solving beyond the original problem instances. In particular, during the final verification step, we only evaluate whether the solution induced by the mapped variables satisfies the constraints in the target formulation and matches the known optimal objective value. This check is computationally lightweight and avoids any further calls to optimization solvers. In other words, once the MILPs are solved initially, **no further solving is needed** to verify a candidate mapping.
In practice, EquivaMap typically completes in **a few seconds per instance**, including both the LLM inference step and the verification step (typically very fast). We will add a more detailed runtime analysis in the future version of the paper. | Summary: This paper discusses how LLMs (+ NLP) can help with automatic checking of equivalences in the context of combinatorial optimization reductions.
Claims And Evidence: Yes, the experiments are convincing.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: Yes, the ones in the main body.
Supplementary Material: No.
Relation To Broader Scientific Literature: They explore a novel direction in the use of LLMs, whereby LLMs are used as a copilot in checking equivalence between instances that are inputs/outputs of reductions.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
The paper gives a novel connection between computational complexity and LLMs.
En route, the authors introduce a new kind of reductions, namely quasi-Karp reductions, which might be of independent interest.
Weaknesses:
Technically, the paper is not so deep :)
Other Comments Or Suggestions: None.
Questions For Authors: Page 4:
Since you are mentioning cutting planes, do you see any connections of your work to proof complexity, etc.?
Page 5:
Algorithm 1:
I think you should elaborate on the LLM prompt.
This step is a bit hand-wavy :)
Page 6:
Why is it a reasonable assumption that LLM inference is in poly-time?
Page 8:
I think that in the Discussion, you should elaborate more on future work directions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their positive and thoughtful comments, and for recognizing the novelty of our work:
> “The paper gives a novel connection between computational complexity and LLMs.”
Below, we address the specific questions and suggestions:
---
> **Q1**: "Since you are mentioning cutting planes, do you see any connections of your work to proof complexity, etc.?"
**A1**: We indeed view our work as a first step toward automated reasoning about proof complexity. If Quasi-Karp Equivalence can be applied to two instances of two different families of optimization problems, it might be viewed as a potential **heuristic** for checking if two optimization problems are of the same complexity. That said, we recognize that these connections to proof complexity remain speculative and require further investigation. We thank the reviewer for highlighting this promising line of inquiry.
> **Q2**: "Algorithm 1: I think you should elaborate on the LLM prompt. This step is a bit hand-wavy"
**A2**: Thank you for this suggestion! Currently, in Appendix A, we provide the exact prompt template generator used in all of our current experiments. We will include more details on this in our paper.
> **Q3**: "Why is it a reasonable assumption that LLM inference is in poly-time?"
**A3**: Thank you for raising this question. As noted in The Expressive Power of Transformers with Chain of Thought (ICLR 2024) as well as comments by Reviewer FhjQ, it is shown that “a polynomial number of steps [in terms of the input] turns transformers into strong reasoners.”
Moreover, in EquivaMap, we typically call the LLM **once per set of variables**, not per individual variable. Since the number of sets of variables is often **orders of magnitude smaller** than the total number of variables in an instance, this further supports the practicality and tractability of our approach. We will add these discussions in the paper.
> **Q4**: "I think that in the Discussion, you should elaborate more on future work directions."
**A4**: We agree that the future directions are rich in this space and worth expanding. In the final version, we will include more concrete directions, such as the potential usage of quasi-Karp equivalence to work as a heuristic for checking if two optimization problems are of the same complexity. | null | null | null | null |
Aggregation of Dependent Expert Distributions in Multimodal Variational Autoencoders | Accept (poster) | Summary: The authors propose to challenge the assumption of independence between unimodal experts in computing the joint posterior in multimodal VAEs. Therefore they propose the CoDE-VAE that uses a Bayesian approach to compute the joint posterior between unimodal experts, modelling the dependence between them. Experimental results show that their idea is effective and are positive, despite not significantly outperforming the most recent alternative approaches.
Claims And Evidence: - Empirical results are positive and show the effectiveness of the approach, despite not outperforming SOTA in the multimodal VAE literature.
- I think there seems to be some confusion in the paper about the concept of subsampling modalities in the ELBO, related to the limitations highlighted in [1]. The authors state "It is noteworthy that CODE-VAE does not rely on sub-sampling techniques, which have been shown to harm the performance of multimodal VAEs", but in the CoDE-VAE ELBO in eqn 3 subsampling actually happens. To see it, it is sufficient to notice that computing a given term of the sum in the ELBO requires reconstruction of all modalities given only a subset used for inference; hence, sub-sampling of modalities happens, and CoDE-VAE is also subject to the theoretical limitations outlined in [1], as also confirmed in the experimental results.
[1] Daunhawer et al On the limitations of multimodal VAEs, ICLR, 2022.
Methods And Evaluation Criteria: As outlined also below the chosen datasets to benchmark the approach are sensible, while already well-studied in the multimodal VAE literature. As for the proposed method, challenging the assumption of independence between unimodal experts in approximating joint posterior inference for multimodal VAEs is a valuable research direction. Moreover, the proposed method appears to be effective.
Theoretical Claims: The authors justify their approach as a Bayesian method to approximate the joint posterior assuming a dependence between unimodal experts. The derivations to be seem correct and back up their theoretical claims.
Experimental Designs Or Analyses: The datasets chosen for the experiments are fairly standard in the multimodal VAE literature, and existing models already achieve convincing results in these setups. While the comparisons on these datasets are valid, I think the authors could have picked a novel, more challenging dataset to outline the benefit of their proposed model. I think experiments on the chosen datasets are properly conducted, and the results are properly commented. While model performance does not surpass certain recent approaches (e.g. MMVAE+), the results are still somewhat positive and show that the suggested idea works. I'd strongly suggest the authors compare with a recent paper [2] that proves to outperform alternative multimodal VAEs. While the proposed model has the option to be equipped with diffusion decoders, which would make an unfair comparison in terms of generative quality with CoDE-VAE, the authors show that the proposed ELBO without diffusion decoders still improves over alternative multimodal VAEs. Hence it seems relevant to include it in the comparisons in this paper.
[2] Palumbo et al. Deep Generative Clustering with Multimodal Diffusion Variational Autoencoders, ICLR, 2024.
Supplementary Material: I reviewed some parts, including derivations qualitative results and metrics.
Relation To Broader Scientific Literature: The idea discussed in this paper fits nicely in the literature of multimodal VAEs as it explores the direction of modelling dependence between unimodal experts in computing the joint posterior, which was not explored thus far to my knowledge.
Essential References Not Discussed: Recent relevant work is not discussed. Specifically, [2]. As I mentioned above I strongly suggest the authors to at least discuss this paper in the related work, and advise also to include it in the experimental comparisons.
[2] Palumbo et al. Deep Generative Clustering with Multimodal Diffusion Variational Autoencoders, ICLR, 2024.
Other Strengths And Weaknesses: Weaknesses:
- Notation is sometimes confusing. E.g in section 3 the dimension $d$ appears at times as superscript and at times as subscript, even in the same equation ( $e^d_j = \mu^d_j - \theta_d^k$).
- Certain experimental comparisons could be more thorough. For instance, on PolyMNIST it would be more appropriate to assess the generative quality gap on all modalities (and possibly report the average performance), instead of only focusing on generating modality $m_0$.
Other Comments Or Suggestions: - I think clarity in the paper could be improved in section 3.
- I suggest that, when grid-searching hyperparameters, the authors make explicit (e.g. in the MNIST-SVHN-Text experiment) which hyperparameters achieve the best performance for each model, and hence are used to report the results. This info can also be left to the Appendix.
Questions For Authors: - Is the dependency between expert errors assumed to be constant across the D latent dimensions? Why do the authors make this assumption?
- Which beta values and latent space dimensions are chosen to get the results for the MMVAE+ model on the MNIST-SVHN-Text dataset, reported in the main text? It is not really clear to me, even after having a look at the Appendix. It seems somewhat strange that the model achieves relatively low performance on this dataset, keeping in mind the results on the PolyMNIST and CUB datasets.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your thoughtful and detailed comments. We will respond to your concerns and questions point by point.
## Confusion about the concept of sub-sampling modalities
Thank you for pointing this out. In the paper, we use the concept of sub-sampling to refer to ELBO sub-sampling and to the use of mixture distributions to approximate consensus distributions. We will make this clear in the revised version.
## Alternative Datasets
Based on your comment and that of Reviewer FV9Z, we provide results on the CELEB-A data. Due to limited time, we only considered a 32-dimensional latent space. For CoDE-VAE we assume $\rho=0.6$, a value we observed to perform well in other experiments. We obtained the following results
| | CoDE-VAE | MMVAE+ |
|---------------------------|-----------|-----------|
| Conditional FID | 92.11 (0.61) | 97.30 (0.40) |
| Unconditional FID | 87.41 (0.36) | 96.91 (0.42) |
| Conditional Coherence | 0.38 (0.001) | 0.46 (0.001) |
| Unconditional Coherence | 0.23 (0.003) | 0.31 (0.030) |
| Classification | 0.38 (0.066) | 0.37 (0.003) |
which may improve further by cross-validating $\rho$ in CoDE-VAE.
## Comparison to Deep Generative Clustering with Multimodal Diffusion Variational Autoencoders
Thank you for pointing out this paper, which we were not aware of. We acknowledge its importance in the field of multimodal VAEs. We will discuss it in the related work section and include the experimental comparison on PolyMNIST in the appendix of the revised paper, noting that the Clustering Multimodal VAE (CMVAE) model introduced in [2] is not fully comparable with our proposed CoDE-VAE model. The main focus of our research is a novel approach to estimating consensus distributions and learning the contribution of each ELBO subset, while the goal in CMVAE is to couple multimodal VAEs with clustering tasks by leveraging clustering structures in the latent space and to introduce diffusion decoders, which is certainly a novel and relevant line of research. CMVAE captures clustering structure using a mixture model as a prior, and we hypothesize that this flexible prior plays an important role in its performance on unconditional generative tasks. We will include this discussion and the relation between the methods in the updated manuscript.
## Notation and Clarity
Thank you for pointing this out. We will change the notation in the revised version to ensure consistency in the use of subscripts and superscripts, which will improve the clarity of Section 3.
## Thorough comparison on the generative quality gap for PolyMNIST
We agree that an average generative quality gap would provide a more robust comparison. However, the computational cost of such an experiment is significant: assessing the generative quality of each of the 5 modalities as a function of the number of input modalities requires training each model at least 12*3=36 times (considering 3 different runs to report standard deviations). This would require 252 runs in total, as there are 7 different models in the evaluation of the quality gap (without considering the unimodal VAEs). Given that our research includes 3 datasets, 6 benchmark models, and several ablation experiments, we leave such a robust comparison for future research.
## Add grid-search hyperparameters
Thank you for pointing this out. We will add to the Appendix the hyperparameters for the grid search and their optimal values, including the ones for the new experiments on the CELEB-A data.
## Is the dependency between expert errors assumed to be constant across the D latent dimensions?
Yes, for simplicity, we assume a common $\rho$ parameter for all dimensions. However, CoDE is a flexible approach that does not impose any restriction on the way $\Sigma_d$ is specified. We leave it to future research to explore whether model performance can be improved by using different correlation values for different dimensions.
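To make the role of $\rho$ concrete, here is a minimal sketch of one standard way to fuse two dependent Gaussian expert means: the best linear unbiased combination of correlated unbiased estimators. This is an illustrative assumption on our part; the paper's actual CoDE posterior derivation may differ in its details.

```python
import numpy as np

def consensus(mu, sigma, rho):
    """Fuse two unbiased expert estimates whose errors have correlation rho."""
    # Error covariance between the two experts' estimates.
    Sigma = np.array([[sigma[0]**2,             rho * sigma[0] * sigma[1]],
                      [rho * sigma[0] * sigma[1], sigma[1]**2]])
    P = np.linalg.inv(Sigma)          # joint precision of the expert errors
    ones = np.ones(2)
    var = 1.0 / (ones @ P @ ones)     # consensus variance
    mean = var * (ones @ P @ np.asarray(mu))  # precision-weighted mean
    return mean, var

# With rho = 0 this reduces to the usual PoE-style precision weighting.
m, v = consensus([1.0, 3.0], [1.0, 1.0], 0.0)
print(m, v)  # 2.0 0.5
```

Note that with $\rho>0$ the consensus variance is larger than in the independent case: positively correlated experts carry redundant information, which is exactly the effect that ignoring dependence would miss.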
## Beta values and latent space for MMVAE+ on the MNIST-SVHN-Text data.
For all models in Section 4.1, we cross-validate $\beta$ using the grid $[0.1,1,5,10,15,20]$. All models but MMVAE+ assume that the latent space has 20 dimensions, as in previous research. To select the dimensionality of the common and modality-specific (MS) variables in MMVAE+, we follow a similar approach as in the original paper, where the authors choose the dimensions of these two to be equal to the dimension of the latent space in MMVAE (a model without MS variables) divided by the number of modalities. Therefore, MMVAE+ assumes that both common and MS variables have 7 dimensions. These details are explained in Appendix D3. We also tested using 10 dimensions for both common and MS variables, so the decoders in the MMVAE+ model would generate modalities based on 20 dimensions, just as the other models do. We did not observe significant differences.
Claims And Evidence: The paper provides clear empirical evidence to support the claims about the CoDE-VAE’s superior performance. The experimental results on datasets such as MNIST-SVHN-Text, PolyMNIST, and CUB support the assertion that CoDE-VAE balances generative coherence and quality better than existing models. Additionally, the paper argues that CoDE’s consideration of expert dependence leads to better log-likelihood estimations and reduced generative quality gaps compared to models relying on modality sub-sampling. However, the discussion could benefit from more in-depth comparisons in specific edge cases where other models might outperform CoDE-VAE.
Methods And Evaluation Criteria: The methodology behind CoDE is sound, introducing a principled Bayesian approach to account for expert dependence. The CoDE-VAE model builds on existing multimodal VAEs, addressing key challenges like missing modalities and the imbalance in the contribution of different ELBO terms. The evaluation criteria, including generative coherence, log-likelihood estimation, and classification accuracy, are appropriate for comparing multimodal models. However, the explanation of how the model behaves in extreme cases (e.g., when only one modality is available) is not fully addressed.
Theoretical Claims: The theoretical claims regarding the new ELBO formulation and the aggregation method using CoDE are well-supported by the paper’s derivations and lemmas. The paper provides a solid mathematical foundation for the method, with proofs of key results, such as the posterior distribution and consensus distributions. There are no apparent issues with the correctness of these proofs.
Experimental Designs Or Analyses: The experimental design is robust, with comprehensive comparisons to multiple baseline models. The use of multiple datasets (MNIST-SVHN-Text, PolyMNIST, and CUB) provides a good cross-section of real-world multimodal problems. However, more detailed ablation studies or analyses of edge cases where the assumptions about expert dependence may not hold could further strengthen the paper.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The paper does a good job of relating its contributions to the broader literature on multimodal VAEs and expert aggregation methods. The work is clearly motivated by existing challenges in multimodal learning, such as missing modalities and independent expert assumptions. The comparison with PoE and MoE methods, along with references to key multimodal VAE papers, establishes the novelty of the approach.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The CoDE-VAE method is a novel and theoretically sound approach that addresses the challenge of dependent expert distributions.
2. The experimental results are convincing, showing that CoDE-VAE outperforms existing methods in key areas like generative coherence and log-likelihood estimation.
Weaknesses:
1. Some parts of the experimental setup could be explained more clearly, particularly regarding the optimization process for learning the contribution of each ELBO term.
2. More detailed comparisons with edge cases or failures of the model would help to solidify the generalizability of the results.
Other Comments Or Suggestions: 1. Consider adding more analysis on how CoDE-VAE behaves in cases with missing data or when some modalities are not available.
2. A clearer distinction between the CoDE-VAE approach and similar models would benefit readers unfamiliar with multimodal VAEs.
Questions For Authors: 1. How does the performance of CoDE-VAE change when there is a significant imbalance between the available modalities (e.g., when one modality is much more informative than the others)?
2. Could you elaborate on how the CoDE method handles scenarios where the assumption of expert dependence does not hold (e.g., in highly independent modalities)?
3. The paper mentions that CoDE-VAE reaches generative quality similar to unimodal VAEs in certain cases—could you provide more concrete examples of these cases, and how the model behaves with increasing modality count?
4. What are the computational complexities of CoDE-VAE compared to existing models like PoE and MoE, particularly as the number of modalities increases?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and detailed comments. We will address your concerns and questions point by point.
## Experiments and analysis on edge cases
We agree that analyzing CoDE-VAE on edge cases helps to understand its behavior, and makes our research more robust. We believe that the experiments in Section 4.2 Generative quality gap (Figure 3), in Appendix D.4. PolyMNIST (Figure 13), and the classification results in Figures 8 and 10 provide a good indication that the performance of CoDE-VAE improves with the number of modalities and with the cardinality of the subset on which consensus distributions are conditioned on.
To address the scenario where the assumption of dependent experts may not hold, we trained CoDE-VAE on PolyMNIST using modality $m_1$ as follows. We apply 3 different levels of noise to modality $m_1$: 0%, 25%, and 95%. We then pair each noisy version with the original modality to obtain a bi-modal dataset. For each of these datasets, we train CoDE-VAE assuming $\rho=0$ and $\rho=0.9$, and generate the non-noisy version of $m_1$. When CoDE-VAE is trained on the data with 0% noise, both modalities are identical and we expect $\rho=0.9$ to yield relatively high generative quality. Conversely, when trained on the data with 95% noise, the modalities are uncorrelated and $\rho=0$ is expected to yield relatively high generative quality. We obtain the following average FID scores ([link1](https://anonymous.4open.science/r/codevae_icml-27EA/), [backup_link](https://anonymfile.com/k0W28/uncorrelated-experts.pdf)):
| | 0% | 25% | 95% |
|---------------|-----------|-----------|-----------|
| $\rho=0$ | 29.0 | 31.27 | 48.22 |
| $\rho=0.9$ | 26.12 | 29.90 | 53.68 |
showing that CoDE-VAE correctly captures the dependency between experts distributions through the $\rho$ parameter.
## CoDE-VAE in cases with missing data or when modalities are not available.
The evaluation setup in our research follows the standard method in multimodal VAEs, where all possible combinations of missing modalities are evaluated at test time (Appendix B). Handling missing data is not trivial, as we need to estimate consensus distributions: to estimate $q(z|x_1,x_2)$, any aggregation method requires the same number of observations of $x_1$ and $x_2$. This problem could be overcome by using only complete pairs $(x_1,x_2)$ and re-weighting the ELBO terms with fewer samples, which could be an interesting direction to pursue in future work.
## CoDE-VAE when one modality is much more informative?
CoDE-VAE learns the contribution of each k-th ELBO term to the optimization, balancing the importance of relatively more informative modalities. The empirical results of Section 4.4 show that the text modality in MNIST-SVHN-Text is relatively more important to the optimization of the ELBO, as shown by the weight learned for the subset containing that modality. This result seems reasonable, as there is more noise in the MNIST and SVHN modalities. The ablation experiments of Section 4.5 show that CoDE-VAE achieves higher performance (coherence and FID) when the contribution of each subset to the optimization, and hence of each modality, is learned.
## CoDE where the assumption of expert dependence does not hold?
CoDE is a flexible approach that does not impose any restriction on the way $\Sigma_d$ is specified as long as it is invertible. For independent modalities, it should be enough to use $\rho=0$ (see answer on edge cases).
## CoDE-VAE generative quality. Could you provide more concrete examples?
We recognize that the wording of this claim could be improved and made more concrete. We have replaced the original sentence with *"CoDE-VAE minimizes the generative quality gap as the number of modalities increases, achieving quality similar to unimodal VAEs measured by unconditional FID scores."*, which corresponds to the experiments in Section 4.2 Generative quality gap. Furthermore, Figure 3 shows that CoDE-VAE achieves higher generative performance as the number of modalities used for model training increases, something most of the benchmark models are not able to achieve. We have [added figures](https://anonymous.4open.science/r/codevae_icml-27EA/) ([backup](https://anonymfile.com/rN124/polymnist-gen.pdf)) that show the generated modality $m_0$ as a function of input modalities, as well as samples generated by the unimodal VAE for qualitative comparison.
## Computational complexities of CoDE-VAE
We agree that this is an important aspect to be considered. Therefore, in Appendix C we mention that CoDE-VAE has a relatively high computational cost $\mathcal{O}(2^M-1)$, where $M$ is the number of modalities. However, given that the size of the matrix $\Sigma_d$ depends only on $M$ (see the question from reviewer FV9z), model training is feasible on a single GPU even for 5-modality datasets.
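For intuition, the $\mathcal{O}(2^M-1)$ term is just the count of non-empty modality subsets, each of which would contribute one ELBO term; a minimal sketch (the modality names are illustrative placeholders, not the paper's identifiers):

```python
from itertools import combinations

def modality_subsets(modalities):
    """Enumerate all non-empty subsets of modalities; each would
    contribute one ELBO term, hence 2**M - 1 terms for M modalities."""
    subsets = []
    for k in range(1, len(modalities) + 1):
        subsets.extend(combinations(modalities, k))
    return subsets

subs = modality_subsets(["mnist", "svhn", "text"])
print(len(subs))  # 7 == 2**3 - 1
```

With $M=5$ (PolyMNIST) this gives 31 terms, which is why training remains feasible on a single GPU.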
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal is quite professional and addresses some of my concerns successfully. I will be thinking about editing my initial review and rating after carefully going through other rebuttal contents to other reviewers (but will not require any additional details or raise questions from/to authors).
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. We are pleased to have been able to address your concerns.
---
Summary: This paper introduces the Consensus of Dependent Experts (CoDE) in the context of multimodal learning with Variational Autoencoders (VAEs). Current approaches for this task, such as (i) the product of experts or (ii) the mixture of experts, assume cross-modal independence, which is restrictive. Towards this end, the current work proposes a novel Evidence Lower Bound (ELBO) that estimates the joint likelihood by learning the contribution of each modality. The proposed method can strike a balance between generative coherence and generative quality. Empirical evaluations are conducted on several datasets.
## Update after rebuttal
Convinced by the authors' responses to my questions. Hence, raising my score.
Claims And Evidence: Generally, the claims make sense. However, the chief concern with the claims is:
1. The paper does not show how accurately the ELBO is minimized for the different datasets. This is an important shortcoming of the present work.
Methods And Evaluation Criteria: While overall the method is intuitive, the chief concern is as follows:
1. L201, Col 2: Estimating the off-diagonal elements in $\Sigma_d$ in the forward pass could be computationally expensive for high-dimensional scenarios.
2. Also, how does the model deal with cases where $\Sigma_d$ is not full rank?
Theoretical Claims: Generally, the theoretical claims seem accurate.
Experimental Designs Or Analyses: Overall, the experimental evaluation is pretty broad but has the following shortcomings:
1. Some of the more complex real-world image datasets, CELEB-A [a], CELEB-HQ [b] have not been experimented with.
2. Generation Quality (as measured by FID scores) and Classification accuracy are not the best.
References:
[a] Liu, Z., Luo, P., Wang, X. and Tang, X., 2018. Large-scale celebfaces attributes (celeba) dataset. Retrieved August, 15(2018), p.11.
[b] Karras, T., Aila, T., Laine, S. and Lehtinen, J., 2018, February. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In International Conference on Learning Representations.
Supplementary Material: Yes, I reviewed the entire supplementary material.
Relation To Broader Scientific Literature: The idea of combining multimodal distributions relates to prior works that combine hidden Markov model distributions (Brown & Hinton, 2001), image synthesis that combines modalities and generates images using generative adversarial networks (Huang et al., 2022), large language models for comparative assessment of texts (Liusie et al., 2024), early-exit ensembles (Allingham & Nalisnick, 2022), and diffusion models that aggregate distillation of diffusion policies (Zhou et al., 2024).
Essential References Not Discussed: Most important related works have been discussed.
Other Strengths And Weaknesses: Other strengths:
1. The proposed model can deal with scenarios when one or more modalities are missing.
2. The current formulation allows for the estimation of uncertainties of each of the experts.
Other weaknesses:
See previous sections.
Other Comments Or Suggestions: The following are some additional comments:
1. L98, Col 1: "....method, the derivation..." -> "....method, for the derivation..."
2. L376, Col 1: "...significant..." -> "...significantly..."
Questions For Authors: The following are some questions for the authors:
1. L89, Col 1: Does the proposed approach really "minimize the generated quality as the number of modalities increase". This seems counter-intuitive. Or is this a typo?
2. What would happen if instead of a Categorical distribution, a softmax distribution is used to weigh the experts?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: Thank you for your careful and comprehensive comments. We will address your concerns and questions, point by point:
## How accurately is ELBO minimized:
We are not completely sure if we follow your concern. If the comment refers to how the ELBO is maximized (the loss), we train the CoDE-VAE model until the ELBO converges, confirmed by visual inspection. [These plots](https://anonymous.4open.science/r/codevae_icml-27EA/) ([backup](https://anonymfile.com/LNara/elbo-cub.png) [backup](https://anonymfile.com/Vxpdx/elbo-mmnist.png) [backup](https://anonymfile.com/8p9jO/elbo-mst.png)) show the convergence of the ELBO. If your concern is about how close the ELBO is to the intractable marginal log-likelihood, we calculate log-likelihoods on the test sets using importance sampling (shown in Figures 2 and 4 for all models). Please let us know if we are misunderstanding your concern.
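For readers unfamiliar with the importance-sampling estimate mentioned here, the generic recipe (not the authors' exact implementation) averages importance weights in log space with a log-mean-exp; a minimal sketch:

```python
import math

def log_mean_exp(log_ws):
    """Numerically stable log((1/K) * sum_k exp(log_w_k)): the
    importance-sampled estimate of log p(x), where each log-weight is
    log_w_k = log p(x, z_k) - log q(z_k | x) for z_k ~ q(z | x)."""
    m = max(log_ws)
    return m + math.log(sum(math.exp(w - m) for w in log_ws)) - math.log(len(log_ws))

# Sanity check: identical log-weights are recovered exactly.
print(log_mean_exp([-10.0, -10.0, -10.0]))  # -10.0
```

Subtracting the maximum before exponentiating keeps the estimate stable even for very negative log-weights, which is why this form is standard for test-set likelihood estimation in VAEs.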
## $\Sigma_d^{-1}$ costly in high-dimensional data.
$\Sigma_d$ is not a sample covariance matrix, and its size depends on the number of expert distributions assessing consensus distributions (CDs). So it is bounded by the number of modalities $M$, which is typically small, and only one CD is conditioned on all modalities. In our research the largest $M$ is 5 (PolyMNIST). Therefore, the computational cost of finding the inverse of $\Sigma_d$ is affordable.
## $\Sigma_d$ if it is not full rank.
$\Sigma_d$ is guaranteed to be full rank by construction, as $\sigma_{i,j}>0$ for all $i,j$, where $\sigma_{i,i}=\sigma^2_i$. To see this, we need to show that the quadratic form $\beta^T\Sigma_d\beta = 0$ is only satisfied by the zero vector $\beta$. Let $\kappa$ be the smallest $\sigma_{i,j}$ value, which is positive by construction. Therefore, $\sum_i \sum_j \beta_i \sigma_{i,j} \beta_j > \kappa \sum_i \sum_j \beta_i \beta_j$. Since $\kappa>0$, the only solution that satisfies $\kappa \sum_i \sum_j \beta_i \beta_j=0$ is the zero vector $\beta$. We will add this discussion in the revised version.
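One quick way to sanity-check full-rankness numerically is a Cholesky factorization, which succeeds exactly when the matrix is symmetric positive definite. The sketch below assumes an equicorrelation-style parametrization $\sigma_{i,j}=\rho\,\sigma_i\sigma_j$ with the $\rho=0.6$ value mentioned elsewhere in the rebuttal; the exact construction of $\Sigma_d$ is not spelled out here, so this is illustrative only:

```python
import math

def cholesky(A):
    """Plain Cholesky factorization; it completes with a strictly
    positive diagonal (no math domain error) iff A is symmetric
    positive definite, i.e., full rank."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)  # fails if not PD
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

# Assumed parametrization (our reading, not spelled out in the rebuttal):
# sigma_{i,i} = sigma_i**2 and sigma_{i,j} = rho * sigma_i * sigma_j.
sigma, rho = [1.0, 0.8, 1.2], 0.6
S = [[sigma[i] * sigma[j] * (1.0 if i == j else rho) for j in range(3)]
     for i in range(3)]
L = cholesky(S)
print(all(L[i][i] > 0 for i in range(3)))  # True -> positive definite
```

Under this assumed parametrization with $|\rho|<1$, the factorization succeeds, consistent with the full-rank claim.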
## More complex datasets
We added a new section in the appendix with experiments on CELEB-A for CoDE-VAE and MMVAE+, which is the model that stands out in the other experiments. Due to the limited time, we evaluate both models only for $\beta=1$, assuming a latent space with 32 dimensions. For CoDE-VAE we assume $\rho=0.6$, a value we observed to perform consistently well in other experiments. We obtained the following results:
| | CoDE-VAE | MMVAE+ |
|---------------------------|-----------|-----------|
| Conditional FID | 92.11 (0.61) | 97.30 (0.40) |
| Unconditional FID | 87.41 (0.36) | 96.91 (0.42) |
| Conditional Coherence | 0.38 (0.001) | 0.46 (0.001) |
| Unconditional Coherence | 0.23 (0.003) | 0.31 (0.030) |
| Classification | 0.38 (0.066) | 0.37 (0.003) |
These results may improve further by cross-validating $\rho$ in CoDE-VAE.
## Generation Quality and Classification accuracy
Multimodal VAEs trade off generative quality against generative coherence [1]. The experimental setup in our research shows that CoDE-VAE performs as well as or better than SOTA multimodal VAEs in terms of balancing the trade-off between generative coherence and generative quality. When we assess in isolation whether multimodal VAEs are able to improve generative quality as the number of modalities increases, CoDE-VAE clearly shows higher performance (Figure 3). It is possible to add modality-specific (MS) latent variables to CoDE-VAE to further improve generative quality, which requires a careful design of the number of dimensions in the common and MS variables to avoid the shortcut problem [1]. When it comes to the classification results, Figures 8 and 10 in the appendix show that models using the mixture of experts are not able to achieve higher classification accuracy as latent representations are learned from subsets with more modalities. On the other hand, CoDE-VAE ranks 1st and 3rd, while balancing the trade-off between generative quality and generative coherence at the same time.
[1] Palumbo et al. Enhancing the Generative Quality of Multimodal VAEs Without Compromises, ICLR, 2023.
## Typos and L89, Col 1
You are correct that there is a typo in L89, Col 1. The sentence should read *"Furthermore, CoDE-VAE minimizes the generative quality gap as the number of modalities increases..."*, referring to the results in Figure 3. We will fix this in our revised version of the paper.
## Softmax instead of categorical
We assume that you are referring to the Gumbel-Softmax distribution, which is useful when we need to sample and backpropagate at the same time. As our main interest is learning the $\pi$ parameters, we do not expect significantly different behavior from using the Gumbel-Softmax distribution. The main concern when using a different distribution is whether the entropy term in the ELBO of the CoDE-VAE model can still be evaluated, ideally in closed form.
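For concreteness, a minimal sketch of the Gumbel-Softmax relaxation under discussion (the function name and temperature are ours, not from the paper):

```python
import math
import random

def gumbel_softmax(log_pi, tau=0.5):
    """One relaxed one-hot sample: softmax((log_pi + g) / tau) with
    g_i ~ Gumbel(0, 1). As tau -> 0 this approaches a hard categorical
    sample while remaining differentiable in log_pi."""
    g = [-math.log(-math.log(random.random())) for _ in log_pi]
    z = [(lp + gi) / tau for lp, gi in zip(log_pi, g)]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

random.seed(0)
sample = gumbel_softmax([math.log(0.7), math.log(0.2), math.log(0.1)])
print(round(sum(sample), 6))  # 1.0 -- a valid relaxed one-hot vector
```

Unlike a hard categorical draw, the output is a point on the simplex, which is what allows backpropagation through the sampling step.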
---
Rebuttal Comment 1.1:
Comment: I am grateful to the authors for addressing my concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. We're glad we were able to address your concerns and hope you'll consider reviewing your score based on our rebuttal.
---
Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective | Accept (poster)
Summary: The paper proposes PromptQuine, an automated prompt optimization strategy that prunes a given prompt using evolutionary search to improve the performance at a given task. The method outperforms existing methods when validated on classification, multi-choice question answering and reasoning datasets across a wide range of models. Moreover, it is more efficient than previously proposed methods.
Claims And Evidence: PromptQuine claims to outperform all other prompt optimization methods. The considered models and datasets are extensive enough. However, some important baselines like gradient-based methods (Autoprompt or other variants like GCG) are missing. Moreover, the method is said to be efficient but it takes much more time to run compared to the greedy pruning baseline which has similar performance.
Methods And Evaluation Criteria: The method is evaluated on a wide range of datasets using models of varying sizes (from 350M to 70B parameters) and architectures (encoders and decoders).
Theoretical Claims: None
Experimental Designs Or Analyses: The baselines are not well-tuned (see questions).
The 1-shot ICL setting is misleading. You are still using multiple examples for fitness estimation, which effectively extracts information from these examples. If multiple examples are used for PromptQuine, Best-of-N selection should be used for manual prompts, TAPruning or greedy pruning for a fair comparison.
The analysis section is not very substantial. Apart from the task label and signal words, there is no quantitative analysis of the kind of tokens that are pruned by the algorithm. This might help us better understand what kinds of tokens models are more sensitive to.
Supplementary Material: The code in the supplementary material is very hard to parse as some files contain more than 11 000 lines.
Relation To Broader Scientific Literature: The paper proposes a prompt search technique. It is a prompt engineering method that seeks to maximize performance without modifying the model weights. It is also related to feature attribution and how LMs respond to unnatural language.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
- The paper contains a detailed appendix explaining in great detail all the experiments.
Weaknesses:
- TAPruning seems to be a weaker version of greedy pruning, which is not constrained by the order of the tokens to be pruned. The significance of the proposed method is rather limited, as its performance is still very close to the baseline while being far less efficient (as shown in Table 10).
Concerning clarity, some parts of the paper like section 5.2 are rather hard to read and seem detached from the rest of the paper.
Other Comments Or Suggestions: None
Questions For Authors: 1- What is the 4-shot performance of TAPruning? It is not reported in Table 2.
2- What is the performance of greedily pruning the tokens? (no left-to-right pruning order as in TAPruning but select the best position to prune each time) Please also include the 4-shot performance of this method.
3 - What is the performance of the baselines when using the same number of examples as PromptQuine? For example, best-of-n selection could be used with manual prompting and greedy pruning.
4 - How does a gradient-based method (like Autoprompt or other varians like GCG) compare to PromptQuine?
5 - Why don't you include the latest approaches (like gradient-based ones) for jailbreaking LLMs in your experiments? What is the main advantage of PromptQuine compared to these methods? PromptQuine relies on steering vectors which require access to the language model.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We greatly appreciate Reviewer e5jB for suggestions on our experimental setups. We are delighted that you acknowledge the **richness in detail** of our study. We address your concerns below:
Anonymous link (AL) for several tables: https://anonymous.4open.science/r/ughj/e5jB/README.md
>Recap: Claims & Objective
We argue we didn't claim *Quine outperforms all others*: in the abstract, we use "Gibberish always matches or surpasses"; in the introduction, we use "Pruning can near SOTA"; and in each section, we present results objectively. **Instead**, our goal is to deliver the **insight that simple ICL pruning** can be on par with work that uses various external information, e.g., +tokens, countering the common view and contributing to AI emergence from constraints [1,2]. *Quine/TAP are just what we strive to improve under pruning (we pursue knowledge over SOTA)*. Also, comparing search algorithms solely by numbers is limiting, as scalable methods, e.g., ES, can further improve on the same pruning landscape. We'll thus add a figure for sample efficiency, comparing TA/ES/greedy.
>Models from 350M
We started from GPT2, 125M (Table 10).
>Not tuned baselines?
Pruning results in Table 2 are reported w/ varied ICL initializations, where ICL baselines are direct comparisons. We can't perform BoN across seeds.
* BoN for TAPruning/"Greedy": We **stated in Algo 2, which used held-out valid for BoN (same samples)**.
* BoN for ICL: The only setup available is to construct exemplars using samples from held-out valid (e.g.,200). It costs many computes(N=51^4*4!=16568064 for 1-shot agnews, taking **4 yrs for a 1-shot dataset BoN w/ vLLM**). We are unable to support so. That is also why all existing work by validation BoN [3,4+] don't do such enumeration and report ICL as us. We follow such convention. Your request requires other principled algos, beyond our work. We show Bo10 under limited samples in AL.
>gradient baselines?(Gb)
Good questions! Let's clarify.
* White-box (wb) jailbreaking: Using steering vector (SV) does **NOT** imply wb is required. Sorry for ambiguity. We indeed view it hard for Pruning (line362-sparse, line365-potentials), thus we explore w/ a small-scale priming study for dense signals w/ tools from interpretability(SV). We'll enhance paper: show black-box/bbx results first and explore SV. We provide such results in AL README(+GCG result). Besides, we plan to explore further under varied ICL setups in paper (see reply to 8j6A), for rich picture of pruning.
* Other tasks: bbx is enough, and Gb is empirically weak under FSL[6,7].
>Not substantial Analysis
We agree the insight is intriguing for analysis. Section 6 presents label words in ICL, which [8,9+] address exclusively in full papers. Pruned-token analysis remains hard, with the only viable hypothesis (ours) being whether function words are pruned more often—an insight we deemed limited, as noted in the abstract and intro, let alone by human intuition. Yet, we do plan to add studies for ICL stabilization, inspired by KfU1. Given current density, we leave others for future work.
>Code Files
Apologies for the tight schedule. We clearly state this in README, which we'll release+refactor soon or upon request.
>TAPruning weaker "greedy pruning"?
*We respectfully disagree*. Greedy search may refer to steepest-ascent hill climbing (SAHC), which has limitations: 1. The search landscape can be multimodal and deceptive (Section 4.2), where SAHC can stagnate while TA escapes with a speed-up [10]. 2. Rigorously, TA reaches solution regions that SAHC won't touch.
**Case where greedy stagnates entirely**: See AL traces.csv. SAHC is weaker than TA for no progress on this prompt, limiting its generality. We provide new greedy/TA results in AL, where SAHC takes *days* for search.
>ES no significance?
*We respectfully disagree*: ES is indeed a pioneering study for handling multimodal landscapes. We stated this in line 250, agreed by other reviewers.
* Efficiency: TAPruning is indeed our algo. In comparison, Quine is on par with published work in terms of runtime efficiency (Table 10). Also, it's the first token-level search that can optimize in minutes for some tasks (line 366). While TAPruning dominates runtime on one GPU, ES has the potential to be parallelized [11] (e.g., reproduction), largely reducing runtime (e.g., line 90). We'll make our "better runtime" claim clear in Table 10.
* Performance: We argue there are consistent improvements over TA across models in nearly all tables, up to 12%, which is large when converted to #corrects. Please also refer to TAPruning 4-shot in the AL README, where TA/Greedy (e.g., 11% lower) are all weaker than ES. We'll put greedy in the paper to strengthen the case for ES under broad open-endedness/deception [10].
[1] Collective Intelligence for DL [2] Open-Endedness is Essential for ASI [3] AutoPrompt [4] GCG [5] RLPrompt [6] TEMPERA [7] ICL Learns Label Relationships yet not.. [8] Larger LMs Do ICL Differently [9] Abandon Objectives [10] Why Greatness cannot be planned [11] ES as a Scalable Alternative to RL
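To make the SAHC behavior referenced in this rebuttal concrete, here is a toy sketch of steepest-ascent pruning over a binary keep/prune mask; the fitness function is a stand-in for held-out accuracy, not the paper's LLM evaluation:

```python
def steepest_ascent(mask, fitness):
    """Steepest-ascent hill climbing (SAHC): each round, evaluate
    pruning every remaining token, commit only the single best flip,
    and stop at the first local optimum."""
    improved = True
    while improved:
        improved = False
        best, best_f = None, fitness(mask)
        for i, bit in enumerate(mask):
            if bit:
                cand = mask[:i] + [0] + mask[i + 1:]
                f = fitness(cand)
                if f > best_f:
                    best, best_f, improved = cand, f, True
        if improved:
            mask = best
    return mask

# Toy fitness standing in for held-out accuracy: reward pruning
# tokens at even positions (purely illustrative).
toy = lambda m: sum(1 for i, b in enumerate(m) if b == 0 and i % 2 == 0)
print(steepest_ascent([1] * 6, toy))  # [0, 1, 0, 1, 0, 1]
```

Each SAHC round costs one fitness evaluation per remaining token and halts at the first local optimum, which illustrates why it can stagnate on multimodal landscapes where TA-style acceptance can keep moving.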
---
Rebuttal Comment 1.1:
Comment: All the questions were answered by the authors. They also addressed all the concerns raised in the review. I will update the score accordingly. The community would benefit from the contributions in the paper but its clarity ought to be extensively improved. Some important aspects of the algorithms such as efficiency are not emphasized well enough in the paper. Some terms are misleading and underspecified.
---
Reply to Comment 1.1.1:
Comment: Many thanks for the updated score, and thanks for agreeing on the potential impact of this work's findings for the overall community. As we have responded to almost all reviewers, we will **definitely improve the paper presentation and expand some studies** inspired by all reviewers for **a significantly enhanced paper**. Thanks for the invaluable time on our work!
---
Summary: The paper introduces a novel prompt design paradigm that challenges the conventional approach of using well-crafted natural language prompts for large language models. The authors demonstrate that pruning random demonstrations into seemingly incoherent "gibberish" can significantly improve performance across a variety of tasks, including classification, multi-choice question answering, generation, and reasoning.
This effect is shown to generalize across different LLMs, regardless of their alignment. The authors propose a self-discover prompt optimization framework called PromptQuine, which employs evolutionary search to automatically identify effective pruning strategies.
The paper provides extensive empirical evidence supporting these findings and discusses their implications for understanding in-context learning and prompt design in LLMs.
## update after rebuttal
I thank the authors for the rebuttal. I think this is an interesting paper, but I am keeping my score because the limited mechanistic analysis (Section 6) leaves the "why" of pruning’s success underexplored.
Claims And Evidence: The claims in the paper are generally well-supported by the empirical results presented in both the main text and the appendix. The authors assert that pruning random demonstrations into "gibberish" enhances LLM performance across diverse tasks, and they provide results in tables (e.g., Table 1, Table 2) showing improvements over baselines like RLPrompt across multiple datasets and models.
However, the claim that pruned prompts work better lacks a comprehensive mechanistic explanation. While section 6 offers some analysis on the role of label words, the evidence is limited and does not fully elucidate why pruning enhances performance.
Methods And Evaluation Criteria: The proposed method, PromptQuine, utilizes an evolutionary search framework based on genetic algorithms to optimize prompt pruning, which is a sensible approach for the discrete optimization problem of finding effective prompt subsequences. This choice aligns with the problem's combinatorial nature, where gradient-based methods are less applicable.
Evaluation criteria include standard metrics such as accuracy for classification and math reasoning.
Theoretical Claims: The paper does not present theoretical claims or formal proofs. It is primarily an empirical study focused on demonstrating the efficacy of the pruning-based approach.
Experimental Designs Or Analyses: The experimental designs appear sound and robust. The authors evaluate their method across many tasks and LLMs, using datasets detailed in Appendix C (Table 6) and comparing against baselines like RLPrompt, LLMLingua, and EvoPrompt.
Supplementary Material: I reviewed the appendix, which provides additional details and results supporting the main claims.
Relation To Broader Scientific Literature: The paper situates its contributions within the literature on prompt optimization and in-context learning (ICL).
Essential References Not Discussed: The paper cites relevant prior work adequately.
Other Strengths And Weaknesses: ### Strengths
- Originality: The idea of pruning demonstrations into "gibberish" seems novel, creatively challenging the norm of natural language prompts and offering a fresh perspective on ICL optimization.
- Empirical Rigor: Extensive experiments across tasks and models provide strong evidence of the method’s effectiveness and generalizability.
- Significance: The findings could inspire new directions in prompt design and ICL stabilization, as noted in the conclusions.
### Weaknesses
- Clarity: The paper is dense, with many details relegated to the appendix.
- Analysis Depth: The limited mechanistic analysis (Section 6) leaves the "why" of pruning’s success underexplored.
Other Comments Or Suggestions: Typos: In the abstract, "let alone human intuitions." should likely be "let alone human intuition"
Questions For Authors: Can you provide more details on the computational resources required for PromptQuine (e.g., GPU hours)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We are *very grateful* to Reviewer 8j6A for their highly positive comments on our paper. Thanks for acknowledging our efforts on making this paper *empirically rigor* and the **Significance, Creativity and Originality** of this work towards general *prompt tuning and ICL*. We will **make every effort to further improve** the paper, especially the presentations. We address your questions below:
>Analysis depth
Thanks for the interest.
* **Organize potential research questions into Future Work / Implications**: We also share this with nearly all reviewers. As our work is dense, we plan to leave many of the questions as future work. Indeed, various interpretability questions can arise from our findings; many of them remain challenging open problems, and we believe each deserves a separate research paper, especially why pruning is effective and why such ICL behavior could work. We hold a strong belief that addressing them can largely advance the field.
* **Incorporate additional analysis studies**: Inspired by KfU1, we'll add studies showing the failures of ICL pruning, e.g., while effective for ICL stabilization, the template selection [1] still matters. Further, as some interests may center around jailbreaking, we'll expand studies on that, e.g., extending studies to varied ICL setups, e.g., in-context attack [2] and prompt injection attack [3], which go beyond the current priming setup. We plan to analyze this challenging task, where the most common metric provides a sparse reward signal and many other formulations exist (e.g., tools for mechanistic interpretability--steering vectors--and output joint probability as in gradient methods).
>Computational Resources
After a further check, we find that we left many descriptions in the text instead of making them explicitly clear in Tables 10/12, e.g., LLM with Llama3-it, GPU type, etc. We'll refine the captions. To give a brief overview: for search in classification/QA tasks, on a single A100 GPU, TAPruning/PromptQuine can run within an hour for low-shot ICL. As parallelization can be an option [4] for PromptQuine, runtime efficiency can improve with more compute. In generation, including reasoning, they take hours, as shown in Table 12. TAPruning, which is based on first-choice hill climbing, can always be a good starting point for particular tasks of interest, as it converges quicker on one GPU. Dealing with long contexts typically takes more time, e.g., hours or even days, yet may improve the final result. That could be one tradeoff. *One potential follow-up work is to improve the performance/efficiency for long contexts*. Some intriguing starting points could be attempting to exploit model-specific information; e.g., it is also interesting that LLMLingua outperforms LLMLingua2 in this compression-as-guided-search formulation. In summary, we now claim this framework can achieve *decent runtime efficiency*, as in our reply to KfU1.
>Writing Issues
Sorry for the typos. We also find some places that can be improved, including the one suggested by you and KRwQ. We'll ensure that all these will be enhanced/ fixed!
>Clarity, too dense
Apologize for this. As what we replied to KfU1 and KRwQ, we'll **follow all your suggestions** to further structure and highlight the takeaways, and make all settings clear to readers.
[1] Quantifying LMs’ Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting, ICLR [2] Jailbreak and Guard Aligned LMs with Only Few In-Context Demonstrations, arxiv [3] Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses, NeurIPS [4] Evolution Strategies as a Scalable Alternative to RL, arxiv
---
Summary: The paper applies evolutionary algorithms to the paradigm of LLM prompt pruning, introducing a new algorithm called PromptQuine. PromptQuine autonomously searches for better pruning strategies through an iterative process of mutation and selection, inspired by biological self-replication and evolutionary dynamics. It demonstrates effectiveness across diverse tasks, consistently outperforming existing prompt optimization methods.
Claims And Evidence: Most claims are supported with clear and convincing evidence.
Figure 1 (right), “ES exhibit greater robustness to task difficulty, such as increasing the number of shots, which amplifies solution sparsity.” (grammatical mistake here, should be ES exhibits) and line 237, “the success ratio of RS approaches zero as task difficulty increases”. In Figure 1 (right), I only see one line plotted, and am unsure whether that line is ES or RS. How are the claims derived from Figure 1 (right)?
The paper claims that PromptQuine automatically optimizes the pruning strategies. However, the mutation that pruning strategies undergo is very limited, i.e., mutating the pruning tokens by randomly flipping bits. This means that the search space may not fully explore more complex pruning strategies. As a result, while PromptQuine demonstrates strong empirical performance, its optimization approach might be constrained to local improvements rather than a truly open-ended exploration of prompt space.
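The bit-flip mutation loop this critique describes can be sketched as follows; the fitness function, population size, and elitist selection scheme here are toy stand-ins, not PromptQuine's actual configuration:

```python
import random

def mutate(mask, rate=0.1):
    """Bit-flip mutation over a binary keep/prune mask."""
    return [b ^ (random.random() < rate) for b in mask]

def next_generation(pop, fitness, elite=2):
    """One generation: keep the top `elite` masks unchanged and
    refill the population with mutated copies of them."""
    ranked = sorted(pop, key=fitness, reverse=True)
    parents = ranked[:elite]
    children = [mutate(random.choice(parents))
                for _ in range(len(pop) - elite)]
    return parents + children

random.seed(1)
toy = lambda m: m.count(0)  # toy fitness: the more pruned, the better
pop = [[1] * 8 for _ in range(6)]
for _ in range(30):
    pop = next_generation(pop, toy)
best = max(pop, key=toy)
```

With elitism the best fitness is monotone non-decreasing across generations, but, as the review notes, flipping bits in a fixed-length mask only ever moves within the subsequence space of the original prompt.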
There is also the claim that incoherent “gibberish” or “secret language” can be more effective prompts than well-crafted ones. However, the usage of these terms is vague and imprecise, lacking a clear definition or formal characterization. Furthermore, since the mutation strategy and search space in PromptQuine is so limited, i.e., word order is still kept, “gibberish” and “secret language” seems like an exaggeration of the prompts produced.
Methods And Evaluation Criteria: Yes. The authors compare PromptQuine against competitive baselines across a diverse set of tasks.
Theoretical Claims: The definition and usage of “Partial Context Hypothesis” are vague. Terms such as “potentially redundant contexts” and “well-specified natural language prompts” are not carefully defined, making it unclear what specific aspects of a prompt contribute to redundancy or well-specification. Despite this lack of clarity, the hypothesis is referenced multiple times throughout the experimental analysis. A more precise definition and formal criteria for evaluating context redundancy and prompt specificity would strengthen the validity of the hypothesis.
Experimental Designs Or Analyses: All experiments are sound with repeated runs and variance.
Supplementary Material: I looked through the appendix experiments. They support the design choices in the main paper.
Relation To Broader Scientific Literature: The paper builds on prior work in prompt optimization, evolutionary search, and in-context learning. The paper proposes PromptQuine, an evolutionary algorithm for prompt pruning. It extends research on automatic prompt optimization (e.g., RLPrompt, LLMLingua) by using mutation and selection rather than RL or token attribution. The study also connects to evolutionary search in NLP and aligns with findings on LLM sensitivity to prompt design.
Essential References Not Discussed: Since the key contribution is the application of evolutionary algorithms to prompt design, some recent related works are missing from the citations. Specifically, “Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution” and “Evolution through Large Models”. Promptbreeder could be a good approach to compare PromptQuine against too. Furthermore, since Promptbreeder works in a larger search space than PromptQuine, I think this work should definitely be discussed in the paper.
Other Strengths And Weaknesses: Strengths:
- I think this paper is a significant contribution and shows interesting results, that just by pruning prompts, PromptQuine is able to achieve better results across a diverse set of benchmarks.
Weaknesses:
- See above. The overall writing is good, but some of the claims and terminologies used are very vague and ambiguous.
Other Comments Or Suggestions: For Figure 3, it is difficult to see the differences in performance between the models. Truncating the starting percentage of the plots (e.g., displaying only values from 75% onwards) would improve clarity and make the differences more noticeable.
In line 202, “we follow (Krishna et al, 2020; …)”. The citations should not be in parentheses.
Similar to the usage of “gibberish” and “secret language”, the term “conventional wisdom” is also used vaguely. While Wan et al. (2024) is cited as a reference, it is unclear to what extent this truly represents a conventional understanding scientifically. A more precise explanation or broader citation of prior work would help establish its validity.
Questions For Authors: 1. In the abstract, line 31, what does it mean by “low-shot regimes”?
2. Optimization in sparse, multimodal landscapes is not a new challenge in evolutionary algorithms. Are there related works that show the same challenges?
3. To mitigate evaluation noise, re-ranking mechanisms are used. There are related evolutionary works that deal with noise and stochasticity, e.g., Fast and stable MAP-Elites in noisy domains using deep grids.
4. While the paper briefly touches on its limitations and potential future work throughout, it would be more useful to include a dedicated section in the conclusion to discuss these aspects in greater detail.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We want to express our sincere thanks to Reviewer KRwQ for their detailed tips to improve our writing. **This is invaluable, and we have learned a lot.** We are also grateful that the reviewer views our work as a **significant contribution** presenting *interesting results* to the community. Here is the Table/Fig link: https://anonymous.4open.science/r/ughj/KRwQ/README.md
>Vague Terminologies
* **SecretLang & Gibberish**: We apologize for not explicitly defining "secret language" and now clarify it in a new paragraph (Section 2.2) following its introduction in Para. 1. *Specifically, secret language refers to unnatural language whose syntax and semantics are incoherent and difficult for humans to parse, yet can be surprisingly effective in certain scenarios. In the absence of ..., such prompts are typically regarded as mysterious, hidden, and inherently non-scalable.* Further, the secret language we use in the paper refers to prompts generated by prior methods, e.g., RLPrompt, which are limited by algorithmic capacity. For gibberish text, we describe it in the intro (line 30) as *syntactically and semantically strange*, and use the term sparingly. We hope this does not cause overclaims.
* **Partial Context Hypothesis**: We agree, so we plan to use formal mathematical language that avoids the terms "redundant" and "well-specified". Intuition: given a natural prompt (e.g., ICL) x={...} with performance X and an unnatural prompt z (produced by an existing SOTA method, e.g., RLPrompt) with performance Z, is it possible to prune a few tokens {...} to obtain a prompt y with enhanced performance Y that potentially outperforms Z? Here y can be purely syntactically and semantically unnatural language.
* **Conventional Wisdom**: We will cite the OpenAI guide and [1,2+] for ICL stabilization (i.e., well-tuned prompts). As also inspired by KfU1, we'll add an ICL study highlighting some structured insights. We apologize for the brevity of this pointer, due to word limits.
>PromptBreeder+
We cited PB but did not explicitly discuss it. We'll discuss it in related work alongside any work on open-endedness & LLMs, and explain how ours differs from PB. Regarding comparisons, we considered them before; unfortunately, PB's code was not released, and since ES involves varied design choices, we are unsure whether a reproduction would fully reflect PB's results. In replying to KfU1, we reproduce it with numerical results, which we may report in the paper. On the larger search space, we generally agree, but want to emphasize: while PB introduces token variations, the search space in our pruning context is also large, typically O(n!). Given the discrete nature, even a slight token drop can cause large changes, which differs from ES in continuous spaces. So we feel that the space explored by both discrete algorithms is large, and both **frameworks** are scalable yet constrained by computation.
>Search Space
Regarding pruning strategies, we admit that bit-flip mutation can be limited in its representation power. We will introduce an ablation study (suggested by KfU1) for mutation rates, e.g., why up to 4 flips, for **solution sparsity**. That is also why we retain word order: we observed that many reorderings lead to meaningless mutations under selection pressure (+a study). Regarding open-ended exploration of the prompt space, we agree, though it is not fully aligned with our target (rethinking the prompt space). The most relevant term is the *open-ended search* (established OES) we highlight in the abstract. That is an insight we aim to deliver: intriguing yet unnatural prompts remain hidden, lurking just within our reach. We suggest the field pay more attention to OES (cf. [3,4,5,6]), moving beyond linguistics toward diverse, novel **unnatural language**.
>Optimization challenges
We note that Section 4.2 is meant to motivate ES, and the landscape analysis is also new, as the insight that pruning yields such large improvements is itself new. Also, in answering e5jB, we'll discuss the deceptiveness of the held-out objective [6,7], which **necessitates rewarding suboptimal stepping stones**, as ES does in the paper.
>Other Writing Questions
Fig. 1 is inspired by [8] (Fig. 4), and we will add relative success rate (RS divided by ES) descriptions in the caption and polish the terminology in Section 4.2. Regarding typos/LaTeX/future work, we will follow the tips, correct everything, and state it clearly. Regarding low-shot regimes, we borrow the term from few-shot learning [9,10], in the sense that we search using only low-shot samples, e.g., for the fitness measure; we can avoid such jargon. Further, we apologize for the ambiguity around noise: we use it to refer to deceptive/imperfect reward (not uncertainty/noise as in DG MAP-Elites, thanks!), i.e., stronger prompts being rated at lower ranks, which we address using the held-out score (reranking), though this remains limited. We'll make it clear. Fig3: https://anonymous.4open.science/r/ughj/KRwQ/re_Fig3.pdf
[1] Fantastically Ordered Prompts [2] Learning To Retrieve for ICL [3] Minimal criterion coevolution [4] Randomly Wired NN [5] Weight Agnostic NN [6] Abandon Objectives [7] Go-Explore [8] AutoML-Zero [9] Pi-Tuning [10] Loss landscapes are all you need | Summary: This paper introduces an evolutionary method called PromptQuine for optimizing few-shot prompts by pruning them. They show that their optimized prompts outperform the original few-shot prompts as well as the RLPrompt baseline on a held-out set of test examples across a wide range of standard language model benchmarks. Interestingly, the resulting pruned prompts are often gibberish text (similar to what was found in other prior work) but retain some key features, including reference to the task at hand.
Claims And Evidence: The evaluation setup appears to be correct, although it could be better explained. In particular, it is imperative that the examples on which the prompts are pruned (training / validation splits) are distinct from the examples on which they are evaluated (test split). This is alluded to in Evaluation Settings (line 182) but it is not sufficiently clearly stated. It would be very useful if the authors can separate out an Evaluation section which pertains to the whole paper and which clearly states the nature of the held-out evaluation, and includes a specific step-by-step example, so that the reader can ensure the rigour of the evaluation procedure.
The pilot study with hill-climbing search is well-explained and the claims are substantiated by the evidence in this section.
The results for PromptQuine itself are more questionable. The authors choose to only present the positive results in the main text, with the negative results confined to the Appendix (D5 and D6) without further discussion in the main text. In my opinion the authors should be more transparent about the fact that PromptQuine underperforms RLPrompt in various tasks in the main text, and provide some qualitative insights / analyses as to why that might be the case.
Similarly I find Section 5.2 a little more difficult to justify as "convincing evidence". The authors introduce a new metric with which to measure performance in jailbreaking, and it is hard to be certain that this metric has not been chosen so as to demonstrate the strength of their method. I would recommend that they use existing metrics / benchmarks in the literature to the extent possible.
The "task label" analysis appears convincing and gives good insight into which parts of the pruned prompts are important. However it is not sufficiently well explained. How are "task label" words identified? Also in Figure 4, the variance appears quite high and this deserves commentary. How much can we conclude from such high-variance experiments?
The examples in Table 16 are useful, and it would be good to see more of these analysed in the main text, for both situations where PromptQuine works and situations where it does not.
Methods And Evaluation Criteria: The choice of benchmark datasets are sensible, although it would be interesting to see results on some more modern tasks too e.g. GPQA and MMLU.
The methods used are well-motivated. However the explanation of the evolutionary algorithm is long-winded, and too much is made of what is essentially a very standard method. It would suffice to describe the algorithm as a standard evolutionary algorithm (citing a textbook in the literature) and then specify the mutation and selection operators, along with the definition of what constitutes a member of the population.
The evolutionary algorithms that are used are rather complicated, incorporating regularization, additional re-ranking, hyper-mutation of hyperparameters etc. It may be that each of these components is necessary and motivated, but it would be really useful to have an ablation study of the components back down to a simple genetic algorithm in at least one setting, to understand how much each component adds (and in which combinations the components must be present).
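For reference, the "simple genetic algorithm" baseline requested here might look like the following sketch (my own illustration, not the authors' method): a binary keep/prune mask per token, bit-flip mutation, and tournament selection, with `evaluate` as a stand-in for scoring the pruned prompt on held-out examples.

```python
import random

def evaluate(mask, tokens):
    # Stand-in fitness: in a real ablation this would score the pruned
    # prompt (tokens kept where mask[i] == 1) on held-out examples.
    return sum(mask) / len(mask)

def mutate(mask, rate=0.1):
    # Bit-flip mutation: toggle each keep/prune bit with probability `rate`.
    return [b ^ (random.random() < rate) for b in mask]

def tournament(pop, fits, k=3):
    # Tournament selection: return the fittest of k random individuals.
    best = max(random.sample(range(len(pop)), k), key=lambda i: fits[i])
    return pop[best]

def simple_ga(tokens, pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [[1] * len(tokens) for _ in range(pop_size)]  # start from the full prompt
    for _ in range(generations):
        fits = [evaluate(m, tokens) for m in pop]
        pop = [mutate(tournament(pop, fits)) for _ in range(pop_size)]
    fits = [evaluate(m, tokens) for m in pop]
    best = max(range(pop_size), key=lambda i: fits[i])
    # Return the pruned prompt: kept tokens in their original order.
    return [t for t, keep in zip(tokens, pop[best]) if keep]
```

An ablation would then add back one component at a time (re-ranking, regularization, hyper-mutation) on top of this baseline.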
Theoretical Claims: N/A.
Experimental Designs Or Analyses: I have already commented on this above.
Supplementary Material: Yes, I did not review this in detail but I did look at Appendices D and E.
Relation To Broader Scientific Literature: The related work should not be relegated completely to the Appendix, in my opinion. There should at least be a paragraph in the main text giving details of the most relevant prior methods and the ways in which this contribution differs.
Two particular papers that are useful points of comparison are PromptBreeder and Automated Prompt Engineer. Can the authors comment on the reasons why these were not discussed in more detail, and why their results are not used as baselines for PromptQuine? It is hard for me to assess whether their method is stronger / more efficient than APE and PromptBreeder without a numerical comparison. (Note: both papers are cited, but that is not necessarily enough - they seem to me to be directly comparable methods, so it would be useful to compare the performance precisely, in so far as is possible, or to argue why not).
Essential References Not Discussed: See response to previous question.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: I suggest that the authors restructure the paper by moving the Pilot Study to the Appendix, opening up more space for talking about both the positive and negative results of PromptQuine in the main text, for providing ablations on the components of the methods, for improving the description of the evaluation, and for providing more specific qualitative examples of the pruned prompts that work and that don't work well.
Please can the authors comment on the societal implications of their work? In particular, it would be useful to have their thoughts on the implications of this work for interpretability. If LLMs can be prompted for particular behaviors in ways that are not legible by humans, could this be used to hide bad intent? What mitigations might we think about for such problems?
Questions For Authors: Questions are present in my comments above. Please address these and give your thoughts / responses to the specific weaknesses and suggestions I have provided above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer KfU1's **extremely detailed review** covering many technical details. We are glad that the reviewer appreciates the *contributions of our insights towards prompting and interpretability*. We address your questions below:
Anonymous link (AL) for tables/data: https://anonymous.4open.science/r/ughj/KfU1/
>Evaluation Setup
We agree to restructure the paper for a clearer setup. We'll introduce the validation/test separation and the typical objective when we introduce our algorithms, and highlight that all experiments follow this. We'll also highlight *Solution Selection* in the Section 2.2 formalism, and make each experiment's setup clear.
>PromptQuine results & Pilot Study
* **Results**: We fully agree on transparency; the current format is constrained by page limits, and we'll improve it. Regarding RLPrompt vs. ours, PromptQuine is imperfect in its representation power; as Appendix E notes, future work can combine both for improvements, and we'll highlight this in the limitations section suggested by KRwQ. Additionally, RLPrompt in the paper is built upon different templates and token positions (line 1115 vs. 1172). *Prefix-only* appending forms a *fairer* comparison: accuracy on templates 1 & 2: Quine (83.1/79.5), RL (77.5/76.3). The gap narrows, suggesting nuanced sensitivity to token composition.
* **Qualitative analysis**: Inspired by the numbers swapping the ranks of methods, we'll add a section on ICL stabilization (ICLS). While pruning can be viewed as an alternative for ICLS, *template selection [1] can still largely affect results* (i.e., why/when pruning does not work effectively). Altogether, along with the ablations, this provides a broad view of the pruning formulation.
>Jailbreaking
We clarify that we did not manipulate metrics. Varied setups are a limitation of many jailbreaking papers, e.g., using their own held-out sets for validation/test. We follow [2] by prompting Llama-Guard3 (same prompt, line 405) as the classifier on AdvBench [2]. The use of these two is indeed popular [3,4,5+], validating our choices. To increase confidence, we also use the standard Exact Match score [3] for evaluation: 90+ ASR. Please check the AL (README) for more numbers, which we share in our reply to e5jB. We also provide **Vicuna's direct prediction** and Mistral-Instruct-v0.3 results in the AL for checks. Inspired by e5jB, we'll also expand the jailbreaking section and present a clearer picture of successes/failures for that task (see our replies to 8j6A, thanks).
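For concreteness, the refusal-substring "Exact Match" attack success rate used in GCG-style evaluations can be sketched as follows (the refusal markers below are illustrative, not the exact list used in [3]):

```python
# Illustrative refusal markers; the actual list used in GCG-style evals is longer.
REFUSAL_MARKERS = ["I'm sorry", "I cannot", "I can't", "As an AI"]

def attack_success_rate(responses):
    # An attack counts as successful when the model's response contains
    # none of the refusal markers (i.e., the model did not refuse).
    successes = sum(
        not any(marker in resp for marker in REFUSAL_MARKERS)
        for resp in responses
    )
    return successes / len(responses)
```

A judge model such as Llama-Guard3 replaces this substring check with a learned harmfulness classification, which is why we report both.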
>Label Word Study
We agree this introduces variance, as also mentioned in line 415; we'll expand the text to make this clear. This is also something long debated in existing work on ICL mechanistics (cf. [6,7] vs. [8]). We attempt to present some insights following [8], e.g., the intriguing novel finding that pruning has potential *even with random label words*, which can inspire future work. We'll expand further following your suggestions on implications.
>GPQA,MMLU
As MMLU involves many tasks, we show GPQA-main (Llama3-it): 33.7 to 35.3 (with separate splits for search and evaluation). The results are indeed empirical; more research can be done to fill the theoretical gap.
>ES intro & ablation
We'll follow your textbook tip. For the ablation, many designs are indeed helpful, e.g., the stagnation handling in App. D.7. We'll follow Section 5 of [9], studying reranking (to counter deception/overfitting; see AL), mutation rate, offspring size (rewarding lower-quality stepping stones [10]), etc. We'll put significant emphasis on the search-dynamics discussion in the paper. These studies can help future research.
>Promptbreeder(PB)&APE
We excluded PB due to the lack of released code, making faithful reproduction hard. Here, we reimplemented it using GPT-4o mutators (1-shot, 3 seeds), each run taking 20 min for GPT-2 and 30+ min for Llama3-it (l3it) with the OpenAI API. Ours outperforms PB on GPT-2, taking ~1–7 min with short initializations, though runtime will increase with longer contexts. Given the impact of implementation on search algorithms [11], we'll instead claim *decent runtime efficiency* in the paper for rigor. We chose not to report APE due to its constrained exploration relative to PB...
|PB − Quine (accuracy)|Subj|News|Yahoo|
|-|-|-|-|
|l3it|-2.2|-0.6|-1.4|
|l3it-4shot|-9.5|-1.3|-0.8|
|GPT2|-5.6|-8.9|-21.5|
PB is strong but limited, e.g., it performs worse with weaker models. Unnatural language remains effective across models, a setting PB leaves unaddressed.
>Writing Questions
Thanks for the tips. We'll follow them and add implications/limitations/future work. Regarding interpretability, our findings inspire directions for ICL, e.g., why random labels can improve performance (starting from chance, without scaling up shots & model sizes [12]) and a rethink of the role of label words in-context; we'll make this clear. Regarding hiding intent, extensive RLHF could help address it, though imperfectly, and we think answering why pruning works can contribute.
[1] start worrying about prompt formatting [2] Understanding jailbreak success [3] GCG [4] AdvPrompter [5] ArtPrompt [6] Ground-Truth Labels Matter [7] ICL Learns Label Relationships yet not.. [8] Rethinking the Role of Demonstrations [9] Large-scale evolution of Image Classifiers [10] Why Greatness cannot be planned [11] Implementation Matters in DPG [12] Larger LMs do ICL differently
---
Rebuttal Comment 1.1:
Comment: Many thanks for your rebuttal. On a few points your response lacks detail, so I am not minded to update my score at present.
For instance:
1. Can you specifically state the new paragraph that you will insert to explain the train / validation / test split used for Evaluation throughout (and whether it is different for your Pilot Study and for PromptQuine)?
2. Can you be precise about the way you will improve the presentation of the results (i.e. we will present the Figures A,B,C from the Appendix in the main text to show the limitations of PromptQuine)? Can you provide a more detailed paragraph with justification as to why PromptQuine doesn't outperform RLPrompt on these tasks?
3. How exactly are the label words identified?
4. I don't understand the given table comparing PromptBreeder and PromptQuine. Can you provide a table where one column is PromptBreeder, another column is PromptQuine and the rows are different tasks, clearly showing the benefits of PromptQuine?
5. What specific paragraph will you add to discuss the societal implications?
Finally, a quick point on style in writing your rebuttal. In my view, it is much more courteous to the reviewers to write your rebuttal using full sentences, correct grammar and avoiding abbreviations.
---
Reply to Comment 1.1.1:
Comment: We apologize for the word limits and any uncertainty above. We're also preparing a preprint with refactored code.
Q1: First, we'll write *Solution selection* in Section 2.3: *Once the search converges or terminates, the algorithm returns a selected optimal solution, i.e., an optimal prompt. The optimality of a prompt is typically assessed using an aggregated metric score on a held-out dataset. This dataset is often referred to as the validation set, while the overall performance is reported separately on an official test set using its specific task metric. We ensure strict separation between the validation and test sets to prevent data leakage and enable a reliable assessment of generalization.* This serves as the foundation for all experiments. Further, to be rigorous, we won't call the in-search samples (used for fitness estimation) a training set, as no training is involved. For TAPruning/PromptQuine, after consideration, we plan to first present a **Baselines** paragraph and then an **Evaluation Settings** paragraph before showing the experimental results. This should read much better than the current format (everything mixed in one paragraph, e.g., lines 196/346). Since TAPruning & PromptQuine differ in solution selection (e.g., proxies, rerankings), we'll explain these differences in detail. However, they share key similarities, e.g., the importance of using validation samples with task metrics for final prompt selection. We'll clearly separate the baselines and settings for both in two paragraphs to clarify our setup effectively.
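The solution-selection protocol described above reduces to a small routine; a minimal sketch (function names are mine, for illustration) of the strict validation/test separation:

```python
def select_and_report(candidate_prompts, val_score, test_score):
    # Solution selection: pick the prompt with the best aggregated score
    # on the validation set, then report its performance once on the
    # disjoint official test set. The test set is never consulted during
    # search or selection, preventing data leakage.
    best = max(candidate_prompts, key=val_score)
    return best, test_score(best)
```

Both TAPruning and PromptQuine follow this pattern; they differ only in how `val_score` is computed (e.g., proxies and rerankings).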
Q2: We'll first move the pilot study to the appendix (introducing TAPruning in the main paper) and condense the text on GAs. We promise that we'll reorganize Section 6 (**A Deeper Look into Pruning Effects on ICL**) with another subsection: *On the limitations of PromptQuine*. Specifically, we plan to discuss: *While pruning tokens is effective at enhancing overall ICL results, we identify inherent limitations of the current PromptQuine. Specifically, we observe two recurring failure cases across tasks: (1) token pruning is not a universally reliable method for stabilizing ICL performance, as its effectiveness remains highly sensitive to the chosen ICL templates; (2) fixed-order prompt subsequence search lacks sufficient exploration power for consistently improving performance.* We'll enhance the experiments by including varied ICL templates (e.g., 83 vs. 79 on PIQA) and comparing against PB/RLPrompt with varied token insertions (e.g., recent tokens) under different tasks, as we did in the first rebuttal. We'll include examples/numbers for justification, e.g., the numbers in the first rebuttal. We'll also discuss failures caused by selection pressure: some mutations, e.g., adding tokens, are more effective in the long run yet get filtered out, where novelty search may help.
Q3: We'll make this clear in Section 6, where labels in-context are the same as verbalizers. We ensure consistent use of label words (e.g., great/terrible, see Table 6) in ICL prompts, following RLPrompt for intuitive verbalizers. Label word identification is based on exact matching, which may introduce slight noise if such words appear in exemplar inputs. However, after further checks, the ICL prompts in the analysis do not contain these words.
Q4: We organize the tables as follows (3 seeds, where PB builds upon the original ICL prompts). Unless otherwise stated, PB is built upon a 1-shot ICL prompt for Llama3-it, in the format <prompt+exemplars>. We're considering adding these numbers to the paper.
|Task|PB|PromptQuine|PB (4-shot)|PromptQuine (4-shot)|PB (GPT2)|PromptQuine (GPT2)|
|-|-|-|-|-|-|-|
|Subj|84.3|86.5|83.6|93.1|72.2|77.8|
|AGNews|88.6|89.2|88.3|89.4|57.8|66.7|
|Yahoo|62.8|64.2|65.4|66.2|25.7|47.2|
Q5: We'll write in Impact statement, beginning with a summary of our work, followed by open interpretability questions, and concluding with a discussion on its societal implications:
*Moreover, we highlight the direct societal implications of our findings on unnatural language. Notably, our work exposes critical weaknesses in current LLM alignment techniques. Despite extensive training designed to align models with human values and ethical standards when given natural language instructions, our findings reveal that unnatural language can still be used to elicit malicious behaviors—exploiting gaps that developers cannot fully anticipate. As demonstrated in our paper, this vulnerability persists even in large models subjected to extensive red teaming. While continuously iterating on red teaming and eliminating failure cases is beneficial, we advocate for exploring novel alignment techniques that go beyond surface-level fixes. In particular, a stronger focus on inner alignment may lead to more robust improvements. For commercial models, we strongly recommend complementing red teaming with output-level restrictions, as this may provide a more intuitive and effective safeguard—especially given that existing alignment methods are primarily optimized for handling natural language inputs.* | null | null | null | null | null | null |
Testing the Limits of Fine-Tuning for Improving Visual Cognition in Vision Language Models | Accept (poster) | Summary: This work investigates the extent to which task-specific fine-tuning can help VLMs to overcome limitations in two psychologically inspired domains, with a special emphasis on the extent to which benefits of fine-tuning generalize between tasks and across variations within a task.
Claims And Evidence: The experiments very clearly demonstrate that task-specific fine-tuning leads to brittle improvements, in the sense that these improvements do not generalize between tasks, and they do not even clearly generalize to slightly different versions of the same task. I have a few comments about the interpretation of these results. First, the experiments are described as establishing the limits of fine-tuning in general, but it seems more accurate to say that they establish the limits of *task-specific* fine-tuning. It would be good to clearly state that the experiments do not necessarily rule out more robust gains from fine-tuning across a wider distribution of tasks (and they even seem to suggest that this might work, given the relatively improved generalization in the model fine-tuned on both tasks). Second, it seems that the fine-tuned models actually outperform humans within the specific domain that they are fine-tuned in (although this is difficult to tell because the data are not presented in the same plot). In other words, the results clearly demonstrate that fine-tuning yields only very brittle benefits, which does not seem very human-like, but it seems worth mentioning that the performance is actually quite strong relative to humans within these domains.
Methods And Evaluation Criteria: The methods and evaluation criteria are all sensible. In Figure 2, it would be helpful to include additional rows illustrating 1) zero-shot performance of the base model, 2) human performance, 3) zero-shot performance for a state-of-the-art model (e.g. GPT-4o or Claude). This will help to assess not only how fine-tuning generalizes between tasks, but also how the performance of the fine-tuned models compares in absolute terms with humans and other models.
Theoretical Claims: N/A
Experimental Designs Or Analyses: All experiment designs and analyses appear reasonable.
Supplementary Material: I reviewed the supplementary figures.
Relation To Broader Scientific Literature: The discussion of the relationship to the broader literature is generally strong. There are two points where it may be expanded. First, some recent work [1] has found that failures of VLMs can be related to the classic 'binding problem' in cognitive science, in the sense that the failures are similar to those observed for human participants under time pressure. This is also related to theoretical work [2] which suggests that sequential processing (i.e. inference-time compute) is needed to overcome these binding errors. One prediction of this perspective is that, so long as the model is required to generate a response in a single feedforward pass, fine-tuning will only lead to very task-specific benefits, whereas inference-time compute will be needed for more generalizable benefits. The results in this paper seem consistent with that line of reasoning, which may be interesting to discuss.
Additionally, it may be interesting to discuss whether approaches like [3] may be lead to more generalizable benefits, i.e. via fine-tuning on a broader distribution of tasks.
[1] Campbell, D., Rane, S., Giallanza, T., De Sabbata, C. N., Ghods, K., Joshi, A., ... & Webb, T. (2024). Understanding the limits of vision language models through the lens of the binding problem. Advances in Neural Information Processing Systems, 37, 113436-113460.
[2] Frankland, S. M., Webb, T. W., Lewis, R. L., & Cohen, J. D. (2025). No Coincidence, George: Processing Limits in Cognitive Function Reflect the Curse of Generalization.
[3] Binz, M., Akata, E., Bethge, M., Brändle, F., Callaway, F., Coda-Forno, J., ... & Schulz, E. (2024). Centaur: a foundation model of human cognition. arXiv preprint arXiv:2410.20268.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: In Figure 4, why is the model-to-human alignment apparently higher than the human-to-human alignment in some cases (i.e. the bars labelled 'humans')?
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer 3bXm, thank you very much for your comments. We are happy to hear that our experiments “very clearly demonstrate that task-specific fine-tuning leads to brittle improvements”, this was the main point we were trying to convey. We also appreciate your assessment that our “methods and evaluation criteria are all sensible” and that you found our discussion of related work “generally strong”. In the following we discuss the specific concerns that you raised and how we have sought to remedy them. We think addressing them has significantly strengthened our paper, particularly with respect to improving the clarity and including a GPT baseline.
\
\
*The experiments are described as establishing the limits of fine-tuning in general, but it seems more accurate to say that they establish the limits of task-specific fine-tuning.*
\
\
Thank you for raising this point. We agree that our experiments first and foremost establish the limits of fine-tuning on specific tasks, namely, reasoning about the stability of solid block towers. However, we would like to emphasise the long history of using such tasks for investigating visual cognition in humans. We think that they are sufficiently representative for us to make some more general claims about visual cognition in vision language models. Our results present some evidence that fine-tuning models on one cognitive task does not lead to generalization on a related but distinct cognitive task. In contrast to this, human learning is characterised by a robust ability to generalise between distinct but related tasks.
\
\
You are correct to point out that models trained on both tasks do generalize slightly better than models trained on only one (Fig. 2). However, with our current datasets, we cannot evaluate how these models would generalise to a third cognitive task in cubeworld, and we suspect they would generalize poorly, given our current results.
\
\
*Second, it seems that the fine-tuned models actually outperform humans within the specific domain that they are fine-tuned in (although this is difficult to tell because the data are not presented in the same plot). In other words, the results clearly demonstrate that fine-tuning yields only very brittle benefits, which does not seem very human-like, but it seems worth mentioning that the performance is actually quite strong relative to humans within these domains.*
\
\
Thank you for these comments. First, we have addressed the problem of model-human comparison by including a human baseline in Figure 2, and all heatmaps in the appendices. Second, we have emphasised in the discussion that fine-tuned models outperform humans on the domains they have been directly trained on, while being outperformed by them on out-of-distribution domains.
\
\
*In Figure 2, it would be helpful to include additional rows illustrating 1) zero-shot performance of the base model, 2) human performance, 3) zero-shot performance for a state-of-the-art model (e.g. GPT-4o or Claude).*
\
\
Thank you for this suggestion. We agree that this makes it easier to interpret the results. We have therefore added rows showing the zero-shot performance of the base model, human performance and zero-shot performance for GPT-4o. You can see the updated figure here: [Figure 2](http://postimg.cc/8JwwDwYv) (please note this shows Llama-3.2-11B, as other reviewers highlighted the need for bigger models from other families).
\
\
*Relation to broader scientific literature could be expanded.*
\
\
Thank you for the very relevant pointers. We have updated our related works section according to the feedback by you and by reviewers fjkm and uVR9. We were aware of [2] but had not made the connection to the difficulties that models have with generalization. This is a very interesting direction and we thank you for bringing it to our attention. We have also added [3] to better motivate why fine-tuning on human choices is of interest and might lead to more robust generalization.
\
\
*In Figure 4, why is the model-to-human alignment apparently higher than the human-to-human alignment in some cases (i.e. the bars labelled 'humans')?*
\
\
This is an intriguing outcome from our initial results. It indicates that the average agreement between the model and each human rater is higher than the average agreement between every pair of human raters. We speculate that this arises from the fact that there is a sizable variance in human ratings with heavy tails; the fine-tuned model appears to accurately capture the average human rating, resulting in a higher average agreement than the human case, where average agreement is pulled down by raters with different ratings due to the way Cohen’s kappa is computed, since the observed agreement will be low relative to the expected agreement. We should note that our plots have changed slightly, since we now average over three fine-tuning seeds (see the updated [Figure 4](http://postimg.cc/47qjMqfW) with the 7B Qwen model).
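To make this concrete, here is a small illustration (toy ratings, not our data) of Cohen's kappa showing how a model that matches the consensus rating can have a higher average agreement with each rater than the raters have with one another:

```python
from collections import Counter

def cohens_kappa(a, b):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Toy binary stability ratings from three noisy human raters.
h1 = [0, 0, 1, 1, 0, 1]
h2 = [0, 1, 1, 0, 0, 1]
h3 = [1, 0, 1, 1, 0, 0]
# Item-wise majority vote of the three raters (here it happens to
# coincide with rater h1) stands in for a consensus-matching model.
model = [0, 0, 1, 1, 0, 1]

human_pairs = [cohens_kappa(h1, h2), cohens_kappa(h1, h3), cohens_kappa(h2, h3)]
model_vs_human = [cohens_kappa(model, h) for h in (h1, h2, h3)]
# With these toy numbers, the mean model-human kappa exceeds the mean
# human-human kappa, because one dissenting rater pulls every human
# pairwise score down while the model tracks the consensus.
```

This mirrors the mechanism described above: heavy-tailed disagreement among raters depresses pairwise human kappa, while a model capturing the average rating agrees well with each rater individually.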
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for these replies. I appreciate the updates to Figure 2, which I think will make it easier to interpret the results. I still think that the paper would be improved if it were clearly stated that the experiments primarily establish the limits of *task-specific* fine-tuning. I very much agree that the tasks are 'sufficiently representative to make some more general claims about visual cognition in vision language models'. The tasks are reasonable for establishing the visual cognitive abilities of these models. But this doesn't imply that fine-tuning on only these tasks can comprehensively establish the limits of fine-tuning, whether task-specific or over a more general distribution of tasks. It seems at least possible that fine-tuning over a broader distribution of tasks could lead to more robust improvements. I think this is primarily a matter of clearer framing, and not a fundamental criticism of the work, which I think makes a useful contribution.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 3bXm, we again want to thank you for your time and for actively taking part in the discussion process. We are happy to hear that our changes improved the interpretability of the results. We understand your remaining concerns about task-specificity – we agree that our results primarily showcase the limits of fine-tuning on a specific task, even if it is representative for a given cognitive domain. We made the following changes to the introduction to highlight this more clearly (changes marked in bold).
- Abstract line 23: *However, we find that **task-specific** fine-tuning does not contribute to robust human-like generalization to data with other visual characteristics or to tasks in other cognitive domains.*
- Introduction line 84 left: *In this paper, we explore whether fine-tuning VLMs **on single tasks** can improve their performance on intuitive physics and causal reasoning tasks in the visual domain, as well as steer them towards more human-aligned outputs.*
- Introduction line 93 left: *Therefore, we seek to evaluate whether **task-specific** fine-tuning not only improves performance on visual cognition tasks sampled from an identical distribution, but also whether it produces models that can generalize to new, but related, tasks in new domains.*
- Introduction line 102 left: *Our results allow us to appraise the limits of **task-specific** fine-tuning for building performant, human-like machine learning models that can generalize beyond the kinds of data on which they have been trained. Across a range of datasets and models, we do not find evidence that fine-tuning alone can achieve all these objectives.*
- Introduction line 101 right: *In this work, we fine-tune VLMs on **single tasks** from two cognitive domains, intuitive physics and causal reasoning, using tasks designed in a virtual environment we call Cubeworld, in which block stacks are constructed from colored cubes and are subject to realistic physical forces.*
We have also added further clarification about this constraint in the limitations section:
- Discussion line 408 left: *\[...\] models fine-tuned on a mixture of intuitive physics and causal reasoning data performed well in both domains. It is important to note that we primarily showcase the limits of models fine-tuned on a specific task. **While we cannot evaluate how the joint models would generalise to a third cognitive task in Cubeworld, it is possible that fine-tuning models on broader distributions of tasks could lead to more robust improvements.***
- Discussion line 391 right: *Similarly, introducing greater variance into the fine-tuning datasets, **fine-tuning on broader distributions of tasks,** as well as fine-tuning on larger volumes of data might improve model performance.*
- Discussion line 403 right: *Our findings underscore the limits of **task-specific** fine-tuning in achieving robust generalization in vision-language models.*
- Discussion line 412 right: *However, **task-specific** fine-tuning does not lead to the broad, flexible reasoning abilities that characterize human cognition*
We sincerely appreciate your thoughtful assessment of our work and are glad that you see it as a useful contribution. We hope that our clarifications and improvements regarding the framing of our work have successfully addressed your concerns and that, in light of these revisions, you may now see it as a clear accept. | Summary: This paper investigates whether fine-tuning models on intuitive physics and causal reasoning improves performance within these specific domains. The authors conclude that such fine-tuning does not enhance performance on other visual characteristics or tasks in different cognitive domains.
Claims And Evidence: The submission is supported by some evidence. As demonstrated in Figure 2, fine-grained controlled-variable experiments were conducted across various settings, including different fine-tuning datasets and evaluation sets. However, the experiments only utilize Qwen2-VL models with 2B and 7B parameters. I believe that testing a broader range of model architectures, such as LLaVA, and including larger models, like those with 30B or 70B parameters, would strengthen the conclusions.
Methods And Evaluation Criteria: It appears that all experiments are conducted on the cubes dataset; I suggest that incorporating other types of physical understanding tasks could also help verify the findings.
Theoretical Claims: There are no equations in this paper.
Experimental Designs Or Analyses: I have reviewed the experimental designs presented in Figures 2, 3, and 4, and they appear to be fundamentally sound and valid. However, the key points in Figures 3 and 4 are not clearly conveyed, making them difficult to understand. I recommend that the authors add annotations to these figures to clarify the key points.
Supplementary Material: Yes, the appendix provides examples of the data and details of the experimental setup, including information about human annotators' compensation and training curves.
Relation To Broader Scientific Literature: See the section below.
Essential References Not Discussed: This observation is linked to the discussion on the generalization of reasoning in large language models (LLMs) and vision-language models (VLMs). In the context of LLMs, it relates to references [1][2][3], which explore whether these models, when fine-tuned on specific tasks, can generalize to in-domain/out-of-domain or simpler/more complex tasks. I believe these discussions are also relevant to this paper's ideas, as the understanding and intelligence of VLMs are fundamentally derived from LLMs. Consequently, grasping the generalization capabilities of LLMs is essential for comprehending the generalization of VLMs. For vision-language models, this is connected to reference [4].
[1] Faith and Fate: Limits of Transformers on Compositionality. Nouha Dziri et al.
[2] What Algorithms Can Transformers Learn? A Study in Length Generalization. Hattie Zhou et al.
[3] Math for AI: On the Generalization of Learning Mathematical Problem Solving. Ruochen Zhou et al.
[4] In-Context Compositional Generalization for Large Vision-Language Models. Chuanhao Li et al.
Other Strengths And Weaknesses: This work addresses the generalization problem of vision-language models (VLMs), which is an intriguing topic. However, the experiments do not provide sufficient analysis on the reasons behind the failures in generalization, which limits the insights offered by the study. Furthermore, I find that the paper is poorly written and lacks organization. I recommend that the authors reorganize the experimental sections and clarify the conclusions. For example, Figures 3 and 4 are challenging to interpret, and the descriptions associated with these figures could be placed in separate sections for enhanced clarity. Additionally, restructuring the discussion section could help present the key points more effectively.
Other Comments Or Suggestions: No.
Questions For Authors: 1. What are the underlying reasons for the failures in generalization? Could the authors provide a more detailed analysis of these factors?
2. Will larger models exhibit different patterns regarding this phenomenon? Like 13B/30B/70B models?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear reviewer uVR9, thank you very much for your thorough review. We appreciate that you find our topic “intriguing” and our experimental designs “sound and valid”. In the following we discuss how we have remedied the concerns that you have raised. Your comments have, in our view, considerably strengthened the paper, particularly regarding the inclusion of larger models and the development of a second, more targeted dataset for evaluating between-task generalisation.
\
\
*Testing a broader range of model architectures and including larger models would strengthen the conclusions. Will larger models exhibit different patterns regarding this phenomenon?*
\
\
Thank you for highlighting this. To remedy this weakness, we added a new family consisting of two larger models: Llama-3.2 Vision 11B and 90B. We find the same pattern across both model families and all model sizes - models are capable of generalising to new tower sizes, particularly when trained on shorter towers, but they fail to generalise to naturalistic data or to a new cognitive task (see here for updated versions of Figures 3 and 13 with 7B as reference: [Figure 3](http://postimg.cc/HVcmqHCm), [Figure 13](http://postimg.cc/HjzJNvx9)).
\
\
*It appears that all experiments are conducted on the cubes dataset; I suggest that incorporating other types of physical understanding tasks could also help verify the findings.*
\
\
Thank you for this suggestion. We chose to design the Cubeworld dataset to ensure that models have a fair chance of generalizing once they have become accustomed to the visual characteristics of the environment. This allows us to infer, to some degree, whether models’ failure to generalize is due to small differences between the tasks or to their inability to learn intuitive theories through task-specific fine-tuning. We also designed the tasks in Cubeworld based on a long history in cognitive science of using block towers to study physical understanding in humans [1-3]. However, we agree that it is important to be clear about the limitations of our work, and we have therefore added a sentence to the limitations section outlining that we only investigate a subset of intuitive physics here, also taking into account the comments of reviewer mM6z.
\
\
*The experiments do not provide sufficient analysis on the reasons behind the failures in generalization, which limits the insights offered by the study. What are the underlying reasons for the failures in generalization?*
\
\
Thank you for raising this question. To establish whether failures in generalization are due to small differences between tasks, or if the models struggle with learning intuitive theories through task-specific fine-tuning, we added another dataset where differences between the two cognitive domains are kept as minimal as possible. We generate paired images of pyramids, in which the causal reasoning image contains a red block which is removed to generate the intuitive physics image (see this [Figure](http://postimg.cc/3k5Czc3k)).
\
\
In principle, being able to reason about the counterfactual stability of a pyramid ought to predispose models to reason about the factual stability of pyramids. Thus, we expected a transfer from causal reasoning to intuitive physics, especially since we test models using the corresponding images from the pairs they were fine-tuned on. Furthermore, we explicitly tell the models that the red block has been removed. Nevertheless, we do not find evidence of this transfer, suggesting that task-specific fine-tuning does not lead to models learning intuitive theories. Instead, they appear to be learning task-specific superficial shortcuts that do not generalize [4-5].
\
\
*I find that the paper is poorly written and lacks organization. I recommend that the authors reorganize the experimental sections and clarify the conclusions. Figures 3 and 4 are not clearly conveyed, making them difficult to understand.*
\
\
We are sorry to hear this. Conveying our ideas and conclusions clearly is very important to us, so in order to improve this, we added a conclusion for every section of the results. We have also updated the captions of Figures 2, 3, and 4 to make them easier to understand. We hope that these changes improve the readability of our paper.
\
\
*Essential References Not Discussed.*
\
\
Thank you for bringing these references to our attention. We have extended the related works section based on your comments and those of reviewers fjkm and 3bXm.
\
\
\
[1] Baillargeon, R., & Hanko-Summers, S. (1990). Is the top object adequately supported by the bottom object? Young infants' understanding of support relations.
[2] Spelke, E. S., et al. (1992). Origins of knowledge.
[3] Baillargeon, et al. (1992). The development of young infants' intuitions about support.
[4] Ilyas, A., et al. (2019). Adversarial examples are not bugs, they are features.
[5] Geirhos, R., et al. (2020). Shortcut learning in deep neural networks.
---
Rebuttal Comment 1.1:
Comment: I'd like to maintain my current score because of the following reason:
- The paper lacks deeper insights, as many of the conclusions are fairly intuitive. It is already well-known in the language modeling community that finetuning often does not lead to substantial improvements in generalization. Moreover, the experimental setup is overly simplistic. Some findings—such as the one in Line 377 stating that finetuning on human judgments increases alignment with human preferences—are fundamental and expected outcomes in machine learning. Essentially, the experiments amount to finetuning the model on different data distributions and evaluating how well those distributions overlap.
- The title appears to be overstated. "Reasoning" encompasses a wide range of tasks, including mathematical reasoning, abstract reasoning, and spatial reasoning, among others. The experiments presented in the paper do not sufficiently support such a broad claim.
- The study is limited to one finetuning approach—PEFT, specifically QLoRA. If the goal is to explore the limitations of finetuning, the paper should include a broader range of finetuning techniques to substantiate its claims.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer uVR9, we appreciate your engagement with our paper and understand that you retain a few concerns despite our previous efforts to improve the paper. We want to highlight that based on your initial comments we fine-tuned new models from other families, such as Llama-3.2 Vision 11B and 90B, and added a completely new fine-tuning dataset to better understand when generalization fails – changes we think make our paper stronger. In the following, we address your new set of concerns. We hope that we can convince you that our paper does present novel results that are of interest to the community.
\
\
*The paper lacks deeper insights, as many of the conclusions are fairly intuitive. It is already well-known in the language modeling community that finetuning often does not lead to substantial improvements in generalization. Moreover, the experimental setup is overly simplistic. Some findings—such as the one in Line 377 stating that finetuning on human judgments increases alignment with human preferences—are fundamental and expected outcomes in machine learning. Essentially, the experiments amount to finetuning the model on different data distributions and evaluating how well those distributions overlap.*
\
\
Thank you for this comment. While we are aware of concurrent works showcasing fine-tuned models’ difficulty generalizing [1], we want to stress that we are interested in fine-tuning for improving visual cognitive abilities, a topic that has not been investigated before.
\
\
We agree that the experimental setup is simplistic. This is on purpose and allows us to test in which cases fine-tuned VLMs generalize and in which cases they don’t. However, we do not think our results are obvious: we did not expect VLMs to generalize at all – however, we find that they perform well on smaller or larger towers from the task they are fine-tuned on. Thus, our results do not show that generalization does not occur at all, but rather that it is limited to the specific task at hand.
\
\
We were also surprised that this task specificity extends to cases where the visual stimuli are almost identical. In this respect, we do not agree that our experiments “amount to fine-tuning the model on different data distributions and evaluating how well those distributions overlap.” The new experiment we added in response to your first set of comments shows that fine-tuned VLMs do not generalize even between very similar data distributions. Here, we fine-tune VLMs on block pyramids with a single red block and ask them whether any other block would fall if the red block was removed. After fine-tuning they can do this very well. But given the same images they saw during fine-tuning, only without the red block, they fail to reason about the factual stability of the tower (see postimg.cc/3k5Czc3k) – even if we explicitly tell them that the red block was removed to help them make the connection between counterfactual and factual stability. This is very interesting and unexpected, because reasoning about counterfactual stability should require reasoning about factual stability. We think this result is of importance to the community, as it suggests that VLMs do not learn robust visual cognitive abilities during fine-tuning, but rather rely on task-specific shortcuts that do not generalize between tasks.
\
\
*The title appears to be overstated. "Reasoning" encompasses a wide range of tasks, including mathematical reasoning, abstract reasoning, and spatial reasoning, among others. The experiments presented in the paper do not sufficiently support such a broad claim.*
\
\
Thank you for highlighting this. We agree that reasoning is a broad term that encompasses a number of different types of reasoning. Here, we are interested in visual cognitive reasoning, and we investigate tasks that are representative of this type of reasoning. We understand that the title could potentially be misleading. We are happy to change it to the more specific “Testing the limits of supervised fine-tuning to improve visual cognition in vision language models”.
\
\
*The study is limited to one finetuning approach—PEFT, specifically QLoRA. If the goal is to explore the limitations of finetuning, the paper should include a broader range of finetuning techniques to substantiate its claims.*
\
\
We are first and foremost interested in fine-tuning for improving visual cognition. We chose QLoRA as it is a widespread and efficient method, and we think it allows for some generalizable insights into the limits of supervised fine-tuning for improving visual cognition in vision language models. To be more specific about our objective, we propose to change the title as outlined above. We have also added clarification on the exact type of fine-tuning used to the introduction and highlighted this limitation in the discussion.
\
\
[1] Chu, Tianzhe, et al. "SFT memorizes, RL generalizes: A comparative study of foundation model post-training." | Summary: This is an interesting piece of work that seems to be among the first to investigate the following: fine-tuning is a widespread approach to improving LLM performance in the text domain, but for domains such as intuitive physics and causal reasoning, which are not text-related and not really purely visual capabilities either, how far can fine-tuning go to develop or improve such fundamental capabilities in VLMs?
One would not really expect this to work extremely well (unlike text), and I doubt the authors expected it either, but the paper does shed some light on the extent to which it works. In short, it works somewhat well within the domain (not that surprisingly), but does not generalize to the other domain.
## Update after rebuttal:
I find that the work was/is generally interesting, provides a useful contribution, and is perhaps worth publishing (i.e. acceptance).
My main issues were with the lack of robustness in the evaluation, and the over-generality of the claims, esp. the title. At the very least, the work is not about "reasoning" in general.
Critically contingent on the new experimental results (more models, repeats, etc.) claimed by the authors, I upgrade my review score from 2 to 3.
However, I would strongly, strongly recommend that the title be revised and scoped down. (Whether the AC/SAC can enforce that, I'm not sure...)
Claims And Evidence: 1) Whatever claims the paper makes, they are severely limited by a number of factors, as already pointed out by the authors themselves in the Discussion.
2) More importantly though, intuitive physics and causal reasoning are very broad capabilities, and the blocksworld-like datasets used here are a very small sliver of these domains/capabilities. I'm not sure any claim about intuitive physics whatsoever is supported. Rather, any claims should minimally be scoped down to the very specific thing being investigated, e.g. stability, etc. In addition, these need to be caveated with the fact that only solid/rigid cubes of uniform density are used, etc.
Methods And Evaluation Criteria: Methods and evaluation criteria were generally sound.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Experimental designs and analyses were generally sound, but subject to a number of limitations, some of which the authors themselves point out (e.g. size of models finetuned, alternative finetuning procedures, etc. etc.)
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This paper is generally somewhat useful in terms of contributing some knowledge about what works (or not), and to what extent, for improving intuitive physics and causal reasoning through fine-tuning of VLMs. However, as the authors themselves point out in the Discussion section, there are a number of limitations, which in my view quite significantly limit the usefulness of this paper.
Essential References Not Discussed: Nil.
Other Strengths And Weaknesses: Nil.
Other Comments Or Suggestions: Nil.
Questions For Authors: Nil.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer mM6z, thank you very much for your feedback. We appreciate that you think our work is interesting and that the “designs and analyses were generally sound”. Furthermore, we are glad to hear that you agree that our work “does shed some light on the extent to which [vision finetuning] works”. Below, we discuss the specific concerns you raised and the revisions we have made, which we think have strengthened our paper.
\
\
*One would not really expect [fine-tuning to improve fundamental capabilities in VLMs] to work extremely well (unlike text), and I doubt the authors expected it either.*
\
\
Thank you for this comment. As you suggest, finetuning has proven to be an effective method for producing generalisable behaviour in text-only models. However, as also highlighted by reviewer fjkm, we are not aware of prior work that establishes the link between fine-tuning and improved visual cognition in VLMs. Therefore, our work here seeks to investigate whether finetuning can confer an advantage to the visual domain too.
\
\
While previous work has established that pre-trained VLMs do not perform well in visual cognition tasks, we did not know whether fine-tuning could sufficiently improve their performance on these tasks. We were surprised to find that fine-tuning can make models perform well on the specific task they are fine-tuned on. Furthermore, we found that the models can even generalize robustly to unseen towers of sizes that they had not seen during training. Both of these results show that fine-tuning works surprisingly well in improving VLM performance on specific tasks and domains.
\
\
However, the ways in which models can generalize from their fine-tuning data is severely limited. We show that they have trouble generalizing to visually distinct stimuli as well as to visually similar stimuli in another domain. The take-away from this investigation therefore should not be that fine-tuning does not improve VLM capabilities, but rather that it leads to very task-specific improvements that do not generalize well.
\
\
*Whatever claims the paper makes, they are severely limited by a number of factors, as already pointed out by the authors themselves in the Discussion.*
\
\
The main limitations we outline in the discussion were that we: (1) investigate only smaller models of a single model family, and (2) use a single parameterisation for fine-tuning and only one dataset distribution per domain. We have now addressed these limitations by: (1) adding bigger models of another class, the 11B and 90B parameter versions of Llama-3.2 Vision; and (2) conducting three repetitions of every model on every dataset, each using a different adapter weight initialisation and a different sample of fine-tuning data. Our new results follow the same pattern as our previous results. We think that these extensions have significantly improved the generality and robustness of our claims.
\
\
*I'm not sure any claim about intuitive physics whatsoever is supported. Rather, any claims should be minimally scoped down to the very specific thing being investigated.*
\
\
We agree that intuitive physics refers to a broad set of capabilities, of which we only investigate a subset. The tower stacking task is a canonical task that has long been used as a testbed for intuitive physical capabilities in machine learning systems [1-4], drawing on an even longer history in cognitive science using the same task [5-7]. However, we also agree that it is important to be specific about the scope of the investigated capabilities. We have therefore outlined the scope of our experiments more specifically in the discussion section, namely, that we focus on model intuitions for stability judgements involving solid, uniformly dense blocks.
\
\
[1] Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people.
[2] Battaglia, P., Pascanu, R., Lai, M., & Jimenez Rezende, D. (2016). Interaction networks for learning about objects, relations and physics.
[3] Lerer, A., Gross, S., & Fergus, R. (2016). Learning physical intuition of block towers by example.
[4] Piloto, L. S., Weinstein, A., Battaglia, P., & Botvinick, M. (2022). Intuitive physics learning in a deep-learning model inspired by developmental psychology.
[5] Baillargeon, R., & Hanko-Summers, S. (1990). Is the top object adequately supported by the bottom object? Young infants' understanding of support relations.
[6] Spelke, E. S., Breinlinger, K., Macomber, J., & Jacobson, K. (1992). Origins of knowledge.
[7] Baillargeon, R., Needham, A., & DeVos, J. (1992). The development of young infants' intuitions about support. | Summary: This paper explores the limitations of Vision-Language Models (VLMs) in causal understanding of the physical world — a problem that is quite interesting to the community. The authors examine the potential of fine-tuning (FT) as a method to improve performance on intuitive physics and causal reasoning tasks. They fine-tune a VLM on tasks from the cognitive domain, specifically intuitive physics and causal reasoning in CubeWorld and a real-world environment. However, the results indicate that fine-tuning alone does not achieve the desired abilities. The model is evaluated on tasks such as IID stability of towers, OOD generalization with different numbers of blocks, OOD domain transfer from CubeWorld to real-world scenes, and counterfactual tasks.
Claims And Evidence: Yes, but a more rigorous study could be done as pointed out in the "Cons" below.
Methods And Evaluation Criteria: Yes, the benchmark, method, and evaluation criteria make sense. I am a bit unclear about why the alignment with human behavior that the paper reports is of importance.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Yes. I don't have any issues with the design/analyses currently in the paper -- except that there could be a more complete study as discussed in the Cons below.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: To me, it seems that evaluation studies about the limitations of VLMs are only gradually emerging. Previous studies have not explored the intersection of visual causal understanding, fine-tuning, and VLMs.
Essential References Not Discussed: Essential references appear to be cited.
Other Strengths And Weaknesses: ## Pros
Very interesting findings:
1. Fine-tuning on CubeWorld does not lead to good performance on real-world data, suggesting the model / fine-tuning process lacks the ability to work with abstractions.
2. The ability to solve one task does not lend it the ability to solve a different yet related task.
## Cons
1. The 2B and 7B might be too small. I don’t mean to say that a larger model would necessarily resolve all problems, just that 2B and 7B may be too small to claim it as the “limits of fine-tuning”.
2. Also, having only Qwen in the evaluation is a limitation — having at least one other class of VLM in the evaluation would provide a better perspective. Else, the results might be too specific to Qwen and don’t provide a general message to the community.
3. The related work seems too small and could be expanded to include studies that discuss (1) causal understanding/generalization on visual scenes in the pre-VLM era, and (2) Any other relevant VLM works and how they leave a gap with regards to evaluating causal understanding ability. (3) Studies on VLMs focusing on other kinds of reasoning abilities (not necessarily causal understanding/reasoning or perhaps using text only).
Other Comments Or Suggestions: See Cons listed above.
Questions For Authors: > Alignment with human behavior
>
What is the rationale behind seeking alignment with human behavior? (I don’t mean to argue against it, just seeking more clarification.)
> L407 (right): Models’ inability to generalize to another cognitive domain is not due to them being limited in parameters or potential ability — models fine-tuned on a mixture of intuitive physics and causal reasoning data performed well in both domains.
>
Yes, but fine-tuning and how weights encode sub-routines for solving the task can be affected by the number of parameters. I wonder if the authors have thoughts about this, and whether it would be informative to show whether scaling model size has any effect on generalization / causal understanding performance. The authors later point out in the discussion that this is future work.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer fjkm, thank you very much for your comments. We appreciate that you think the findings are “very interesting” and that you agree that our “method and evaluation criteria make sense”. In the following we discuss the specific concerns that you raised and how we have sought to remedy them. Your comments have led to our paper becoming considerably stronger, particularly with the inclusion of larger models and the clarification of key methodological points.
\
\
*The 2B and 7B might be too small and having at least one other class of VLM in the evaluation would provide a better perspective. Fine-tuning and how weights encode sub-routines for solving the task can be affected by the number of parameters.*
\
\
Thank you for raising this point. We agree that testing only the 2B and 7B Qwen models restricts the generality of the claims we can make based on our experiments. To remedy this, we have added two bigger models from another model family: Llama-3.2 Vision 11B and 90B. We find the same pattern across both model classes and all model sizes - models are capable of generalising to new tower sizes, particularly when trained on shorter towers, but they fail to generalise to naturalistic data or to a new cognitive task (see here for updated versions of Figures 3 and 13 with 7B as reference: [Figure 3](http://postimg.cc/HVcmqHCm), [Figure 13](http://postimg.cc/HjzJNvx9)).
\
\
*The related work seems too small and could be expanded to include studies that discuss (1) causal understanding/generalization on visual scenes in the pre-VLM era, and (2) Any other relevant VLM works and how they leave a gap with regards to evaluating causal understanding ability. (3) Studies on VLMs focusing on other kinds of reasoning abilities (not necessarily causal understanding/reasoning or perhaps using text only).*
\
\
Thank you for this comment, which also echoes the comments of reviewers uVR9 and 3bXm. Regarding point (1), we have referenced the CLEVRER dataset and a prominent model that is trained on it [1-3], which are key works studying causal reasoning and generalization in computer vision systems. Regarding points (2) and (3), we have referenced several papers suggested by the other reviewers. In particular, we have included references to other papers discussing causal reasoning and generalisation on other cognitive tasks in LLMs and VLMs [4-7]. We have also included references to works proposing explanations for why VLMs may struggle to generalise to tasks such as intuitive physics and causal reasoning [8-9], as well as work employing more varied fine-tuning datasets to improve performance [10].
\
\
*What is the rationale behind seeking alignment with human behavior?*
\
\
Thank you for this question. We realize that we were not explicit enough about the hypotheses behind seeking alignment with human behavior. We have added further explanation to the related works section, including [10-11], which were also pointed out by reviewer 3bXm. Binz et al. show that fine-tuning on human choices can lead to models that predict human behavior in previously unseen tasks. On our tasks, human choices and the ground truth are not perfectly aligned, and we sought to explore whether fine-tuning could (a) align models with human choices, and (b) whether training on human choices would lead to improved generalisation performance. We were interested in exploring (b) because fine-tuning on human choices could lead to models learning human intuitions, which might be more robust and generalizable. We confirmed (a) but found only limited evidence for (b): fine-tuning on human behaviour only confers a slight advantage at transferring to the naturalistic Lerer et al. (2016) tower blocks, but no detectable advantage for transferring to a different cognitive task.
\
\
\
[1] Johnson, J., et al. (2017). Clevr: A diagnostic dataset for compositional language and elementary visual reasoning.
[2] Yi, Kexin, et al. (2020). Clevrer: Collision events for video representation and reasoning.
[3] Chen, Zhenfang, et al. (2021). Grounding physical concepts of objects and events through dynamic visual reasoning.
[4] Dziri, et al. (2023). Faith and fate: Limits of transformers on compositionality.
[5] Zhou, H., et al. (2023). What algorithms can transformers learn? A study in length generalization.
[6] Zhou, R., et al. (2024). Math for AI: On the Generalization of Learning Mathematical Problem Solving.
[7] Li, C., et al. (2024). In-context compositional generalization for large vision-language models.
[8] Campbell, D., et al. (2024). Understanding the limits of vision language models through the lens of the binding problem.
[9] Frankland, S. M. et al. (2025). No Coincidence, George: Processing Limits in Cognitive Function Reflect the Curse of Generalization.
[10] Binz, M. et al. (2024). Centaur: a foundation model of human cognition.
[11] Binz, M. & Schulz, E. (2023). Turning large language models into cognitive models.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have revised my score. | null | null | null | null | null | null |
Rethinking the Bias of Foundation Model under Long-tailed Distribution | Accept (poster) | Summary: This paper addresses the challenge of learning on long-tail data (and the bias of foundation models). The authors define the imbalance problem as parameter imbalance and data imbalance. They propose a backdoor adjustment method to address the imbalance problem. Experiments conducted on different long-tailed datasets demonstrate the effectiveness of the proposed method.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: The authors define the imbalance problem as parameter imbalance and data imbalance, and propose a method to address both.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - The concepts of parameter imbalance and data imbalance are quite important to the community; they should be well addressed.
- This paper provides sufficient details and experiments, and I can imagine the authors have invested significant time in it. However, some definitions remain unclear or not well-formulated, and further revisions may be needed.
Other Comments Or Suggestions: N/A
Questions For Authors: - How do you formulate the incomplete semantic factors? In your experiments, is the maximum number limited to three? Are there any limitations?
- Why you select CLIP, OpenCLIP, and MetaCLIP to approximate the incomplete semantic factor? What model can be used and what model cannot?
- The final equation sounds much like a simple mixture of large FMs, akin to MoE. What is the benefit of the proposed method compared to typical MoE methods? How does it compare with MoE-style results from CLIP, OpenCLIP, and MetaCLIP? For example, you could take the inference results from the finetuned CLIP, OpenCLIP, and MetaCLIP, build a voting ensemble from their results (without further training), and compare it to your model.
- You define the imbalance factor; how and where can it be used? Will this score change significantly across different datasets, even if they have the same categories?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback. Below, we summarize your points in quotes, followed by our corresponding replies.
> How to formulate incomplete semantic factors and the question of their maximum number.
The incomplete semantic factor ($C$) represents the semantic region in the image that the foundation model prefers (the foundation model relies on it to make the final prediction). For example, in the first row of Fig. 4, OpenCLIP predominantly attends to the head ($C=0$), whereas MetaCLIP primarily focuses on the body ($C=1$), which is further supported by the experimental evidence in Fig. 8. In fact, different values of $C$ correspond to specific semantic regions, and the possible values of $C$ are infinite and are not limited to 0 (head) or 1 (body). In this way, the granularity of $C$ can be further refined, depending on the number of models. However, **we limit the maximum number to three as a trade-off between performance and cost.** More details are in Sec. E.5.
> The selection of foundation model and what model can be used and what model cannot?
As shown in Fig.5 in the paper, the path $D \rightarrow C$ represents the incomplete semantic factor that arises due to parameter imbalance, stemming from the imbalance in the pre-training data of the foundation model. Consequently, we have chosen CLIP, OpenCLIP, and MetaCLIP because they are pre-trained on distinct datasets, resulting in varying degrees of parameter imbalance and differing incomplete semantic factors.
In causal theory, Eq. 7 should cover the value space of $C$ as fully as possible for accurate causal effect estimation. Since $C$ has infinite values, we approximate it finitely, with broader coverage improving estimation accuracy. In this way, model selection should take into account the path $D \rightarrow C$. If two foundation models are trained on similar pre-training datasets, only one should be selected, as choosing both would not significantly increase the coverage of the value space of $C$.
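For context, Eq. 7 presumably instantiates the standard backdoor adjustment identity from causal inference, with the sum over $c$ approximated by the finite set of foundation models:

```latex
P(Y \mid do(X)) = \sum_{c} P(Y \mid X, C = c)\, P(C = c)
```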
>Compare with MoE voting
In Eq. 7, $P(C)$ represents the prior distribution of the confounder. In large-scale datasets, this can be assumed uniform, making the final implementation resemble a voting process (like MoE) among fine-tuned CLIP models. However, unlike MoE, backdoor adjustment provides theoretical guidance for expert selection. For example, the parameter imbalance results in different experts exhibiting distinct tendencies in their prediction distributions: OpenCLIP exhibits a "head of the object" preference ($C=0$), while MetaCLIP exhibits a distinct "body of the object" preference ($C=1$), as shown in Fig.8. Guided by backdoor adjustment, if an additional foundation model that prioritizes the body is introduced, it should be combined with OpenCLIP rather than MetaCLIP. This is because pairing it with OpenCLIP broadens the range of confounders accounted for, whereas MetaCLIP does not provide such an expansion. To verify this, we introduce CLIP-CP, pre-trained on the CommonPool datasets, and combine it with OpenCLIP and MetaCLIP, respectively. The results are shown as follows:
||OpenCLIP|MetaCLIP|
|--|--|--|
|w/o CLIP-CP|51.6|51.6|
|+CLIP-CP|51.9|51.6|
This experiment shows that combining CLIP-CP with OpenCLIP is more effective. Additionally, following the experiments in Sec. E.6, we calculate the average confidence scores for the head images ($confidence$=0.2732) and body images ($confidence$=0.7122) based on CLIP-CP. These results confirm that CLIP-CP exhibits a "body of the object" preference, consistent with MetaCLIP, and demonstrate the effectiveness of backdoor adjustment in model selection.
>Questions about imbalanced factors
Imbalanced factors (IF) measure dataset imbalance, particularly in downstream tasks. For example, the imbalanced factors for ImageNet-LT, Places365-LT, and iNaturalist2018 are 256, 996, and 500, respectively. Datasets with higher IFs tend to have larger performance disparities between head and tail classes, so addressing these imbalances is crucial for better performance.
For pre-training parameter imbalance, due to the inaccessibility of pre-training data, we can only measure this imbalance given a specific downstream dataset. For example, we estimate the label prior using Eq. 3 from the paper on the Places365-LT dataset for different foundation models and then calculate the IF. As shown below, different foundation models exhibit varying degrees of imbalance when evaluated on the same downstream dataset.
||CLIP|OpenCLIP |MetaCLIP|
|--|--|--|--|
|IF|57.50|63.25|60.20|
This metric can vary when samples are drawn from different distributions (domains), even if the category set is the same. For example, in a food classification task, dumplings and noodles may be more common in images from China, while hamburgers are more frequent in images from America. These cultural differences can lead to varying levels of imbalance.
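As a concrete sketch (assuming the conventional definition of IF as the ratio of the largest to the smallest class size; the toy count vector below is illustrative, with extremes 1280 and 5 chosen to reproduce the ImageNet-LT value of 256 quoted above):

```python
def imbalance_factor(class_counts):
    """Imbalance factor (IF): largest class size divided by smallest class size."""
    return max(class_counts) / min(class_counts)

# Toy long-tailed count vector; the extremes (1280 and 5) match ImageNet-LT,
# giving the IF of 256 reported above.
counts = [1280, 640, 320, 160, 80, 40, 20, 10, 5]
print(imbalance_factor(counts))  # 256.0
```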
We acknowledge that we will incorporate all the discussions in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification! Most of my concerns are well addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your recognition. We are glad that our explanation has resolved your concerns and improved the quality of our work.
Claims And Evidence: The claims made in the paper appear to be well-supported by the analysis and experimental evidence provided in the contexts. The authors have conducted a thorough investigation of the biases in foundation models and proposed a novel causal learning-based solution that demonstrates clear performance gains on the evaluated benchmarks.
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria employed in the paper are well-suited for the problem of long-tailed learning in the context of fine-tuning foundation models, as they address the key challenges and biases identified in the introduction.
Theoretical Claims: I did not see the theoretical claims in this paper, definitions and simple derivations are not theoretical claims.
Experimental Designs Or Analyses: I check all, and overall the experimental designs and analyses presented in the paper appear to be sound and well-justified, with a thorough investigation of the proposed method's performance and its comparison to related work.
Supplementary Material: yes, all
Relation To Broader Scientific Literature: The paper formally defines parameter imbalance and data imbalance, providing a structured way to analyze the impact of these biases, which extends the approach introduced in OLTR (Liu et al., 2019)
The paper proposes a novel backdoor adjustment method to mitigate the negative effects of parameter imbalance and data imbalance, which is distinct from previous causal-based approaches, such as those in (Tang et al., 2020) and (Zhu et al., 2022)
The paper's exploration of the causal relationships between incomplete semantic factors, input samples, and labels contributes to the growing body of work on applying causal reasoning to address challenges in long-tailed learning
Overall, the paper builds upon and extends the existing literature on long-tailed learning, foundation model biases, and the application of causal reasoning to address imbalance-related challenges in machine learning.
Essential References Not Discussed: there are some related papers, but are not published, so it is fine.
Other Strengths And Weaknesses: I don’t see any major drawbacks, but I have a few concerns:
1. The improvement seems minor—only 1.5%. Could this improvement be purely due to randomness?
2. Figures should be self-explanatory, allowing readers to understand them without referring to the main text. Please make all figure legends clearer.
3. While the intuition and motivation of the paper are strong, the methodology remains unclear. The explanation of the proposed method in Sec. 4.2 is not easy to follow; I suggest making it more straightforward, for example by clearly outlining the entire fine-tuning pipeline or algorithm.
Other Comments Or Suggestions: None.
Questions For Authors: It seems that the tasks are all on image datasets, what do authors think the parameter imbalance in LLM?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful feedback. In the following, your questions are summarized in quotes, followed by our point-by-point responses.
> Could the improvement be purely due to randomness?
To ensure that the observed performance improvement is attributable to the advantages of our method and not random variation, we conduct experiments with five different random seeds and report the mean and standard deviation for both LIFT and our method on Places365-LT and ImageNet-LT. As shown below, our method outperforms LIFT with a smaller standard deviation (reported in brackets), demonstrating that the performance gains are not due to randomness.
| |Places365-LT|ImageNet-LT|
|--|--|--|
|LIFT|51.4 (0.09)|77.0 (0.04)|
|Ours|53.03 (0.08)|79.59 (0.03)|
In addition, we also conduct a significance test (T-test) between the results of our method and LIFT, obtaining $p_{value1}=4.57\times10^{-9}$ and $p_{value2}=3.62\times10^{-10}$ for Places365-LT and ImageNet-LT, respectively. This indicates that the improvement of our method over LIFT is statistically significant, well below the 5% significance level.
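For illustration, the Welch t-statistic behind such a test can be recomputed from the summary statistics reported above (mean, standard deviation, n = 5 seeds); this is a minimal sketch using only the standard library, not the exact test script:

```python
import math

def welch_t(mean1, std1, n1, mean2, std2, n2):
    """Welch's two-sample t-statistic computed from summary statistics."""
    se = math.sqrt(std1 ** 2 / n1 + std2 ** 2 / n2)
    return (mean1 - mean2) / se

# Places365-LT: Ours 53.03 (0.08) vs. LIFT 51.4 (0.09), 5 seeds each.
t = welch_t(53.03, 0.08, 5, 51.4, 0.09, 5)
print(round(t, 1))  # a t-statistic around 30, far beyond the ~2.3 critical value
```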
> Please make all figure legends clearer.
After conducting a comprehensive review, we have carefully refined the captions and legends for each figure. Given that figures 3, 5, and 6 previously lacked sufficient detail, we present the revised results below.
Figure 3: The performance of different groups with (a) CE and (b) LA on the Places365-LT dataset. The three rows, from top to bottom, represent the performance of P-Many, P-Medium, and P-Few, respectively. The three columns, from left to right, represent the performance of D-Many, D-Medium, and D-Few, respectively.
Figure 5: The framework of our proposed method. (a) In the confounded setting, $C$ influences both $X$ and $Y$, leading to the backdoor path $X \leftarrow C \rightarrow Y$, thereby introducing confounding bias. (b) After intervening on $X$, its parent nodes $C$ and $U$ are severed, eliminating the unstable backdoor path and leading to a more reliable estimation of the causal effect between $X$ and $Y$.
Figure 6: The performance of different groups with our method on Places365-LT. After applying backdoor adjustment, performance improves across different groups, particularly for samples in both P-Few and D-Few.
Due to word limits, additional revisions of figure captions are not listed here. We promise that we will carefully rewrite all figure legends and captions in the original paper.
> Clearly outlining the entire fine-tuning pipeline or algorithm.
During the fine-tuning phase, our method can be decomposed into two stages. In the first stage, we apply logit adjustment to fine-tune various foundation models (CLIP, OpenCLIP, and MetaCLIP), addressing the data imbalance present in the downstream dataset. In the second stage, we input each test sample into the fine-tuned models, obtaining output logits without additional fine-tuning. As described in Eq. 7, we then ensemble these logits using the importance weight $P(c)$ to correct for parameter imbalance, ultimately producing the final prediction score.
We promise to give an entire algorithm pipeline in the original paper.
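For illustration, a minimal numerical sketch of the two-stage pipeline described above (with made-up logits, a toy label prior, `tau`, and a uniform $P(c)$ as assumptions; not the paper's exact implementation):

```python
import numpy as np

def logit_adjust(logits, prior, tau=1.0):
    """Stage 1 (inference-side view of logit adjustment): subtract the
    scaled log label prior to counteract downstream data imbalance."""
    return logits - tau * np.log(prior)

def backdoor_ensemble(per_model_logits, p_c):
    """Stage 2: ensemble the fine-tuned models' predictions, weighting each
    model's class probabilities by the importance weight P(c) (Eq. 7 style)."""
    probs = [np.exp(l) / np.exp(l).sum() for l in per_model_logits]
    return sum(w * p for w, p in zip(p_c, probs))

prior = np.array([0.7, 0.2, 0.1])  # long-tailed label prior of the downstream data
raw = {m: np.random.randn(3) for m in ["CLIP", "OpenCLIP", "MetaCLIP"]}
adjusted = [logit_adjust(l, prior) for l in raw.values()]
scores = backdoor_ensemble(adjusted, p_c=[1 / 3, 1 / 3, 1 / 3])  # uniform P(c)
print(int(scores.argmax()))
```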
>What do authors think the parameter imbalance in LLM?
Thank you for your insightful comments! We also believe that current LLMs suffer from parameter imbalance. We analyze this issue from two perspectives, the influence of parameter imbalance without fine-tuning and with fine-tuning under downstream data:
****(1) Without Fine-Tuning:****
Since LLMs are trained on diverse corpora, some domains (e.g., news articles, Wikipedia, and general web content) dominate the pre-training process. As a result, the model parameters may encode more precise representations of frequent patterns while underrepresenting rare or specialized knowledge. This imbalance becomes apparent when applying LLMs to highly domain-specific tasks, such as specialized scientific research or low-resource languages, where the model struggles due to insufficient parameter representation.
****(2) Fine-tuning:****
When we use LLMs for customized tasks, such as role-playing or simulating a specific individual's tone, parameter imbalance can also have an impact. Since pre-training data is often biased toward general linguistic patterns, the model may struggle to capture highly personalized or domain-specific styles. For example, if an LLM is fine-tuned to simulate the conversational style of a historical figure, but the pre-training corpus contains limited examples of their actual patterns, the model may default to generic language structures instead of faithfully mimicking the target style, making it difficult to fully adapt to the new task.
We promise that we will add this discussion to the original paper.
This method tries to mitigate the confounding effect of what the authors refer to as "incomplete semantic factors" (an artifact of parameter imbalance) by ensembling semantic signals from a diverse set of foundation models. Large-scale experiments on ImageNet-LT, Places365-LT, and iNaturalist2018—and ablation studies—demonstrate that the method improves accuracy, particularly for tail classes, and obtains a more balanced performance overall.
## Update after rebuttal
I stay with my original rating, as I agree with the points made in the authors' rebuttal.
Claims And Evidence: The submission's main points are:
- The observation that downstream data imbalance matters less than parameter imbalance (which is from pre-training) is backed by looking at classes (binned as D-Many, D-Medium, D-Few etc. for parameter groups) and comparing performance under different training strategies.
- The experiments reveal that methods such as Logit Adjustment can neutralize data imbalance but are not effective in the case of parameter imbalance.
- A novel method of backdoor adjustment is presented that helps in mixing predictions from several simple models, and this is found to work better in the majority of tests.
Methods And Evaluation Criteria: The method of conducting this research is novel and well-suited to the problem. Applying causal inference—a backdoor adjustment, in particular—to address the two causes of imbalance is a sensible approach to enhancing existing re-balancing techniques. Evaluation on established long-tailed benchmarks (ImageNet-LT, Places365-LT, iNaturalist2018) using conventional metrics (overall accuracy and class-wise breakdowns) is appropriate and provides a solid basis for assessing the efficacy of the proposed approach.
Theoretical Claims: There are no proofs; (this is fine). The theoretical claims are straightforward intuitions about causal ML that I agree with.
Experimental Designs Or Analyses: The experimental design is sufficiently comprehensive.
Three standard long-tailed datasets are used. Measuring performance across different class groupings (D-Many, D-Medium, D-Few) shows how well bias is reduced. The method is compared with appropriate baseline methods. Analyses of the number of incomplete semantic factors (M) helps elucidate the method's effectiveness.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The paper builds upon prior work in long-tailed learning and bias reduction.
It mentions current methods that tackle data imbalance, like Logit Adjustment, re-weighting, and re-sampling procedures. However, it puts emphasis on the bias caused by foundation models. The causal method is in line with current trends in the application of causal inference for machine learning tasks (as seen in the work of Tang et al. and Zhu et al.). By leveraging concepts from foundation model adaptation (including PEFT approaches) and long-tailed learning, the paper highlights its contributions in these research areas.
A broader discussion that encompasses methods from invariant risk minimization or other approaches to mitigate bias based on causation might better position this work within the current literature.
Essential References Not Discussed: While the paper cites many pertinent sources, it could be enhanced by citing invariant risk minimization approaches to mitigate bias in deep learning.
Other Strengths And Weaknesses: Strengths:
The paper presents a novel causal framework for analyzing bias in foundation models by focusing on an unexplored dimension (parameter imbalance). Experiments are conducted on a range of benchmarks with detailed class-wise comparisons and ablation studies. The use of backdoor adjustment in the training process is a sensible way of reducing bias.
Weaknesses:
I don't see any glaring weaknesses.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and acknowledgement of our work! In the following, we summarize a series of works on Invariant Risk Minimization (IRM) to reduce bias and briefly introduce the differences between our work and existing studies.
Invariant Risk Minimization (IRM) aims to enhance out-of-distribution generalization by identifying and optimizing invariant features, thereby reducing bias in deep learning models [1, 2, 3]. In linear systems, IRM has strong theoretical guarantees and a clear connection to causal theory. Building upon IRM, numerous variants have emerged [4, 5, 6, 7], aiming to address some of IRM’s challenges, such as its failure in nonlinear tasks [8], the requirement for extensive domain information [2], and optimization difficulties in deep neural networks [9].
While IRM aims to identify and optimize invariant features across domains to reduce bias, our method leverages a causal learning framework that identifies and mitigates biases introduced by spurious correlations, which are induced by the backdoor path $X \leftarrow C \rightarrow Y$. Rather than assuming the existence of invariant features, we treat the incomplete semantic factor as a confounder and apply a backdoor adjustment method to learn the true causal effect, offering a more flexible solution to both parameter and data imbalance.
[1] Arjovsky, Martin, et al. "Invariant risk minimization." arXiv preprint arXiv:1907.02893 (2019).
[2] Lin, Yong, et al. "ZIN: When and how to learn invariance without environment partition?." Advances in Neural Information Processing Systems 35 (2022): 24529-24542.
[3] Deng, Yihe, et al. "Robust learning with progressive data expansion against spurious correlation." Advances in neural information processing systems 36 (2023): 1390-1402.
[4] Ahuja, Kartik, et al. "Invariant risk minimization games." International Conference on Machine Learning. PMLR, 2020.
[5] Krueger, David, et al. "Out-of-distribution generalization via risk extrapolation (rex)." International conference on machine learning. PMLR, 2021.
[6] Robey, Alexander, George J. Pappas, and Hamed Hassani. "Model-based domain generalization." Advances in Neural Information Processing Systems 34 (2021): 20210-20229.
[7] Ahuja, Kartik, et al. "Invariance principle meets information bottleneck for out-of-distribution generalization." Advances in Neural Information Processing Systems 34 (2021): 3438-3450.
[8] Rosenfeld, Elan, Pradeep Ravikumar, and Andrej Risteski. "The risks of invariant risk minimization." International Conference on Learning Representations (2021).
[9] Chen, Yongqiang, et al. "Pareto invariant risk minimization: Towards mitigating the optimization dilemma in out-of-distribution generalization." The Eleventh International Conference on Learning Representations (2023). | null | null | null | null | null | null | null | null |
Black-Box Adversarial Attacks on LLM-Based Code Completion | Accept (poster) | Summary: The authors propose INSEC, a black-box attack that crafts a universal perturbation which, once attached to code submitted to LLMs, causes them to include unsafe functionality that can later be exploited by an attacker.
This universal perturbation is computed on a training set and is generated through a heuristic-based optimizer that ranks the modified prompts in terms of the vulnerabilities they contain and their functionality.
Since the attack optimizes a snippet of code, it is easy to provide unit tests that assess functionality. Vulnerability is also checked with a state-of-the-art tool, which provides reliable results.
Experiments show that, while the produced prompts do not always preserve functionality (the ratio of passed tests is slightly diminished), the number of injected vulnerabilities increases dramatically, even against commercial products.
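To make this concrete, a toy greedy version of such a one-token-at-a-time black-box search might look like the following (illustrative only, with a stand-in scoring function; this is not the authors' actual INSEC optimizer):

```python
import random

def greedy_optimize(init_tokens, vocab, score, iters=200, seed=0):
    """Hill-climbing over a token sequence: mutate one token at a time and
    keep the mutation only if the black-box score (e.g. a combination of
    vulnerability rate and functionality) improves."""
    rng = random.Random(seed)
    best, best_score = list(init_tokens), score(init_tokens)
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(len(cand))] = rng.choice(vocab)
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

# Stand-in score: count occurrences of a token the attacker wants to inject.
score = lambda toks: toks.count("md5")
tokens, s = greedy_optimize(["x"] * 5, vocab=["md5", "sha", "x"], score=score)
print(tokens, s)
```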
Claims And Evidence: Claims are supported by an extensive experimental evaluation considering most of the variants:
* where the adversarial prompt is injected
* how to alter the context in case of sanitization (to avoid trivial solutions which are discarded by preprocessors)
* the hyper-parameters of the optimization process itself
* code is available
Methods And Evaluation Criteria: The methodology is sound, and the evaluation criteria are appropriate.
In particular, generated code can be checked with static analyzers like CodeQL (or others), and functionality can be easily verified with the provided unit tests.
Theoretical Claims: This is an empirical evaluation of an attack, no theoretical findings are given.
Experimental Designs Or Analyses: The design of the experiments is sound, as the authors provided a full ablation study on all the possible parameters of their attack (from the parameters of the optimizers to the placement of the prompt).
Supplementary Material: The supplementary material is provided, along with the code of the plugin and the attack. No unsafe content has been included into the repository that has been shared.
Relation To Broader Scientific Literature: This paper highlights the need for better policies for plugins, since this threat resembles the well-known risks of programs downloaded from a store. Specifically, this paper can be of interest to researchers in the security community.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: I find the paper very interesting, easy to read, and contributing towards the security evaluation of LLM models in security-related domains.
The method is simple and intuitive, since it changes one token at a time through a very simple black-box algorithm.
The weaknesses that I find are: (i) the choice of CodeQL as the vulnerability oracle, which could itself be examined in an ablation study; (ii) the unsafe generated code might not be reachable in the victim application, since this evaluation is done on simple snippets of code; (iii) unit tests must be provided, which might not always be possible.
Other Comments Or Suggestions: None.
Questions For Authors: 1) How the results might change with different static analyzers?
2) How sensitive is the methodology to the presence / absence of suitable unit tests?
Ethics Expertise Needed: ['Other expertise']
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their overall positive review and discuss their raised questions below. We will gladly incorporate their feedback into the next revision of the paper.
### **Q1: Do you expect the results to change with different static analyzers? Is using CodeQL an important choice for the attacks' success?**
There are two possible ways in which CodeQL may be substituted with alternative methods: For the optimization step and for the evaluation step.
Regarding the use of CodeQL for optimization, we note that our method does not depend on CodeQL, which was chosen due to its customizability. Attackers may build their attacks using other static analysis tools. However, the tool and its capabilities are not relevant to the attack itself. In fact, we only utilize specific queries from the extensive repository of CodeQL. Attackers could similarly hand-craft a specialized tool to detect precisely the vulnerabilities that they want to inject, use it for training, and then manually assess the quality of the results. This would even increase the severity of the resulting attack, since the injected vulnerabilities would likely not be detected by potentially deployed, publicly available analyzers.
Regarding the use of CodeQL for evaluation, we note that the use of CodeQL is standard in the field [1,2]. It is highly accurate, since our evaluation setting is very controlled, in that we know how vulnerabilities can manifest in the generated code for all test cases. This allows us to carefully select a CodeQL query that works effectively for each test case as described in Section 5.1. We manually analyze the accuracy of CodeQL for our evaluation and determine an accuracy of 98% (cf. Appendix A, Line 660). Other static analyzers or oracle exploits, as explored in concurrent work [3,4], could be used instead, but would need to be assessed for their accuracy in this specific setting.
### **Q2: Can you please comment on the relationship between the reachability of code snippets and their criticality for code security?**
Thank you for raising this important question about the reachability of code snippets in user code. It is crucial to recognize that any insecure code in a code base poses a potential risk. Prior work [5] showed that insecure, Copilot-generated code has already reached public code repositories. Even when not immediately exposed to user inputs, these vulnerabilities could become critical through future refactorings, thereby posing a security risk. Since the introduced vulnerabilities are thus always undesirable and should be avoided by code completion engines, it is common in the literature to analyze generated code snippets [1,2].
### **Q3: How does the attack behave when unit tests are included in the optimization procedure?**
We would like to first highlight that our attack does not explicitly optimize for correctness, i.e., there are no unit tests considered in the optimization. Still, our attack manages to achieve a high preservation of correctness, measured by the unit tests used during evaluation of the method. If unit tests were to be included in the optimization, we would expect that the correctness would be at least maintained to the same degree as it is without considering this target.
**References**
[1] J He & M Vechev. *Large language models for code: Security hardening and adversarial testing*. CCS 23
[2] H Pearce et al. *Asleep at the Keyboard? Assessing the Security of Github Copilot’s Code Contributions*. IEEE S&P 22
[3] M Vero et al. *BaxBench: Can LLMs Generate Correct and Secure Backends?*. arXiv
[4] J Peng et al. *CWEval: Outcome-driven Evaluation on Functionality and Security of LLM Code Generation*. arXiv
[5] Y Fu et al. *Security Weaknesses of Copilot-Generated Code in GitHub Projects: An Empirical Study*. TOSEM 25
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal, but Q3 is not really answering the question I have posed, which asks what happens when unit tests **are not** present. Can the authors better explain this?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the follow-up question. We want to highlight that unit tests are not present in our presented attack. We use unit tests during evaluation to assess preservation of functional correctness during our attack.
In the first phase of the attack, we optimize an attack string to increase the vulnerability rate of the attacked LLM using random optimization. During the optimization steps (Alg. 1, L. 7), we select the best attack strings based on triggered vulnerabilities. Vulnerability of completions is measured using heuristic classifiers that check for the presence or absence of security-critical tokens. At the end of the optimization (Alg. 1, L. 9), we again select the best attack string based on triggered vulnerabilities, this time assessed more precisely using CodeQL. Neither selection takes into account passing or failing unit tests. Therefore, during optimization, unit tests are not present.
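The select-mutate-select loop described above can be illustrated with a toy random-search sketch. This is not the paper's implementation: `mutate` and `score` are hypothetical stand-ins for the string mutations and the heuristic vulnerability classifier, and the final selection stands in for the more precise CodeQL-based check.

```python
def random_search(initial_strings, mutate, score, n_steps=10, beam=5):
    """Toy sketch: keep the `beam` best candidate strings under a cheap
    heuristic score, mutate them, and repeat. The final pick stands in
    for the more precise end-of-optimization selection."""
    pool = list(initial_strings)
    for _ in range(n_steps):
        # expand the pool with one mutation per candidate
        candidates = pool + [mutate(s) for s in pool]
        # keep only the best-scoring candidates (heuristic selection)
        pool = sorted(candidates, key=score, reverse=True)[:beam]
    # final selection of the single best attack string
    return max(pool, key=score)
```

No unit tests or correctness checks appear anywhere in this loop, which mirrors the point made above: selection is driven purely by the vulnerability signal.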
In the second phase, the attack is deployed by injecting it into user queries to the attacked LLM. The string is injected indiscriminately into every prompt, i.e., we neither check the potential vulnerability nor the functionality requirements of the query. Therefore, during deployment, unit tests are not present.
Finally, we evaluate the impact of our deployed attack on completions using two separate datasets for vulnerability and functional correctness: We construct a dataset to measure vulnerability using CodeQL. We then use HumanEval to measure functional correctness using its unit tests, which is a standard evaluation practice. | Summary: The paper introduces INSEC, a novel black-box adversarial attack that manipulates LLM-based code completion engines to generate vulnerable code while maintaining functional correctness. The attack works by inserting a specially crafted comment string before the completion cursor, which is derived through a query-based optimization procedure. The authors demonstrate INSEC's effectiveness across multiple state-of-the-art models and commercial services, including GitHub Copilot and GPT-3.5-Turbo-Instruct.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I think this paper does not involve theoretical claims.
Experimental Designs Or Analyses: The experimental design is well-structured, covering attack effectiveness, stealthiness (code functionality), and numerous ablation studies.
However, considering that this paper targets a Realistic Black-box Setting, such as Copilot, in actual practice users developing software are likely facing Project Level development rather than simply function-level problems like those in HumanEval. Therefore, I believe the paper lacks experimentation in scenarios like cross-file code completion to further support the usability of the attack in practical black-box settings.
Additionally, the paper mentions that the attack must maintain its response speed, while the experiments indicate that 10-20 iterations might be needed to implement the attack. The authors do not directly present the time required - if it's at the millisecond level, users might not notice, but if it takes several seconds or even tens of seconds, I believe this would have a significant impact. Therefore, I suggest the authors supplement this with relevant intuitive data.
Supplementary Material: They provide comprehensive code, even including the developed VSCode plugin.
Relation To Broader Scientific Literature: To my knowledge, this paper takes a step further in making actual code completion engines generate insecure code. Many studies may investigate general LLMs generating insecure code or conduct robustness testing (adversarial attacks) on code completion to generate incorrect code, but generating specific insecure code has not been deeply researched yet.
Essential References Not Discussed: There are many related works on robustness and security testing of code completion engines that this paper doesn't discuss. For example:
[1] CCTEST: Testing and Repairing Code Completion Systems (ICSE23)
[2] Attribution-guided Adversarial Code Prompt Generation for Code Completion Models (ASE24)
[3] TPIA: Towards target-specific prompt injection attack against code-oriented large language models
Although TPIA has not been published yet, it adopts a very similar approach, specifically attacking code completion engines through comment insertion. Therefore, I suggest the authors discuss this work as well.
Other Strengths And Weaknesses: **Strengths:**
1. The paper addresses an important problem
2. The writing structure is clear
3. Extensive experimental scale with many ablation studies
**Weaknesses:**
1. Limited novelty: Considering that inserting comment strings to attack code inference engines (or code generation models) is a common approach, I would like the authors to clarify their key innovations, such as which string initialization and mutation types have not been considered by other methods, or provide further insight into what types of strings are more likely to lead to insecure code generation.
2. Lack of experiments in more practical scenarios: Considering most users use Copilot for repository-level coding, I suggest the authors use more complex datasets closer to real-world scenarios to demonstrate effectiveness.
3. Lack of reporting on attack time requirements: Given that the authors claim one of the core templates of the attack is to maintain response speed, I suggest authors directly show the time required for the attack.
Overall, I believe that although this paper has limited novelty, it does research an important practical security issue, which is very meaningful. I would be willing to increase my score if the authors could provide further clarification on the concerns mentioned above.
Other Comments Or Suggestions: no
Questions For Authors: See above weakness points.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful remarks. We briefly answer the raised questions below and will incorporate all feedback into our next revision.
### **Q1: How does your work differ from previous work that attacks Code LLMs through perturbed inputs?**
We thank the reviewer for their references to [1,2,3] and will extend our discussion of related work that attacks LLMs using input perturbations, such as [4,5]. Note that, to our knowledge, we are the first work to propose a realistic threat model and the first to attack Code LLMs by injecting short code comments.
The more common setting is to assess model robustness by perturbing the entire user input. For example, [1,2,4,5] rename variables and functions, among other semantic-preserving perturbations, to trigger functionally incorrect or insecure code completions. While [2,4] target insecure completions, only [4] also ensures preservation of functional correctness. For all methods, their attacks are not suitable for stealthy attacks in our settings, as they assume white-box access or allow expensive search for individual queries. Overall, prior work is designed for model developers who are interested in assessing LLM robustness. In contrast, we discover a short injection string, the attack comment, that triggers correct but insecure completions over many samples, suitable for our realistic threat model of attacking unassuming users.
Concurrent work [3] attacks LLMs to trigger insecure completions by injecting code snippets into the RAG context. Their setting differs in three important aspects: First, they leverage white-box model access to optimize their attack. Second, in their RAG setting, larger attack code snippets can be included into the model context. Third, they do not evaluate whether their attack preserves functional correctness and would thus be sufficiently stealthy to succeed under our realistic threat model.
### **Q2: How is functional correctness preserved on a repository-level benchmark?**
We thank the reviewer for their question and refer them to App. D (L. 888 ff.), where we present how INSEC affects performance on the repository-level code completion benchmark RepoBench [6]. We observe that, matching our results on HumanEval, performance on its exact match and code similarity metrics is preserved successfully, with respective rates of over 83%.
As suggested by the reviewer, we will highlight this experiment more in the next revision.
### **Q3: How does your attack ensure minimal runtime overhead?**
Note that our attack is conducted in two phases: First, an attacker performs the optimization procedure in Algorithm 1 and obtains an attack string. This is done offline, before the attack is deployed. Second, the fixed attack is injected into requests sent by the user. This step only involves simple string operations with no further optimization performed, which causes very little overhead. Since our attack string is short (e.g., 5-10 tokens), it causes minimal run time overhead for LLM inference.
To demonstrate the minimal runtime overhead of our attack during deployment, we run code completion with and without attack strings injected. We observe a negligible median increase of generation time of 0.14s (2.5%) on the functional correctness dataset and 0.33s (2.2%) on the vulnerability dataset. The increase stems only from the string insertion operation and additional tokens in the model input.
We will include this analysis in our next revision of the paper.
### **Clarification Request**
We ask the reviewer to clarify their comment in the section *Theoretical Claims*. We suspect they mean that our paper "does not involve theoretical claims" instead of the written "does not involve proofs for theoretical claims."
**References**
[1] Z Li et al. *CCTEST: Testing and Repairing Code Completion Systems*. ICSE 23
[2] X Li et al. *Attribution-guided Adversarial Code Prompt Generation for Code Completion Models*. ASE 24
[3] Y Yang et al. *TPIA: Towards target-specific prompt injection attack against code-oriented large language models*. arXiv 25
[4] F Wu et al. *DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions*. arXiv
[5] Q Ren et al. *CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion*. ACL 24
[6] T Liu et al. *RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems*. ICLR 24
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response, which I believe has addressed my main concerns. I hope the authors will incorporate the additional content in the subsequent revised version.
At the same time, regarding Theoretical Claims, sorry for the misunderstanding. The author's understanding is correct; what I meant to express was that the paper does not involve theoretical claims, and therefore does not require proof.
I decide to raise my score to 3.
---
Reply to Comment 1.1.1:
Comment: We are happy to hear to have addressed the points of the reviewer, and thank them for raising their score. We also thank the reviewer for clarifying their point regarding "Theoretical Claims".
Finally, we assure the reviewer that we will incorporate our rebuttal (to all reviewers) in the next revision of the paper. | Summary: The authors propose INSEC, a black box attack on code infilling models via adding comments right before the location of code completion that contain an adversarially optimized string. The goal of the attack is to produce functioning code that contains security vulnerabilities. The adversarial comment is initialized via five strategies. Thereafter, INSEC randomly mutates the sequence of tokens representing the adversarial comment in an iterative fashion until some stopping criterion is met. A training dataset is used for the procedure until here. Then, a final adversarial comment is chosen for a validation dataset. Hence, one can argue that INSEC searches for a universal adversarial comment. The authors demonstrate the effectiveness of their INSEC attack on 7 LLMs without comparison to baseline attacks.
Claims And Evidence: The work's empirical evidence largely supports the claims.
The main criticism regarding claims is the use of certain adjectives. The authors call the studied setting, e.g., "realistic" or "practical" without specifying what the terms actually mean. Especially if such terms are used in a way that says that the authors' work was more "realistic" (like l. 382, right column). While I understand that the authors take the perspective of a hacker who has to operate on tight knowledge and resource constraints, adversarial robustness is way broader than cybersecurity. According to the seminal work of Szegedy et al. 2014 [1], adversarial robustness is characterized by the (close to) paradoxical situation that neural networks achieve great performance despite the fact that small/meaningless perturbations can (almost) arbitrarily mislead them. While a real-world adversary might be able to exploit this "intriguing property," from the perspective of a model developer who wants to understand the limitations of their model, a fully white-box setting with excessive resource use is still "practical".
[1] Szegedy et al. "Intriguing properties of neural networks" ICLR 2014
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: The experimental design is largely sensible.
My major concerns with the experimental design are (1) the unconstrained usage of possible tokens. While many jailbreak attacks exclude non-ASCII characters (e.g., GCG in Zou et al., 2023), the authors seem not to constrain the possible strings. (2) Nor do the authors investigate a countermeasure via filtering comments using a perplexity filter. These are substantial differences from the recent jailbreak literature and should be considered.
Supplementary Material: Only superficially but I was not able to locate the code.
Relation To Broader Scientific Literature: The authors are the first to study evasion attacks via adversarial comment insertion and propose a viable universal attack that efficiently generates such adversarial comments.
Essential References Not Discussed: The authors discuss the most relevant works (known to me). However, the authors could be more explicit that random mutations are also heavily used for other adversarial attacks in the jailbreaking literature. E.g., see the recent work by Andriushchenko et al. "Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks" ICLR 2025.
Other Strengths And Weaknesses: The paper proposes a simple procedure to generate universal adversarial comments that trigger LLMs to include vulnerabilities in their code. Due to the existence of universal adversarial comments, the paper reveals a quite catastrophic vulnerability of such code completion tools. Even though the algorithmic contributions are minimal, revealing such a limitation is a meaningful contribution. Although it is somewhat expected that this vulnerability exists.
A major criticism of the submission is the way how the authors detail their INSEC. The authors should emphasize much much more that the INSEC functions as a proof of concept to demonstrate that this vulnerability exists and that a real-world adversary currently does not even need heavy resources to exploit this vulnerability. Or do the authors actually want to provide a recipe for how to attack coding "copilots"?
Other Comments Or Suggestions: 1. It is a bit unusual that the authors themselves find their method "surprisingly" effective. After all, INSEC is a simple "multi-beam" random search procedure that should work well with enough resources (under mild assumptions).
1. When providing a "cheap" procedure on how to attack a model, it is not particularly strong mitigation not to include "concrete optimized attack strings" (l. 455).
1. l. 209: the "six" should be "five", right?
Questions For Authors: 1. How is the compute cost on an open-source model (e.g., runtime)?
1. How does the compute cost scale with the dataset sizes?
1. How are the train, val, and test datasets split? Does train/val vs. test contain similar cases, like rewrites of the same question etc.?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's critical assessment and answer their questions below. We will incorporate all feedback.
### **Q1: Can you adjust your use of certain adjectives such as "realistic" and "practical"?**
Yes. We thank the reviewer for pointing this out! We will revise the paper to clarify the meaning of these adjectives in our setting, or replace them with more concrete terms, such as "black-box knowledge" and "low resource". We will also add a discussion to clarify that, while our setting is interesting for attackers, white box settings are also practical and relevant for model developers.
### **Q2: Does your attack also work when restricted to ASCII characters?**
Yes. We run our attack optimization excluding non-ASCII characters on GPT-3.5-Turbo-Instruct. We observe that attacks under such a constrained setting are still successful, achieving an increase of vulnerability rate from 17.1% to 73.1%, similar to 72.5% in the unconstrained setting. Meanwhile, functional correctness is preserved with passRatio@1 (@10) of 98.3% (99.9%). We will include a more detailed analysis in the next revision.
### **Q3: Can your attack be easily defended by a perplexity filter?**
No. Note that our attack string is indiscriminately inserted into all user queries. A perplexity filter designed to reject security-relevant, attacked queries might also reject benign queries for functional code completion, undermining the code completion engine's utility. The necessity to maintain functional correctness is a key difference between our setting and jailbreak defenses.
To demonstrate this experimentally, we examine perplexity filters as employed by [1]. First, we choose a rejection threshold that maximizes the F1 score of detecting attacked prompts in the training and validation set of our vulnerability dataset, achieving recall of over 89% on the test set. Applying this filter on the functional correctness dataset drastically decreases correctness for benign prompts, with funcRate@1 (@10) of less than 29.8% (29.4%), rendering the defense impractical for completion engine providers. Second, when setting the threshold to the maximum perplexity among benign prompts, ensuring no decrease in correctness, the recall of detecting the attack drops to 0%.
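The threshold-selection step in the experiment above can be sketched as follows. This is a simplified illustration, not the evaluation code from [1]; the perplexity values are assumed to be precomputed, and candidate thresholds are simply the observed perplexities themselves.

```python
def best_f1_threshold(benign_ppl, attacked_ppl):
    """Pick the rejection threshold (reject if perplexity > threshold)
    that maximizes F1 for flagging attacked prompts."""
    def f1(th):
        tp = sum(p > th for p in attacked_ppl)  # attacked and rejected
        fp = sum(p > th for p in benign_ppl)    # benign but rejected
        fn = len(attacked_ppl) - tp             # attacked but missed
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    # candidate thresholds: the observed perplexity values themselves
    return max(benign_ppl + attacked_ppl, key=f1)
```

The trade-off described above corresponds to where this threshold lands: a low threshold (high recall on attacked prompts) also rejects many benign prompts, while a threshold above all benign perplexities misses the short attack strings entirely.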
### **Q4: What is the cost of the attack optimization on open-source models?**
The optimization phase of our attack requires around 6 hours to find a highly effective string on commercial GPUs. Assuming a cost of between USD 1 and USD 2 per GPU/hour [2,3] results in estimated cost of USD 6 to 12 - a similar cost per attack as we reported for commercial black-box models.
### **Q5: How does the compute cost scale with the dataset sizes?**
Overall, the cost of the optimization is $O(n*|\mathbf{D}\_{\text{vul}}^{\text{train}}|+|\mathbf{D}\_{\text{vul}}^{\text{val}}|)$, for $n$ steps. We refer the reviewer to Alg. 1. and are glad to provide more detail if desired.
### **Q6: How are the train, validation, and test splits of the vulnerability datasets related?**
The splits are entirely independent. After sourcing our test samples as detailed in L. 253 ff., we split them randomly into train, validation, and test sets. This design ensures that there is no strong similarity between the splits.
For example, for CWE-078 (OS Injection), one training sample is an independent method to execute a local binary and analyze its outputs line by line [4]. The validation sample is a two-method application allowing to build a Rust project [5]. In the test set, a method to call the `ping` command is exposed via Flask [6].
### **Q7: Please discuss related work that uses random search to optimize jailbreak attacks.**
We thank the reviewer for this suggestion! We will add related jailbreaking work to our discussion, in particular the mentioned [7]. Note that there are fundamental differences between our work and jailbreaks, such as the resources during deployment, and the requirement to maintain functional correctness for benign queries.
For example, in [7], a jailbreak prompt is optimized by leveraging initialization and random search, similar to our work. However, the resulting prompt is very large, unsuitable for code completion, and not analyzed for impact on benign queries (cf. App. D, "Number of Attack Tokens").
We will adapt our paragraph about the "surprising effectiveness" of INSEC to highlight the unexpected, but relevant, brevity of attack strings and preservation of functional correctness.
**References**
[1] J Geiping et al. *Baseline Defenses for Adversarial Attacks Against Aligned Language Models*. arXiv
[2] https://lambda.ai/service/gpu-cloud#pricing
[3] https://datacrunch.io/products#A100
[4,5,6] Supplementary Material:
- `sec-gen/data_train_val/main_data/cwe-078_py/10.py`
- `sec-gen/data_train_val/main_data/cwe-078_py/13.py`
- `sec-gen/data_test/cwe-078_py/1.py`
[7] M Andriushchenko et al. *Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks*. ICLR 25
Claims And Evidence: The evaluation of applicability, effectiveness, and practicality is supported by testing against a number of popular LLMs and using popular security weaknesses.
I wonder whether the results would be different with top of the line coding LLMs such as Claude 3.7 and DeepSeek R1.
Methods And Evaluation Criteria: HumanEval is the main benchmark to evaluate functional correctness in the paper. I have doubts whether HumanEval is relevant for code-completion LLMs nowadays, both because it is likely part of training data, and because the data in HumanEval is not representative of current software projects and software tasks.
RepoBench (tested in Appendix D) alleviates some of this concern, but it is only used for functional correctness and not attack success measurement.
Theoretical Claims: -
Experimental Designs Or Analyses: -
Supplementary Material: The choice of vulnerability rate and pass@k as evaluation metrics is appropriate. Using CodeQL for vulnerability assessment is a plus, as it provides a standardized and automated way to detect vulnerabilities.
It would be beneficial to explore the multi-CWE attack further, as the results show a noticeable loss in functional correctness in this case. What is the cause of this drop?
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments and discuss their questions below. We will incorporate their feedback into the next revision of the paper.
### **Q1: Can you please clarify the evaluation setting?**
We highlight that we have two separate datasets for the evaluation of vulnerability rate and functional correctness.
For the evaluation of vulnerability rate, we construct a dataset based on code snippets sourced from Pearce et al. [1], real-world GitHub repositories, and GPT-4 generations. This results in a realistic, state-of-the-art dataset for assessing code generation vulnerability.
For the evaluation of correctness, we leverage HumanEval [2]. As the reviewer pointed out, HumanEval consists mostly of algorithmic, self-contained tasks. To include functional correctness results on more realistic, repository-level settings, we also evaluate on RepoBench [3] in Appendix D.
### **Q2: Would the results be different for chat models like Claude 3.7 and DeepSeek R1?**
We don't expect results to be largely different for recent chat models. Concurrent work [4] evaluated these models on chat-applicable settings and found that they are highly likely to produce insecure code. Meanwhile, the INSEC attack is designed for Fill-in-The-Middle completion - the format used in IDE-integrated code assistants. We evaluated INSEC against the state-of-the-art and industry standard in this domain, Copilot, and successfully broke its self-claimed safeguards to actively prevent vulnerable completions [5]. We evaluate other state-of-the-art open-source completion models and the latest completion-API compatible model by OpenAI, GPT 3.5 Turbo Instruct.
### **Q3: What is the cause of this drop in functional correctness in the Multi-CWE ablation study?**
In our Multi-CWE ablation study, we combine attacks for different CWEs by concatenating attack strings that were optimized for the individual CWEs. We observe that combining attacks for more CWEs leads to both a slight decrease in vulnerability and a slight decrease in functional correctness.
We believe the decreased functionality in this simple experiment is mainly due to the approach of concatenating attack strings. This obfuscates the intention of the user and code. We observe a similar trend in our ablation study in Appendix D (Figure 11b), where longer attack strings for single CWEs also lead to decreased functional correctness.
We expect this effect can be avoided by training a single short attack string, adapting the optimization to target several CWEs at once. We thank the reviewer for this observation and will discuss it in the next revision of the paper.
**References**
[1] Pearce et al. *Asleep at the Keyboard? Assessing the Security of Github Copilot’s Code Contributions*. S&P 22
[2] M Chen et al. *Evaluating Large Language Models Trained on Code*. arXiv
[3] T Liu et al. *RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems*. ICLR 24
[4] M Vero et al. *BaxBench: Can LLMs Generate Correct and Secure Backends?*. arXiv
[5] GitHub Blog: *Filtering out security vulnerabilities with a new AI system* at https://github.blog/ai-and-ml/github-copilot/github-copilot-now-has-a-better-ai-model-and-new-capabilities/#filtering-out-security-vulnerabilities-with-a-new-ai-system | null | null | null | null | null | null |
ITBench: Evaluating AI Agents across Diverse Real-World IT Automation Tasks | Accept (oral) | Summary: The paper introduces IT-Bench, a specialized benchmarking framework designed to evaluate AI agents on real-world IT automation tasks across three key domains: Site Reliability Engineering (SRE), Compliance and Security Operations (CISO), and Financial Operations (FinOps). Built from 94 scenarios derived from actual incidents, CIS benchmarks, and FinOps guidelines, IT-Bench operates in realistic environments—such as Kubernetes clusters—integrated with industry-standard tools like Grafana and Prometheus. The framework assesses agent performance using metrics such as pass@1 and time to resolution. Testing reveals significant challenges for even advanced models like GPT-4o, with success rates of 13.8% in SRE, 25.2% in CISO, and 0% in FinOps. These results highlight the complexity of IT automation and the pressing need for enhanced AI capabilities in this field.
Claims And Evidence: The authors assert that IT-Bench is a robust, extensible, and practical framework for evaluating AI agents in IT automation. This claim is supported by several key points:
* Real-World Scenarios: The benchmark incorporates 94 tasks rooted in genuine IT incidents and established industry standards, ensuring relevance and applicability.
* Framework Design: By integrating authentic environments and observability tools, IT-Bench mirrors the conditions AI agents would encounter in practice.
Methods And Evaluation Criteria: IT-Bench models agent-environment interactions as a Partially Observed Markov Decision Process, capturing the inherent partial observability of IT systems. Agents are evaluated using a suite of metrics, including pass@1 (success on the first attempt), fault localization accuracy, and fault propagation chain analysis, among others. These metrics provide a comprehensive measure of performance across diverse dimensions.
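For reference, the pass@1 metric mentioned here is commonly computed with the unbiased pass@k estimator of Chen et al. (2021); whether ITBench uses exactly this estimator is an assumption here, but a minimal sketch of the standard formula is:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n attempts with c successes, passes.
    pass@1 reduces to the empirical success rate c / n."""
    if n - c < k:
        return 1.0  # fewer failures than draws: at least one success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For k=1 this is simply the fraction of successful runs, which matches the first-attempt success rates (e.g., 13.8% for SRE) reported in the summary.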
Theoretical Claims: The use of POMDP to model agent-environment interactions is theoretically sound and aligns with AI research paradigms. The NTAM metric is a novel contribution, accounting for topology-aware fault localization. However, the paper overlooks potential biases in scenario selection (e.g., over-representation of Kubernetes-based systems) and lacks discussion on generalizability across diverse IT ecosystems. NTAM’s parameter tuning is heuristic-based, needing empirical justification.
Experimental Designs Or Analyses: Experiments
The paper evaluates baseline agents, such as GPT-4o and Llama-3.3-70B, across IT-Bench’s scenarios, yielding the following insights:
* Low Success Rates: Performance is notably weak, with FinOps tasks achieving a 0% success rate.
* Complexity Challenges: Agent effectiveness diminishes as scenario complexity increases.
* Environmental Factors: Non-deterministic elements, such as real-time telemetry data, pose significant hurdles for agents, underscoring the unpredictable nature of real-world IT settings.
Supplementary Material: I have thoroughly examined the supplementary material, which provides additional support for the paper's claims and methods.
Relation To Broader Scientific Literature: The introduction of IT-Bench adds a valuable contribution to the literature on AI agent evaluation for IT automation. Its focus on single-agent performance establishes a strong foundation for assessing individual AI capabilities in these contexts. However, many real-world IT operations involve complex, collaborative environments where multiple agents or human-AI interactions are critical. Extending IT-Bench to incorporate multi-agent collaboration or human-AI workflows could significantly enhance its generalizability, addressing scalability and robustness concerns raised in this review. Such an advancement would align the framework with emerging research on collaborative AI systems, positioning IT-Bench as a versatile tool for evaluating AI-driven solutions in dynamic, multi-actor settings.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths
* Comprehensive Metrics: IT-Bench employs a diverse set of evaluation criteria, including pass@1 for accuracy and time-based metrics like Mean Time to Diagnosis (MTTD) and Mean Time to Repair (MTTR). This multifaceted approach assesses both precision and efficiency, offering a holistic view of agent performance.
* Openness: The framework and baseline agents are open-sourced, encouraging collaboration and further development by the research community.
Weaknesses
* Narrow Agent Focus: The evaluation centers on large language model-based agents (e.g., GPT-4o, Llama), neglecting alternative AI approaches, such as rule-based systems or reinforcement learning, that might outperform in specific IT automation contexts.
Other Comments Or Suggestions: Refer to the former comment.
Questions For Authors: Can IT-Bench incorporate resilience testing (e.g., telemetry noise) to better simulate production unpredictability?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1. Can IT-Bench incorporate resilience testing (e.g., telemetry noise) to better simulate production unpredictability?**
> Yes! We already incorporate resilience testing tools like Chaos Mesh in ITBench, which can be used to evaluate agentic technologies under different resilience testing scenarios. ITBench also supports evaluating the effectiveness of agents with different kinds and verbosity levels of telemetry data – one can easily turn any telemetry on or off.
> We currently do not add random noise to telemetry data. The mechanism for adding noise is straightforward. However, one of our key principles is to achieve high fidelity to real-world IT scenarios; we are working on noise-injection policies that resemble faulty telemetry in the field.
**We also respond to the following important comments.**
**C1. Many real-world IT operations involve complex, collaborative environments where multiple agents or human-AI interactions are critical. Extending IT-Bench to incorporate multi-agent collaboration or human-AI workflows could significantly enhance its generalizability, addressing scalability and robustness concerns raised in this review.**
> Those are excellent suggestions! Multi-agents and human-AI interactions are important components on the roadmap of ITBench, and we will discuss them in the paper. We implicitly use “agents” to refer to multi-agent systems – our SRE agents already take a multi-agent form: the mitigation agents interact with the diagnosis agents to determine the resolution based on the root causes. Similarly, the CISO agent is also a multi-agent system including skilled agents targeted at OPA-Ansible, Kyverno, and OPA-Kubectl.
**C2. Potential biases in scenario selection (e.g., over-representation of Kubernetes-based systems) and lacks discussion on generalizability across diverse IT ecosystems**
> Thanks for the question. We will discuss the generalizability in the final version. The design of ITBench is not specific to Kubernetes-based stacks; it can be easily extended to other IT infrastructures (e.g., Docker Swarm and Nomad from HashiCorp). Certainly, it would require engineering effort.
> Kubernetes is chosen because it is the de facto open-source IT infrastructure for cloud and datacenter systems today. Its design is in principle similar to proprietary infrastructure systems such as Google’s Borg, Meta’s Twine/Tupperware, AWS’s ECS, and SnowFlake’s ECS; Kubernetes is offered as a managed platform service by all major cloud providers (e.g., Google, Azure, AWS, IBM). To make ITBench an open platform, we intend to use only open-source systems as components instead of proprietary ones, so Kubernetes seems to be the best choice. Note that most cloud system research uses Kubernetes as the representative infrastructure (in a similar vein as how Linux is used in OS research and how x86-64 is used in architecture research).
> In the context of ITBench, Kubernetes only serves as a container orchestration infrastructure. Many IT tasks are beyond the Kubernetes layers, e.g., the applications like Hotel Reservation are not specific to Kubernetes, and thus misconfigurations in applications are orthogonal to Kubernetes (they happen in the same form regardless of the orchestration infrastructure). Similar to node failures and network disconnections.
> Note that the term “Kubernetes” refers to the broader container orchestration based IT infrastructure, which is not limited to the original Kubernetes project (https://github.com/kubernetes/kubernetes). In fact, we use different backends like MiniKube, K3d, and Kind (for laptop-based setups). | Summary: This paper is a benchmark paper. It evaluates the recent LLM agent systems in three IT domains: (1) Site Reliability Engineering (SRE),
(2) Compliance and Security Operations (CISO), and (3) Financial Operations (FinOps). The main contribution of this paper is preparing three benchmarks and thoroughly evaluating candidate systems. Besides that, this paper also implements several IT agents following the traditional design. In the experiment section, the paper summarizes the key observations that the current agent systems still can not perform well on these real IT tasks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: yes
Theoretical Claims: N/A, no theoretical claims
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, I check the agent frameworks used by the experiment and related work section in the appendix.
Relation To Broader Scientific Literature: 1. The agent framework used in this paper follows the widely used ReAct style.
2. The evaluation is consistent with the previous setting.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The main limitation of this paper is the theoretical depth. As the current LLM agent systems are black-box due to the LLM, we cannot clearly analyze and understand the decision process. This paper shows that existing agent systems cannot solve current IT challenges but fail to give a theoretical analysis.
Other Comments Or Suggestions: No
Questions For Authors: Can you explain more about the unique challenge of IT tasks compared with the broader SWE tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1. unique challenge of IT tasks compared with the broader SWE tasks**
> IT tasks are more diverse than SWE tasks. Consider SRE, which is closer to SWE than FinOps/CISO are. SRE involves distributed systems (multi-machine, full stack: app, platform, OS, hardware, and their integrations), while SWE focuses on single programs.
> Key differences between SWE and SRE include:
> * Complexity/Scope: SRE systems are larger-scale and more diverse than single programs. Root causes are broader than just bugs, including hardware/network faults, misconfigs, overload.
> * Diagnosis: SRE diagnosis differs from SWE debugging. SRE issues are often hard to reproduce (dependencies, scale, non-determinism) & lack source code. SWE debugging assumes reproducibility with source/debuggers (GDB).
> * Context/Agent Fit: IT systems' scale/complexity makes full model context infeasible (vs. SWE). Agents are needed; multi-agent/multi-modal designs handle diverse IT data (metrics, traces, logs).
> * Goals/Actions: SRE goal: production service reliability/availability. SWE goal: program correctness. SRE mitigation prioritizes immediate service restoration (rollbacks, feature flags, etc.), beyond just fixing bugs (SWE focus).
> * Environment/Safety: SRE is production; SWE is development. Safety is paramount for SRE agents (unlike SWE agents trying anything for tests); unsafe trial-and-error unacceptable. SRE actions need risk/impact assessment.
> * Evaluation: Evaluating SRE agents is harder than SWE. SWE uses public data (GitHub); replicating production SRE systems is difficult (scale, proprietary). This motivates ITBench for enabling AI in this complex domain.
**We also respond to the following important comments.**
**C1. LLM agent systems are black-box**
> We acknowledge the challenge of understanding LLM agents' decision processes. However, ITBench allows us to empirically investigate agents' behavior.
> Agents' behavior can be empirically evaluated for the following reasons:
> * Detailed Trajectory Logging: We record comprehensive logs for each step, including the specific tool used, the exact inputs (including full prompts), and the resulting action. This provides necessary data for analysis.
> * ReAct Framework: Our agent utilizes the ReAct framework, which prompts the LLM to output its reasoning ("Thought") before acting. This captures intermediate reasoning steps, offering insight into the decision process.
> * Error Source Differentiation: By combining detailed logs and ReAct traces, we can distinguish between high-level reasoning failures (strategy errors) and lower-level execution errors (tool usage mistakes). An automated process categorizes failures, enabling quantitative analysis of failure modes.
> We provide 2 case studies as exemplars on SRE agent:
> **Prompt problem**: Trajectory analysis revealed flaws in an agent using Granite-3.1-8B (e.g., tool misuse linked to prompt errors). Fixing the prompts based on this analysis significantly improved success rates (3.3% to 8%), reduced errors (incorrect tool calls 7% to 0.7%), and balanced tool usage.
> **Planning problem**: To quantitatively assess reasoning strategy (e.g., SRE diagnosis), we compare the agent's explored path (from 'Thoughts'/tool use) against the ground-truth fault propagation chain (causal sequence). Rationale: diagnosis often traces this chain in reverse. Metrics developed across trajectories:
> * Detoured services: Avg services explored off the ground-truth path (lower = better focus).
> * Relative covered services: Avg ratio of relevant on-path services explored vs. ground-truth length (higher ≈ 1 = better alignment).
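A minimal sketch of how these two trajectory metrics could be computed (service names are illustrative, and the rebuttal's exact normalization may differ):

```python
def trajectory_metrics(explored, ground_truth):
    """Compare an agent's explored services against the ground-truth fault chain.

    Returns (detoured, relative_covered): the number of explored services that
    lie off the ground-truth path, and the fraction of the path that was covered.
    """
    gt = set(ground_truth)
    detoured = sum(1 for s in explored if s not in gt)
    covered = len({s for s in explored if s in gt})
    return detoured, covered / len(ground_truth)

# An agent that visits two on-path and two off-path services:
d, r = trajectory_metrics(["api", "db", "cache", "auth"], ["db", "cache", "queue"])
# d == 2 detoured services; r == 2/3 of the ground-truth path covered
```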
> We analyzed these metrics separately for successful and unsuccessful trajectories, focusing on GPT-4o versus Granite-8B:
> * For successful trajectories: GPT-4o demonstrated significantly better reasoning quality. It achieved much higher alignment with the ground-truth path (avg Relative Covered Services: 0.75 for GPT-4o vs. 0.30 for Granite-8B) and substantially less deviation into irrelevant services (avg Detoured Services: 0.98 for GPT-4o vs. 2.00 for Granite-8B).
> * For unsuccessful trajectories: Even when failing, GPT-4o maintained better reasoning metrics compared to Granite-8B. GPT-4o still covered more of the relevant path (avg Relative Covered Services: 0.48 vs. 0.27) and explored fewer irrelevant services (avg Detoured Services: 2.1 vs. 3.1) than Granite-8B did during its failures. | Summary: This paper presents IT-Bench, a framework that benchmarks AI agents for IT automation across roles including Site Reliability Engineering, Compliance and Security Operations and Financial Operations. It offers 94 real-world scenarios with automated, partial scoring evaluation and a leaderboard to ensure reproducibility. The framework models each scenario as a tuple of metadata, environments, triggering events, and desired outcomes, and benchmarks the agent's performance. Evaluations using various LLMs reveal modest success rates with FinOps unresolved. This demonstrates the challenges in automating complex IT tasks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims are proposed in this paper.
Experimental Designs Or Analyses: Yes
Supplementary Material: No supplementary material provided.
Relation To Broader Scientific Literature: This paper presents a novel direction for benchmarking LLM agents in real-world IT tasks, extending their application far beyond SWE applications. IT-Bench provides a comprehensive framework that covers multiple IT personas and reflects the complexity of actual IT operations.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces a comprehensive benchmark that reflects real-world IT automation challenges. It unifies multiple IT roles into one framework, ensuring broad applicability and practical relevance. Moreover, the use of real-world scenarios and industry benchmarks enhances its authenticity. Drawing on actual incidents and standards, the benchmark offers a realistic testbed for AI agent performance.
2. The evaluation pipeline is automated and systematic. It proposes several well-defined metrics with real-world implications. The design of a leaderboard can provide performance insight.
Weaknesses:
1. The domain coverage is imbalanced. FinOps contains only 2 tasks. A success rate of 0% cannot support any claims about difficulty given the limited dataset size.
2. The framework's complexity and infrastructure demands may hinder accessibility. The detailed environment setup and the challenge of integrating new benchmarks could restrict broader adoption and ease of extension.
Other Comments Or Suggestions: I still have reservations regarding the motivation for employing AI agents to address IT challenges. While the paper cites the CrowdStrike incident to demonstrate the need for intelligent IT resolution, it is not clear to me how deploying agents will prevent such failures in practice. In production-grade environments where errors can have significant consequences, how to ensure the reliability of AI agents is crucial. For example, what if an alarm fix inadvertently triggers a cascade of additional errors? I believe a deeper discussion on the built-in safeguards, error mitigation strategies and the overall reliability assurances of agents in IT is required.
Questions For Authors: 1. In the abstract, you mention that the benchmark can be easily extended through community contributions. Could you elaborate on the process for how one can add new tasks to IT-Bench? Given that task scenarios often involve complex, task-specific setups and requirements, how do you ensure that integration is manageable for contributors? Are there any guidelines designed to standardize the addition of new tasks to IT-Bench?
2. The abstract claims that IT-Bench supports partial scoring. Could you clarify how partial scoring is implemented during evaluation? Specifically, how are partial scores computed and used to assess agent performance?
3. For the natural language to code tools in the SRE/FinOps settings (e.g., NL2Kubectl), which LLM backbone is used to translate natural language into specific code? Is this backbone the same model used for the agents, or is it fixed to a particular model?
4. What is the average token consumption for running the agent on one task?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1.1 Complexity and infrastructure demands may hinder accessibility**
>The framework’s complexity is abstracted away by the agent interface, which is designed for accessibility, similar in principle to SWE-agent. AI researchers in the broader community have been able to use ITBench. Environment setup is automated (“push-button”) using provided scripts, masking infrastructure details. The framework runs on laptops (≥16GB RAM, Linux/MacOS) for smaller tasks, allowing developers to quickly pull and work on tasks. Workstations or cloud VMs are needed for larger problems.
>ITBench aims to enable research on complex real-world problems. Reducing the inherent complexity is a non-goal; modeling it is necessary to rigorously evaluate agents on realistic IT infrastructures. Simplifying would impair task fidelity and evaluation validity. Our principle is to provide an accessible interface while preserving the necessary complexity for meaningful evaluation, balancing scalability and resource efficiency.
**Q1.2 On extensibility**
> Extensibility is a first-class design principle of ITBench and is treated seriously. We promote and welcome open contributions from researchers and practitioners. Unlike the agent interface, extending the benchmarks requires expertise in IT infrastructure to maintain accuracy and realism.
> We provide clear guidelines in our repositories (anonymized for the double-blind policy) to standardize the addition of new tasks based on their required extensions. The main effort comes from (1) ensuring the reproducibility of the problems in the tasks, and (2) defining task-specific criteria for partial scoring (the other criterion is automated based on whether the alarms are resolved by the agents). In our experience, the setup is rarely a problem as it is largely automated, and reproducibility is verified through an automated Continuous Integration (CI) pipeline.
**Q2. How partial scoring is implemented**
> Partial scoring is fundamental to our benchmark for systematic, fine-grained assessment of agent reasoning in IT tasks. It values intermediate steps when perfect solutions are hard to achieve, necessitating novel metrics tailored to specific IT domains. We exemplify partial scoring for root-cause diagnosis for SRE scenarios. Given large topologies (100K+ nodes), exact identification is difficult, but recognizing topologically close components demonstrates valuable reasoning. To quantify this, we developed the Normalized Topology Aware Match (NTAM) metric using expert-validated principles: topological proximity, node importance within the fault chain, effective search space reduction, and output length constraints. Inspired by information retrieval ranking (like BM25), NTAM measures prediction relevance and features tunable hyperparameters (see Appendix F).
> Crucially, partial scoring methods are domain-specific. For FinOps, we supplement NTAM with other proximity metrics evaluating alignment with optimal cost/efficiency values by measuring relative difference, rather than requiring exact matches. This tailored approach ensures nuanced performance evaluation across diverse task types.
**Q3. LLM for NL-to-code: Agent's or fixed?**
>We use the same LLMs for both the planner and the tools (e.g., NL2Kubectl). Exploring hybrid LLM configuration is our future work. For example, we can potentially use small models for NL-to-code tools, but our experience shows that small models are not yet good at generating code or using tools effectively.
**Q4. Token utilization**
> Average token consumption varies by model and task type; GPT-4o uses ~675k ± 205k for SRE, ~208k ± 263k for FinOps, and ~23k ± 32k for CISO tasks.
**We also respond to the following important comments.**
**C1. FinOps contains only 2 tasks**
>ITBench is an evolving benchmark. FinOps tasks increased from 2 at submission to 10 currently, and we continue adding tasks. FinOps initially had fewer tasks as it's a less established field; we are actively defining and adding scenarios. We evaluated agents on these 10 tasks; smaller models struggle, while GPT-4o achieved a 0.2 pass@1 score. We acknowledge the current scarcity and lack of statistical representativeness for FinOps (thank you) and will clarify this in the updated paper. Overall, the benchmark is growing, e.g., SRE tasks increased from 42 to 98 since submission via community contributions.
**C2. Overall reliability assurances of agents in IT is required**
> We agree that safety and reliability are critical, and we will add a deeper discussion as suggested. We are also enhancing ITBench for finer-grained safety feedback within the SRE, FinOps, and CISO contexts. Furthermore, ITBench already models real enterprise IT settings where SRE, FinOps, and Compliance tasks are interlinked. An agent's action (e.g., an unsafe SRE command) can trigger cross-domain issues (such as Compliance violations or FinOps costs) that ITBench measures. However, this is the subject of future work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ efforts to clarify the points I mentioned. I have no further questions and have revised my score accordingly. I am voting for acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time and updating the score. | null | null | null | null | null | null | null | null |
Two Tickets are Better than One: Fair and Accurate Hiring Under Strategic LLM Manipulations | Accept (poster) | Summary: This paper studies a new variant of strategic classification applied to automated hiring decisions when applicants use large language models (LLMs) to improve (i.e., “manipulate”) their resumes. The authors observe that LLM-based enhancements can blur the line between skilled and unskilled applicants, especially when different applicants have access to different-quality models.
Through theoretical analysis and experiments on real resumes, the paper shows that this two-ticket approach can simultaneously improve both overall accuracy (true positive rate) and fairness (reduce disparities between applicants who have access to high-quality LLMs and those who do not).
Claims And Evidence: 1. Point 1
a. Claim: Traditional hiring pipelines are susceptible to unequal LLM access—applicants who can afford better models can often achieve substantially higher resume scores and thus enjoy higher acceptance rates.
b. Evidence: Empirical experiments on 520 real resumes demonstrate that using strong LLMs (like GPT-4 variants) can significantly increase a resume’s relevance scores—sometimes making an unqualified resume appear indistinguishable from a qualified one (as shown in Figure 1).
2. Point 2
a. Claim: A “two-ticket” approach can improve both overall accuracy and fairness. The hiring algorithm can re-manipulate the candidate’s final resume with the “two-ticket” approach.
b. Evidence: The authors present formal theorems (Theorem 2 and Corollary 2) and a set of experiments to show that, under a “no false positives” objective (prioritizing zero FPR and maximizing TPR), giving every candidate a second manipulation step (via the hiring algorithm’s own LLM) raises TPR for both privileged and unprivileged groups while reducing TPR disparity.
Methods And Evaluation Criteria: Methods
Modeling Framework: The authors adopt a strategic classification framework in which each applicant’s resume is a feature vector split into “fundamental” (e.g. actual skills/experience) and “style” features (e.g. grammar, resume organization). The LLM stochastically rewrites style features but does not alter fundamental features.
Scoring Function: The hiring side uses a fixed, off-the-shelf resume “scorer” (like an applicant-tracking system) to assign numeric relevance scores. The classifier decides on a threshold: those with scores above the threshold receive a positive decision.
No False Positives Objective: To reflect the practical cost of hiring unqualified candidates, they primarily study the setting where the hiring system tries to maintain zero FPR and maximize TPR.
Evaluation Criteria
1. True Positive Rate (TPR) – fraction of truly qualified applicants who pass the threshold.
2. False Positive Rate (FPR) – fraction of unqualified applicants who pass the threshold (aiming for zero).
3. Fairness Metric: TPR disparity across “privileged” vs. “unprivileged” LLM-access groups.
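These three criteria can be made concrete with a minimal sketch (the toy labels, predictions, and group split below are illustrative only, not data from the paper):

```python
def tpr_fpr(y_true, y_pred):
    """True/false positive rates from binary labels and binary decisions."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, fp / neg

# Fairness metric: TPR disparity between the two LLM-access groups,
# evaluated at the same threshold (here, decisions are already thresholded).
tpr_priv, fpr_priv = tpr_fpr([1, 1, 0, 0], [1, 1, 0, 0])
tpr_unpriv, fpr_unpriv = tpr_fpr([1, 1, 0, 0], [1, 0, 0, 0])
disparity = tpr_priv - tpr_unpriv  # the quantity the two-ticket scheme shrinks
```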
Theoretical Claims: 1. Inequity Under Traditional Schemes: If privileged and unprivileged groups have different-quality LLMs (the privileged group’s LLM stochastically dominates the unprivileged group’s), the TPR of the privileged group will be higher. (Formalized in Theorem 1 and Corollary 1.)
2. Improvement via Two-Ticket Scheme: By giving everyone an additional LLM pass from a strong model, the paper proves that:
a. TPR disparity can only decrease.
b. TPR for both groups can only increase (or remain the same).
c. Overall accuracy remains the same or improves (since TPR rises at zero FPR).
3. Constant Threshold: Under mild assumptions (e.g., the hiring LLM does not exceed the privileged group’s LLM in quality), the threshold that satisfies a zero-FPR constraint does not change when moving from traditional to two-ticket schemes—so gains in fairness/TPR occur “automatically” without requiring re-tuning for each group.
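The dominance assumption underlying claim 1 can be illustrated with a first-order stochastic dominance check on empirical score samples (toy numbers; the paper's theorems are stated over distributions, not finite samples):

```python
def stochastically_dominates(a, b, grid):
    """First-order stochastic dominance: P(A > t) >= P(B > t) for every threshold t."""
    def survival(xs, t):
        return sum(1 for x in xs if x > t) / len(xs)
    return all(survival(a, t) >= survival(b, t) for t in grid)

# Post-manipulation scores: the privileged group's stronger LLM shifts scores upward,
# so at any acceptance threshold the privileged group has a weakly higher pass rate.
priv_scores = [0.6, 0.7, 0.9]
unpriv_scores = [0.4, 0.6, 0.8]
dominates = stochastically_dominates(priv_scores, unpriv_scores, [0.3, 0.5, 0.65, 0.85])
```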
Experimental Designs Or Analyses: 1. Data: They use 520 resumes (balanced across UI/UX design and project management roles) from an open-source dataset that includes anonymized real resumes.
2. LLM Conditions: Different groups get different LLMs (e.g., GPT-3.5 vs. GPT-4 variants); the better model is assigned to the “privileged” group. Applicants submit whichever version of their resume scores higher (original or LLM-manipulated).
3. Two-Ticket Step: The hiring algorithm re-manipulates each submitted resume with its own GPT-4-based LLM, and the final acceptance decision is based on the best of those two.
Supplementary Material: 1. The paper contains an Appendix with additional experiments (e.g., multiple rounds of manipulations, additional model prompts, and cost comparisons).
2. The authors show diminishing returns when applying the same LLM to a resume repeatedly. They also present an extended prompt design to mitigate hallucinations and highlight examples of manipulated resumes.
3. The proofs of the theorems appear in the Appendices.
Relation To Broader Scientific Literature: 1. The paper’s strategic classification framework extends Hardt et al. (2016) and subsequent works on manipulations that alter one’s “input” to a classifier. However, unlike classical strategic classification where manipulations often incur a cost, the authors note that cost here is nearly zero—anyone can prompt an LLM.
2. The fairness approach (equalizing TPR or at least reducing disparity) connects to the well-known “equalized odds” concept, but it emphasizes a special case: no false positives, maximizing TPR.
Essential References Not Discussed: Potentially under-discussed areas that might be valuable:
1. Work in human-AI collaboration or human-in-the-loop hiring, which could complement or replace purely automated scoring.
2. Emerging “AI detection” strategies or watermarking of LLM output. The authors mention not knowing whether the text was manipulated but do not deeply discuss whether detection-based solutions might reduce disparities.
Other Strengths And Weaknesses: Strengths
1. As generative AI becomes more common in hiring processes, this is among the first formal frameworks analyzing LLM-based resume enhancements.
2. The theorems clearly demonstrate why (and when) two-ticket classification can reduce disparities without sacrificing accuracy.
3. Empirical tests on genuine resumes using an open-source application-tracking system add credibility to the arguments.
Weaknesses
1. I am concerned about the novelty of this paper.
(1) Using one LLM model for different job roles may limit real-world applicability, as each role’s requirements can differ.
(2) The sample size of 520 resumes might be too small to capture broader labor market heterogeneity.
2. This work relies on generic APIs. Results rely on off-the-shelf LLMs with no domain-specific training. This raises questions about reproducibility if LLMs or prompts change. Future LLM updates or new models could alter outcomes significantly.
3. They rely on a fixed resume-scoring model and do not deeply test alternative commercial ATS or more complex ML-based hiring systems. Real hiring pipelines may also incorporate more than a single numeric score.
4. They model just two groups (privileged vs. unprivileged). Real-world disparities can be multi-faceted.
5. This work focuses on TPR under No-False-Positives. This can make sense in certain scenarios but may not align with all employers’ priorities (some might tolerate a small FPR for a greater TPR). The paper’s approach is specialized, and a user wanting a more flexible trade-off might need a different theoretical framework.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments and feedback on our work. In particular, we appreciate the suggestion to connect the work to human-in-the-loop hiring and “AI detection” strategies, and will discuss them in the next version of the paper.
We want to highlight that our main contributions are (a) adjusting the strategic classification model for LLM manipulations, and (b) suggesting a first generic solution with theoretical guarantees for this problem. The novelty of our paper comes from the unique analysis of the system effects of some candidates deciding to use (more advanced) LLM tools for their resume. The experiments are added to demonstrate the applicability of the results as empirical computer science experiments. We leave a broader study of this effect across the labor market to future work by domain experts. We would like to provide additional clarifications for the weaknesses listed:
1. **Novelty Concerns**: (1) Using one LLM for all roles would be limiting; however, we test the manipulation of different LLMs on resumes of different roles available in the open-source dataset that we use. We are not entirely sure if this was the question the reviewer intended to ask, but we are happy to clarify and answer the question with additional clarification.
(2) Since the goal of our experiments is to validate our theoretical results, our experiments focus on a few job descriptions and consider only a subset of a diverse labor market (we have added additional experiments for 8 more job descriptions). We believe that for our purposes, 520 real resumes are enough.
2. **Reproducibility**: The reviewer makes a great point in highlighting the importance of replicability. We have strongly prioritized replicability by using only open-source resumes, an open-source resume scorer, and listing the specific API version of the models we use. While new models might be introduced, our results can always be replicated with the model versions we report. Our work on general-purpose LLMs is motivated by the resume prompts in real-world datasets such as WildChat. Domain-specific training for resume editing is an interesting suggestion, but out of scope of our current work. However, it is important to note that our model and theoretical guarantees are general enough to be applicable to privileged groups accessing domain-specific tools as well.
3. **Real Hiring Pipelines**: We agree with the reviewer that real hiring pipelines may incorporate more than a single score. Indeed, local hiring laws such as NYC144 incentivize humans to be in the loop for hiring decisions in order to avoid algorithmic auditing ([source](https://www.fisherphillips.com/en/news-insights/new-york-lawmakers-aim-to-close-loopholes-nycs-ai-bias-audit-law.html)). We want to clarify that we are focused on the first stage of the hiring process: resume filtering. This is a necessary step candidates must pass and is done primarily by ATS software such as the one we test. We focus on the only available open-source ATS scoring system (to the best of our knowledge) in order to ensure reproducibility of our results. Moreover, our model of the resume scoring system is general and can incorporate and combine multiple metrics used to make a filtering decision: we only require that the system is monotonic in the resume features. We will make sure to address that in the next version of the paper.
4. **Multifaceted “privileged” groups**: While we address only two levels of privileged groups, all our results can be easily generalized to multiple levels of privilege, so long as the Hirer uses a Pareto-dominant LLM (that is at least as good as the most privileged group’s LLM): this follows from how we define LLM manipulations stochastically. We will make sure to address this in the next version of the paper! In case the reviewer means other types of disparities, our protected attribute that we introduce and are interested in is model-access privilege.
5. **Minimizing False Positive Rates:** Our research provides an initial step into utilizing LLMs to reduce inequities in hiring procedures as a result of LLM manipulations, with theoretical guarantees on improvements with the Two-Ticket scheme. Our research is most relevant to job positions receiving large numbers of applications. Similarly to related works, we find that there is sufficient evidence to believe that a FPR=0 constraint is necessary when there is a large volume of applicants for a select number of positions, as per the standard in hiring for tech sector positions nowadays. For example, one mid-size company may receive over 25,000 applications for just 6 summer intern positions in 2024 ([source](https://www.linkedin.com/posts/patreon_patreonintern-launchingwithpatreon-internprograms-activity-7067615159833280512-A42d/)). | Summary: * This paper proposes and investigates a theoretical model for LLM strategic manipulations in the job application market, motivated by empirical observations.
* The model is motivated by three empirical observations: (1) LLMs tend to improve the score of a resume in an automated ranking system, (2) higher-cost LLMs tend to induce a larger improvement in the resume score, and (3) repeated application of LLM improvement steps yields diminishing improvements to the resume score.
* In the formal model, each candidate is represented by a triplet $(x,g,y)$. The feature vector $x\in\mathbb{R}^d$ is the candidate’s original resume, represented as a concatenation of immutable (fundamental) and style features. The group $g$ (privileged/unprivileged) represents the candidate’s manipulation ability, and $y$ is a binary label indicating the true qualification level of the candidate. It is assumed that $x$ is independent of $g$, and that the label is independent of group membership.
* LLM manipulation $L$ is modeled as a stochastic function which replaces some of the features in the resume with independent samples from random variables. Hiring decisions are represented by a non-decreasing score function $s(x)$, and it is assumed that each group has access to its own LLM $L_g$, and that the hiring agent also has access to an LLM.
* The goal of the hiring is to maximize the hiring rate of qualified candidates, under the constraint that no unqualified agents are hired. The goal of the candidate is to maximize their probability of acceptance, and they can choose whether to manipulate their resume using their group’s LLM.
* In the theoretical analysis, the authors show that hiring schemes that ignore manipulation induce group disparity. In response, the authors propose the two-ticket scheme, in which the hiring agent may apply additional LLM manipulation on top of the submitted application, and show that the two-ticket scheme decreases disparity under assumptions.
* Finally, the empirical section simulates the process using real resumes and using OpenAI LLMs for manipulation, showing favorable results.
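The two evaluation schemes summarized above admit a compact formalization (a sketch inferred from this summary alone; the threshold $\tau$, the Hirer's LLM $L_H$, and the max-based decision rule are illustrative assumptions rather than notation taken from the paper):

```latex
\begin{aligned}
\text{traditional:} \quad & \hat{y}(\tilde{x}) = \mathbb{1}\left[\, s(\tilde{x}) \ge \tau \,\right], \\
\text{two-ticket:}  \quad & \hat{y}(\tilde{x}) = \mathbb{1}\left[\, \max\{\, s(\tilde{x}),\; s(L_H(\tilde{x})) \,\} \ge \tau \,\right],
\end{aligned}
```

where $\tilde{x} \in \{x, L_g(x)\}$ is the resume the candidate chooses to submit, and the Hirer picks the threshold $\tau$ to maximize the hiring rate of qualified candidates subject to hiring no unqualified ones (FPR $= 0$).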
## Update after rebuttal
Thank you again for the response. The rebuttal addressed some of my concerns regarding the validity of the assumptions (in particular, I agree that it is reasonable to assume that a monthly LLM subscription is more accessible to privileged groups), while I still believe that other foundational assumptions would greatly benefit from stronger grounding or refinement (in particular, the formal model of LLM manipulation). With this, I maintain my original assessment, and I believe that the paper is a step in the right direction.
Claims And Evidence: The paper’s theoretical claims and empirical evaluations seem to be well-supported. The connection to practical problems relies on very strong assumptions, and the degree to which this disparity may appear in practical scenarios is not completely clear:
* Resumes typically contain thousands of tokens, and current LLM prices for refining documents of that length are on the order of cents, even for the most expensive LLMs - a relatively negligible amount.
* LLMs are often used interactively, with multiple revision steps that might not align with the single-shot manipulation modeled here.
* The model relies on the assumption that “style features” can be decoupled from “fundamental features” - However it is not clear to what degree this assumption holds in practice. For example, product management and UX design (the jobs under consideration in the empirical evaluation section) rely on communication skill, where the ability to communicate well in written form is a key requirement. It is therefore unclear whether it is reasonable to decouple “fundamental” and “style” features in this case.
* While in practice it seems very reasonable to assume that LLM proficiency varies across groups, it’s not clear whether the conditional independence assumptions on $x$ and $y$ are likely to hold in this case.
Methods And Evaluation Criteria: The proposed evaluation method appears sound.
Theoretical Claims: The theoretical derivations appear sound at a high level. A deeper, line-by-line verification of the proofs would further confirm the robustness of these claims.
Experimental Designs Or Analyses: * The overall experimental design is reasonable.
* The paper reports results on 520 CVs, while the supplementary material only includes revised samples from 100 CVs. This discrepancy should be clarified.
Supplementary Material: See point above.
Relation To Broader Scientific Literature: * The paper provides a practical model for strategic classification, motivated by LLMs. The traditional strategic classification literature is focused on classic statistical learning problems (classification/regression), and modeling strategic behavior in the context of generative AI is an emerging research frontier.
* Not clear if the proposed model is applicable to other scenarios beyond labor markets.
Essential References Not Discussed: It may be valuable to acknowledge related work that intersects strategic classification and labor market analyses, such as Somerstep et al. (“Learning In Reverse Causal Strategic Environments With Ramifications on Two Sided Markets”, ICLR 2024).
Other Strengths And Weaknesses: Strengths:
* Clean theoretical model, which relies on a simple and natural empirical observation.
* Theoretical claims are supported by empirical observations.
* Paper is very clear and easy to follow.
Weaknesses:
* Model relies on strong assumptions, and it is unclear whether the described disparity is likely to be significant in practical scenarios.
Other Comments Or Suggestions: Some of the notations in Section 4 were confusing - L199 says that “the formulation is based on the observation that LLMs can standardize style features…”, however Definition 4.1 formally defines the LLM manipulation $L$ as a transformation which modifies the values of the “fundamental features” $x_i$ (defined around L133), which seems to contradict this.
Questions For Authors: * Is it possible to describe a practical scenario where access to LLMs is likely to be the significant factor in decision making, while the independence/conditional independence assumptions still hold? (i.e., a scenario where $x$ and $y$ are independent/conditionally independent of the group, as assumed, and there is a difference in LLM accessibility for resume improvements)
* How would the model and its results change if candidates were allowed to interact iteratively with the LLM rather than performing a single-shot manipulation?
* What happens if one LLM does not Pareto-dominate another? (i.e., when the conditions in Definition 5.3 do not hold)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed comments and feedback on our work. Here we will focus on answering questions from the reviewer:
1. **Practical scenario of X and Y independence**: Just to clarify our assumptions, our paper builds on existing fairness literature and assumes that “X and G are independent, and Y and G are conditionally independent given X.” One example with these assumptions is non-native English speakers who are equally qualified for a project management job, but the applicant who uses the better LLM might achieve a higher scoring resume in the system. We are also happy to clarify further if we didn’t fully answer this question.
2. **Iterative interaction with LLMs**: While LLMs can indeed be used interactively with multiple revisions, our single-shot manipulation model provides a foundational understanding of strategic classification in the era of LLMs. For example, we observed significant improvements for "style features" under one application of LLM manipulation. As LLMs improve, we find these multiple revision steps are less helpful. That said, future work can explore extending the model to incorporate iterative, interactive revisions, potentially offering even finer-grained control and adaptability in practical scenarios.
To address iterative manipulations on the Candidate's side, we can assume the distribution of manipulated features already captures this. On the Hirer’s side, iterative manipulations would be more challenging in practice as it will require them to have a person in charge of such interactions, which might not be ideal considering that the candidate knows their background better than anyone else. However, we think such interactions will have diminishing returns of improvement, and that having access to more advanced LLMs is more crucial. We will add this discussion in the next version of the paper.
3. **No Pareto-dominance**: Great question! When there are free vs. premium versions of an LLM, we find Pareto dominance is a reasonable assumption. Without Pareto-dominance, group outcomes depend highly on the scoring system deployed, and the two-ticket scheme bias mitigation could be reduced. For example, if Group P’s LLM improves feature 1 and Group U’s LLM improves feature 2, and the Hirer uses the same LLM as Group U, it could be that the screening software gives no weight to feature 2. An interesting direction for future work could be to apply multiple different LLMs to improve a single resume, both from the Candidate’s side and the Hirer’s side. We will clarify this in the next version of the paper.
We will offer several clarifications to some points raised around broader applicability.
- **Token-based costs**: We agree that token-based costs for refining long resumes are minimal. However, many candidates are unaware that they pay per token and simply default to whichever premium LLM they can access, which can widen disparities if others rely on free or older models. Our motivation also addresses cases in which specialized, high-cost resume-editing tools can be used by some people (we briefly discuss this assumption in Section 4.2).
- **“Fundamental” and “style” features**: The decoupling of "style features" from "fundamental features" in our model is for analytical purposes to demonstrate that LLMs can enhance the style of resumes without altering their core qualifications. This does not imply that style features are unimportant or that evaluation scores do not depend on them. Moreover, if every feature of relevance is a style feature, then arguably LLMs make resumes a bad indicator of qualifications for those jobs (in which case LLM detection methods could be relevant).
- **Experiments in the supplementary material**: We used 100 sampled CVs in the supplementary material due to budget restrictions. We are happy to expand the experiments to more resumes.
- **Applicability**: Beyond the labor market, our work is applicable in any application system, e.g., college and graduate school applications, where a judge’s perception of quality is based on features that an LLM can improve (without fabricating lies).
- **Somerstep et al.**: Great suggestion! We specifically note the relevance of this work, which explores causal strategic classification and its impact on labor market dynamics. This line of research complements our work by providing insights into how strategic behavior can influence both employer and labor force outcomes, highlighting the importance of considering these dynamics in developing fair and effective hiring practices. We will ensure to incorporate this discussion in the paper.
- **Section 4 Notation**: Thank you for noticing this! We assume LLMs change only style features. We will correct this.
We thank the reviewer for the detailed reading of our work. We hope that our response provided clarification and strengthened our contribution. Please let us know if we can answer any additional questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response! It clarifies some of my concerns, and I maintain my original score. | Summary: The paper explores challenges of fairness and accuracy in hiring when job seekers use generative AI tools to enhance resumes. It proposes a "two-ticket" scheme, where employers also manipulate resumes using AI. The study demonstrates, theoretically and empirically, that this approach improves fairness and accuracy in the recruitment process.
Claims And Evidence: The claims made in the paper are well-supported by both theoretical reasoning and empirical evidence. However, broader validation across a more extensive range of job roles and industries could strengthen the generalizability of their findings. Additionally, more details about the potential variability in effectiveness with different types of jobs and applicant tracking systems could further validate the claims
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for addressing fairness and accuracy challenges in hiring when candidates use generative AI tools. The "two-ticket" scheme and the use of real-world resume datasets effectively demonstrate improvements in these areas under the study's framework.
Theoretical Claims: Yes, the paper provides detailed mathematical formulations and accompanying proofs aimed at demonstrating the validity of the “two-ticket” scheme and its impact on fairness and accuracy in hiring processes. Any issues would require direct examination and verification of these proofs by an expert in the field.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, all of them.
Relation To Broader Scientific Literature: The paper addresses fairness and accuracy issues in algorithmic hiring processes influenced by generative AI tools that manipulate resumes. It proposes the "two-ticket" scheme to mitigate bias. The study builds on existing research in algorithmic bias, strategic classification, and AI's impact on resume quality, offering new insights to balance fairness and accuracy.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: What technical or logistical challenges might arise in implementing the “two-ticket” scheme across various industries, and how can these challenges be addressed?
What measures are in place to ensure the protection of candidates’ data privacy when resumes are manipulated using generative AI, and how can ethical concerns be mitigated?
How might the “two-ticket” scheme affect employee performance and satisfaction in the long run, and what plans are there for further research into its long-term impact and adaptability across different job sectors?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful summary and comments on our paper. We address the questions below:
1. **Technical/Logistical Challenges**: In the real world, there may be practical challenges in implementing our “two-ticket” scheme. A major challenge is for companies to decide which model to use when applying the second manipulation. Companies will likely test different models to determine which one best improves resume scores in their ATS system, based on their specific job description style, before selecting a model for the second manipulation. Not all companies may have the technical expertise to do this kind of model selection. We will address this in the next version of the paper.
2. **Protection of Data Privacy**: This is an excellent question. We assume that companies use APIs in a way that queries are not stored by the company (this is the case, for example, for the LLMs accessible to people from the university the authors are affiliated with). Different companies offer ways to opt out of data collection. We do not anticipate the resume manipulation itself to be any less private than the companies storing a candidate's resume. Another way that privacy might be compromised is from the chosen threshold. The threshold might be sensitive to the resume score of a single individual, violating exact differential privacy. For the threshold to be privacy-preserving, we recommend using DP threshold functions [1]. We will make sure to discuss this in the next version of the paper.
3. **Long-term / Downstream Impacts**: Considering the downstream impacts is a fascinating perspective on the system effects of LLM tools used for job applications. In this work, we look at the applicant filtering stage of the hiring process. The subsequent hiring decisions made by humans may introduce further disparities that may affect employee satisfaction in the long run. In future work, we plan to do an empirical study of the impact of LLM-aided application materials across a variety of industries in consultation with economists.
We thank the reviewer for providing insightful and thought-provoking questions about the practical implications of our work. We hope our answers help address the reviewer’s concerns and strengthen the contributions of our paper. Our main contribution is to provide an intuitive theoretical framework for modeling LLM manipulations in the applicant screening process. We are happy to answer any additional questions.
References:
[1] Bun, M., Nissim, K., Stemmer, U., & Vadhan, S. (2015, October). Differentially private release and learning of threshold functions. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (pp. 634-649). IEEE.
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal effectively reduces my concerns, and I maintain my score | Summary: The paper considers the problem of hiring in the scenario where applicants use LLMs to assist in CV writing (and hirers may also have their own LLMs). This can potentially lead to unfair and inaccurate hiring if, say, some applicants use a paid version of an LLM while others do not. To mitigate this, the authors propose a two-ticket scheme where the hiring algorithm considers both the original CV and a manipulated version of it. They provide theoretical proof that this scheme improves fairness and accuracy in hiring when TPR is maximized s.t. FPR = 0.
Claims And Evidence: Claim 1: The paper establishes that hiring can be unfair when different applicants use different LLMs -- Figure 1.
Claim 2: Two-ticket scheme is introduced and proved that it improves disparity -- In two-ticket scheme the hiring algorithm considers both the original and a manipulated version of each resume. Theoretical improvements in fairness and accuracy are demonstrated through Theorem 2 and Corollary 2, which works under No False Positives Objective.
Claim 3: The results are empirically validated through a case study in Section 7.
Methods And Evaluation Criteria: Yes, different LLMs are considered, uncertainty intervals are reported.
Theoretical Claims: I read Appendix E and theoretical results in the main paper - they appear correct to me.
Experimental Designs Or Analyses: Appendix C provides more details on evaluation and seems reasonable.
Supplementary Material: I read Appendix E more carefully and semi-carefully the rest of the Appendix.
Relation To Broader Scientific Literature: The paper extends the strategic classification framework to address LLM-driven manipulations, which introduce stochasticity and complexity. This paper offers a valuable initial step towards fairness analysis in LLM-based hiring scenarios. The claims are supported both theoretically and empirically.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The paper is well-written and is easy to read.
Among the weaknesses, some of the assumptions in the theoretical analysis might be restrictive, including the no-false-positives objective.
Other Comments Or Suggestions: NA
Questions For Authors: Does the framework hold if manipulations apply to style features?
Accuracy did not change much for the two-ticket scheme compared to traditional evaluation, increasing by around 5%. Do you observe a connection between manipulation strength and accuracy change?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the comments and for reading our paper (even the appendix!). We will answer the questions below:
> Does the framework hold if manipulations apply to style features?
Our framework considers manipulations made to style features rather than fundamental features. We accidentally referred to style features as fundamental when we introduced them. The typo occurs in lines 134 - 137; style features should be $(x_1, …, x_{d_1})$ while fundamental features should be $(c_1, …, c_{d_2})$. This notation is consistent in the rest of our paper. In this framework, style features are manipulable through LLM manipulations while fundamental features such as technical experience and programming skills are preserved. We thank the reviewer for catching this key typo.
> Accuracy did not change much for the two-ticket scheme compared to traditional evaluation, increasing by around 5%. Do you observe a connection between manipulation strength and accuracy change?
Regarding accuracy change, we expect that stronger manipulations will lead to a greater accuracy improvement in our two-ticket scheme. For example, we expect to see the highest improvement in accuracy when qualified and unqualified resumes are separable originally and become more difficult to separate after a round of candidate manipulation. In Figure 1, we see that for PM resumes, this effect was more pronounced for Claude-3.5-Sonnet, GPT4o, and Llama3-70B. In contrast, when there is not much change in scores, the accuracy remains the same for both traditional and two-ticket schemes. We will make sure to discuss this in the next version of the paper!
We are happy to answer any additional questions! We thank the reviewer for understanding and appreciating our non-traditional contribution.
---
Rebuttal Comment 1.1:
Comment: Thank you for more details. I have read other reviews and responses and will keep my score as is. | null | null | null | null | null | null |
On Learning Parallel Pancakes with Mostly Uniform Weights | Accept (spotlight poster) | Summary: This paper is concerned with learning mixtures of $k$ Guassians, when given i.i.d samples from the mixture. When the mixture weights and covariances of the individual components are unknown and arbitrary, the best-known algorithm from the literature (due to Bakshi et al. 2022) has sample complexity $d^{O(k)}$. A lower bound instance due to Diakonikolas et al. 2017, known as the "parallel pancakes" mixture, comprises of a family of mixtures of $k$ Gaussians having identical covariances. For this family, any SQ based algorithm necessarily requires sample complexity $d^{\Omega(k)}$. However, the weights of the mixture in the family of the lower bound instance end up having to be as small as $2^{-\Omega(k)}$. A natural question here is: can the lower bound be circumvented if the mixture weights are constrained to be at least $1/poly(k)$? Recent results by Anderson et al. 2024, Buhai and Steurer 2023 show that the answer is yes. In particular, they show an algorithm that correctly clusters most points drawn from a $k$-mixture of Gaussians using only $d^{\log(1/w_{min})}$ many samples, where $w_{min}$ is the smallest weight in the mixture.
The first main result in this paper shows that this sample complexity bound is essentially optimal. Theorem 1.3 constructs a family with uniform mixture weights, such that any SQ algorithm must necessarily have sample complexity $d^{\Omega(\log(k))}$.
The algorithms due to Anderson et al. 2024, Buhai and Steurer 2023 have sample complexity $d^k$, even if only a single mixture weight is $2^{-k}$. One can ask if this can be improved. Specifically, consider the setting where $k'$ out of the $k$ mixture weights are allowed to be arbitrary, but the rest are the same. The second main result in the paper (Theorem 1.4) shows that for the task of distinguishing the standard Gaussian from a mixture of this form, there is an algorithm whose sample complexity scales as $1/w_{min}$, improving on that of Anderson et al. 2024 and Buhai and Steurer 2023. Note, however, that the upper bound is only for the testing problem, and not for the learning problem.
For the first result, the authors use a previous construction by Kane 2015, and combine it with a construction from Diakonikolas et al. 2017, to obtain a one-dimensional discrete distribution over $2^{O(t)}$ elements that matches $t$ moments of the standard Gaussian. For the second result, the authors use the fact that their construction for the first part is essentially optimal. Namely, the uniform distribution on any $k$ points cannot match more than $O(\log k)$ moments of the standard Gaussian. This can be extended to argue that distributions that are arbitrary on $k'$ out of the $k$ points, but uniform on the remaining $k-k'$ points, cannot match more than $O(\log(k)+k')$ moments of the Gaussian. This fact allows distinguishing a $k$-mixture from the standard high-dimensional Gaussian using higher-order tensors that reveal the differences in the higher-order moments.
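In display form, the moment-matching step described above amounts to constructing a one-dimensional discrete distribution $A$, supported on $2^{O(t)}$ points, such that (a sketch; the exact parameters are as in the paper)

```latex
\mathop{\mathbb{E}}_{X \sim A}\left[ X^{j} \right] \;=\; \mathop{\mathbb{E}}_{Z \sim \mathcal{N}(0,1)}\left[ Z^{j} \right]
\qquad \text{for all } j = 1, \dots, t.
```

Hiding $A$ along an unknown direction $v$ (with $\mathcal{N}(0, I_{d-1})$ in the orthogonal complement) then yields an SQ lower bound of $d^{\Omega(t)}$; since a support of size $k$ forces $t = O(\log k)$, this gives the $d^{\Omega(\log k)}$ bound for uniform weights.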
## update after rebuttal
I thank the authors for the clarifications. It would be useful to include them appropriately in the revision of the paper. I maintain my evaluation of the paper, and my score.
Claims And Evidence: The claims and evidence appear convincing to me.
Methods And Evaluation Criteria: NA
Theoretical Claims: I only glanced over the proofs, and did not verify calculations line-by-line. They appear correct to me, and the progression in the overall analysis checks out to me.
Experimental Designs Or Analyses: NA
Supplementary Material: NA
Relation To Broader Scientific Literature: The problem of learning mixtures of Gaussians is a fundamental problem in statistics that goes back to the days of Karl Pearson from the 1890s, with applications to a variety of sciences. Pinning down the computational and statistical complexity of this problem in different regimes is central to the theory of this problem.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The paper is generally quite well-written. Personally, I find that the results in the paper add to the gaps in understanding the sample complexity of learning mixtures of Gaussians, a classical problem in statistics. In particular, the conclusion that the recent postive results due to Anderson et al 2024, Buhai and Steurer 2023 are essentially optimal, is important and satisfying. Furthermore, the additional positive result for the slightly weaker testing problem also indicates algorithms with better dependence on the minimum cluster weight may be attainable for the learning problem. Overall, I find the conclusions in the paper to be important, and they tangibly further our understanding of the gaussian mixture model problem.
Other Comments Or Suggestions: Please see questions below.
Questions For Authors: 1) Am I correct in understanding that the general sample complexity upper bound of $d^{O(k)}$ in Bakshi et al. 2022 applies to arbitrary Gaussian mixtures (arbitrary weights, arbitrary means, arbitrary covariances), whereas the lower bound of Diakonikolas et al. 2017 has an additional special property that it is a family of mixtures where for each mixture, the covariances of the components are identical (although means are different, and weights can be exponentially small)?
2) The results of Anderson et al. 2024, Buhai and Steurer 2023 seem to be about clustering points drawn from a mixture model. Could you comment on the differences between this task, and on the task of learning the mixture (i.e., either estimating the parameters/learning a distribution close in TV)?
3) Could you comment on your thoughts on how the positive result for testing (your second result) could extend to a positive learning result? Presently, while the sample complexity of your testing algorithm is better, it is only for a weaker problem. In particular, what are the primary difficulties in employing standard conversions of testing algorithms to learning algorithms?
4) In terms of technical novelty: it seems that beyond the conclusions, the primary technical novelty in the paper is the connection between the construction of Kane 2015, and the past result of Diakonikolas et al. 2017 which shows that one can approximate $t$ moments of the standard Gaussian using a distribution having support only on an interval of size $O(\sqrt{t})$. Has this combination of the Kane 2015 construction and Lemma 3.3 from Diakonikolas et al 2017 been used in the literature in the past?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and positive assessment of our work. We respond to the individual questions below:
(**Difference between Bakshi et al. 2022 and Diakonikolas et al. 2017**) Yes, the reviewer is correct that the algorithm from Bakshi et al. 2022 applies to all GMMs, while in the lower bound of Diakonikolas et al. 2017, the mixtures share a common covariance matrix and potentially different means and weights. This special structure *strengthens* the lower bound, because even the simpler case of common covariance is as hard to learn. We also achieve the common covariance structure in our lower bound, and most importantly, we get a lower bound with *uniform* weights as opposed to the exponentially small weights in Diakonikolas et al. 2017. If one wants to further explore the nuances between common and different covariances, we would like to point out that it is an open and interesting problem to refine the computational landscape based on whether the covariances are the same or not. For example, one open problem that we mention in the paper is whether a learning algorithm with complexity $d^{O(\log(1/w_{\min}))}$ exists that works for arbitrary covariances, which would improve over the $d^{O(k)}$ algorithm by Bakshi et al.
(**Comparison of different kinds of learning guarantees**) The main difference between these guarantees is that for clustering or parameter estimation, one must assume that the mixture components are statistically separated (see the pairwise mean separation assumption in Buhai and Steurer 2023); otherwise, the clustering or parameter estimation goal is information-theoretically impossible. In contrast, if the goal is simply to output a mixture that is close in total variation (TV) distance to the mixture that generated the samples (as in Bakshi et al. 2022), then the separation assumption is not necessary. We emphasize that both the Diakonikolas et al. 2017 lower bound construction and our new construction yield GMMs whose components are indeed pairwise separated, thus they imply lower bounds for all of clustering, parameter estimation and learning-in-TV-distance.
(**Testing -> Learning?**) There are multiple ways to define a learning version of our problem, which we comment on below:
Learning a parallel pancake distribution in the form defined in Problem 1.1, under the weight restriction assumption of Theorem 1.4 (with $k’$ arbitrary weights and $k-k’$ uniform weights):
* First, it should be possible, though not immediately straightforward, to learn the hidden direction $v$ of the parallel pancake distribution (the direction along which the distribution is non-Gaussian). The reasoning behind this is that, since the difference between the population moment tensor of the standard Gaussian and the one for the pancake distribution is the tensor power of the unknown vector $v$, a more sophisticated argument—such as performing tensor SVD to estimate the top eigenvector (in contrast to the simpler argument of our testing algorithm, which just estimates the norm of the moment difference)—might work for learning $v$. Once $v$ is learned, one could attempt to project the samples into direction $v$ to learn the non-Gaussian distribution $A$. There are multiple levels of approximation here: approximating the moments from samples, relating the top eigenvector of the moment tensors to the one from the population version, and bounding the learning error of $A$ along the learned direction. Thus, although the result seems plausible, we have not yet worked out how this propagation of errors can be analyzed.
* Learning an unknown mixture of $k$ Gaussians where $k'$ weights are arbitrary and $k-k'$ are uniform: This is a much more general problem, so our testing result provides very limited insight into it. As mentioned, there are interesting open problems in this direction.
(**Has the combination of Kane 2015, and Diakonikolas et al. 2017 been used before?**) To the best of our knowledge, the combination of Kane 2015 and Diakonikolas et al. 2017 has not been used in prior work. One key advantage of this approach is that it enables us to establish the lower bound for exactly uniform mixtures, rather than for mixtures with weights that are only polynomially related or differ by a constant factor. Given that we are presenting the first SQ lower bound for equal-weight Gaussian mixtures, it is unlikely that other works would have employed the results and techniques of Kane 2015. | Summary: The paper studies a hypothesis testing problem where the main task is to distinguish (with as few as possible samples) between a standard Gaussian $N(0,I_d)$, and a "parallel pancakes" distribution. This distribution is characterized by k discrete mean points along an unknown line in d dimensions. Now the distribution orthogonal to the line is standard Gaussian, but along the line, the variance is squeezed by a factor $1-\delta$.
The main results are:
1. a $d^{\Omega(\log(k))}$ lower bound against distinguishing a standard normal from a pancake mixture where all mixture weights are equal (resp. $> 1/\mathrm{poly}(k)$). Under the same weight assumptions, this matches the $d^{O(\log(1/w_{\min}))}$ upper bound of Anderson 2024.
2. When even one $w$ is not bounded below, the upper bound $d^{O(\log(1/w_{\min}))}$ becomes $2^{O(k)}$. As a natural next step, the authors provide a testing algorithm with a dominating factor of $(kd)^{O(\log(k) + k')}$, where $k'$ weights are unbounded and $k-k'$ are bounded below by $1/\mathrm{poly}(k)$. The minimum weight still plays a role, but only linearly in $1/w_{\min}$, so it must be extremely small to dominate.
The analyses are highly technical and are based on the observation that there exist discrete distributions that share many, but not too many, of their lower-order moments with the standard Gaussian. The lower bound comes from the fact that pancake mixtures with such a support set as means, spread along a random direction, are hard to distinguish from the standard normal in $d$ dimensions. The upper bounds follow by showing that only a relatively small number of moments are close enough to be indistinguishable, so the algorithm can check all $i$-th moment tensors up to $i \in O(\log(k) + k')$ and must then find at least one larger deviation.
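The moment-checking idea can be illustrated with a one-dimensional toy sketch (my own illustration, not the paper's algorithm, which works with $d$-dimensional moment tensors): a distribution can match the first few Gaussian moments exactly and only reveal itself at a higher moment, so a tester compares empirical moments against the known Gaussian moments $\mathbb{E}[Z^i]$.

```python
import math
import random

def gaussian_moment(i):
    # E[Z^i] for Z ~ N(0,1): 0 for odd i, the double factorial (i-1)!! for even i.
    return 0.0 if i % 2 else float(math.prod(range(1, i, 2)))

def max_moment_deviation(samples, m):
    """Largest gap between the first m empirical moments and those of N(0,1)."""
    n = len(samples)
    return max(
        abs(sum(x**i for x in samples) / n - gaussian_moment(i))
        for i in range(1, m + 1)
    )

rng = random.Random(0)
gauss = [rng.gauss(0, 1) for _ in range(200_000)]
rademacher = [rng.choice([-1.0, 1.0]) for _ in range(200_000)]

# The +/-1 distribution matches the first three Gaussian moments exactly,
# but its 4th moment is 1 while E[Z^4] = 3, so a 4th-moment check flags it.
print(max_moment_deviation(gauss, 4))       # small: true Gaussian passes
print(max_moment_deviation(rademacher, 3))  # small: first three moments match
print(max_moment_deviation(rademacher, 4))  # 2.0: caught at the 4th moment
```

This mirrors the structure of the argument: a distribution supported on $k$ points can match roughly the first $O(\log k)$ Gaussian moments, and the tester searches one moment order higher.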
### update after rebuttal:
The authors have addressed my points satisfactorily. Remaining issues are easily resolvable typos. I will therefore retain my initial score "4: Accept".
Claims And Evidence: All claims are rigorously proven.
Methods And Evaluation Criteria: N/A purely theoretical paper
Theoretical Claims: All theoretical claims seem correct.
Experimental Designs Or Analyses: N/A purely theoretical paper
Supplementary Material: I checked some of the proofs in the appendix, where the high level description was not entirely clear to me. I also checked the refined algorithm in the appendix, since the one given in the main paper is simplified a little (for the sake of presentation).
Relation To Broader Scientific Literature: Testing of Gaussians and mixtures thereof seems to be an important and active field.
Essential References Not Discussed: None that I know of.
Other Strengths And Weaknesses: The paper is super well written. The high level description of even very technical parts can be followed without prior knowledge in the field.
Other Comments Or Suggestions: There are a few lines that are unclear to me, possibly typos:
- p6: "A cannot match more than O(log k + k') moments with the standard Gaussian" - it would be clearer to say that this refers to the *smallest* O(...) moments. Or is there some monotonicity property that I am not aware of (such as: if the m-th moment matches, then all m'-th moments for m'<m also match)?
- p6: $\epsilon := \lambda/d^m = (d/\delta)^{(C-1)m}$: given the previous bound on $\lambda$, shouldn't the final expression be $((\delta/2)^{C}/d)^m$? It seems to be stated inversely.
- line 6 of algo 1 (also the algo in the appendix): $(i_1 \ldots j_i)$ should be $(j_1 \ldots j_i)$, or even $(1 \ldots i)$?
- definition of $p(x)$ is sometimes $\prod (x-\mu_i)$ (p6) and sometimes $\prod (x-\mu_i)^2$ (p7). Maybe better to stick to the former and then work with $p$ or $p^2$ as required.
- p7 l.356: it should be $g(x)=f(x)$, no? $f^2$ comes only in the next step.
Questions For Authors: I have no additional questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you to the reviewer for the positive assessment of our work and their thorough reading. When we refer to matching the $m$ moments, we always mean the first $m$ moments. We will ensure to clarify this whenever it is not currently explicit. The other points raised are typos, and we will fix them in the final version. | Summary: This paper studies the problem of learning mixtures of Gaussians where each component in the mixture has a shared covariance.
First, an SQ lower bound is proved, matching a recent positive algorithmic result and indicating that it likely cannot be improved. It is shown that even in the case when the mixture is of $k$ equally weighted Gaussians, the problem of distinguishing the $k$-GMM from a spherical Gaussian has SQ complexity $d^{\Omega(\log(k))}$. This indicates that a recent result showing an algorithm with $d^{O(\log(1/w_{\min}))}$ complexity is likely optimal when $w_{\min} = \Omega(1/\mathrm{poly}(k))$.
Next, this paper works on understanding the optimal complexity dependence on $w_{min}$. It is shown that for instances that are close to uniformly-weighted mixtures (but with a small number of arbitrarily-weighted components), the complexity in $k$ and $w_{min}$ can be somewhat decoupled, and solved with $d^{O(\log(k))}$ operations/samples instead of $d^{O(\log(1/w_{min}))}$ complexity.
Claims And Evidence: Yes, the proofs are clear and I was easily able to follow the general strategy despite this being a very technical work.
Methods And Evaluation Criteria: Yes, proofs make sense
Theoretical Claims: I checked the proofs in the main text, and they seemed correct
Experimental Designs Or Analyses: N/A
Supplementary Material: I did not
Relation To Broader Scientific Literature: There is a large literature on learning Gaussian Mixture Models. This paper represents new fundamental advances in our understanding of the complexity of this problem, which is not yet fully settled. I think that this paper will be well appreciated within its literature.
Essential References Not Discussed: I am not aware of any
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: * Typo in Fact 3.2, the infimum should be over V_t \setminus \{0\} instead of over V \setminus \{0\}.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort and their positive assessment of our work. | Summary: This paper studies the hypothesis-testing problem of parallel Gaussian pancakes, specifically under structural assumptions on the component weights. The goal is to distinguish between the standard gaussian and the k gaussian pancakes with collinear centers and common covariance. For learning the general mixture of k gaussians, the best known algorithm have sample complexity $d^{O(k)}$, and in the statistical query model, this problem requires sample complexity $d^{\Omega(k)}$. Recent work considers the setting where the minimum weight of component is $w_{\min}$ and components have common covariance. In this case, they provide $d^{O(\log(1/w_{\min}) )}$ time algorithm to learn the GMM.
In this paper, they first provide a SQ lower bound which shows that even when all component weights are uniform, distinguishing between such a mixture and the standard Gaussian requires $d^{\Omega(\log k)}$ complexity. This implies that the algorithm above is essentially best possible in this case.
Then they provide an algorithm for the hypothesis testing problem when most of the weights are uniform but a small fraction can be arbitrary. Their algorithm has complexity $(kd)^{O(k'+\log k)} + \log k/w_{\min}$, where $k'$ components have arbitrary weights and the minimum weight is $w_{\min}$. Their algorithm is more efficient than the previous one even if a single component has an exponentially small weight, such as $2^{-k}$. Their results refine existing complexity bounds and offer new insights into the role of weight distributions in learning Gaussian mixtures.
Claims And Evidence: Yes, the claims are all supported by rigorous theoretical analysis.
Methods And Evaluation Criteria: Yes, the lower bound and algorithm makes sense and uses various techniques from statistical query complexity, moment-matching analysis, and probability theory.
Theoretical Claims: Yes, I checked the correctness of the proofs for theorem 1.3 and 1.4.
Experimental Designs Or Analyses: N/A
Supplementary Material: Yes, I reviewed parts in appendix C and all in appendix D.
Relation To Broader Scientific Literature: The paper builds on prior work in Gaussian mixture model learning, particularly results on statistical query hardness and moment-based learning methods. It extends previous findings by refining complexity bounds and considering more structured weight distributions. The techniques used in the paper can potentially have a broader impact on the learning theory and statistics.
Essential References Not Discussed: No, the paper cites the related literature thoroughly.
Other Strengths And Weaknesses: Strengths: The paper provides a solid theoretical contribution by tightening known complexity bounds and introducing novel proof techniques. They also provide a new algorithm that can achieve better complexity for the testing problem of Gaussian pancakes when the weight distribution is structured.
Other Comments Or Suggestions: Minors:
- Fact 3.2, Line 236, $p \in V_t \setminus$ {0}.
- Lemma 3.5, $p\neq 0$ in the statement, and Line 260, $|x| = O(\sqrt{t})$.
- Line 280, $\epsilon = (\delta / d)^{Cm}$?
- Line 356, g(x) = f(x) ?
- Line 409, $O(\sqrt{t})$ ?
- Appendix D.3, the second Corollary 4.5 should be the restatement of Lemma 4.4.
Questions For Authors: 1. Given the analysis of the algorithm, is it possible to argue about the lower bound in the case? Is this $O(\log k/ w_{\min})$ term necessary for this problem in the worst case? It seems this is only used to check whether the components are more than $O(\sqrt{d})$ far away from the origin?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort and their positive assessment of our work. We will fix the typos pointed out in the final version. We respond to their question below:
(**Is the $\log(k)/w_{\min}$ term in the sample complexity necessary?**) This term corresponds to (roughly) the number of samples required to observe at least one sample from each Gaussian component, which is necessary for distinguishing the hypotheses. Suppose that the parameter $\delta$ is sufficiently small such that distinguishing between $N(0, I)$ and $N(0, I - \delta vv^\top)$ requires more than $\log(k)/w_{\min}$ samples. If the parallel pancakes distribution has all but the $i$-th component centered at the origin, then the algorithm cannot make the correct prediction unless it sees a sample from that special component. This shows that $1/w_{i}$ samples are necessary. Since the algorithm does not know which component is the special $i$-th one, our analysis applies a union bound over all components and uses the bound $w_i > w_{\min}$ to arrive at the $\log(k)/w_{\min}$ term in the sample complexity. While there might be some room for improving the $\log(k)$ factor with a potentially better argument than the naive union bound, the $1/w_{\min}$ term is essentially required by the initial argument. | null | null | null | null | null | null |
Leveraging Offline Data in Linear Latent Contextual Bandits | Accept (poster) | Summary: This paper introduces a linear version of latent bandit models, where the reward for user u and action a of feature $\phi_{u,a}$ at step t is $Y_{u,a} = \phi_{u,a}U\theta+\varepsilon_t$ where $\varepsilon_t$ is an iid subgaussian noise, U an unitary matrix and $\theta$ a low-dimensional latent vector. U and $\theta$ are unknown. Moreover, the authors assume the access to a prior offline set of N short trajectories to speed up online learning. The goal is to minimize the cumulative regret at fixed horizon T.
First, an offline algorithm called SOLD is introduced to learn a confidence set and an estimator of the latent subspace by estimating the matrix $UU^T$ on the offline data. Second, a LinUCB-inspired algorithm named LOCAL-UCB is proposed; it uses arm indices leveraging the confidence set learned on the offline data and the iteratively updated confidence set on online data to get lower-bound-matching guarantees on the regret, at the cost of computational tractability. Finally, another online algorithm named ProBALL-UCB avoids solving the difficult optimization problem in LOCAL-UCB at the price of no longer matching regret upper bounds.
Experimental results are provided on several baselines and synthetic and real-life data sets.
## update after rebuttal
All my questions have been answered and I keep my very positive score on this paper.
Claims And Evidence: To me, all claims (regret bounds, empirical results, conditions for stateless decision processes being a latent bandit) are backed by convincing evidence.
Methods And Evaluation Criteria: Experiments are performed both on synthetic and real-life data sets relevant to the recommendation task. The selected baselines (LinUCB for linear bandits without relying on offline, mTS and mUCB for latent bandits without the linear structure, different concentration bounds and hyperparameter values for ProBALL-UCB) are relevant.
Theoretical Claims: I did not check the proofs in Appendix.
Experimental Designs Or Analyses: See Methods and Evaluation Criteria.
Supplementary Material: I only reviewed Section G, on robust version and extension of some of the proposed algorithms, and Section H on experimental details.
Relation To Broader Scientific Literature: The idea of splitting trajectories to learn the latent subspace in the SOLD algorithm comes from a prior work [1]. Linearity is a popular structure in the bandit literature [2] and latent bandits with optimism have been investigated since at least 2014 [3].
[1] Kausik, C., Tan, K., & Tewari, A. (2023, July). Learning mixtures of markov chains and mdps. In International Conference on Machine Learning (pp. 15970-16017). PMLR.
[2] Li, L., Chu, W., Langford, J., & Schapire, R. E. (2010, April). A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web (pp. 661-670).
[3] Maillard, O. A., & Mannor, S. (2014, January). Latent Bandits. In International Conference on Machine Learning (pp. 136-144). PMLR.
Essential References Not Discussed: Lines 81-82 on page 2: "To the best of our knowledge, this is the first lower bound in a hybrid (offline-online) sequential decision-making setting": a paper from 2023 [1] gives lower bounds on the cumulative regret for a structured type of latent bandit (with hidden clusters), where the algorithmic contribution leverages an offline matrix completion oracle during online learning. It is not exactly the same "hybrid" setting with offline collected trajectories, and their lower bound does not take offline information into account, but, as it is a very similar setting, I would advise discussing (shortly) this paper and rephrasing the quoted sentence.
[1] Pal, S., Suggala, A. S., Shanmugam, K., & Jain, P. (2023, April). Optimal algorithms for latent bandits with cluster structure. In International Conference on Artificial Intelligence and Statistics (pp. 7540-7577). PMLR.
Other Strengths And Weaknesses: Strengths
- The theoretical contributions are novel, strong and diverse (conditions for learning unconfounded estimators, regret lower bound, generality of latent bandits).
- The algorithmic contributions are novel and interesting (latent subspace and confidence set learning from offline data, lower-bound-matching algorithm, tractable counterpart). The latter can be extended to Bayesian approaches to regret minimization.
- The empirical results seem robust with an acceptable number of iterations (30), provide variations on hyperparameter values, and clearly show an improvement over the state-of-the-art in related settings, as the linear latent bandit was first introduced in this paper.
- The problem of cumulative regret minimization in structured latent bandits makes sense.
- The paper is well-written.
- The code is available, reusable (presence of a specific Python module) and reproducible (presence of notebooks for each data set).
Weaknesses
- I am surprised that LinUCB performs that well in Figure 2, second plot, as it does not rely on offline data. Could you explain this?
Other Comments Or Suggestions: None.
Questions For Authors: See Weaknesses. This is a minor concern to me.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We are very grateful to the reviewer for their kind and constructive comments. We are excited to hear that the reviewer highlights so many strengths of our work, including:
1. The strength of our theoretical contributions.
2. Our problem formulation.
3. Our novel and interesting algorithmic contributions.
4. The writing of the paper.
5. The reproducibility of our code (thank you for looking through the code!).
6. The robustness of our empirical results.
Again, we appreciate your thorough review, and address your comments below.
## Essential References Not Discussed
We were indeed not aware of the lower bound in [1] (Pal et al, 2023). As the reviewer remarks, there is a subtlety since this is also a purely online setting unlike our hybrid offline-online setting, although it relies on an offline matrix completion oracle. Nevertheless, we agree that this paper is relevant to ours and a remark should be added within our paper addressing it. We thank the reviewer for bringing this to our attention.
## Other Strengths and Weaknesses
Note that in the second plot of Figure 2, LinUCB performs better than the algorithms mUCB and mmUCB from [2] (Hong et al, 2020), the same as H-ProBALL-UCB, and worse than E-ProBALL-UCB and M-ProBALL-UCB.
1. **Why LinUCB is better than mUCB and mmUCB:** The outperformance over mUCB and mmUCB makes sense, since these two algorithms only work with standard multi-arm bandits with finitely many latent states without a linear structure. LinUCB can leverage an approximately linear structure in the MovieLens data by working in a linear bandit setting. We know that given $K$ arms in a linear bandit, standard UCB algorithms have $K\sqrt{T}$ regret, while LinUCB has $d\sqrt{T}$ regret, which can be lower if $d<K$. It is certainly still a bit surprising that even using offline data in mUCB and mmUCB is not enough to outperform LinUCB - a purely online algorithm that simply leverages a potential linear structure in rewards.
2. **Why LinUCB performs the same as H-ProBALL-UCB:** ProBALL-UCB performs best when we achieve tight subspace concentration in constructing our subspace confidence sets. Specifically, given loose confidence sets, ProBALL-UCB switches to LinUCB very early on. As the Hoeffding (H) confidence sets are not as tight as the empirical Bernstein (E) and martingale Bernstein (M) confidence sets, the performance of H-ProBALL-UCB accordingly is similar to that of LinUCB.
3. **Why LinUCB is worse than E-ProBALL-UCB and M-ProBALL-UCB:** As we mention above, ProBALL-UCB is able to leverage offline data better when we use tight subspace concentration bounds in constructing our subspace confidence sets. The empirical Bernstein (E) and martingale Bernstein (M) confidence sets are much tighter than Hoeffding confidence sets. So, ProBALL-UCB is able to work in the learnt low-dimensional subspace for long enough before having to switch to LinUCB.
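For concreteness, the vanilla LinUCB baseline being compared against can be sketched as follows (our own minimal single-context illustration, not ProBALL-UCB; the arm features, `alpha`, and noise level are arbitrary assumptions):

```python
import numpy as np

def linucb(features, theta_star, T, alpha=1.0, noise=0.1, seed=0):
    """Minimal LinUCB: ridge estimate of theta plus an optimistic bonus per arm."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    A = np.eye(d)          # regularized Gram matrix
    b = np.zeros(d)
    total = 0.0            # noiseless reward collected, for evaluation only
    for _ in range(T):
        theta_hat = np.linalg.solve(A, b)
        A_inv = np.linalg.inv(A)
        # Quadratic-form bonus ||x||_{A^{-1}} for every arm at once.
        bonus = np.sqrt(np.einsum("ij,jk,ik->i", features, A_inv, features))
        arm = int(np.argmax(features @ theta_hat + alpha * bonus))
        x = features[arm]
        r = x @ theta_star + noise * rng.standard_normal()
        A += np.outer(x, x)
        b += r * x
        total += x @ theta_star
    return total

rng = np.random.default_rng(1)
phi = rng.standard_normal((5, 3))   # K = 5 arms in d = 3 dimensions
theta = np.array([1.0, -0.5, 0.3])
means = phi @ theta
reward = linucb(phi, theta, 2000)
print(reward / 2000, means.max())   # per-round reward approaches the best arm's mean
```

ProBALL-UCB's advantage over this baseline comes entirely from the warm start: it first runs the same kind of optimistic index in the learnt low-dimensional subspace, and only falls back to this vanilla routine once its switching threshold is reached.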
## Refs
1. Pal et al, 2023. Optimal algorithms for latent bandits with cluster structure.
2. Hong et al, 2020. Latent Bandits Revisited.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my question. I have no further comments and keep the score as it is. | Summary: This paper explores linear latent contextual bandit problems and how offline data can be used to speed up online learning. The authors introduce an offline algorithm that learns a low-dimensional latent subspace with provable guarantees. Building on this, they propose an online algorithm that achieves minimax-optimal regret, meaning its performance is as good as theoretically possible. They also present a more practical version of this algorithm that is computationally efficient but comes with a slightly weaker guarantee. Empirical results further support their theoretical findings, demonstrating the effectiveness of their approach.
Claims And Evidence: The overall claims in the paper are clear and well-supported by both theoretical analysis and empirical results.
Methods And Evaluation Criteria: Overall, the proposed methods make sense for the problem setting. However, one concern is that they assume the proposed algorithm (including in the empirical results) knows $d_K$, which is often difficult to determine in practice. The authors mention that $d_K$ can be estimated heuristically, but the empirical results do not include any experiments where $d_K$ is estimated rather than given. Including such results would have strengthened the evaluation and provided a clearer understanding of the method’s practical applicability.
Theoretical Claims: The theoretical claims are strong and well-reasoned. While I didn’t go through every detail of the proofs, they seem correct.
Experimental Designs Or Analyses: In Figure 2, the first plot (Simulation Study) shows that the regret curves of the proposed algorithm increase rapidly at certain points, with a noticeably steep slope. In particular, the red line has a steeper incline than LinUCB. This raises concerns about the practical effectiveness of the proposed methods for larger timesteps, as it is unclear whether they consistently outperform LinUCB in the long run.
Supplementary Material: I have primarily reviewed Appendix C, D, and E, along with parts of other sections.
Relation To Broader Scientific Literature: This paper expands the scope of linear bandits by incorporating latent states and hybrid offline-online learning settings. It introduces novel algorithms that leverage offline data to accelerate online decision-making while accounting for latent structures in user behavior or environments.
Essential References Not Discussed: I think the paper covers the related work quite thoroughly.
Other Strengths And Weaknesses: The lower bound analysis and empirical results clearly strengthen the paper, demonstrating both theoretical optimality and practical effectiveness of the proposed methods.
Other Comments Or Suggestions: No other comments.
Questions For Authors: 1. Is the offline data generated from multiple different latent states, rather than just from the latent state $\theta^\star$? If so, could you explain this in more detail?
2. In the "simulation study" experiment, does your proposed algorithm outperform LinUCB when the number of timesteps is sufficiently large?
3. In the experiments, what happens if we estimate $d_K$ from the data instead of assuming it's known?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review and confidence in our paper! We are grateful for your appreciation of:
1. The clarity of our claims and the strength of the theoretical and empirical support for them.
2. The expansion of the scope of linear bandits to include latent states and hybrid offline-online learning.
3. The novelty of our algorithms.
4. The thorough coverage of literature in our related work section.
5. The strengthening of the paper due to the lower bound analysis and the empirical results.
We address your qualms below. **If our responses have adequately addressed your concerns, we would be delighted if you would consider raising your score.**
## Methods and Evaluation Criteria
We are grateful for your appreciation of our proposed method and our heuristic for estimating $d_K$. We do in fact perform experiments for estimating $d_K$ in the Appendix, and we thank you for allowing us to reiterate our experiments involving this principled estimation method below.
1. **In practice, we don’t need to know $d_K$:** As you point out, in lines 206-209 (right column), we mention that one can use our theory to derive a principled estimate for $d_K$.
2. **Experiments estimating $d_K$ in Appendix H.1:** We direct the reviewer to Appendix H.1, where we provide experiments where we determine $d_K$ from offline data within the MovieLens experiments.
3. **Effect of $d_K$ estimation on downstream regret:** For your satisfaction, we also discuss the effect on downstream regret here. Overestimating $d_K$ is not a huge issue, as this simply leads to a slightly larger confidence set. Underestimating $d_K$, on the other hand, can lead to the learning of a misspecified subspace, and the regret bound then degenerates into the $d_A\sqrt{T}$ bound.
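As a toy illustration of such a heuristic (our own sketch under assumed names, not the paper's exact procedure): given a noisy symmetric estimate $\hat{M}$ of the rank-$d_K$ projector $UU^\top$, one can count eigenvalues above a threshold; since overestimation is the benign failure mode, the threshold can be chosen conservatively low.

```python
import numpy as np

def estimate_d_K(M_hat, tau=0.5):
    """Count eigenvalues of the estimated projector above tau.

    A projector's eigenvalues are exactly 0 or 1, so any tau between the
    noise level and 1 minus the noise level recovers the rank; erring low
    biases toward overestimating d_K, the benign direction."""
    return int(np.sum(np.linalg.eigvalsh(M_hat) > tau))

rng = np.random.default_rng(0)
d, d_K = 10, 3
U = np.linalg.qr(rng.standard_normal((d, d_K)))[0]   # orthonormal basis of the subspace
E = rng.standard_normal((d, d))
M_hat = U @ U.T + 0.05 * (E + E.T) / 2               # small symmetric perturbation
print(estimate_d_K(M_hat))                           # recovers d_K = 3
```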
We apologize for not explicitly mentioning appendix H.1 in the experiments section and will do so in the camera-ready version.
## Experimental Designs Or Analyses
We appreciate your concern about the practical applicability of ProBALL-UCB, but we want to point out that we discuss in the paper how the experimental observations are consistent with our expectations and highlight the superiority of ProBALL-UCB over Lin-UCB.
1. As we discuss in lines 290-297 (left column) of the paper as well as show in the algorithm for ProBALL-UCB, ProBALL-UCB switches to standard Lin-UCB after a certain threshold is reached.
2. As we discuss in lines 366-368 (left column), the kinks or “rapid increases” that we see in Figure 2 correspond to switching to Lin-UCB once the aforementioned threshold is reached.
3. This means that the growths of these graphs after the rapid increases are _identical_ to Lin-UCB, not greater or lesser. An example of this can be found in Appendix H.2, Figure 5, in the right-most subfigure labeled $\tau=10$. Within this subfigure, ProBALL-UCB switches over to Lin-UCB earlier than in the illustration in Figure 2, with a similar initially steeper slope, but the regret of ProBALL-UCB is never worse than that of Lin-UCB.
So, the warm-start provided by ProBALL-UCB makes it superior to vanilla Lin-UCB, and ProBALL-UCB is at least as good as Lin-UCB after the "rapid increase."
## Questions for Authors
### Question 1
Indeed, as we mention in line 131 (left column) as well as in other parts of the paper, the offline data comes from multiple latent states. In fact, as encapsulated in Assumption 2, there have to be enough latent states in the offline data to cover the low-rank latent subspace.
### Question 2
Yes, it does, as we clarify conceptually in our response to your qualm under “Experimental Designs or Analyses” above.
### Question 3
As clarified in our response to your qualm under “Methods and Evaluation Criteria” above, we do in fact do so in Appendix H.1. We apologize for not mentioning this in the main paper, and we will do so in the camera-ready version. | Summary: In this paper, the authors study the linear latent contextual bandit problem. They consider a setting in which the latent reward vectors lie within a low-dimensional subspace. An offline dataset from tasks whose hidden reward vectors share the same subspace is assumed to be available. They first present an algorithm that estimates the subspace, which is proven to recover the subspace with a bounded error. Then, using the subspace estimation algorithm, they show that, under the assumption that the distribution of latent reward vectors spans the space, the algorithm can provably utilize the offline dataset to achieve a better regret bound. A complementary regret lower bound is then presented to show the near-optimality of the algorithm. As the previous algorithm is not computationally efficient, they present an algorithm that approximates the confidence set with a corresponding regret bound. Finally, experiments on both synthesized and real data are presented to demonstrate the practical performance of their algorithm.
Claims And Evidence: The major problem is that the regret bound presented in Theorem 2 is not minimax optimal, as the regret upper bound depends on the coverage factors $\lambda_\theta$ and $\lambda_A$. In contrast, the lower bound presented in Theorem 3 has no dependency on either $\lambda_\theta$ or $\lambda_A$. As a result, in the case where the feature vector or the hidden latent reward vector does not span the whole space, the algorithm is not minimax optimal and has no advantage compared to the vanilla LinUCB algorithm for contextual bandits.
Methods And Evaluation Criteria: N/A
Theoretical Claims: All the claims are clear and proved.
Experimental Designs Or Analyses: They apply their algorithm to both synthesized and real data.
Supplementary Material: I didn't go through the supplementary material.
Relation To Broader Scientific Literature: This paper extends the understanding of leveraging offline data in bandits problem, showing that the offline data can provably improve the performance of the algorithm.
Essential References Not Discussed: They have cited all relevant papers to my knowledge.
Other Strengths And Weaknesses: Although ProBALL-UCB is presented as an approximation of the LOCAL-UCB algorithm to ensure computational efficiency and has been shown to leverage offline data in experiments, it is not provably established that the algorithm achieves a better regret bound in general. Specifically, since $\hat{U}$ is determined by the algorithm routine, it is unclear in which situations $\phi(x_t, a_t)$ lies in the span of $\hat{U}$. It would be valuable to formally establish the cases in which ProBALL-UCB can effectively leverage offline data.
Other Comments Or Suggestions: See above
Questions For Authors: - Do you think the data coverage assumption (i.e., Assumption 2) is necessary? In comparison, it has been shown that in multi-task linear bandits, this assumption is not required to obtain a provably improved regret bound by leveraging data from other tasks (see, e.g., Yang et al., 2022).
Yang, Jiaqi, et al. "Nearly minimax algorithms for linear bandits with shared representation." arXiv preprint arXiv:2203.15664 (2022).
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your comments. We appreciate that you recognize:
1. The clarity and veracity of our theorems and proofs.
2. The application of our algorithms to both synthetic and real data.
3. Our contribution to the problem of leveraging offline data in bandits.
We address your qualms below. **If our responses have adequately addressed your concerns, we implore you to consider raising your score.**
## Claims and Evidence
While we agree there is subtlety in its nature compared to typical purely online bounds, we maintain that our regret bound is optimal. We appreciate the opportunity to clarify this and will include the discussion in the camera-ready appendix.
1. **Disappearance of $\lambda_\theta$ and $\lambda_A$ in the worst-case bound:** As you would agree, minimax lower bounds are worst-case bounds that optimize over “all problem instances.” So, instance-specific parameters like $\lambda_\theta$ and $\lambda_A$ naturally disappear in the worst-case expression. This is standard in lower bounds for bandits. However, it is crucial to carefully choose the space of “all problem instances,” especially in our offline+online setting.
2. **The insufficient offline coverage case is trivial; we need a more informative lower bound:** The key challenge lies in selecting an instance space that yields an informative lower bound.
a. **Typical and trivial choice of problem-instance space:** In online linear bandits, it’s common to vary all parameters after fixing the dimension and bounding the reward parameter $\beta$. Applying that here yields a trivial $d_A\sqrt{T}$ bound, achieved when offline data has insufficient coverage (when offline data "does not span the whole space," as you said). Many algorithms (including ours) attain this, showing minimax optimality but not any benefit over Lin-UCB.
b. **Our more informative choice of problem-instance space:** To demonstrate a real advantage over Lin-UCB, we constrain the instance space further - we fix the offline data quality by assuming $\lambda_\theta$ is bounded below. This models scenarios with sufficient offline coverage, and our lower bound shows that even in these non-trivial settings, no algorithm can outperform ours. Further, we establish a clear advantage over the $d_A\sqrt{T}$ bound for Lin-UCB.
3. **Potential for more instance-dependent lower bounds:** While we analyze worst-case performance over a meaningful class, there’s room for future work on sharper, instance-dependent lower bounds that reflect explicit dependence on both $\lambda_\theta$ and $\lambda_A$. However, we believe that our introduction of a nontrivial lower bound in this hybrid offline+online setting is already a significant step.
## Other Strengths and Weaknesses
It is unclear what you mean by “better regret bound in general.” You might be comparing to either LOCAL-UCB or Lin-UCB - we address each of the two below:
1. **ProBALL-UCB vs LOCAL-UCB:** We explicitly state that ProBALL-UCB has weaker guarantees than LOCAL-UCB in general (lines 86 and 303). However, as noted in lines 303–310 and proven in Appendix E.2.1, ProBALL-UCB matches LOCAL-UCB in “good” cases, like when the feature set is an $\ell_2$ ball, since then $\phi(x_t, a_t)$ lies in the span of $\hat{\mathbf{U}}$. Such an assumption is standard in the literature, including [1] (Yang et al, 2022), which you referenced. We acknowledge not citing Appendix E.2.1 in the main text and will correct that in the camera-ready version.
2. **ProBALL-UCB vs Lin-UCB:** ProBALL-UCB is better than LinUCB, as we can see from Theorem 4. In fact, ProBALL can improve significantly on Lin-UCB, as we see both from Theorem 4 and the experiments.
## Questions for Authors
### Question 1
Yes, this assumption is essential—due to a key difference between our setting and that of [1] (Yang et al, 2022).
1. **[1] is purely online:** In [1], the setting is purely online and the learner chooses actions in each of M concurrent bandit instances. This allows for coordinated exploration across tasks.
2. **Our “multi-task” dataset is collected offline:** We work with an offline dataset of trajectories spanning multiple bandit instances. The learner has no control over the behavior policy that collected the data. Without structural assumptions on the dataset, estimating a useful subspace becomes infeasible, and regret degenerates to the standard $d_A\sqrt{T}$.
Coverage assumptions similar to ours are commonplace within the offline linear MDP literature, like in [2,3], and they all take the form of concentrability-type assumptions [4] within the broader offline RL literature proper.
## Refs
1. Yang et al. (2022), Nearly Minimax Algorithms for Linear Bandits with Shared Representation
2. Jin et al. (2021), Is Pessimism Provably Efficient for Offline RL?
3. Duan et al. (2020), Minimax-optimal off-policy evaluation with linear function approximation
4. Zhan et al. (2022), Offline Reinforcement Learning with Realizability and Single-policy Concentrability | Summary: This paper studies the setting of _linear latent contextual bandits_. If you are given multiple trajectory data under some unknown behavior policy, with possibly different latent states for each trajectory, how do you efficiently use it in an online setting? This paper proposes three algorithms and their analysis — an algorithm for estimating the linear subspace for the latent variables, a minimax optimal (under expected regret) online algorithm, and a computationally efficient algorithm that is almost optimal under some settings. Finally, the paper shows the generality of latent bandits by defining a notion of exchangeable and coherent stateless decision processes and showing that every such process is a latent bandit.
Claims And Evidence: The paper is exceptionally clear and thorough. Almost every claim is convincingly supported. It also does an excellent job conveying the intuition behind the definitions, algorithms and the proofs.
Methods And Evaluation Criteria: This is a paper of theoretical nature, but does a great job of demonstrating the practical utility of the proposed algorithms by doing some experiments demonstrating their efficiency against reasonable baselines.
Theoretical Claims: No, I cannot attest to checking the proofs carefully. I only skimmed some of the proofs in the appendix. However, what I read made sense and it seems the authors have given full proofs of everything.
Experimental Designs Or Analyses: I checked the appendix detailing the experimental results and they look good to me.
Supplementary Material: Not everything. See above.
Relation To Broader Scientific Literature: The paper studies a more general setting of latent bandits introduced by Hong et al. (2020).
Essential References Not Discussed: Nothing comes to mind here.
Other Strengths And Weaknesses: As I said before in the review this is a very nicely written piece of work. As a reviewer it is always appreciated to review a paper from which I learn a lot.
I really like the SOLD algorithm. The idea of trajectory splitting is very cool!
Other Comments Or Suggestions: There is a small typo when defining $\overline{\mathbf{D}}_{N,i}$ in the additional notation section.
Questions For Authors: 1. Why is the map $U_\star$ assumed linear?
2. I did not understand the argument why $U_\star$ can be assumed to be orthogonal without loss of generality. It may not even be a square matrix! A change of basis enforced by $A^{-1}$ does not necessarily makes $U_\star$ orthogonal.
3. On line 123, it is stated that "permuting the labels and rewards". But $A$ is permuting latent states.
4. From the discussion in section 8, I am unable to see how a latent bandit is an SDP. The definition of latent bandit has an extra measure-valued function $F$ which an SDP doesn't have.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We are grateful for your review and your confidence in our paper! We appreciate that you enjoyed our:
1. Clarity and thoroughness of evidence.
2. Presentation of intuition and writing quality.
3. Demonstration of practical utility through experiments.
4. Generality over existing work in leveraging offline data for bandits.
5. Technical ideas behind SOLD.
We acknowledge the typo and will address it in the camera ready version!
## Questions
### Question 1
Linearity is a standard structural assumption in bandits, and it is not unreasonable that a continuous latent state could have a linear effect on the reward parameters of a bandit instance. This neatly generalizes the tabular assumption that each latent state comes with a specific reward parameter while allowing us to tackle continuous latent states. We demonstrate the practical relevance of this assumption by evaluating our algorithms on real-life MovieLens data.
However, we agree that more complex relations between the latent state and reward parameters are possible, and we leave the general function approximation case to future work.
### Question 2
This seems to be a misunderstanding in choice of terminology, and we are glad you brought it up. $U_\star$ is indeed not a square matrix at all! By virtue of our setting, it is a $d_K \times d_A$ matrix, and so in fact has very skewed dimensions.
By orthogonal, we simply mean that the columns of $U_\star$ are orthonormal, not the rows. That is, $U_\star^\top U_\star$ is the $d_A$-dimensional identity matrix, but indeed, $U_\star U_\star^\top$ is almost never the $d_K$-dimensional identity matrix. Naturally, we only rely on the first fact (that $U_\star^\top U_\star = I$) in our proofs, which does hold WLOG.
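The distinction can be verified numerically. Below is a minimal sketch (with hypothetical dimensions, not taken from the paper) showing that a tall matrix with orthonormal columns satisfies $U^\top U = I$ but not $U U^\top = I$:

```python
import numpy as np

# Hypothetical dimensions for illustration: d_K >> d_A, so U is tall and skinny.
d_K, d_A = 8, 3

# Reduced QR of a random tall matrix yields a U with orthonormal COLUMNS.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((d_K, d_A)))

# Columns are orthonormal: U^T U is the d_A x d_A identity ...
print(np.allclose(U.T @ U, np.eye(d_A)))  # True
# ... but U U^T is only a rank-d_A projection, not the d_K x d_K identity.
print(np.allclose(U @ U.T, np.eye(d_K)))  # False
```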
We will clarify our language in the camera ready version.
### Question 3
By "permuting labels and rewards together," we mean permuting the latent labels so that the reward trajectories assigned to a given label stay together. This is the same as permuting latent states.
We recognize the confusion this language can create and will clarify by simply saying "just like in the case of finitely many latent states, observations are not changed by permuting the latent states. That is, observations are not changed by permuting latent trajectory labels while keeping trajectories with the same label together."
### Question 4
The latent bandit is indeed a special case of an SDP, where the function $\mathcal{F}_H$ is induced by the latent state random variable $F$.
Specifically, as we state in the definition of a latent bandit, a latent bandit is an SDP where the function $\mathcal{F}_H(a_1, \dots, a_H) = Y_1, \dots, Y_H$ is defined by drawing $Y_1, \dots, Y_H$ independently conditioned on $F$ according to the distributions $Y_h \sim F(a_h)$. So, an SDP does not have a latent state $F$, but a latent bandit is a special case of an SDP where the functions $\mathcal{F}_H$ are induced by the latent state $F$ associated with the latent bandit.
Just to help clarify this point, special cases of a general object can and usually do have extra structure. So, a latent bandit has the extra structure $F$ on top of the SDP functions $\mathcal{F}_H$ induced by $F$, just like how a linear bandit comes with the extra feature map $\phi$ and reward parameter $\beta$ on top of the general bandit reward function $r$ induced by $\phi$ and $\beta$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. It is helpful.
I will keep my acceptance score. | null | null | null | null | null | null |
Behavior-agnostic Task Inference for Robust Offline In-context Reinforcement Learning | Accept (poster) | Summary: This work analyzes the shortcomings of existing In-context Reinforcement Learning (ICRL) methods, pointing out their inability to handle context shift scenarios. The authors theoretically analyze the necessity of maximizing the true mutual information between context representation and task indices. Building on this foundation, they propose Behavior-agnostic Task Inference (BATI), which ensures that the context representation focuses solely on the environmental dynamics. Finally, experiments conducted in environments with noisy dynamics demonstrate the effectiveness of BATI.
## update after rebuttal
Thanks for the author's reply, I have no further questions.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I have reviewed Theorem 3.1 proposed by the authors, but I cannot guarantee that the proof is entirely correct.
Experimental Designs Or Analyses: The experimental design is basically reasonable
Supplementary Material: I reviewed all supplementary material.
Relation To Broader Scientific Literature: The authors' discussion on context shift in ICRL is quite intriguing and provides new insights for future work on generalization in ICRL.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**
1. The authors' analysis of context shift appears to be insightful. Particularly, the example in Figure 1 aptly illustrates their motivation.
2. The analysis of the UNICORN and CSRO works theoretically explains the motivation behind BATI's focus on environmental dynamics.
3. The experimental results demonstrate that BATI's performance is competitive, especially in scenarios where the environmental dynamics noise increases.
**Weaknesses**
1. Although BATI avoids the impact of behavioral policies on context shift by predicting environmental dynamics, it seems to also limit the information capacity of the task inference encoder.
2. Although the experimental results are promising, the range of experimental environments is somewhat limited. In addition, the comparisons do not include baselines like Algorithm Distillation, which use a Transformer as a backbone, thereby reducing the persuasiveness of BATI.
Other Comments Or Suggestions: No other comments.
Questions For Authors: From my perspective, methods like Algorithm Distillation handle both task inference and context-conditioned policy within a single model, such as a Transformer. Compared to these methods, what are the advantages and disadvantages of the authors' approach, which uses two separate models for learning?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review! We answer your questions below.
> Q1. Types of environments.
R1: We use a set of MuJoCo environments that is standard in the field of meta-RL and consistent with our baselines. To further demonstrate the generality of BATI, we have additionally conducted a preliminary multi-agent experiment. We choose Kuhn Poker, a two-player card game with discrete state and action spaces, differing from the continuous MuJoCo environments used in our paper. We generate different player-2 (opponent) policies as "tasks" and learn an adaptive policy for player-1 over 10 episodes (20 steps) of contexts. As shown in the table below, BATI maintains superior performance over all baselines, further showcasing its capabilities and generalization.
| Method | Oracle (pre-IQL) | BATI | CSRO | FOCAL | Recon | UNICORN |
| -- | --- | --- | --- | --- | --- | --- |
| Episodic Return | $0.0734$ | $\mathbf{-0.049 \pm 0.025}$ | $-0.180 \pm 0.032$ | $-0.191 \pm 0.020$ | $-0.185 \pm 0.052$ | $-0.243 \pm 0.042$ |
> Q2. Comparison with single-model, transformer-based approaches.
R2: We note that Algorithm Distillation imposes additional requirements on the training data (i.e. generated by the learning histories of an RL algorithm), making it less general than BATI and unsuitable for our evaluation setup. In general, we remark that the family of single-model approaches performs Bayesian inference to deduce the posterior given a context trajectory, and is thus flawed in the same way as the context encoders of our baselines, as argued in Sec. 3 and 4 of our paper.
To support this argument, we add a new, recent baseline [DPT](https://arxiv.org/abs/2306.14892) that also makes use of a transformer to encode the context and predict the optimal action, outperforming Algorithm Distillation. Due to the computational requirements of training transformers and the tight rebuttal schedule, we implement DPT and evaluate it in AntDir and HalfCheetahVel, as shown below. It can be seen that DPT is outperformed in both environments.
| Environments | Oracle (pre-IQL) | BATI | DPT | CSRO | FOCAL | Recon | UNICORN |
|----------------|---|------------------|------------------|------------------|------------------|------------------|-------------------|
| AntDir | $58.7$ | $46.4 \pm 2.9$ | $-8.8 \pm 0.9$ | $28.9 \pm 3.0$ | $-46.5 \pm 2.2$ | $-18.3 \pm 13.7$ | $-32.0 \pm 4.2$ |
| HalfCheetahVel | $-139.3$ | $-122.8 \pm 1.6$ | $-408.9 \pm 3.3$ | $-134.8 \pm 8.9$ | $-279.4 \pm 9.4$ | $-201.8 \pm 7.4$ | $-267.3 \pm 20.5$ |
>Q3. Design and capacity of task embeddings.
R3: We demonstrate the expressivity of our task embedding design with extensive experiments in our paper, showing that BATI outperforms the baselines across the board. To further support the capabilities of BATI, we conduct a new OOD task generalization experiment in AntDir, which is even more demanding for the task embedding, requiring generalization to both OOD contexts and tasks. We sample training goal directions from $[0, \pi)$ and testing directions from $[\pi, 2\pi)$ to build disjoint distributions of training and testing tasks. The results are shown in the table below. It can be seen that while the performance decreases compared to the in-distribution case, BATI still outperforms baselines by a large margin and is the only method to achieve positive returns. This further validates the generalization capability of BATI and its task embedding designs.
| Method | BATI | CSRO | FOCAL | Recon | UNICORN |
|-------------|----------------|------------------|-----------------|------------------|------------------|
| Episodic Return | $12.1 \pm 0.5$ | $-45.8 \pm 14.5$ | $-74.7 \pm 6.1$ | $-51.8 \pm 37.0$ | $-104.6 \pm 9.1$ |
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's reply, I have no further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your time and effort! We are glad to see that our response addresses your concerns and would greatly appreciate it if you would consider revising your score. | Summary: Authors propose Behavior-agnostic Task Inference (BATI) approach for meta RL problems which is claimed to be more robust to noisy dynamics compared to previous methods like UNICORN or CSRO and which works at the same level in noise-free cases.
Claims And Evidence: Claims are supported by the evidence, but the experimental design is not clear to me, which makes the evidence questionable. E.g. CSRO, the strongest baseline, was not designed for evaluation under adversarial context, and it is not clear whether it is tested under the same protocol as BATI. More on this in the following sections. Replacing BRAC with IQL in the baselines without demonstrating the difference is not reliable. Also, I have not found details on hyperparameter tuning for BATI and the baselines, which is crucial for RL.
Methods And Evaluation Criteria: Evaluation criteria and used benchmarks (meta MuJoCo environments) make sense and common in the field.
But I would like to suggest an additional set of experiments on the task side. How would performance change if train and test tasks had different distributions? E.g. in AntDir, what if train goals are sampled from $[0, \pi)$ and test goals from $[\pi, 2\pi)$? UNICORN and CSRO should be able to handle this, as demonstrated by the corresponding works, but it is not clear for BATI because of its design (the finite codebook of latent task representations Z).
Theoretical Claims: I did not carefully checked theoretical claims and did not observe issues during reading. I did not read theoretical part in Appendix.
Experimental Designs Or Analyses: Authors provide valid experimental design to support claims that their method is robust to noisy dynamics and irrelevant contexts.
However, it is not clear whether the comparison against CSRO is fair. I have not seen the test protocol for CSRO, which is designed to infer the task by collecting the context on its own. Was it provided with the same context as BATI? It is not supposed to work with adversarial contexts and might perform better if the context is collected from scratch.
Based on the previous remark, I think it is important to check how the algorithms behave without adversarial contexts, i.e. collecting the entire context needed for task inference on their own (zero-shot).
If I understood BATI correctly, there is a finite “code-book” (table) of latent task representations built from the training instances. First, I did not understand whether its size equals the number of training tasks. Anyway, intuitively, BATI might heavily depend on this table size, and there is no experiment demonstrating the sensitivity. In case the table size matches the number of training tasks, it is important to compare BATI and the baselines under various training dataset sizes in terms of number of tasks.
There is also the set of experiments on OOD environment instances I mentioned in “Methods And Evaluation Criteria” which I find lacking.
The authors replaced BRAC, used in CSRO and UNICORN as the offline RL algorithm, with IQL. While the Appendix states in text that IQL is more stable, I consider this a major change for the baselines and wish to see the exact performance difference between approaches backed by BRAC and IQL. As it stands, I can imagine that BRAC was not working for BATI while working for the other baselines. It would also be nice to reference that IQL and BRAC-like algorithms demonstrate competitive performance (https://arxiv.org/abs/2210.07105).
I did not find any details on hyperparameter tuning for BATI or the baselines. Given that the authors changed the offline RL algorithm from BRAC to IQL and collected their own datasets, it is important to show that BATI and the baselines (or at least the most competitive, CSRO) were tuned with a similar hyperparameter tuning budget.
It would also be good to see BATI's sensitivity to its hyperparameters, e.g. the number of latent samples N or the task latent dimensionality.
Supplementary Material: I’ve checked all supplementary except Appendix A.
I found hyperparameter tuning details and some dataset collection information (e.g. number of training instances, number of rollouts and their sizes) to be missing.
Relation To Broader Scientific Literature: Authors present approach which is more robust to noisy dynamics and adversarial contexts -- two aspects which seem essential for meta RL. They demonstrate that prior powerful offline meta RL methods are not able to handle such shifts easily.
Essential References Not Discussed: While mentioning Algorithm Distillation, authors do not discuss follow-up works based on it’s findings or similar approaches based on scalable Transformer architecture. I believe those works should be at least mentioned in the context of the work and ideally compared against (but I do not require it now as an important change due to the computational costs). Transformers are known for being able to adopt to completely novel tasks after a certain scale. Here is a list of works I would recommend discussing:
1. Prompt DT (https://arxiv.org/abs/2206.13499) which solves offline meta RL tasks in-context by training transformer to predict next action given the context of interactions with environment.
2. https://arxiv.org/abs/2312.03801 where authors demonstrate that transformers are able to adapt to completely novel environments in-context after particular offline pre-training.
3. Headless AD (https://arxiv.org/abs/2312.13327) proposes an AD modification which is able to adapt to completely novel action space, i.e. novel environment dynamics.
Other Strengths And Weaknesses: The latent Z table might be a strong limitation when compared to previous methods which are based on context encoder models. The experiments I’ve asked for in this regard should reveal whether it is true.
Other Comments Or Suggestions: I did not understand where the $L_{recon}$ training objective comes from. While the numerator seems intuitive, I do not understand why there is a denominator. Wouldn't $h_{\psi}$ just try to produce large positive numbers to reduce the loss? Why is it there at all?
Questions For Authors: All of my questions were asked in previous sections of the review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing our paper and the detailed review! We address your concerns below. **Due to space limits, all the figures and tables referred below are available anonymously [on this website](https://sites.google.com/view/bati-icrl).**
> Q1. Experiment protocol of BATI and baselines.
R1: We clarify that all methods, including CSRO, are evaluated on the same context distribution $p_\text{test}(M, \mu)$ during test time, as described in Section 5.1. We design our testing protocol to be as challenging as possible since this aligns better with real-world scenarios where dynamics are noisy and the policy may not have control over the context (e.g. provided by a human operator). Other baselines like UNICORN and FOCAL also use a similar protocol. To address your concerns more thoroughly, we evaluate CSRO on HalfCheetahDir using their self-collection protocol and obtain a performance of $-16.2 \pm 25.3$. While the stronger protocol improves performance, CSRO still lags behind BATI by a lot.
> Q2. Choice of base offline RL algorithm and hyperparameter tuning.
R2: We switched from BRAC to IQL because we observed in some cases that BRAC simply failed to learn; see website Fig. 2 for results in WalkerRandParams. We also performed little hyperparameter tuning and directly adopted hyperparameters from previous works. The IQL- and CSRO-related ones are borrowed from their respective papers (CSRO CLUB weights are divided by 10 since their implementation has a weight of 10 for the FOCAL loss while ours has 1) while the rest comes from UNICORN, including those of data-collection runs (with learning steps and dataset sizes slightly adjusted to accelerate training).
Furthermore, we would like to remark that as argued in Sec. 3 and 4, the core problem with baselines lies in task inference and not policy learning; the performances of baselines may even worsen with hyperparameter tuning, since the meta-policy would then get better at executing a ***wrong*** policy.
> Q3. OOD task experiments.
R3: OOD generalization for tasks is extremely challenging, and to the best of our knowledge, the primary experiments of UNICORN and CSRO focus on **generalization to OOD contexts on in-distribution tasks**, same as BATI. Only UNICORN attempted an OOD task setting with a model-based RL approach, which is out of scope for our paper.
Nevertheless, we acknowledge the importance of OOD generalization and conduct the experiment you requested in AntDir with training goal directions sampled from $[0, \pi)$ and testing directions from $[\pi, 2\pi)$. The results are shown on the website (Tab. 1). It can be seen that while the performance decreases compared to the in-distribution case, BATI still outperforms baselines by a large margin and is the only method to achieve positive returns. This further validates the generalization capability of BATI and its task embedding designs.
> Q4. Task embedding table size; sensitivity to hyperparameters.
R4: The size of the task embedding table is the same as the number of training tasks, as described in Section 4.2. To demonstrate the robustness of BATI with respect to the number of training tasks, we conduct an ablation in AntDir that splits the 40 tasks into different train/eval splits. The results are shown on the website (Tab. 2). It can be seen that BATI outperforms baselines in all settings and is highly stable.
We conduct another ablation in WalkerRandParams to validate the robustness of BATI with respect to task embedding size. Compared with the main result of $565.2 \pm 7.9$ with embedding size 32, size 16 and 64 yield performances of $556.4 \pm 1.7$ and $536.2 \pm 6.4$, respectively, demonstrating the robustness of BATI.
> Q5. Discussion and comparison with transformer-based methods.
R5: Thank you for the additional related works! We remark that all three methods belong to the same paradigm where a transformer is trained to predict actions conditioned on the context, an act of Bayesian inference flawed in the same way as the context encoders of our baselines.
To support this argument, we add a new baseline [DPT](https://arxiv.org/abs/2306.14892) that also uses a transformer to encode the context and predict the optimal action, outperforming AD. Due to limited time, we implement DPT and evaluate it in AntDir and HalfCheetahVel, as shown on the website (Tab. 3). DPT is inferior to BATI in both environments due to spurious correlations.
> Q6. Dynamics model training objective $\mathcal{L}\_\text{recon}$.
R6: Thank you for pointing this out! We apologize for the confusion, the correct objective for Eq. 7 should be $\mathcal{L}\_\text{recon}^{X, Z}(\phi, \psi) := \frac{(X^t - g\_\psi(X^b, Z))^2}{\exp h\_\psi(X^b, Z)} + h\_\psi(X^b, Z)$. We design $p(X^t \mid X^b, Z)$ to be a Gaussian distribution with mean and (log) variance parameterized by neural networks g,h, and L_recon is its negative log likelihood (up to a constant). This will be fixed in the revision. | Summary: The paper introduces Behavior-Agnostic Task Inference (BATI) to improve offline in-context reinforcement learning (ICRL) under distribution shifts. BATI, a model-based maximum-likelihood approach, infers task representations robustly by focusing on environmental dynamics. Results show BATI outperforms existing methods, especially with context shifts and noise.
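The role of the denominator can also be checked numerically. The sketch below (hypothetical variable names, not the paper's code) implements the corrected per-sample loss and illustrates why the predicted log-variance cannot simply grow without bound: the additive $h$ term penalizes inflated variance, and the loss is minimized when $\exp(h)$ equals the squared residual, i.e. at $h^\star = \log\big((X^t - g)^2\big)$:

```python
import numpy as np

def recon_loss(x_t, g, h):
    """Per-sample loss from the corrected Eq. 7: the negative log likelihood
    (up to a constant) of a Gaussian with mean g and log-variance h."""
    return (x_t - g) ** 2 / np.exp(h) + h

x_t, g = 1.5, 1.0                 # squared residual (x_t - g)^2 = 0.25
hs = np.linspace(-6.0, 6.0, 2001)  # sweep candidate log-variances
losses = recon_loss(x_t, g, hs)

# The +h term stops h from running off to +inf; setting d(loss)/dh = 0 gives
# h* = log((x_t - g)^2), the log of the squared residual.
h_star = hs[np.argmin(losses)]
print(h_star, np.log((x_t - g) ** 2))  # both close to log(0.25)
```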
Claims And Evidence: Yes
Methods And Evaluation Criteria: The proposed method, BATI, makes sense for addressing the identified problem of distribution shifts in offline ICRL. It effectively shifts the focus from Bayesian posterior inference to a maximum-likelihood estimation of environmental dynamics, thus being more robust to context shifts. The evaluation criteria, using MuJoCo environments and varying noise levels, are appropriate for assessing BATI's performance and robustness in different scenarios.
Theoretical Claims: The theoretical result is standard.
Experimental Designs Or Analyses: The experimental designs and analyses are generally sound, using MuJoCo environments, relevant baselines, ablation studies, and appropriate evaluation metrics.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper improves offline in-context reinforcement learning (ICRL) under distribution shifts.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
The paper is generally well-written and easy to follow. The authors clearly explain the problem, the proposed solution, and the experimental setup.
The paper addresses a critical challenge in ICRL, which is the vulnerability to distribution shifts. Overcoming this limitation is essential for the broader applicability of ICRL in real-world scenarios.
Weaknesses
The experiments are primarily conducted in MuJoCo environments. While these are standard benchmarks, it would be beneficial to see results in other domains, such as MetaWorld, to further demonstrate the generalizability of BATI.
The theoretical analysis in Section 3 and Section 4.1 seems to have little connection to the proposed method. Why can the proposed method address the theoretical issues of the previous works shown in Section 3 and Section 4.1? Is there any theoretical analysis of the proposed method?
Other Comments Or Suggestions: The paper spends several pages explaining the issues of previous methods, such as the spurious correlation of the context learner. The paper could be strengthened by an analysis of how the proposed method addresses these issues.
Questions For Authors: Please address the above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive review. We're encouraged by the overall positive assessment that our paper `"is generally well-written"` and `"addresses a critical challenge in ICRL"`. Below, we address the main concerns raised regarding the generalizability of our method beyond MuJoCo environments and the theoretical connections between our analysis and the proposed BATI approach. As demonstrated in our preliminary multi-agent experiments and theoretical explanations, BATI effectively addresses the identified limitations of previous methods while maintaining strong performance across diverse domains. The following are our detailed responses:
> Q1. Results in other domains.
R1: Due to the tight rebuttal schedule, we have conducted a preliminary multi-agent experiment to demonstrate the general applicability of BATI in other domains. We choose Kuhn Poker, a two-player card game with discrete state and action spaces, differing from the continuous MuJoCo environments used in our paper. We generate different player-2 (opponent) policies as "tasks" and learn an adaptive policy for player-1 over 10 episodes (20 steps) of contexts. As shown in the table below, BATI maintains superior performance over all baselines, further showcasing its capabilities and generalization.
| Method | Oracle (pre-IQL) | BATI | CSRO | FOCAL | Recon | UNICORN |
| -------- | --- | -------- | -------- | --- | --- | --- |
| Episodic Return | $0.0734$ | $\mathbf{-0.049 \pm 0.025}$ | $-0.180 \pm 0.032$ | $-0.191 \pm 0.020$ | $-0.185 \pm 0.052$ | $-0.243 \pm 0.042$ |
> Q2. Theoretical analysis of our method.
R2: We apologize for any potential confusion. We establish the preliminaries and analyze the prior methods in Section 3. Building on these insights, Section 4.1 is primarily dedicated to our method that circumvents the prior failure modes. In Section 4.1, we provide two perspectives on the reason why BATI works: 1) the graphical model perspective (Fig. 2) where BATI achieves robustness by blocking $X^b$; 2) the robust likelihood perspective (page 4 right, line 179-219) where the core objective of BATI can be understood as a robust version of the full likelihood that does not depend on $\mu$ through derivations. We will communicate these points more clearly in the revision and greatly appreciate any suggestions to further improve the writing. | Summary: The authors propose a modification to the way offline context-based meta-RL methods supervise task identification, which they call BATI. The core idea is to remove the correlation between the task estimate and the behavior of the policy collecting the context. That way, the test-time context policy can be significantly different from the policy used to collect training context, and the method will still accurately select a task representation that leads to high returns when used as an augmented state feature for a standard offline-RL policy. The authors evaluate BATI in the standard meta-RL extensions of gym locomotion tasks under a setup that highlights robustness to stochastic dynamics and adversarial test-time context.
## Update After Rebuttal
I appreciate the authors' reply and have no further concerns. I maintain my positive score.
Claims And Evidence: Yes, the main claims are well studied by the experiments.
Methods And Evaluation Criteria: The authors evaluate their method in a set of mujoco locomotion benchmarks with a small set of randomized objectives and dynamics that have been staples since the first wave of deep meta-RL papers. These tasks play to the strengths of methods that can learn precise task estimates from short context over a narrow range of behavior. These benchmarks are quite saturated at this point. However, diverse and affordable alternatives are still starting to emerge, and this choice isn’t meaningfully impacting my review. The authors go to great lengths to try and make these tasks more interesting by adding noise to their dynamics and creating adversarial context sequences at test-time (mainly consisting of actions chosen from the expert data collection policy for the least similar task). Still, I wonder whether the method would scale to diverse training distributions that might require large M (the size of the task set and number of embeddings).
Theoretical Claims: I reviewed the appendix including the argument of Theorem 3.1
Experimental Designs Or Analyses: The experiments seem sound and implement all the baselines fairly on top of the same codebase. Transferable improvements (such as a change in the base offline RL algorithm) are applied across multiple baselines. The authors are relying on a nearly worst-case-scenario setup to evaluate their method on the tasks chosen, but this may reflect real applications.
It may be useful to report some oracle (task-conditioned or single-task expert) references scores for the experiments where the dynamics of the locomotion envs (like AntDir) are given significant noise.
The low seed counts are a bit of an issue given the very thin margins between most of the baseline results.
Supplementary Material: I read the Appendix.
Relation To Broader Scientific Literature: The paper’s use of the term “in-context RL” (ICRL) is confusing — if this term even means anything unique at this point (and maybe it doesn’t). To review, “in-context learning” is primarily used to describe sequence models’ (almost always Transformers/LLMs) ability to understand a task at test-time based on a small sample of input/output pairs. This ability is often attributed to *implicit* Bayesian inference of the task inside the activations of the model, though there are some other explanations of this effect in NLP.
Modern Meta-RL and Meta-IL have begun to borrow the term when using Transformers to *implicitly* learn the ability to improve based on their input sequence. It is broadly used to rebrand traditional subsets of black-box meta-learning by using the connection to LLMs to highlight the flexibility of implicit task inference outside of standard few-shot meta-RL benchmarks (e.g., to few-shot prompting, opponent modeling). Or, in the case of Algorithm Distillation, to the implicit ability to learn the RL gradient update itself. BATI's explicit task inference by conditioning the online policy on one of M task embeddings that minimizes a reconstruction objective on data provided by another behavior policy fits in neither category. There is no task estimate emerging implicitly in the context length of a sequence model. I had never seen the term "in-context RL" used this way. The authors write "We identify a critical limitation in existing offline ICRL methods" and then cite several sources, but the only one of these sources that actually calls it "in-context RL" is the most recent UNICORN (Li et al., 2024), which I assume is how this came to be. This might be a losing battle, as the vocab and taxonomy of meta-RL have been messy for quite some time, but I really don't think the term "in-context" applies here at all.
Essential References Not Discussed: The literature is well covered, but I am interested in the authors’ thoughts on the connection to [Dorfman et al., 2022](https://arxiv.org/abs/2008.02598). My understanding is that the evaluation setting in BATI is a similar worst-case scenario to what is discussed there, in that the context dataset is collected by an expert in a single task and therefore its behavior heavily implies the task identity.
Other Strengths And Weaknesses: The main argument of behavior-agnostic task identification from context sets generated by single-task experts is made clearly and given plenty of motivation.
Other Comments Or Suggestions: I think the paper would benefit from some more details in the Appendix related to the network architecture and overall implementation. Currently, the authors are relying on UNICORN for much of this information, which prevents the paper from standing alone. That’s a shame because the core argument for the change in inference objective is given enough space in the main text that someone would not necessarily have to read the baseline papers to follow it.
Questions For Authors: 1. How would the authors expect results to change when the train-time context data is generated by policies that are adapting over some distribution of tasks (rather than single-task experts trained by SAC)?
2. What would be the challenges involved in scaling BATI to domains with extremely large task spaces?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your kind words and insightful review! We are happy to answer your questions below.
> Q1. Training contexts generated by adaptive policies.
R1: The result would likely depend on the exact behavior of the adaptive policy. For example, if the adaptive policy is bad, it may randomly walk around the state space in any task and reveal little information about the true task identity, in which case all approaches would fail due to the uninformative nature of the context. At the other end of the spectrum, if the adaptive policy is good and finds the optimal policy quickly, its behavior may diverge significantly between tasks, resembling the single-task expert case where BATI works and baselines don't. In general, we expect BATI to outperform or at least remain competitive with the baselines for all cases, and the baselines to work only when the adaptive policy behaves similarly in all tasks yet still reveals a lot of task-related information.
> Q2. Challenges brought by extremely large task spaces.
R2: A good question! Indeed, this is one of the potential future directions we'd like to explore. The task inference procedure of BATI has a time complexity of $\mathcal{O}(NL)$, where $N$ is the number of samples (lower-bounded by the number of training tasks) and $L$ is the length of the context, and it can be fully vectorized. This allows the inference procedure to run reasonably fast. However, if the task space is extremely large such that we couldn't practically enumerate it, more sophisticated techniques could be employed to perform the inference. For example, we can use [diffusion posterior sampling](https://arxiv.org/abs/2209.14687) to sample from the conditional posterior $p(x \mid y, c)$ where $x$ is the task latent, $y$ is $X^t$, and $c$ is $X^b$. Under this formulation, the task prior $p(x)$ is represented by a diffusion model instead of a table of embeddings, and the (robust) likelihood $p(y \mid x, c)$ is provided by the dynamics model, whose gradient $\nabla\_x p(y \mid x, c)$ serves as a form of classifier guidance. Leveraging the powerful expressivity of diffusion sampling procedures, we can efficiently and robustly sample from a much larger task space. However, to demonstrate the advantage of such an approach, a benchmark with a sufficiently complicated task space is required, which is still emerging, as pointed out in your review, and we will explore this further in our future work.
> Q3. Connection with BOReL (Dorfman et al., 2022).
R3: Thank you for bringing our attention to this related work! We note that the training data of BOReL are actually *augmented* via their reward relabelling (RR) procedure, in effect rolling out all context collection policies in all tasks and making $M$ and $\mu$ independent. This avoids the spurious correlation issue during training, but requires additional properties (e.g. tasks differing only by rewards) and ground-truth reward functions. Indeed, as noted in BOReL, *"With reward relabelling ablated ... the agent believes the reward is at the point it first visited on the semi-circle"* (Page 8, lower left), which is a consequence of the spurious correlation phenomenon. Our BATI can be applied to BOReL to replace RR without any additional requirements.
> Q4. Oracle references and number of seeds.
R4: We have updated the main results to include oracle references by single-task experts, available anonymously [here (Fig. 1 and Tab. 4)](https://sites.google.com/view/bati-icrl). BATI is able to match or even slightly outperform the expert data in most cases due to optimizations performed by IQL. Due to limited time and computational resources, we have run two additional seeds for the main experiments, bringing the total to `5 seeds`; the changes are also reflected in the figure and table above.
> Q5. Usage of "in-context RL".
R5: Thank you for the detailed and insightful comments! In BATI, we treat ICRL as mostly synonymous with context-based meta-RL, i.e. learning policies that can ***efficiently adapt to the context and improve on-the-fly*** without laborious fine-tuning, regardless of transformer or implicitness. We argue that this "functional" definition better captures the essence of ICRL. For example, there are [state-space model approaches](https://arxiv.org/abs/2303.03982) that produce a fixed-length representation of the context without transformers, and methods like [DPT](https://arxiv.org/abs/2306.14892) that are trained to perform explicit Bayesian inference of optimal actions. However, we stress that **we greatly value inputs from the community about terminological choices, and are open to changes if deemed necessary**.
> Q6. Additional hyperparameter and implementation details.
R6: Thank you for your advice! As the ICML 2025 reviewing policy prevents authors from changing the PDF during rebuttal, we'll add relevant details to the appendix for a more self-contained reading experience when we get to update the paper. | null | null | null | null | null | null |
Geometric Hyena Networks for Large-scale Equivariant Learning | Accept (spotlight poster) | Summary: This paper introduces an SE(3)-equivariant extension of the Hyena model (Poli et al. 2023) which employs long convolutions evaluated in the Fourier domain. This Geometric Hyena model is used for property predictions of large biological molecules. The authors show that their model outperforms transformer models in terms of runtime and memory as well as performance in several numerical experiments.
Claims And Evidence: The claimed runtime- and memory benefits over transformer models are convincing. The claimed performance benefits are largely convincing. However, as the authors point out themselves, the performance benefit in the RNA property prediction task and on the geometric associative recall task is likely due to the alternating local and global context and not the long convolutions (the error bars for G-Transformer and G-Hyena overlap).
Methods And Evaluation Criteria: The chosen problems largely seem to be appropriate to evaluate the model. However, I am missing a baseline of a non-equivariant state-space model (e.g. the non-equivariant Hyena model), since the claimed benefit of the proposed model is the combination of the efficiency of state-space models with the performance of equivariant models. In light of recent doubts about the importance of equivariance (e.g. [arXiv:2311.17932], [arXiv:2410.24169], AlphaFold3) it would be interesting to see how the Geometric Hyena compares to efficient non-equivariant models.
Theoretical Claims: The equivariance of the proposed layers is obvious.
Experimental Designs Or Analyses: The experimental designs seem appropriate.
Supplementary Material: I read the related works section which the authors moved into the supplementary material.
Relation To Broader Scientific Literature: Equivariant architectures are a standard tool in quantum mechanical property prediction. Since state-space models are a recent development in other domains, it is interesting to consider an equivariant version of a popular state-space model.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
Strong experimental results and thorough ablations.
Weaknesses:
The presentation of the paper is seriously lacking. There are numerous typos and grammatical errors in the manuscript (e.g. "Heyna" instead of "Hyena" in the tables). The related works section is an important part of the paper and has to be part of the main text; it does not belong in the supplementary material. Given that such an important section was removed from the main text, the manuscript seems wordy at times: e.g., the "invariant-equivariant subspace interaction" section can be shortened considerably, and the beginnings of sections 4.1 and 4.2 -> "implementation details" are almost the same. Also, the equations given lack precision. For instance, the domain and co-domain of $\Psi$ given above (1) do not match the sets in (1), and the hatted quantities are not introduced. The notation $F^H$ in (2) is not introduced. Similarly, the gating function $\gamma:\mathbb{R}^3\times\mathbb{R}^d\rightarrow[0,1]$ seems to act per token as it is introduced; is that correct?
Other Comments Or Suggestions: For some typos, see above.
Questions For Authors: No further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ## Computational and performance benefits:
We appreciate the reviewer acknowledging the runtime and memory benefits of our method, as computational efficiency is the main purpose of our work.
Second, we believe there may be a misunderstanding regarding performance improvements. G-Hyena significantly outperforms the G-Transformer model—with non-overlapping error bars on Tc-Ribo in both the backbone and all-atom settings (-7\% error reduction in both cases). It also shows substantial gains on the Protein-MD task in both backbone and all-atom scenarios (-26\% and -36\% error reduction, respectively). Since the primary architectural difference between G-Hyena and G-Transformer is the geometric long convolution, we attribute these performance improvements to that component. In addition, the associative recall experiment (Figure 3, bottom left) shows that G-Hyena scales more favorably with respect to the hidden dimension compared to G-Transformer. This improved scaling behavior is practically relevant, as it potentially enables more efficient model size reduction and further lowers memory usage. Taken together, we consider both the computational and performance advantages of our model to be significant in the context of the conducted experiments.
## Non-equivariant state space model:
We agree that the extended comparison with the non-equivariant Hyena model will further strengthen the experiments. In addition to Hyena in the associative recall experiments, we test a standard Hyena model trained with data augmentation on Open Vaccine, Ribonanza-2k, Tc-ribo, and Protein-MD datasets.
|Results (*backbone*) |Open Vaccine|Ribonanza-2k|Tc-ribo|Protein-MD|
|-----|------------------------|------------------------|-------------------|----------------|
|Hyena|0.447±0.037|0.810±0.124|0.560±0.002|48.94±9.03|
|G-Hyena|**0.363±0.045**|**0.529±0.005**|**0.517±0.025**|**1.80±0.009**|
|Results (*all-atom*)|Open Vaccine|Ribonanza-2k|Tc-ribo|Protein-MD|
|-----|------------------------|------------------------|-------------------|----------------|
|Hyena|0.393±0.002|0.605±0.017|0.569±0.001|55.34±5.51|
|G-Hyena|**0.339±0.004**|**0.546±0.006**|**0.552±0.003**|**2.49±0.037**|
Results suggest that the standard non-equivariant Hyena trained with extensive data augmentation still significantly lags behind the properly equivariant Geometric Hyena (with G-Hyena performance gains ranging from a -3\% error reduction on Tc-Ribo all-atom to a -53\% error reduction on Ribonanza-2k backbone). Furthermore, the non-equivariant Hyena model struggles to generalize on the Protein-MD dataset, where it underperforms significantly compared to equivariant methods. The results will be included in the revised version of the paper.
## Presentation of the paper and clarifications:
- We will refine the text of the paper to correct grammatical errors and typos as soon as openreview allows revision. We will further streamline the writing in "invariant-equivariant subspace interaction" section and remove redundancies in the beginnings of sections 4.1 and 4.2.
- In Eq. (1), $\hat{\mathbf{x}}$ and $\hat{\mathbf{f}}$ refer to the output of Geometric Hyena.
- Indeed, the more precise domain and co-domain formulation would be $\Psi: (\mathbb{R}^{3} \times \mathbb{R}^{d})^{N} \rightarrow (\mathbb{R}^{3} \times \mathbb{R}^{d})^{N}$ as our model operates on a sequence of $N$ scalar vector feature tuples $(x_1, f_1), (x_2, f_2), \dots, (x_N, f_N)$ with each feature tuple being $\mathbb{R}^{3} \times \mathbb{R}^{d}$.
- The notation $F^H$ in Eq. (2) refers to the Hermitian transpose of the FFT matrix.
- The gating function indeed acts per token.
- We appreciate the reviewer's focus on clarity and precision of the presentation of the paper! We will fix typos, add clarification and adjust notation accordingly in the revised version.
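To make the $F^H$ clarification above concrete, here is a minimal numpy sketch (illustrative only, not our actual implementation) verifying that the FFT-domain form $F^H \mathrm{diag}(F\mathbf{k})F\mathbf{x}$ in Eq. (2) computes a circular convolution in $\mathcal{O}(N \log N)$:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)  # input sequence
k = rng.standard_normal(N)  # long-convolution filter

# F^H diag(F k) F x: forward FFT, pointwise multiply, inverse FFT.
# np.fft.ifft plays the role of F^H (up to the 1/N normalization it absorbs).
y_fft = np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)).real

# Direct O(N^2) circular convolution for comparison.
y_direct = np.array([sum(k[j] * x[(i - j) % N] for j in range(N))
                     for i in range(N)])

assert np.allclose(y_fft, y_direct)
```

This identity is what lets the scalar channel of the long convolution avoid the quadratic cost of self-attention.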
## Related work section:
Due to space constraints, a comprehensive review of related work could not be included in the main text. Instead, we focused on discussing the most essential prior work in the introduction, while providing a more detailed review in Supplementary Section A. This structure is not uncommon, with multiple top-tier ML conference papers (e.g., [1,2,3]) adopting a similar format. If the reviewer prefers an explicit related work section in the main text, we are happy to include a shortened version there while retaining the extended related work in the supplementary material.
We thank the reviewer for their constructive feedback! We believe we have addressed all questions and comments. In light of this, we kindly ask the reviewer to consider raising the score.
[1] Gu et al. Mamba: Linear-Time Sequence Modeling with Selective State Spaces. COLM 2024. \
[2] Yi et al. Bridge the Modality and Capability Gaps in Vision-Language Model Selection. NeurIPS 2024. \
[3] Razin et al. Implicit Regularization in Deep Learning May Not Be Explainable by Norms. NeurIPS 2020.
Claims And Evidence: **The Geometric Hyena is designed for tasks that require modeling invariant and equivariant features in geometric graphs.** -- Geometric graphs can appear in different formats: a point cloud with features associated with each point, a mesh, or a large molecule. One issue with using Hyena- or Mamba-style models in these cases is converting the geometric graph into a sequence.
For example, to apply Mamba or state-space models to computer vision, researchers explored various ways to convert the $2D$ image into a $1D$ sequence.
However, this issue is completely ignored in this work.
The authors might have mostly dealt with RNA data, which, unlike generic geometric graphs, is a definite sequence.
However, unlike RNA sequences, most geometric graphs do not come with an ordering.
This means the proposed model is not for all "geometric graphs"; it is for a small subset of "geometric graphs" where we have some ordering on the nodes. Thus the work does not comply with the main claim.
Methods And Evaluation Criteria: Since the model is claimed to be a generic model for geometric data, the authors should consider all the datasets from EQUIFORMER and VN-Transformer.
Also, as mentioned in line 307 -- "For the other models, we choose hyperparameters so their depth and total parameter count match those of Geometric Hyena, ensuring a fair comparison" --, It is not clear why it is not possible to use the hyper-parameters proposed in the original works.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments done are valid but not enough to demonstrate the significance.
Supplementary Material: I skimmed over the supplementary materials, especially "Additional details on architectural components of Geometric Hyena"
Relation To Broader Scientific Literature: The work may have a significant contribution to RNA-Seq analysis. However, as discussed earlier, in general cases, geometric graphs do not have an ordering, thus severely limiting its application.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. The idea of " Geometric associative recall" should be explained further. For example, the statement "In geometric associative recall, each bigram (key and value) is connected by a rotation matrix" is not well understood if not explained in more detail.
Other Comments Or Suggestions: 1. Typo in line 241: ”Harry Potter” --> ``Harry Potter''
Questions For Authors: 1. Regarding "Vector long convolution," is this proposed by authors, or is it a standard definition?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Application focus of Geometric Hyena:
Regarding the generalizability of our method to arbitrary geometric graphs, we refer the reviewer to the Limitations section (lines 424–439R), where we explicitly discuss the limitations of G-Hyena for point clouds and emphasize that our method is best suited for problems where a canonical ordering can be established. Also, note that Reviewer YW9h acknowledges in their review that we have noted this limitation explicitly. With this, we disagree that we overclaimed the contribution of our method to arbitrary geometric graphs. That said, we will revise the text to make our target application domain more explicit. Specifically, we will update the phrasing to: "The Geometric Hyena is designed for tasks that require modeling invariant and equivariant features in geometric graphs where canonical order can be established". This will be clarified in the revised version of the paper.
Second, we would like to point out that the class of geometric graphs with canonical ordering is quite broad and includes large bio-molecules such as proteins, antibodies, peptides, enzymes, and RNA - systems that are highly relevant to biosciences and drug discovery. In this context, we also remind the reviewer that our submission is under the Application-Driven Machine Learning track, which explicitly includes biosciences as a focus area in the ICML call for papers (icml.cc/Conferences/2025/CallForPapers). In addition, large bio-molecules represent a well-established and rapidly growing application area in its own right within the machine learning community, with numerous top-tier papers focused on this domain [1,2,3,4,5]. Viewed in this light, the reviewer's remark that our work may "have a significant contribution to RNA-Seq analysis" also reflects its broader relevance to machine learning for biosciences. Finally, we note that our method applies beyond RNA, as demonstrated in the protein molecular dynamics task in Section 4.5.
## Equiformer and VNT:
Our method is designed for large bio-molecules, such as RNA or proteins, which can consist of hundreds or thousands of atoms. In contrast, Equiformer and VNT are developed for small geometric systems. For example, the Ribonanza-2k dataset we use has bio-molecules with up to 11300 atoms, whereas the MD17 dataset used to evaluate Equiformer has molecules with only up to 25 atoms. The high dimensionality of our data introduces unique computational challenges that do not arise in low-dimensional settings and that existing equivariant models are not equipped to handle at scale; our method is specifically designed to address these challenges. As a result, it is also infeasible to directly reuse the hyperparameters from the Equiformer paper (while the VNT paper does not report exact optimal hyperparameters). Still, following the reviewer's suggestion, we tried running Equiformer with the smallest default configuration from the original paper, and it runs out of memory on our data even with a batch size of 1, further highlighting the importance of our model for large-scale bio-molecular graphs. Finally, our initial reason for running the models to match the parameter count of G-Hyena was to ensure that we could fairly compare the benefits of these models by isolating the impact of model scale on performance, which is a standard practice [6,7].
## Associative recall:
We appreciate the reviewer’s attention to the clarity of presentation of the geometric associative recall task. To aid understanding, we illustrated how “each bigram (key and value) is connected by a rotation matrix” in Figure 4 of the supplementary material. To further improve clarity, we propose updating the caption of Figure 4 to: “A geometric sequence consists of key and value vector tokens, where consecutive key-value pairs form bigrams. Geometric associative recall requires retrieving the value vector corresponding to a query, where the query matches one of the keys in the sequence.”
## Vector long convolution:
Vector long convolution is a novel component introduced in our work that we demonstrate how to implement efficiently in sub-quadratic time. It is not a standard definition.
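To sketch the idea concretely (an illustrative numpy decomposition of the general principle, not our exact code): because the cross product is bilinear, the vector long convolution $\mathbf{y}_i = \sum_j \mathbf{k}_j \times \mathbf{x}_{i-j}$ decomposes into six scalar circular convolutions, each computable with the FFT in $\mathcal{O}(N \log N)$:

```python
import numpy as np

def circ_conv(k, x):
    # Scalar circular convolution via FFT, O(N log N).
    return np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)).real

def vector_long_conv(k, x):
    # y_i = sum_j k_j x_{(i-j) mod N} with the cross product:
    # each output component is a difference of two scalar convolutions.
    kx, ky, kz = k.T
    xx, xy, xz = x.T
    return np.stack([
        circ_conv(ky, xz) - circ_conv(kz, xy),
        circ_conv(kz, xx) - circ_conv(kx, xz),
        circ_conv(kx, xy) - circ_conv(ky, xx),
    ], axis=-1)

rng = np.random.default_rng(0)
N = 16
k = rng.standard_normal((N, 3))  # filter of 3D vectors
x = rng.standard_normal((N, 3))  # sequence of 3D vectors

# Direct O(N^2) reference using explicit cross products.
y_ref = np.array([sum(np.cross(k[j], x[(i - j) % N]) for j in range(N))
                  for i in range(N)])
assert np.allclose(vector_long_conv(k, x), y_ref)
```

Since rotations commute with the cross product ($R(a \times b) = Ra \times Rb$ for $R \in SO(3)$), this operation also preserves rotation equivariance.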
We thank the reviewer for their constructive feedback! We believe we have addressed all questions and comments. In light of this, we kindly ask the reviewer to consider raising the score.
[1] Groth et al. Kermut: Composite kernel regression for protein variant effects. ICLR 2024.\
[2] Jing et al. AlphaFold Meets Flow Matching for Generating Protein Ensembles. ICML 2024.\
[3] Nori et al. RNAFlow: RNA Structure & Sequence Design via Inverse Folding-Based Flow Matching. ICML 2024.\
[4] Gong et al. Evolution-Inspired Loss Functions for Protein Representation Learning. ICML 2024.\
[5] Tan et al., Deciphering RNA Secondary Structure Prediction: A Probabilistic K-Rook Matching Perspective. ICML 2024.\
[6] He et al. Deep Residual Learning for Image Recognition. CVPR 2016.\
[7] Tan & Le. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. ICML 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
Even though the requirement of 'canonical ordering' is mentioned in the limitations, I firmly suggest making it explicit in the introduction and contribution (line 091). Also, make this distinction from other equivariant models clear in the introduction.
For example, in section 2, the author states in line 59 that "A geometric graph of N nodes .... by a **set** of features." Here, I think the authors should have used an ordered set or an index set. Otherwise, the setup seems indistinguishable from the permutation-equivariant setups considered for generic graphs (until the limitations section).
While I agree that the proposed method is practical, effective, and scalable for sequence data with geometric features, the write-up and explanation of the paper (in their current state) do not clearly convey this idea.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging the practicality, effectiveness, and scalability of our method. We appreciate the reviewer's suggestion to emphasize the assumption of canonical ordering more explicitly, and we commit to making it more explicit in the camera-ready version. In addition to the edits in our previous response, we will:
- Rephrase paragraph 4 in the introduction (051R-063L) "... we propose Geometric Hyena that efficiently models global geometric context in sub-quadratic time while preserving equivariance to rotations and translations. In this work, we focus on a subset of geometric graphs where the canonical order can be established such as biomolecules. The focus on ordered geometric graphs differentiates Geometric Hyena from other equivariant frameworks. Having canonical ordering, we can leverage efficient sequence operators - long convolutions. For equivariance, we introduce vector long convolution that utilizes vector products ... "
- Rephrase the first contribution to "We propose Geometric Hyena, the first equivariant long-convolutional architecture specifically tailored for large geometric graphs with canonical ordering, designed to efficiently process global geometric context in sub-quadratic time."
- In 056R-061R, we will clarify: "The Geometric Hyena is designed for tasks that require modeling invariant and equivariant features in geometric graphs where canonical order can be established such as biomolecules."
- At line 061R, we will add: "The canonical ordering of a geometric graph implies a unique and unambiguous enumeration of its nodes. For instance, in biomolecules, such canonical ordering is naturally established by IUPAC rules [1]. We refer to geometric graphs with canonical ordering as ordered geometric graphs."
- Instead of the set notation, we will use sequence notation: "An ordered geometric graph of $N$ nodes can be written as a sequence $(x_1, f_1) ... (x_N, f_N)$"
We believe that these modifications address your concerns, and we are committed to improving the clarity based on your recommendations! With this, we kindly request you to consider these revisions toward increasing your overall score.
[1] Damhus et al. Nomenclature of inorganic chemistry: IUPAC recommendations. Chem. Int 27, 2005. | Summary: This paper presents a novel equivariant SO(3) neural network for processing geometric graphs with sequence structure and invariant node features. The network a geometric version of Hyena and implements equivariant long convolution in fourier space, allowing for global information flow with subquadratic complexity. The model is evaluated over a synthetic recall task and several real world RNA property prediction tasks and protein molecular dynamics prediction.
Claims And Evidence: The primary claim of the paper is that geometric hyena can outperform both local equivariant methods and global equivariant attention methods in terms of accuracy and compute time and memory efficiency. This is well supported both in the design of the method and in the strong empirical results.
Methods And Evaluation Criteria: - The method builds on Hyena by making it equivariant. The extension is non-trivial and the method is clearly novel. For the long convolution layer, elementwise products of scalars are replaced with cross products of vectors. The authors show this operation can also be performed in the Fourier domain in subquadratic time. Vector-scalar interactions are also supported.
- One limitation of this work is that the input can have only scalar node features (and vector positions). However, the authors have a section in the appendix which extends their framework to higher order SO(3) features and a suggestion on how to implement it efficiently.
- The use of E(N)-GNN for the projection is reasonable since it is fairly efficient and also operates on geometric graphs with scalar node features. The authors improve E(N)-GNN by adding global context tokens.
- A geometric selective gating mechanism is proposed to emphasize certain tokens, similar to softmax.
- The authors include an ablation study to highlight the importance of the key contributions. In particular, they compare to the G-transformer which is similar to their method but uses self-attention instead of long convolution.
- A potential limitation of the method, which the authors comment on directly, is that long convolution is not permutation equivariant. Thus, for point cloud tasks without a prior sequence structure, a sequence would need to be imposed. This would likely require significant data augmentation. The authors also note that this could be considered an advantage in tasks like the ones they select, in which there is a useful sequence structure in the data.
- The normalization of the keys and values is considered and implemented to prevent numerical instability.
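To make the cross-product long convolution concrete: since the cross product is bilinear in the vector components, each output component is a difference of two scalar circular convolutions, each computable via FFT in O(N log N). A minimal NumPy sketch (my own illustration, not the authors' code):

```python
import numpy as np

def circ_conv(a, b):
    # Circular convolution of two length-N scalar sequences via FFT:
    # out[i] = sum_j a[j] * b[(i - j) mod N], in O(N log N).
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def vector_cross_conv(u, v):
    # Long convolution with the elementwise product replaced by the cross
    # product: out[i] = sum_j u[j] x v[(i - j) mod N], for u, v of shape (N, 3).
    # Each cross-product component is a difference of two scalar products, so
    # each output component is a difference of two scalar circular convolutions.
    ux, uy, uz = u.T
    vx, vy, vz = v.T
    return np.stack([
        circ_conv(uy, vz) - circ_conv(uz, vy),
        circ_conv(uz, vx) - circ_conv(ux, vz),
        circ_conv(ux, vy) - circ_conv(uy, vx),
    ], axis=1)
```

Rotating both inputs by the same proper rotation rotates the output, since the cross product commutes with rotations in SO(3); this is the sense in which the long convolution stays equivariant.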
Theoretical Claims: The authors include the appropriate theoretical support for their method, proving the equivariance of geometric Hyena in the appendix.
Experimental Designs Or Analyses: - The authors consider a strong set of experiments to test their methods. The real world RNA and protein experiments demonstrate the method can scale and is practical for interesting real world applications.
- The results are strikingly impressive, with geometric Hyena showing substantial accuracy improvements with much better efficiency and scaling.
- The experiments are carefully chosen to showcase the strength of the method, the inputs have only scalar node features and there is a preferred sequence to use for the points.
- The baselines are well-chosen, representing a selection of well-known, popular, and often SoTA equivariant methods including global attentional methods and local convolutional methods. These include Equiformer, VectorNeurons Transformer, SchNet, Tensor Field Network, E-GNN, and more.
- I believe non-equivariant methods are included in the recall experiment, but not in the RNA experiments. It would be nice to see how they compare, for example, with ample data augmentation. Non-equivariant methods can often perform well too, especially when controlling for memory/compute budget, which is a focus of the current work.
- Many of the local equivariant methods are targeted more at smaller-scale material science tasks, so while they are good to include as baselines, it may not be surprising they do worse on these larger-scale tasks with sequence structure. Is there some way to encode the sequence structure for these models? Also, although it is not the goal, I wonder how well Geometric Hyena would perform on the types of materials benchmarks those baselines target.
Supplementary Material: I skimmed the appendix with particular attention to the ablation and the section on the higher order generalization of the method. The higher order generalization is quite elegant, and I suppose it only did not make it into the main text since the added complexity was not justified by any improvement in results. Were the higher order methods tested?
Relation To Broader Scientific Literature: It is a bit unusual to not include a background section or related works section on prior works. That said, I think the narrative and explanation is clear and that past work and current contributions are clear as written.
Essential References Not Discussed: No
Other Strengths And Weaknesses: ### Strengths
- Compute and memory efficiency is a large problem, under-focused on in the equivariance literature. The focus of this paper, the design of the method, and the strong results are very encouraging. Figure 1 is quite striking.
Other Comments Or Suggestions: - 073R Shouldn’t the input and output spaces for $\Psi$ be the N-fold product of these spaces?
- 111-113R: I found this a touch confusing. Are they sequences of sets of scalars? Maybe if the index i is across a sequence, you could use parenthesis instead of curly braces to indicate the order matters.
- Why is the equivariant projection layer called a “projection”? Isn’t it just a mapping?
Questions For Authors: - Is the Fourier transform truncated at some maximum frequency? If so, does it matter where?
- 190L-192L: Does this scalar mapping result in a lot of lost information?
- 220L-223L: Are there any trade-offs with omitting some message pathways this way?
- How big is the sequence length for the protein task? What is the input feature for the model?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive feedback on the contributions of our paper, highlighting the novelty ("the extension is non-trivial and the method is clearly novel") of our method and its experimental validation ("the authors consider a strong set of experiments to test their methods").
## Non-equivariant baselines:
We agree that the extended comparison with non-equivariant data augmentation baselines will further strengthen the experiments. We train and test Hyena and Transformer models with data augmentation on Open Vaccine, Ribonanza-2k, Tc-ribo and Protein-MD datasets.
|Results (*backbone*) |Open Vaccine|Ribonanza-2k|Tc-ribo|Protein-MD|
|-----|------------------------|------------------------|-------------------|----------------|
|Hyena|0.447±0.037|0.810±0.124|0.560±0.002|48.94±9.03|
|Transformer|0.400±0.004|0.637±0.006|0.556±0.001|75.83±6.35|
|G-Hyena|**0.363±0.045**|**0.529±0.005**|**0.517±0.025**|**1.80±0.009**|
|Results (*all-atom*)|Open Vaccine|Ribonanza-2k|Tc-ribo|Protein-MD|
|-----|------------------------|------------------------|-------------------|----------------|
|Hyena|0.393±0.002|0.605±0.017|0.569±0.001|55.34±5.51|
|Transformer|0.399±0.004|0.633±0.007|**0.553±0.002**|79.68±24.2|
|G-Hyena|**0.339±0.004**|**0.546±0.006**|**0.552±0.003**|**2.49±0.037**|
Results suggest that on Open Vaccine and Ribonanza-2k data, non-equivariant models lag significantly behind Geometric Hyena (with G-Hyena delivering up to 34% error reduction), while on the all-atom Tc-ribo data the Transformer performs on par with it. Yet, non-equivariant models struggle to generalize on the Protein-MD task, where non-equivariant Hyena and Transformer perform poorly compared to Geometric Hyena.
## Local methods with sequence-structured priors:
Extending local methods with sequence-structured prior presents a non-trivial direction for future work that extends beyond the scope of our paper. However, to foster future research, we conducted an extra experiment where we employed a Hyena layer to extract sequence features and then ran EGNN on top (Seq-EGNN). Alternatively, we can first run EGNN to extract geometric features and then run a sequence model on top (EGNN-Seq).
|Results|Open Vaccine (*backbone*)|Open Vaccine (*all-atom*)|
|-----|-------------------------------|--------------------------------|
|EGNN|0.529±0.006|0.511±0.005|
|Seq-EGNN|0.527±0.006|0.506±0.009|
|EGNN-Seq|0.489±0.002|0.490±0.004|
|G-Hyena|**0.363±0.045**|**0.339±0.004**|
Our initial results suggest that adding a sequence prior has little effect when EGNN is applied on top of sequence features. However, when the sequence model is applied on top of EGNN features, we observe a slight improvement, suggesting potential for further research in this direction.
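For reference, the EGNN layer used in these variants follows the scalarization-based update of Satorras et al. [3]; a minimal simplified sketch, with single linear maps standing in for the MLPs $\phi_e, \phi_x, \phi_h$ (illustrative only, not our exact implementation):

```python
import numpy as np

def egnn_layer(x, h, W_e, W_x, W_h):
    # One simplified, fully connected E(n)-equivariant GNN layer.
    # x: (N, 3) coordinates, h: (N, d) scalar features.
    N, d = h.shape
    diff = x[:, None] - x[None, :]                 # (N, N, 3) pairwise offsets
    dist2 = (diff ** 2).sum(-1, keepdims=True)     # (N, N, 1) invariant scalars
    pair = np.concatenate(
        [np.broadcast_to(h[:, None], (N, N, d)),
         np.broadcast_to(h[None, :], (N, N, d)), dist2], axis=-1)
    m = np.tanh(pair @ W_e)                        # invariant messages (N, N, dm)
    # Coordinate update: offsets scaled by invariant weights -> equivariant.
    x_new = x + (diff * (m @ W_x)).sum(axis=1) / (N - 1)
    h_new = np.tanh(np.concatenate([h, m.sum(axis=1)], axis=-1) @ W_h)
    return x_new, h_new
```

Because the messages depend on positions only through squared distances, the scalar outputs are E(n)-invariant and the coordinate update is E(n)-equivariant.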
## Future benchmarking on materials:
In our work, we focus specifically on large bio-molecules. We agree that extending experimental comparison to a broader domain, including materials, presents an important direction for future work, and we plan to do so in the future. We would appreciate it if the reviewer could point us to relevant material benchmarks that include large molecules for our future work.
## Higher-order Geometric Hyena:
Recent works [1,2] observed that higher-order representations provide diminishing performance improvement while significantly increasing computational requirements and memory footprint. This is also supported by the excellent performance of scalarization-based equivariant GNNs [3]. Based on this evidence, we decided not to proceed with a higher-order version of Geometric Hyena since our focus is on computational and memory efficiency.
## Other comments and questions:
- Indeed, our model operates on a sequence of $N$ scalar-vector feature tuples $(x_1, f_1), (x_2, f_2), \dots, (x_N, f_N)$ rather than an unordered set; we thank the reviewer for pointing this out. With this, a more precise formulation of the domain and co-domain would be $\Psi: (\mathbb{R}^{3} \times \mathbb{R}^{d})^{N} \rightarrow (\mathbb{R}^{3} \times \mathbb{R}^{d})^{N}$.
- Projection terminology is used for alignment with the terminology in the Transformer and Hyena papers.
- In our implementation, we use the standard discrete FFT with circular convolution. There is no explicit frequency truncation; all available frequencies up to the Nyquist limit are used.
- 190-192L: Yes, similar to the scalarization trick [3], this results in a lossy compression of information.
- In Protein-MD, each protein has 855 atoms in the backbone version and 3341 in the all-atom version. Atom identities are used as features for all methods.
- All clarifications, discussion, and new results will be added to the revised version of the paper.
[1] Brandstetter et al. Geometric and Physical Quantities Improve E(3) Equivariant Message Passing. ICLR 2022. \
[2] Wang et al. Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks. ICLR 2024. \
[3] Satorras et al. E(n) Equivariant Graph Neural Networks. ICML 2021.
---
Rebuttal Comment 1.1:
Comment: I've read the other reviews and responses. I don't really see anything that changes my opinion. I appreciate the answers to my questions and non-equivariant baselines. | null | null | null | null | null | null | null | null |
Fair Clustering via Alignment | Accept (poster) | Summary: The paper introduces an in-processing fair clustering approach that matches two instances from different protected groups and assigns them to the same cluster. The approach directly minimizes the clustering cost with respect to both the matching map and cluster centers simultaneously. The proposed method is theoretically proven and evaluated on three datasets and four baselines/competitors. The experimental results show that the proposed method outperforms the baselines and effectively controls the trade-off at multiple fairness levels.
## update after rebuttal
Claims And Evidence: The decomposition process is proved theoretically, and the experimental results support their claim. However, in the experiments, the authors do not compare the results of the proposed method with the method of Chierichetti et al. (2017).
Methods And Evaluation Criteria: The proposed approach is technically sound. However, the authors do not use the well-known measures to evaluate the clustering quality.
Theoretical Claims: I have checked the proofs of theoretical claims and they are sound.
Experimental Designs Or Analyses: In general, the experimental results are promising. However, 3 datasets are not yet appropriate to make a comprehensive assessment. Besides, the selection of features (on each dataset) is not yet well explained. For example, the feature “fnlwgt” (Adult dataset) is usually ignored in many relevant works.
Supplementary Material: I checked the supplementary material on the proofs of the theorems and the explanation of the experiments
Relation To Broader Scientific Literature: The paper introduces a new in-processing fair clustering method which outperforms the baselines in terms of performance and complexity.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The idea of matching data from different protected groups is similar to the fairlet-based method, but the authors clarified their contrast and superiority.
Other Comments Or Suggestions: None
Questions For Authors: Please refer to my comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *To reviewer Nujs: We sincerely appreciate your review and thank you for the opportunity to improve our work. Please refer to our point-by-point responses below.*
------
### Claims And Evidence
> 1: However, in the experiments, the authors do not compare the results of the proposed method with the method of Chierichetti et al. (2017).
- In the current version, instead of directly comparing with Chierichetti et al. (2017), we compared FCA with SFC (Backurs et al., 2019), which developed a more scalable fairlet-based method.
Notably, Backurs et al. (2019) reported SFC's competitive or better performance when compared to Chierichetti et al. (2017) across three real datasets.
- However, in light of your suggestion, we have newly conducted experiments comparing FCA and Chierichetti et al. (2017).
The results are provided below, showing the outperformance of FCA, which we will add to Appendix C.3.1 in the camera-ready version.
| Dataset / Bal* | Adult / 0.494 | | Bank / 0.649 | | Census / 0.969 | |
|:-------|:---:|:---:|:---:|:---:|:---:|:---:|
| | Cost (↓) | Bal (↑) | Cost (↓) | Bal (↑) | Cost (↓) | Bal (↑) |
| Chierichetti et al. (2017) | 0.507 | 0.488 | 0.378 | 0.639 | 1.124 | 0.941 |
| SFC | 0.534 | 0.489 | 0.410 | 0.632 | 1.015 | 0.937 |
| FCA ✓ | **0.328** | **0.493** | **0.264** | **0.645** | **0.477** | **0.962** |
------
### Methods And Evaluation Criteria
> 1: However, the authors do not use the well-known measures to evaluate the clustering quality.
- The reason we consider the 'cost' for measuring clustering quality is that it is the clustering objective function to be minimized.
- **(Comparison in terms of the silhouette score)**
However, since, as you suggested, measuring the cost alone may not fully capture the clustering quality, we additionally consider another measure, called the silhouette score.
The silhouette score is computed as the average of $ (d_{\text{near}} - d_{\text{intra}}) / \max(d_{\text{intra}}, d_{\text{near}})$ over all data points, where $d_{\text{intra}}$ denotes the intra-cluster distance and $d_{\text{near}}$ represents the average distance to the nearest neighboring cluster.
Surprisingly, the results in the following table show that **FCA is also superior or competitive to baselines in terms of the silhouette score**.
We will add it to Appendix C.3.1 of the camera-ready version.
| Adult / Bal* = 0.494 | Silhouette (↑) | Bal (↑) |
|:---:|:-----:|:---:|
| Standard (fair-unaware) | 0.227 | 0.223 |
| FCBC | 0.173 | 0.443 |
| SFC | 0.071 | 0.489 |
| FRAC | 0.156 | 0.490 |
| FCA ✓ | **0.176** | **0.493** |
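For clarity, the silhouette computation described above can be sketched as follows (an illustrative NumPy sketch, not our exact evaluation code):

```python
import numpy as np

def silhouette(X, labels):
    # Mean over points of (d_near - d_intra) / max(d_intra, d_near), where
    # d_intra is the mean distance to other points in the same cluster and
    # d_near is the mean distance to the nearest neighboring cluster.
    labels = np.asarray(labels)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    scores = []
    for i, k in enumerate(labels):
        same = labels == k
        same[i] = False
        if not same.any():          # singleton cluster: score 0 by convention
            scores.append(0.0)
            continue
        d_intra = D[i, same].mean()
        d_near = min(D[i, labels == c].mean()
                     for c in set(labels.tolist()) if c != k)
        scores.append((d_near - d_intra) / max(d_intra, d_near))
    return float(np.mean(scores))
```

This matches the standard definition also implemented by `sklearn.metrics.silhouette_score`.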
------
### Experimental Designs Or Analyses
> 1: However, 3 datasets are not yet appropriate ... Besides, the selection of features (on each dataset) is not yet well explained...
- **(The number of datasets)**
Please note that **we analyzed 6 datasets in total**.
That is, not only the three tabular datasets, but we also analyzed on two image datasets (Section 5.3 and Appendix C.3.2) and a large synthetic dataset (Appendix C.3.7).
- **(Analysis on an additional dataset)**
However, in light of your comment, we additionally conducted an analysis on the Credit Card dataset from Yeh and Lien (2009), which was also used in Bera et al. (2019) and Harb \& Lam (2020).
We used gender as the sensitive attribute and set $K = 10.$
The results are provided in the table below (to be added to Appendix in the camera-ready version), showing the outperformance of FCA.
| CreditCard / Bal* = 0.656 | Cost (↓) | Bal (↑) |
|:---:|:------:|:---:|
| Standard (fair-unaware) | 0.392 | 0.506 |
| FCBC | 0.492 | 0.629 |
| SFC | 0.682 | **0.653** |
| FRAC | 0.510 | 0.649 |
| FCA ✓ | **0.402** | **0.653** |
(References)
Yeh and Lien (2009): The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications, 2009.
- **(About the feature selection)**
Existing fair clustering works use **continuous variables** as features (Backurs et al. 2019; Bera et al., 2019; Esmaeili et al., 2021; Ziko et al., 2021), so we have made the same choice.
For reference, the features we consider in this paper are the same as those used in VFC (Ziko et al., 2021).
Furthermore, please note that the feature `final weight (fnlwgt)' has also been selected in the prior works (Esmaeili et al., 2021; Ziko et al., 2021).
We will add the above detailed explanation to Appendix C.1 in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your clarification. I will keep my score. | Summary: This paper introduces a fair clustering method called Fair Clustering via Alignment (FCA), which aims to balance the trade-off between fairness and clustering utility. The authors propose a decomposition of the fair k-means clustering objective into two components: the transport cost and the clustering cost. The key idea is to alternately align data from different protected groups into a common space and optimize cluster centers in this aligned space. The authors claim that FCA theoretically guarantees approximately optimal clustering utility for any given fairness level without complex constraints. Empirical results demonstrate the effectiveness of FCA in terms of both fairness and clustering utility.
Claims And Evidence: The claims made in the paper are generally well-supported by both theoretical analysis and empirical results. The authors provide a clear theoretical foundation for their decomposition of the fair clustering objective, and they validate their claims through extensive experiments on benchmark datasets.
Methods And Evaluation Criteria: The proposed method, FCA, is well-motivated and makes sense for the problem of fair clustering. The authors use a combination of optimal transport theory and standard clustering algorithms to align data from different protected groups and optimize cluster centers. The evaluation criteria (Cost and Balance) and the benchmark datasets are widely used.
Theoretical Claims: The theoretical claims are supported by rigorous proofs, which are provided in the supplementary material. The key theoretical contribution is the decomposition of the fair clustering objective into transport and clustering costs, which is proven in Theorem 3.3. The authors also provide an approximation guarantee for their algorithm in Theorem 4.3.
Experimental Designs Or Analyses: The experimental design is sound and comprehensive. The authors compare FCA against several baseline methods across multiple datasets, including pre-processing, in-processing, and post-processing approaches.
Supplementary Material: The supplementary material includes additional proofs, algorithm details, and extended experimental results.
Relation To Broader Scientific Literature: The paper builds on established research in fair clustering, particularly extending fairlet-based methods and in-processing approaches.
Essential References Not Discussed: There are no critical references missed.
Other Strengths And Weaknesses: Strengths:
1. The alignment-based approach introduces a novel and effective method for enforcing fairness in clustering.
2. FCA provides both theoretical guarantees and practical improvements over existing clustering methods.
3. FCA-C enhances the method's flexibility by enabling fairness–utility trade-offs.
4. The empirical evaluation is thorough, covering diverse datasets and baseline methods.
Weaknesses:
1. The computational complexity of solving the Kantorovich problem may hinder scalability to very large datasets.
2. The method requires careful tuning of the fairness control parameter in FCA-C for optimal performance.
3. The paper primarily addresses binary sensitive attributes, with limited exploration of extending FCA to multiple protected groups.
Other Comments Or Suggestions: 1. It is better to include a more detailed discussion on the limitations of the approach, particularly in terms of scalability and applicability to non-binary sensitive attributes.
2. Consider discussing potential strategies to further reduce the computational cost of solving the Kantorovich problem.
3. Expanding the approach to clustering with non-Euclidean distance metrics is better.
Questions For Authors: Please refer to the weakness and the other comments.
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns']
Ethical Review Concerns: This paper may need an ethics review because it deals with algorithmic fairness in clustering, which directly relates to issues of discrimination and bias in machine learning.
1. It addresses clustering with respect to sensitive attributes such as race and gender. While the goal is to ensure fairness, there is a potential risk that the algorithm could inadvertently reinforce or mask existing biases if the underlying data contains structural inequalities.
2. It defines fairness based on proportional balance between groups, but it does not address potential fairness concerns beyond this measure.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *To reviewer 4J8b: We sincerely appreciate your review and the opportunity to improve our work. Please refer to our point-by-point responses below.*
------
### Weaknesses
> 1: The computational complexity ...
- **Section 4.3 introduces a partitioning technique to reduce the computational complexity.**
Our experiments (Figures 6–7 and Table 9 in Appendix C.3.3) demonstrate that it yields reasonable results and significant runtime reduction.
To evaluate scalability, **we already conducted experiments on a dataset of one million data points, showing that FCA outperforms VFC (Appendix C.3.7).**
> 2: The method requires ...
- We think that $\epsilon$ is not a tuning parameter; rather, $\epsilon$ itself serves as the fairness level in FCA-C.
In particular, on Page 5 we define $\mathbf{A}\_{\epsilon}$ as the set of assignment functions where the sum of unfairness across clusters is bounded by $\epsilon.$
FCA-C always converges to a local minimum for any given $\epsilon.$
- On the other hand, Proposition 3.2 implies that we can control balance by controlling $\epsilon.$
> 3: The paper primarily addresses ...
- We have already discussed extending FCA to handle multiple groups (see Appendix A.3).
To numerically validate this extension, we newly conducted an analysis using the Bank dataset with 3 groups (single/married/divorced), following VFC (Ziko et al., 2021).
The results are in the table below (to be added to Appendix A.3): **FCA can practically handle multiple groups and outperforms VFC** (a lower cost $0.222 < 0.228$ with a higher balance $0.182 > 0.172$).
| Bank | 3 groups (single/married/divorced)| |
|:---:|:---:|:--:|
| Bal* = 0.185 | Cost (↓)| Bal (↑)|
| VFC | 0.228 | 0.172|
| FCA ✓ | **0.222** | **0.182** |
------
### Other Comments Or Suggestions
> 1: It is better ...
- Please see our response to 'Weaknesses' above.
> 2: Consider discussing ...
- **(Sinkhorn algorithm)**
We also applied the Sinkhorn algorithm and compared it with solving the Kantorovich problem via linear programming.
The Sinkhorn algorithm of Cuturi (2013) optimizes $\mathbf{C} + \lambda \cdot ent(\Gamma),$ where $\mathbf{C}$ is the cost matrix defined in Section 4.1, $\lambda$ is a regularization parameter, and $ent(\Gamma)$ denotes the entropy of the coupling matrix $\Gamma.$
The result table below suggests that careful tuning of $\lambda$ is crucial: the runtime reduction is not significant in practice for small regularization ($\lambda = 0.01$), while a large regularization ($\lambda = 1.0$) significantly degrades performance for only a modest runtime reduction (a 10\% decrease).
This experiment also suggests that reducing the computational cost of solving the Kantorovich problem (beyond the partitioning technique) would be difficult but is a promising future study.
We will include this discussion in the Appendix.
| Adult / Bal* = 0.494 | Cost (↓) | Bal (↑) | Runtime / iteration (sec) |
|:---:|:---:|:---:|:---:|
| FCA (Sinkhorn, λ = 1.0) | 0.350 | 0.271 | 4.98|
| FCA (Sinkhorn, λ = 0.1) | 0.315 | 0.463 | 5.12|
| FCA (Sinkhorn, λ = 0.01) | 0.330 | 0.491 | 5.55|
| FCA (Linear program) | 0.328 | 0.493 | 5.67|
(References)
Cuturi (2013): Sinkhorn Distances: Lightspeed Computation of Optimal Transport
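For reference, the entropy-regularized problem above is solved by the standard Sinkhorn matrix-scaling iterations; a minimal sketch (illustrative, not our exact implementation):

```python
import numpy as np

def sinkhorn(C, a, b, lam, n_iters=1000):
    # Entropic OT (Cuturi, 2013): minimize <Gamma, C> + lam * ent(Gamma)
    # over couplings with row marginal a and column marginal b, by
    # alternating diagonal scalings of the kernel K = exp(-C / lam).
    K = np.exp(-C / lam)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)   # enforce column marginal b
        u = a / (K @ v)     # enforce row marginal a
    return u[:, None] * K * v[None, :]
```

Smaller $\lambda$ brings the coupling closer to the linear-programming solution but slows convergence and risks numerical underflow of $K$, consistent with the tuning sensitivity observed in the table above.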
> 3: Expanding ...
- In Appendix A.4, we already showed that FCA can be similarly applied to $L_{p}$ norms for all $p \ge 1.$
Experimentally, Appendix C.3.9 (Table 17) shows that under the $K$-median ($L_{1}$ norm) setting, FCA outperforms the fairlet-based method SFC.
- During the rebuttal, **we realized that the argument for the extension to $L_{p}$ norm in Appendix A.4 can be applied to any distance satisfying the triangle inequality**.
We will add this comment in the camera-ready version.
------
### Ethical Review Concerns
> 1: This paper may need an ethics review ...
- Our paper primarily aims to **achieve proportional (group) fairness in clustering, in line with many existing works**.
We acknowledge that focusing solely on proportional fairness may overlook other fairness notions.
In light of your feedback, **we will add the following paragraph to our Impact Statement in the camera-ready version**:
- **(A paragraph to be added to `Impact Statement')**
*While this work focuses on proportional (group) fairness, we acknowledge that such an approach may overlook other fairness notions, such as individual fairness or social fairness (e.g., balancing clustering costs across groups), which are not explicitly addressed within the proportional fairness framework.
We therefore encourage readers to interpret our results with this scope in mind: our goal is to improve the trade-off between clustering cost and proportional fairness level, but care must be taken when considering the broader implications for other fairness notions.
In conclusion, we anticipate future studies that integrate fair clustering research with various notions of fairness*.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal made by the authors. I will keep my score. | Summary: In group fair clustering, a clustering objective is optimized in light of a group fairness constraint. If each data point is assigned some class, we require the proportions of these classes in each cluster to be the same as, or close to, their proportions out of the total dataset.
This work considers the problem of group fair clustering using in-processing algorithms to simultaneously find a fair alignment between points and a good k-means clustering. They propose an algorithm which uses an alignment, or a joint probability distribution over points of two different classes, together with a select set of centers to design an objective that optimizes both at the same time. While either the alignment or the set of centers is fixed, the other is optimized. The algorithm iterates, alternating between the two, to find a solution.
Alignment is found using optimal transport methods from Kantorovich. The center selection is performed via one of many clustering techniques, including K-means and gradient descent. A variant of this algorithm that allows for non-perfect fairness is also introduced as a parameterized option.
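As I read it, the alternation can be sketched roughly as follows. This is my own toy reconstruction for two equal-size groups, not the authors' code: the per-pair cost combines the transport term $\|x - T(x)\|^2/4$ with the squared distance of the pair midpoint to its nearest center, and a brute-force assignment stands in for the Kantorovich solver (fine for tiny inputs):

```python
import numpy as np
from itertools import permutations

def best_matching(cost):
    # Exact optimal assignment by brute force over permutations; a stand-in
    # for solving the Kantorovich problem (only feasible for tiny n).
    n = cost.shape[0]
    best, best_perm = np.inf, None
    for perm in permutations(range(n)):
        c = cost[range(n), perm].sum()
        if c < best:
            best, best_perm = c, perm
    return np.array(best_perm)

def fca_sketch(X0, X1, K, n_iters=10, seed=0):
    # Alternate between (1) matching points across the two protected groups
    # and (2) k-means-style center updates on the matched-pair midpoints.
    rng = np.random.default_rng(seed)
    centers = X0[rng.choice(len(X0), K, replace=False)].copy()
    for _ in range(n_iters):
        mid = (X0[:, None] + X1[None, :]) / 2
        d2c = ((mid[:, :, None, :] - centers) ** 2).sum(-1).min(-1)
        cost = ((X0[:, None] - X1[None, :]) ** 2).sum(-1) / 4 + d2c
        match = best_matching(cost)          # (1) fair alignment
        m = (X0 + X1[match]) / 2             # midpoints of matched pairs
        assign = ((m[:, None] - centers) ** 2).sum(-1).argmin(-1)
        for k in range(K):                   # (2) update centers
            if (assign == k).any():
                centers[k] = m[assign == k].mean(0)
    return centers, match, assign
```

Both members of a matched pair share the cluster of their midpoint, which is what enforces the balance between the two groups.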
On the theory side, they show that their designed combined objective is equivalent to optimizing for the clustering objective in light of the fairness constraint. This can then be used to show their parameterized algorithm approximates the clustering objective with small violation to the fairness constraint. The approximation factor is within a constant factor of the vanilla clustering algorithm used, and the violation is linearly dependent on an error constraint epsilon.
This paper also validates their findings with experiments. They select a handful of baseline fair clustering algorithms that represent in-processing, pre-processing, and post-processing fairness techniques. These experiments show that, on tested cases, their algorithm provides the best fairness-cost tradeoff. It is also numerically stable and does not require too much additional computation time.
Claims And Evidence: Their claims are supported by theoretical and empirical evidence. Theoretical evidence comes as proofs deferred to the appendix. Experiments are partially in the appendix but the major results are shown in the paper to support their findings.
Methods And Evaluation Criteria: The methods seemed to have no issues.
Theoretical Claims: Theoretical claims were believable, but for the most part, proved in the appendix. I did not check the appendix.
Experimental Designs Or Analyses: I did not find any issues
Supplementary Material: No
Relation To Broader Scientific Literature: Fair clustering is a relatively new yet also pretty extensively studied area of research. Many methods have been proposed, particularly for group fair clustering. To my knowledge, less has been done regarding group fair k-means, which is the problem this paper approaches. This paper additionally proposes a new way to combine the fairness constraint and clustering objective into a single objective that turns out to be equivalent to the given problem. This allows them to iteratively optimize the two terms of the combined objective to arrive on a solution. This is somewhat like a repeated pre-processing method, which is interesting in its own right. Combined objectives have also been studied before, but in the context of a different formulated problem, where fairness is not a constraint but given to be part of the objective.
Essential References Not Discussed: None that I am aware of
Other Strengths And Weaknesses: Strengths
- New methods
- Formulations very nicely represent the original problem
- Technically involved
- Experimentally verified
Weaknesses
- Extremely notationally dense, with a fair amount of notation that is not introduced properly (it generally assumes a pretty strong understanding of advanced probability and its notation)
- New methods aren't entirely novel, they bear some similarities to existing methods
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Would you say your techniques are sort of like iterated preprocessing? You do have a kind of "before" and "after" part of the algorithm, where before attempts to achieve fairness and after then selects centers accordingly.
2. Can you clarify what you mean by numeric stability?
3. How does the epsilon parameter control a tradeoff? In theorem 4.3, I only see it affecting the fairness violation.
4. Are the benchmark clustering methods designed for K-means? I know much of fair clustering literature has looked at k-center and k-median instead of k-means.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *To reviewer AM68: We sincerely appreciate your review and thank you for the opportunity to improve our work. Please refer to our point-by-point responses below.*
------
### Weaknesses
> 1: Extremely notationally dense ...
- Answer:
- When preparing this work, we initially considered summation-based notations for expressions such as
$ \frac{1}{n\_{s}} \sum\_{i=1}\^{n\_{s}} \sum\_{k=1}\^{K} \mathcal{A}\_{s}(\mathbf{x}\_{i})\_{k} \bigg( \frac{\Vert\mathbf{x}\_{i}-\mathbf{T}(\mathbf{x}\_{i})\Vert^2}{4} + \left\Vert \frac{\mathbf{x}\_{i} + \mathbf{T}(\mathbf{x}\_{i})}{2}-\mu\_k \right\Vert\^2 \bigg), $
for eq. (2) in Theorem 3.1.
However, we decided to use the probability-based notations instead, since the readability of the proofs is much improved.
- We were also concerned about this and **made efforts to explain the probability-based formulations by use of summation-based notation in several instances (e.g., the clustering cost $C(\mu, \mathcal{A})$ in Section 2, the definition of balance in lines 119-120, and assignment functions of FCA in Section 4)**.
> 2: New methods aren't entirely novel ...
- Answer:
- We agree that FCA is similar to the fairlet-based methods in the sense that the idea of matching is used.
In this sense, FCA can be viewed as a novel extension of the fairlet-based methods.
- Although the fairlet-based methods also rely on matching, their matching would not be optimal in view of fair clustering. In contrast, **our method is guaranteed to find a matching map, which provides a (local) optimal fair clustering, by Theorems 3.1 and 3.3.**
Also, please see Remark 3.2 for further discussion.
- It would be hard to modify the fairlet-based methods to control the fairness level (for non-perfect fairness).
In contrast, FCA can be modified to find a (local) optimal solution at a controlled fairness level (i.e., the FCA-C algorithm).
- This novel extension is made possible by **the novel reformulation of the fair clustering objective function using the matching map (or alignment)**, as presented in Section 3.
------
### Questions
> 1: Would you say ...
- Answer:
- We thank you for the insightful question.
Yes, you are right.
However, in fact, **the 'before' step of our approach (i.e., finding matchings) is not solely for achieving fairness, but also simultaneously for minimizing the clustering cost**.
Specifically, it solves a modified Kantorovich problem, where the modification is aimed at minimizing the clustering cost.
- Furthermore, the proposed algorithm can be extended to control the fairness level (e.g., non-perfect fairness), while other pre-processing algorithms such as fairlet-based methods are not designed for it.
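- As a toy illustration of the matching idea (our own brute-force sketch on 1-D points, not the actual modified Kantorovich solver used in FCA), matching pairs across the two groups and clustering their midpoints can be sketched as:

```python
import itertools

# Toy brute-force sketch of the alignment idea (illustration only): match
# each point of group 0 to a point of group 1 so that the total squared
# distance between matched pairs is minimized; the pair midpoints then
# serve as "aligned" points to which standard K-means can be applied.
def best_matching(group0, group1):
    n = len(group0)
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):  # feasible only for tiny n
        cost = sum((group0[i] - group1[perm[i]]) ** 2 for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

g0 = [0.0, 1.0, 5.0]
g1 = [0.9, 5.2, 0.1]
perm, cost = best_matching(g0, g1)
midpoints = [(g0[i] + g1[perm[i]]) / 2 for i in range(len(g0))]  # aligned points
```

In FCA the matching objective is additionally modified so that it simultaneously minimizes the clustering cost, rather than the pairwise distance alone.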
> 2: Can you clarify ...
- Answer:
- Numerical stability means that **FCA does not suffer from numerical instability compared to VFC**.
The main reason for this stability is that FCA does not have any regularization parameters, while VFC requires choosing a fairness regularization parameter, which can make the algorithm numerically unstable.
- Specifically,
(i) FCA is more robust to dataset pre-processing, while VFC operates reliably only when data points are $L_{2}$-normalized.
(ii) FCA can achieve near-perfect fairness, whereas VFC may fail to do so.
In detail, without $L_{2}$-normalization (especially in high-dimensional settings), VFC can become numerically unstable due to overflow.
See Appendix C.3.1 for details.
> 3: How does the epsilon ...
- Answer:
- **(Fairness bound by $\epsilon$)**
**Please refer to Proposition 4.2, which shows that the fairness level (balance) is bounded by $\epsilon.$**
- **(The trade-off)** The clustering cost for a larger $\epsilon$ is always lower than that for a smaller $\epsilon.$
That is, the optimal fair clustering with $\epsilon$ is included in $\mathbf{A}_{\epsilon'}$ for $\epsilon'>\epsilon.$
- **(Note: definition of $\epsilon$)**
$\epsilon \in [0, 1]$ represents the size of the set $\mathcal{W}$ (i.e., $\mathbb{Q}((\mathbf{X}\_{0}, \mathbf{X}\_{1}) \in \mathcal{W}) \le \epsilon$), as defined below eq. (5).
The set $\mathcal{W}$ consists of the aligned data points to which the standard $K$-means clustering cost is applied.
That is, $\epsilon$ used in the FCA-C algorithm is the fairness level: as $\epsilon$ decreases, the resulting clustering of FCA-C becomes fairer.
> 4: Are the benchmark ...
- Answer:
- Yes, you are right.
Among the four baseline methods we consider in our study (FCBC, SFC, FRAC, and VFC), three methods have been developed for $K$-means (as well as other $K$-clustering), while SFC is originally designed for $K$-median.
- **Please also note that, in Appendix C.3.9 we already discussed how to modify FCA for the $K$-median clustering setting**.
As shown in Table 17, FCA outperforms SFC under the $K$-median clustering setting.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications, they were helpful. I don't see enough reason to change my score, however. It is a nice work that should be published somewhere, possibly ICML. | Summary: The paper is focused on fair clustering. In particular, it suggests a new method based on decomposing the clustering cost which claims to have higher flexibility than prior methods in trading the clustering cost and fairness violations. Theoretical analysis and experiments comparing to baselines are conducted to prove these claims.
Claims And Evidence: I don't find the claims convincing. Convincing evidence would be through a theorem with superior guarantees in comparison to prior work. Or experiments where the results are much better.
Methods And Evaluation Criteria: Yes. Theorems and experiments are valid ways to establish guarantees. The issue is that the arguments in both are not strong.
Theoretical Claims: I did not verify the correctness of any proofs
Experimental Designs Or Analyses: - The authors can clarify this, but it does not seem that the performance of the algorithm is really much better than existing baselines. Further, some of the baselines work for many groups instead of just the two considered in this paper. Also, in Table 1 for Adult, doesn't FCBC have a smaller cost?
Supplementary Material: No.
Relation To Broader Scientific Literature: Prior work is well-cited. Although greater connection to Ziko et al and Esmaeili at al would be good since both trade-off clustering cost and fairness.
Essential References Not Discussed: Essential References are discussed.
Other Strengths And Weaknesses: ### Strengths:
-the objective of the paper is interesting.
### Weaknesses:
-the main issue I think is that this is not the first paper to trade off the clustering cost and fairness. While some claims are made that prior work follows an "in-processing" or "post-processing" approach, the issue is that the paper does not directly establish superior performance. For example, in Theorem 4.3 what is $\epsilon$? Further, $O(\tau)$ is not sufficient: what are the exact constants hidden in the $O(\cdot)$ notation? Prior work was specific in including the factors. How can we be convinced that this is a better algorithm if the approximation factors are not better? Based on the current presentation, since they are not given, they may even be worse. For the experiments, please see my points on experiments above.
More points on weaknesses:
-lines (184-187) claim that previous methods are sub-optimal, but that has to be the case since the problem is NP-hard. Even this method is sub-optimal as otherwise it would be solving an NP-hard problem exactly.
-the paper assumes that there are only two groups which is a weakness in comparison to prior algorithms that can handle an arbitrary number of groups.
### Minor Points:
-presentation can be improved by:
1-not referring to Remark B.1 in the appendix and instead including the main intuition in the text
2-finding another way instead of the boxes on page 5 which are very heavy in text
Other Comments Or Suggestions: See weaknesses above.
Questions For Authors: See weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: *To reviewer 1gG2: We sincerely appreciate your review and the opportunity to improve our work. Please refer to our point-by-point responses below.*
------
### Claims And Evidence
> 1: I don't find ...
- First, we would like to highlight our main contributions:
- A novel reformulation of the fair clustering objective function using the matching map (Section 3).
- Our method guarantees finding a matching map that provides a (local) optimal fair clustering (Theorems 3.1 and 3.3).
- For the theoretical aspect, please see our response to 'Weaknesses' below, where we suggest that **our proposed algorithm is competitive with existing methods in terms of approximation rate**.
- For discussion on the experimental aspect, please see our response to 'Experimental Designs Or Analyses' below, which emphasizes **the empirical superiority of our proposed algorithm**.
------
### Experimental Designs Or Analyses
> 1: The authors can clarify this ...
- The main contribution of our work is the development of **a numerically stable in-processing fair clustering algorithm**.
While in-processing algorithms such as VFC require difficult regularization parameter tuning,
**FCA does not have any regularization, guaranteeing a (local) optimal solution without tuning** (Theorems 3.1 and 3.3).
For example, FCA achieves near-perfect fairness and is more robust to data pre-processing than VFC (Section 5.2 and Appendix C.3.1).
- As expected, FCA significantly outperforms the fairlet-based pre-processing algorithm SFC: achieving nearly 40% lower cost on average and higher fairness (Table 1 in Section 5.2).
- Surprisingly, FCA performs even better on visual clustering tasks, competing with two image-specific algorithms and dominating SFC and VFC (Table 3 in Section 5.3).
> 2: Further, some ...
- Please see our response to 'Weaknesses-3' below.
> 3: Also, in Table 1 ...
- Although FCBC achieves a lower cost than FCA, it is less fair (i.e., lower balance).
**Table 1 compares the costs at the highest fairness levels each algorithm can achieve**.
FCBC achieves a lower cost because the highest fairness level it can achieve is lower than that of FCA.
- To compare fairly, we obtained a fair clustering using FCA-C with a cost of 0.313 (similar to the cost of FCBC, 0.314).
The table below shows FCA-C achieves a higher balance (0.473 vs. 0.443), suggesting FCA offers a better trade-off than FCBC.
This result will be added to Appendix C.3.1.
| Bal* = 0.494 | Cost (↓) | Bal (↑) |
|:-:|:----:|:---:|
| FCBC| 0.314 | 0.443 |
| FCA-C ✓ | 0.313 | **0.473** |
------
### Relation To Broader Scientific Literature
> 1: Although greater connection ...
- We appreciate the suggestion.
In fact, we already mentioned the two methods in Section 1 and Appendix C.2.3.
Experimentally, FCA outperforms VFC and FCBC (Tables 1, 2, 5, and 6; Figures 4 and 5).
------
### Weaknesses
> 1: the main issue ...
- Our theoretical bound in Theorem 4.3 is not claimed to be the sharpest one; rather, it shows that FCA is not much worse than the globally optimal fair clustering.
The main advantages of FCA include (i) greater numerical stability than an in-processing method VFC (Tables 2 and 6; Figures 3–5) and (ii) empirical superiority over post-processing methods (Tables 1 and 5).
- A detailed version of Theorem 4.3 is in **Theorem B.3**:
**FCA-C provides a $(\tau+2)$-approximate solution with a violation $3R\epsilon$, where $R$ is a bound on the data norm ($\sup_{\mathbf{x} \in \mathcal{X}} \Vert \mathbf{x} \Vert^{2} \le R$).**
- **A comparison of the detailed approximation rate** with several existing methods:
- Bera et al. (2019) achieved a $(\tau + 2)$-approximation and a violation $3.$
**FCA-C provides the same approximation rate of $\tau + 2,$ but can attain a smaller violation** when $R = 1,$ as $\epsilon \in [0, 1].$
- Schmidt et al. (2018) provided a **$(5.5\tau + 1)$-approximation, which is worse than $\tau + 2$** for $\tau > 2/9,$ e.g., $\tau = 8(\log K) + 2$ for the K-means++ algorithm.
- We will include this discussion following Theorem 4.3 in the camera-ready version.
- (References) Schmidt et al. (2018): Fair Coresets and Streaming Algorithms for Fair k-means.
- (Definition of $\epsilon$)
**Please see our response to 'Questions'-3 for Reviewer AM68 due to space limitation.**
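- The approximation-rate comparison above can be verified with simple arithmetic (our own sanity check, not part of the paper):

```python
import math

# (5.5*tau + 1) exceeds (tau + 2) exactly when 4.5*tau > 1, i.e. tau > 2/9,
# so the (tau + 2) rate is the better one for all but very small tau.
def schmidt_rate_worse(tau):
    return 5.5 * tau + 1 > tau + 2

tau_kmeanspp = 8 * math.log(10) + 2  # e.g., tau = 8(log K) + 2 with K = 10
```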
> 2: lines (184-187) ...
- The term 'suboptimal' in Remark 3.2 is used **not because the clustering problem is NP-hard**, but because our method guarantees finding a matching map providing a (local) optimal fair clustering (Theorems 3.1 and 3.3), whereas the matching map of the fairlet-based methods does not offer this guarantee.
> 3: the paper assumes that ...
- **Please see our response to 'Weaknesses'-3 for Reviewer 4J8b due to space limitation.**
> 4: (Minor) presentation ...
- Thank you for your careful review.
(1) Remark B.1 will be moved to Section 3.1 and (2) the boxes will be replaced by algorithm expressions, in the camera-ready version. | null | null | null | null | null | null |
BoxLM: Unifying Structures and Semantics of Medical Concepts for Diagnosis Prediction in Healthcare | Accept (poster) | Summary: This paper introduces BoxLM, a novel framework for diagnosis prediction in healthcare that aims to unify the semantic understanding of medical concepts with their underlying structural relationships. This approach integrates ontology-driven hierarchies and EHR-driven visit patterns with semantic embeddings from pre-trained LMs, and it further proposes a box-aware diagnosis prediction module and volume-based similarity measurement to model the temporal dynamics of patient visits and enable accurate diagnosis prediction. Overall, this paper has a clear motivation, a relatively complete theoretical section, and provides sufficiently comprehensive experimental results.
## update after rebuttal
The authors have addressed my comments and I will maintain the original score.
Claims And Evidence: BoxLM's main claim of outperforming baselines for diagnosis prediction is well-supported by MIMIC-III/IV experiments. Evidence is strong across metrics and data settings, including few-shot. Ablations and case studies validate components. However, the introduction overstates the novelty of box embeddings for hierarchical concepts, which were already proposed by BoxCare. The introduction should acknowledge BoxCare and clarify BoxLM's specific innovations. Despite this minor intro overclaim, core claims are valid. BoxLM builds on and improves prior work, showing strong results. Evidence supports the paper's conclusions.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and make sense. The use of MIMIC-III and MIMIC-IV is well-justified and provides a realistic benchmark. The evaluation metrics are standard metrics in the field of EHR-based diagnosis prediction and are suitable for assessing the performance of the models.
Theoretical Claims: The proofs in Appendix C are correct.
Experimental Designs Or Analyses: The experimental designs are well-controlled, and the analyses are appropriate for supporting the claims of the paper. No major issues are identified in the experimental design or analyses.
Supplementary Material: I did not review the supplementary materials of this paper.
Relation To Broader Scientific Literature: This work significantly builds upon the recent advancements in representing medical concepts and their relationships using box embeddings, most notably following the approach introduced by BoxCare. It extends BoxCare by more comprehensively leveraging hierarchy from both ontologies and EHR data. Crucially, it integrates pre-trained language models by leveraging their inherent prior knowledge.
Essential References Not Discussed: No essential related works appear to be missing.
Other Strengths And Weaknesses: ## Strength
- The paper is well-motivated, clearly articulating the research problem. The explanation of the proposed methods and formulas is presented with sufficient clarity, facilitating understanding.
- The experimental evaluation yields compelling results, demonstrating consistent and statistically significant outperformance against a comprehensive suite of baselines across two large-scale, real-world EHR datasets. This robust empirical validation strongly supports the proposed approach.
- The runtime analysis indicates that BoxLM achieves competitive computational efficiency, and even outperforms some baselines in terms of speed. This efficiency is a significant advantage for practical deployment and real-world application within clinical settings.
## Weakness
- While the paper effectively leverages box embeddings to model hierarchical medical concepts, building upon the foundational work of BoxCare, the primary innovation appears to reside in the enhanced integration of EHR information. The framing in the introduction may inadvertently overemphasize the novelty of the core box embedding concept itself, potentially leading to a perception of overclaim regarding the overall contribution relative to BoxCare.
- To improve initial reader comprehension, the paper would benefit from a more explicit and upfront articulation of the specific task being addressed and the primary research objectives immediately following the introduction. This would provide essential context and guide the reader more effectively.
- While box embeddings offer advantages, they are inherently more complex than traditional vector embeddings. The paper could benefit from further simplification or more intuitive explanations of box embedding concepts and operations, especially for readers less familiar with this technique.
- While the paper mentions setting the embedding dimension and Gumbel scale, there appears to be limited discussion on the sensitivity analysis of these hyperparameters, particularly those specific to box embeddings (e.g., dimensionality, different distance/volume metrics). Exploring the impact of these choices would strengthen the robustness of the findings.
- While paper claims interpretability, the authors should provide an example in the experimental section to visualize how different levels of concepts are represented in box embeddings, thereby enhancing the interpretability of BoxLM.
Other Comments Or Suggestions: NA
Questions For Authors: See weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the overall positive evaluations and many detailed suggestions. In the following, we focus on several of the main issues to provide our feedback:
> **Reviewer yPUD.W1:** The difference between BoxLM and BoxCare.
As described in Section 2, BoxCare only uses box embeddings to represent ontology hierarchies without fully integrating the rich semantics and hierarchical structures inherent in EHRs. In contrast, BoxLM innovatively integrates PLM with box embeddings, unifying ontology-driven and EHR-driven hierarchies with LM-based semantics for more accurate diagnosis prediction. We will clarify the difference in the introduction section.
> **Reviewer yPUD.W2:** Explicitly stating the specific task and research objectives upfront after the introduction for better reader comprehension.
We have provided the problem definition and an overview of BoxLM in Section 3.1. To further improve readability, we will reorganize the paper structure by moving the methodology section to Section 2, immediately following the introduction.
> **Reviewer yPUD.W3:** Box embedding concepts and operations are relatively complex.
Box embeddings only introduce **one** **additional offset embedding** compared with traditional vector representation, and the time complexity for diagnosis prediction remains comparable to that of traditional vector-based approaches. Specifically, the time complexity is $\mathcal{O}(N \times C \times d)$, scaling linearly with the embedding dimension $d$, where $N$ and $C$ denote the number of patients and CCS codes, respectively.
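To make this concrete, here is a minimal sketch of the box representation and the intersection-volume computation (our own illustration with hypothetical names; the paper smooths the hard volume below with a Gumbel/Bessel construction):

```python
# A box is just a center vector plus a per-dimension offset, i.e. only one
# extra embedding compared with a plain vector representation.
def box_bounds(center, offset):
    lo = [c - o for c, o in zip(center, offset)]
    hi = [c + o for c, o in zip(center, offset)]
    return lo, hi

def intersection_volume(c1, o1, c2, o2):
    lo1, hi1 = box_bounds(c1, o1)
    lo2, hi2 = box_bounds(c2, o2)
    vol = 1.0
    for a, b, x, y in zip(lo1, hi1, lo2, hi2):
        side = min(b, y) - max(a, x)  # overlap length in this dimension
        if side <= 0:
            return 0.0  # disjoint boxes get zero (hard) volume
        vol *= side
    return vol
```

The loop visits each dimension once, so scoring one patient box against one CCS box is linear in the embedding dimension $d$, consistent with the $\mathcal{O}(N \times C \times d)$ complexity stated above.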
> **Reviewer yPUD.W4:** Adding hyperparameter studies.
We have evaluated the impacts of the box embedding dimension $d$ (Table 1), the Gumbel distribution scale $\beta$ (Table 2), and the box volume calculation (Table 3), which will be included in the final version.
For box embedding dimension $d$, the results show that increasing $d$ slowly enhances model performance for both datasets. For a fair comparison with baselines, we set $d=16$.
**Table 1: Experimental results (%) for diagnosis prediction with varying box dimension $d$.**
| Dataset | MIMIC-III | | MIMIC-IV | |
| -------- | --------- | ------ | -------- | ------ |
| Metric | P@10 | Acc@10 | P@10 | Acc@10 |
| $d=4$ | 40.33 | 29.53 | 36.14 | 26.24 |
| $d=8$ | 41.88 | 30.57 | 41.00 | 29.36 |
| $d=16$ | 43.88 | 31.74 | 42.04 | 29.94 |
| $d=32$ | 44.84 | 32.52 | 50.72 | 35.74 |
For the Gumbel distribution scale $\beta$, a larger $\beta$ makes the distribution closer to uniform, while a smaller $\beta$ causes its probability density function to approach a hinge function, leading the random variable to degenerate into a constant [1]. In this paper, we set $\beta=0.2$, balancing model accuracy with the distinctiveness among boxes.
**Table 2: Experimental results (%) for diagnosis prediction with varying Gumbel distribution scale $\beta$.**
| Dataset | MIMIC-III | | MIMIC-IV | |
| ----------- | --------- | ------ | -------- | ------ |
| Metric | P@10 | Acc@10 | P@10 | Acc@10 |
| $\beta=0.1$ | 42.91 | 31.05 | 41.10 | 29.18 |
| $\beta=0.2$ | 43.88 | 31.74 | 42.04 | 29.94 |
| $\beta=0.4$ | 45.55 | 33.00 | 41.71 | 29.78 |
| $\beta=0.6$ | 45.81 | 33.25 | 40.70 | 29.14 |
For the box volume calculation, we compare the soft volume calculation [2] with our used Bessel volume calculation (i.e., Eq. (9)). They both effectively mitigate the training difficulties that arise when disjoint boxes should overlap.
**Table 3: Experimental results (%) for diagnosis prediction with varying box volume calculation.**
| Dataset | MIMIC-III | | MIMIC-IV | |
| ------------- | --------- | ------ | -------- | ------ |
| Metric | P@10 | Acc@10 | P@10 | Acc@10 |
| Soft volume | 43.91 | 31.79 | 42.23 | 30.10 |
| Bessel volume | 43.88 | 31.74 | 42.04 | 29.94 |
**Reference**
[1] When Box Meets Graph Neural Network in Tag-aware Recommendation, KDD, 2024.
[2] Smoothing the geometry of probabilistic box embeddings, ICLR, 2018.
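The smoothing idea can be sketched as follows (our own softplus-based illustration of the soft-volume variant [2], not the exact Bessel form of Eq. (9)):

```python
import math

# Replace the hard per-dimension overlap max(0, hi - lo) with softplus, so
# disjoint boxes still receive a small positive volume and gradients can
# pull them together during training; beta controls the sharpness.
def softplus(x, beta=1.0):
    return math.log1p(math.exp(beta * x)) / beta

def soft_volume(lo1, hi1, lo2, hi2, beta=1.0):
    vol = 1.0
    for a, b, x, y in zip(lo1, hi1, lo2, hi2):
        vol *= softplus(min(b, y) - max(a, x), beta)
    return vol
```

With this smoothing, a disjoint pair of boxes still yields a small positive volume instead of a flat zero, which is what mitigates the training difficulty mentioned above.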
> **Reviewer yPUD.W5:** Providing examples for the interpretability of the proposed method.
We have provided a real case study in Section 4.3, visualizing different levels of medical concepts (e.g., diagnosis, CCS, patient). As shown in Lines 403-420, BoxLM accurately predicts the development of *coronary atherosclerosis and other heart diseases* (CCS code 101) for the patient diagnosed with *essential hypertension* by calculating the overlap between patient and CCS boxes, which aligns with clinical knowledge that hypertension increases the risk of atherosclerosis.
---
Rebuttal Comment 1.1:
Comment: Thanks author's rebuttal. The authors have addressed my comments and I will maintain the original score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer yPUD,
We thank you for your response and your appreciation of our work and rebuttal. We will make sure to incorporate the new results and discussions into the final version.
Best,
Authors | Summary: In this paper, the authors focus on a critical task: diagnosis prediction. They introduce a unique approach, the BoxLM representation, to represent Electronic Health Records (EHR) and diseases. While the paper is well-written and easy to follow, it has two significant shortcomings: 1. The evaluation metric used is not comprehensive. Typically, recall is considered an essential metric in AI-assisted diagnosis, as missing a diagnosis can have serious implications. 2. There is a lack of discussion and comparison with existing approaches in diagnosis prediction.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design also has some flaws. In line with previous works, the authors should include recall as an evaluation metric.
Supplementary Material: yes
Relation To Broader Scientific Literature: no
Essential References Not Discussed: Moreover, the paper lacks comparison with existing works on diagnosis prediction, such as [1] and subsequent works referencing [1].
[1] Choi, Edward, et al. "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks." Machine Learning for Healthcare Conference. PMLR, 2016.
Other Strengths And Weaknesses: no
Other Comments Or Suggestions: no
Questions For Authors: 1. In Section 4.3's ablation study, the contribution of BoxLM should be evaluated. For instance, it could be compared with the classic vector representation.
2. What's the difference between CCS and diagnosis
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the overall summary. As for the several weaknesses and questions you mentioned, our responses are listed as follows:
> **Reviewer UQNn.W1:** The evaluation metric suggests including recall.
Following [2-6], we adopt visit-level Precision@k (P@k) and code-level Accuracy@k (Acc@k) for the comprehensive evaluation of diagnosis prediction from coarse-grained and fine-grained perspectives, respectively (detailed in Appendix D). Additionally, we have included Recall@k (k=10, 20) in our evaluation (Table 1), further demonstrating the effectiveness of BoxLM.
**Table 1: Recall@k results (%) for diagnosis prediction on both datasets with 5% training data.**
| Dataset | | MIMIC-III | | |
| ------- | ------------- | ------------- | ----- | ------ |
| Metric | **Recall@10** | **Recall@20** | P@10 | Acc@10 |
| RETAIN | 26.68 | 42.52 | 34.73 | 25.73 |
| TRANS | 27.97 | 43.82 | 36.64 | 26.96 |
| BoxCare | 28.61 | 44.63 | 38.21 | 28.12 |
| BoxLM | 34.49 | 50.71 | 43.88 | 31.74 |
| **Dataset** | | **MIMIC-IV** | | |
| Metric | **Recall@10** | **Recall@20** | P@10 | Acc@10 |
| RETAIN | 27.22 | 39.95 | 33.47 | 24.52 |
| TRANS | 23.38 | 34.84 | 28.70 | 21.64 |
| BoxCare | 29.76 | 42.33 | 35.13 | 25.58 |
| BoxLM | 34.85 | 48.65 | 42.04 | 29.94 |
**Reference**
[2] Unveiling Discrete Clues: Superior Healthcare Predictions for Rare Diseases, WWW, 2025.
[3] Stage-Aware Hierarchical Attentive Relational Network for Diagnosis Prediction, TKDE, 2024.
[4] Predictive Modeling with Temporal Graphical Representation on Electronic Health Records, IJCAI, 2024.
[5] MEGACare: Knowledge-guided Multi-view Hypergraph Predictive Framework for Healthcare, Information Fusion, 2023.
[6] INPREM: An Interpretable and Trustworthy Predictive Model for Healthcare, KDD, 2020.
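For reference, minimal definitions of such top-k metrics can be written as follows (our own simplified versions for illustration; the exact visit-level and code-level variants we use are detailed in Appendix D):

```python
def precision_at_k(ranked_codes, true_codes, k):
    # fraction of the top-k predicted codes that are true diagnoses
    hits = sum(1 for c in ranked_codes[:k] if c in true_codes)
    return hits / k

def recall_at_k(ranked_codes, true_codes, k):
    # fraction of the true diagnoses recovered in the top-k predictions
    hits = sum(1 for c in ranked_codes[:k] if c in true_codes)
    return hits / len(true_codes)
```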
> **Reviewer UQNn.W2:** Missing baseline Doctor AI [1].
We have included Doctor AI [1] in our benchmarks for comparison. As shown in Table 2, BoxLM achieves up to 40.30% relative gains in Recall@10 over Doctor AI on MIMIC-IV. Notably, both Doctor AI and RETAIN [7], developed by the same research group, use RNNs to capture the temporal dynamics of patient visits. Doctor AI slightly surpasses RETAIN with short visit sequences on MIMIC-III, while RETAIN performs better on MIMIC-IV via its reverse time attention mechanism. In contrast, BoxLM integrates temporal modeling with richer representations of medical concepts from EHR data and ontological sources, leading to superior generalization. We will add this discussion to the final version.
**Table 2: Experimental results (%) for diagnosis prediction on both datasets with 5% training data.**
| Dataset | | MIMIC-III | | |
| ------------- | --------- | --------- | ----- | ------ |
| Metric | Recall@10 | Recall@20 | P@10 | Acc@10 |
| **Doctor AI** | 27.36 | 43.06 | 35.72 | 26.31 |
| RETAIN | 26.68 | 42.52 | 34.73 | 25.73 |
| BoxLM | 34.49 | 50.71 | 43.88 | 31.74 |
| **Dataset** | | **MIMIC-IV** | | |
| Metric | Recall@10 | Recall@20 | P@10 | Acc@10 |
| **Doctor AI** | 24.84 | 37.09 | 30.65 | 22.69 |
| RETAIN | 27.22 | 39.95 | 33.47 | 24.52 |
| BoxLM | 34.85 | 48.65 | 42.04 | 29.94 |
**Reference**
[7] RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism, NeurIPS, 2016.
> **Reviewer UQNn.Q1:** In Section 4.3's ablation studies, the contribution of BoxLM should be evaluated, including the comparison with the classic vector representation.
Our ablation studies in Section 4.3 have included the comparison with classic vector representations (i.e., the "Base" model in Table 3). Specifically, the "Base" model uses BioBERT embeddings without structure modeling, which represents visits as single points. By comparison, "+ Onto" and "+ EHR" introduce box embeddings for CCS codes, diagnoses, and visits with our ontology-based and EHR-based hierarchy modeling, achieving significant improvements of up to 25.33% in P@10 on MIMIC-IV. These experimental results provide a comprehensive understanding of the importance and reliability of our designed techniques.
> **Reviewer UQNn.Q2:** What's the difference between CCS and diagnosis?
Sorry for the confusion. CCS is a classification system that groups diagnosis codes into clinically meaningful categories for analytical purposes, whereas a diagnosis refers to the identification of a specific disease or condition in a patient's visit. We will add this explanation in the final version.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns in their rebuttal, I increase the score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer UQNn,
We really thanks for your time in the review and discussion. We will make sure to properly incorporate the new results and discussions into the final revision.
Best,
Authors | Summary: The paper proposes BoxLM, a framework unifying structural (ontology and EHR hierarchies) and semantic (language model embeddings) representations of medical concepts using box embeddings. Key contributions include a structure-semantic fusion mechanism, an evolve-and-memorize patient box learning module for temporal dynamics, and volume-based similarity for diagnosis prediction. Experiments on MIMIC-III/IV datasets demonstrate state-of-the-art performance, particularly in few-shot scenarios. The authors emphasize interpretability through geometric overlap analysis and case studies.
## update after rebuttal
Authors addressed most of my concerns, I have raised my Overall Recommendation score from 2 to 3.
Claims And Evidence: Mostly yes. Interpretability claims rely on a single case study (Figure 4); quantitative metrics for interpretability (e.g., user studies or consistency scores) are absent.
Methods And Evaluation Criteria: Generally make sense.
- Box embeddings effectively model hierarchical inclusion and EHR-driven visit patterns. The fusion of ontology/EHR structures and evolve-and-memorize mechanism are novel and appropriate.
Theoretical Claims: N/A
Experimental Designs Or Analyses: - Validity: 5-fold cross-validation and Adam optimizer are standard. Few-shot experiments (1%–15% training data) robustly demonstrate generalizability.
- Reproducibility: Runtime comparisons (Figure 3c) lack hardware details. Code availability is promised but not provided.
Supplementary Material: N/A
No supp. attached.
Relation To Broader Scientific Literature: Properly credits prior work (e.g., BoxCare, BioBERT) but might overlook recent geometric approaches in other fields (not limited to EHR / healthcare, I'm not sure).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: My concerns are mainly about the interpretability and the motivation for designing the method. It might not be proper to say `Luckily, we find that box embeddings offer a promising solution by representing entities as highdimensional hyperrectangle.`, since from my perspective it is not straightforward to encode high-dimensional embeddings as hyperrectangles.
Other Comments Or Suggestions: - Typos: "Exampled" → "Example" (Figure 4 caption).
- Section 3.3.1 needs a diagram for patient box evolution.
- Include ablation on box dimensionality (see Questions).
Questions For Authors: - How does BoxLM handle missing or incomplete hierarchical links in EHRs (e.g., unrecorded parent-child relationships)?
- What is the impact of box dimensionality (dim) on performance and computational cost?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments on our paper. In response to the shortcomings mentioned, we provide the following answers to several questions:
> **Reviewer skUY.W1:** Adding quantitative metrics for interpretability.
In our study, CCS (Clinical Classification Software) codes from the MIMIC-IV dataset can be grouped into 22 distinct body systems, serving as hierarchical classification labels for the predicted CCS codes. To assess the interpretability of BoxLM, we have introduced the **Consistency@k** metric (k=10, 20) [1, 2], which quantifies the alignment between model predictions and the underlying hierarchical structure.
The metric can be computed as: $Consistency@k=\frac{1}{\min(k,|H_v|)}\sum_{i=1}^k\mathbb{I}(\hat{h}_i=h_i)$, where $|H_v|$ is the number of ground-truth hierarchical labels associated with the CCS codes in visit $v$, and the numerator counts the number of correct hierarchical predictions in the top-k.
The results (Table 1) demonstrate that BoxLM preserves consistency on hierarchy.
**Table 1: Consistency@k results (%) for diagnosis prediction on MIMIC-IV.**
| Model | **Consistency@10** | **Consistency@20** | P@10 | Acc@10 |
| ------- | ------------------ | ------------------ | ----- | ------ |
| RETAIN | 41.81 | 46.55 | 33.47 | 24.52 |
| TRANS | 35.13 | 38.43 | 28.70 | 21.64 |
| BoxCare | 47.36 | 52.06 | 35.13 | 25.58 |
| BoxLM | 56.53 | 62.44 | 42.04 | 29.94 |
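As an illustrative aside (not part of the original rebuttal): a minimal sketch of the Consistency@k formula above, under the assumption that the predicted hierarchical labels $\hat{h}_i$ and ground-truth labels $h_i$ are positionally aligned; the function and variable names are hypothetical.

```python
def consistency_at_k(pred_hier, true_hier, k):
    """Consistency@k = (1 / min(k, |H_v|)) * sum_{i=1..k} 1[pred_hier[i] == true_hier[i]].

    pred_hier: hierarchical (body-system) labels of the top-k predicted CCS codes, in rank order.
    true_hier: ground-truth hierarchical labels H_v for the visit.
    """
    matches = sum(1 for p, t in zip(pred_hier[:k], true_hier) if p == t)
    return matches / min(k, len(true_hier))
```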
**Reference**
[1] Hyperbolic Embedding Inference for Structured Multi-Label Prediction, NeurIPS, 2022.
[2] Modeling Label Space Interactions in Multi-Label Classification Using Box Embeddings, ICLR, 2021.
> **Reviewer skUY.W2:** Lacking hardware details and code availability.
We have provided the code link in the bottom left of Page 6 for reproducibility. Additionally, we have added the following hardware details: All experiments were conducted on two NVIDIA GTX 3090 Ti GPUs.
> **Reviewer skUY.W3:** Adding related works on recent geometric approaches.
We have included relevant work on box embeddings from other fields (see Appendix A). Additionally, we will add the discussion and proper citations for [3, 4] in our related work, covering polygon and hypercube embeddings in areas such as knowledge graphs and recommendation systems.
**Reference**
[3] ExpressivE: A Spatio-Functional Embedding for Knowledge Graph Completion, ICLR, 2023.
[4] Thinking inside The Box: Learning Hypercube Representations for Group Recommendation, SIGIR, 2022.
> **Reviewer skUY.W4:** The saying for introducing box embedding might not be proper, as it's not straightforward to encode highdimensional embeddings to hyperrectangle.
We understand that the sentence "... representing entities as high-dimensional hyperrectangles" (Lines 67-68) may not have clearly conveyed our design. Specifically, we propose using box embeddings to capture complex relationships (e.g., inclusion and intersection) in medical data. This representation aligns well with the hierarchical structure of medical ontologies and EHR-driven visit patterns. We will revise this sentence in the final version.
> **Reviewer skUY.W5:** Some typos exist.
Thanks for pointing this out. We will fix it in the final version.
> **Reviewer skUY.W6:** Section 3.3.1 needs a diagram for patient box evolution.
We have included a diagram for the evolve-and-memorize patient box learning mechanism in the upper right of Figure 2. For clarity, we will explicitly reference it in Section 3.3.1.
> **Reviewer skUY.Q1:** How does BoxLM handle missing or incomplete hierarchical links in EHRs?
BoxLM uses box embeddings to model medical concepts from ontology and EHR data. The geometric properties of these embeddings allow the model to infer implicit relational structures, such as transitivity and entailment. This capability, demonstrated in tasks like taxonomy expansion [5] and knowledge base completion [6], enables BoxLM to recover missing or incomplete hierarchical links.
**Reference**
[5] A Single Vector Is Not Enough: Taxonomy Expansion via Box Embeddings, WWW, 2023.
[6] BoxE: A Box Embedding Model for Knowledge Base Completion, NeurIPS, 2020.
> **Reviewer skUY.Q2:** What is the impact of box dimensionality on performance and computational cost?
We have added experiments to analyze the impact of box embedding dimension $d$ (please refer to Table 1 in **Reviewer yPUD.W4**), which will be included in the final version. The results show that increasing $d$ slowly enhances model performance for both datasets. Notably, the time complexity of diagnosis prediction scales as $\mathcal{O}(N \times C \times d)$ ($N$ and $C$ denote the number of patients and CCS codes), which grows linearly with $d$ and is similar to traditional vector-based approaches. For a fair comparison with baselines, we set $d=16$.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have raised my scores accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer skUY,
Thank you for your feedback. We are glad our response addressed your concerns. It seems the scores haven't been updated in the system yet. We'd be grateful if you could kindly double-check when convenient.
Best,
Authors | null | null | null | null | null | null | null | null |
Counterfactual Effect Decomposition in Multi-Agent Sequential Decision Making | Accept (poster) | Summary: This paper studies the attribution of counterfactual outcome to agent actions and states, that are total agent-specific effect and reverse state-specific effect. Moreover, it futher decompose the total agent-specific effect into individual agent effect, and the reverse state-specific
effect (r-SSE) into r-SSE-ICC. Through experiments on the Gridworld environment and sepsis management simulator, the interpretability of the method is demonstrated.
Claims And Evidence: This paper tries to explain the counterfactual effect of agent actions. To achieve this, it defines a series of metrics such as tot-ASE. However, the definitions are complicated and not easy to understand. I observe that the experimental results mainly consist of measurements of these metrics. I think the work does not accomplish the goal of interpretability.
Methods And Evaluation Criteria: The same to the previous part. The proposed method and evaluation criteria do not improve the interpretability of the agent actions.
Theoretical Claims: I checked the proof of Theorem 3.3, which I think is the cornerstone of the paper. However, I think the proof is wrong. The third equation does not relate Definition 3.1 and Definition 3.2.
Experimental Designs Or Analyses: I checked the experimental parts. The main concern is still that the results consist of the proposed metrics, which are not interpretable enough. I suggest the authors relate the interpretability result to some quantitative analysis that is easy for humans to understand.
Supplementary Material: I do not run the code.
Relation To Broader Scientific Literature: I think the contribution of the paper lies in the integration of causal mediation analysis and multi-agent decision making. Although it proposes finer-grained effect metrics beyond Triantafyllou et al., 2024, I feel it does not help toward better interpretability.
Essential References Not Discussed: I think the core references have been discussed and cited in the paper.
Other Strengths And Weaknesses: The main strengths and weaknesses have been discussed in the previous parts.
Other Comments Or Suggestions: In the last equation of Eq. (2), the difference between (1) $\mathbb{E}[Y|\tau; M]_{M^{do(I)}}$ and the other term is not clear.
Questions For Authors: In Definition 5.2, the definition of $ASE^{S\cup \{j\}}$ is not clear, because the previous definition of ASE is designed for $\{1,2,3,...,n\}$
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. Please find below our response to your comments and questions.
## Response to Comments and Questions
**Proof of Theorem 3.3.** We respectfully disagree with your concern regarding the correctness of the proof of Theorem 3.3. While we understand that the result may initially seem surprising, the proof follows in a straightforward manner. The first step is licensed by Definition 2.1, while the third step by Definition 3.1 and Equation (2). In the second step, we apply a standard add-and-subtract trick by introducing $\mathrm{E}[Y|\tau;M]_{M^{do(I)}}$ into the expression.
**Interpretability.** Causal mediation analysis consists a well-established approach to interpreting the effect of an exposure variable to a response variable, by analyzing how it propagates through causal paths. Based on a similar principle, our proposed approach explains the effect of an agent's action on the outcome of an MMDP, by analyzing how it propagates through its influence on the agents’ behavior and the environment dynamics. Additionally, we have evaluated the interpretability of our decomposition approach by conducting extensive experiments on two RL environments featuring heterogeneous agents and diverse interaction protocols.
The results of these evaluations indicate that our approach yields interpretable insights and aligns well with standard intuitions.
Could we kindly ask the reviewer to elaborate on why they think our approach and evaluation protocol are not suitable for interpretability? If there are any specific concerns regarding our method, we would be glad to hear them and if possible try to also address them.
**Equation (2).** Let $\tau$ be a trajectory generated by the MMDP-SCM $M$. Both $\mathrm{E}[Y_{a_{i,t}}|\tau]\_{M}$ and $\mathrm{E}[Y|\tau;M]\_{M^{do(I)}}$ measure the expected counterfactual value of outcome $Y$ in $\tau$, but under different interventions. The former corresponds to the intervention $do(A_{i,t} := a_{i,t})$, which fixes the action of agent $i$ at time-step $t$ to $a_{i,t}$. The latter corresponds to the intervention $do(I)$, which fixes all agents' actions, following time-step $t$, to the values that they would naturally take under intervention $do(A_{i,t} := a_{i,t})$.
If it helps, we would like to kindly point out that in Section 6.1 of our paper, and particularly in the paragraph **Counterfactual effects**, you can find a detailed explanation of all introduced counterfactual measures, including r-SSE (Eq. 2), in the context of our Gridworld experiment. Furthermore, Appendix O provides a graphical illustration of all counterfactual estimates appearing in our paper using the Sepsis example from the introduction section of (Triantafyllou et al., 2024) and the causal graph used therein.
**Definition 5.2.** $\text{ASE}^{\\{1, ..., n\\}}$ corresponds to the total agent-specific effect, which quantifies the effect of an intervention that propagates through all the agents. In contrast, $\text{ASE}^{S \cup \\{j\\}}$ quantifies the effect that propagates through only the agents included in the subset $S \cup \\{j\\} \subseteq \\{1, ..., n\\}$. Note that this distinction is explicitly highlighted in lines 283-290 (left column) of our manuscript.
## Conclusion
We thank you again for your comments and questions. We would be happy to answer anything else in addition.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. Please give more details about the standard add-and-subtract trick in the proof of Theorem 3.3.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up. Please find our response below.
In the second step of our proof, we are adding and subtracting the term $\mathrm{E}[Y|\tau;M]_{M^{do(I)}}$. Specifically, the expression
$\mathrm{TCFE}\_{a_{i,t}, \tau(A_{i,t})}(Y|\tau)\_M = \mathrm{E}[Y_{a_{i,t}}|\tau]_M - \tau(Y)$
is rewritten as
$\mathrm{TCFE}\_{a_{i,t}, \tau(A_{i,t})}(Y|\tau)\_M = \mathrm{E}[Y_\{a_{i,t}}|\tau]\_M - \tau(Y) + \mathrm{E}[Y|\tau;M]\_{M^{do(I)}} - \mathrm{E}[Y|\tau;M]\_{M^{do(I)}}.$
We hope this resolves any concerns about the proof’s validity.
Please let us know if any other part remains unclear, we would be happy to further clarify. If all your concerns are addressed, we would also be grateful if you could consider updating your score. | Summary: The paper proposes a causal explanation formula for multi-agent Markov Decision Processes (MMDPs). It uses Structural Causal Models (SCMs) to decomposes the total counterfactual effect of an agent's action by attributing to each agent and state variable a score that results from their respective contributions to the outcome. Shapley values have been used to attribute the total effect to individual agents.
Similarly, intrinsic causal contributions (ICC) have been used to decompose the reverse state-specific effect (r-SSE). Experiments were conducted on Gridworld environment with LLM-assisted agents and a sepsis management simulator.
## update after rebuttal:
I read the rebuttal of the authors as well as the questions of the reviewers. I maintain my score.
Claims And Evidence: The main claim of the paper is that counterfactual effects of an agent's action in multi-agent sequential decision-making can be decomposed into agent-specific and state-specific contributions. This provides insights into accountability.
The claim is theoretically justified by theorems and proofs in the Appendix. Experiments support the claims; evaluations of the estimation error of the results and the robustness of the noise monotonicity are provided in the Appendix
Methods And Evaluation Criteria: The proposed methods makes sense for the problem of counterfactual effect decomposition in multi-agent systems.
The methods, Shapley values and ICC, provide fair attribution and accountability, and are theoretically sound.
Evaluation on benchmark datasets (Gridworld and Sepsis) are suitable to the problem.
Theoretical Claims: I checked the proofs of theorem 3.3. and theorem 4.2. in Appendix H. They look sound to me.
Experimental Designs Or Analyses: The experimental designs are sound and clear. The validity of the counterfactual effects have been done on multiple trials. Computational complexity has been discussed. I checked both the Gridworld and the Sepsis experiments.
Supplementary Material: Yes, I reviewed the supplementary material, e.g., the proofs of theorems 3.3 and 4.2 in Appendix H.
Also, the discussion on computational complexity and causal assumption in appendices I and J respectively.
Relation To Broader Scientific Literature: The paper draws ideas from various fields, including mediation analysis, intrinsic causal contributions (Janzing et al), Shapley values, blame attribution (Halpern et al).
Essential References Not Discussed: Not that I'm aware of.
Other Strengths And Weaknesses: The paper addresses significant issues: accountability and explainability of AI systems. It is well written.
One potential weakness would be scalability and the computational complexity involved. To that end, experiments on real-world use cases with many agents would be helpful.
Other Comments Or Suggestions: N/A
Questions For Authors: The assumption of noise monotonicity guarantees counterfactual identifiability. How realistic is this assumption in real-world use cases and how would the paper's framework treat the cases where the assumption is violated?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and positive score. We are glad to see that you find our paper well-written and addressing significant issues of AI systems. We are also happy to hear that you find our method insightful and sensible, and our experimental setup sound and clear. Please find below our response to your comments and questions.
## Response to Comments and Questions
**Scalability and Generalizability.** We recognize that improving and evaluating the scalability of our approach would be important for its practical applicability in systems with larger numbers of agents. However, we view this as an independent research challenge that, while important, lies somewhat outside the main scope of this paper. Our primary focus here is to provide an interpretable solution to the problem of counterfactual effect decomposition in multi-agent MDPs.
That said, we note that there are multiple impactful multi-agent settings that naturally involve only a small number of agents, e.g., AI assistants or AI supervision. Nonetheless, empirically evaluating the strategies we propose in Appendix I for mitigating computational complexity, within our setting, represents a promising direction for future work.
**Identifiability Assumptions.** The noise monotonicity assumption imposes a structural restriction on the counterfactual distribution of the causal model. Similar to other causal assumptions, such as *monotonicity* in binary models (Pearl, 1999), noise monotonicity cannot be verified from observational or interventional data. Therefore, assessing its validity in real-world (non-simulated) settings remains challenging.
As you note in your review, we include an empirical robustness analysis in our paper to assess the sensitivity of our findings in the Sepsis experiment to potential violations of noise monotonicity. The results of this analysis are reported in Appendix N.
In Appendix J.1, we discuss extending the applicability of our effect decomposition approach to non-identifiable domains through *partial counterfactual identification* (Manksi, 1990). Rather than relying on point estimates, this approach would operate with bounds on counterfactual quantities. As such, the practical applicability of our method will broaden, albeit on the expense of *efficiency*, i.e., not attributing the full effect.
A partial identification method directly compatible with our setting was recently proposed by (Zhang et al., 2022). Although assessing and enhancing the informativeness of their derived bounds, as well as examining the scalability of their approach in our context, lie beyond the scope of this paper, we regard this as a promising and important avenue for future research.
## Conclusion
We thank you again for your comments and questions. We would be happy to answer anything else in addition.
## References
Pearl, Judea. "Probabilities of causation: Three counterfactual interpretations and their identification." Synthese. 1999.
Manski, Charles F. "Nonparametric bounds on treatment effects." The American Economic Review. 1990.
Zhang, Junzhe, Jin Tian, and Elias Bareinboim. "Partial counterfactual identification from observational and experimental data." ICML. 2022. | Summary: This paper focuses on the causal analysis of decision-making in multi-agent cooperative frameworks, specifically the decomposition and attribution of counterfactual effects in the decision-making process of agents. The key contribution is the decomposition of the total counterfactual effect into two parts: the agent-specific effect, which propagates through the subsequent behaviors of other agents, and the state-specific effect, which propagates through environmental state transitions. Furthermore, the authors refine the attribution of the agent-specific effect by leveraging the Shapley value to distribute it among individual agents. The proposed causal analysis method is validated in two experimental environments, demonstrating the feasibility of the counterfactual effect decomposition approach.
### update after rebuttal:
I thank the authors for their response and have taken into account the perspectives of the other reviewers. I will maintain my score, as it is already a positive one.
Claims And Evidence: The paper's main claim is that its proposed causal analysis method can decompose the counterfactual effect in multi-agent decision processes into effects propagated through agent behaviors and effects propagated through state transitions. The main claim is well-supported by both clear theoretical justifications and empirical results from two experiments, providing sufficient evidence to validate the proposed approach.
Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria are appropriate. The approach is intuitive, systematically decomposing the total counterfactual effect into agent-specific and state-specific effects, followed by further attribution using Shapley values and ICC. This provides a comprehensive causal analysis framework for multi-agent decision-making.
Theoretical Claims: I reviewed the theoretical proofs regarding the decomposition of counterfactual effects, and overall, the proofs appear complete and sound.
Experimental Designs Or Analyses: I find the experimental setups in the Gridworld and Sepsis environments to be reasonable. Both environments are designed with well-defined agent interactions, and the authors employed posterior sampling and attribution methods to validate the robustness of their proposed framework.
Supplementary Material: I reviewed the appendix section and the submitted code.
Relation To Broader Scientific Literature: One of the paper’s main contributions is its novel causal analysis perspective on multi-agent cooperation frameworks. Most existing multi-agent cooperation research focuses on designing coordination mechanisms to improve performance on specific tasks. In contrast, this paper provides an analytical tool for attributing decision-making outcomes, which could further enhance the design of cooperative frameworks by enabling a more detailed understanding of agent interactions.
Essential References Not Discussed: The paper discusses a sufficient number of related works.
Other Strengths And Weaknesses: One notable innovation in this paper is the introduction of Shapley values to further decompose the total agent-specific effect at the individual agent level. This approach adds granularity to counterfactual effect attribution.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your positive score and your kind words about our work. We are glad to hear that you find our approach well-supported, and appreciate its novelty, intuitiveness, and the comprehensive causal analysis it provides for multi-agent decision-making. We are also happy to see that you find our experimental evaluation reasonable and providing sufficient evidence to validate our approach.
Thank you again for all your positive comments. We are happy to answer any further questions or comments you might have. | Summary: This paper proposes a novel decomposition of counterfactual effects in multi-agent sequential decision-making settings. Building upon prior work on agent-specific counterfactual effects (cf-ASE), the authors present a bi-level decomposition separating the impact of an agent’s action into (i) how it affects the reachability of outcome-relevant states (r-SSE), and (ii) how much it contributes once those states are reached (ICC).
## Update after rebuttal:
I have read the authors' rebuttal. As I initially gave a weak acceptance, I have decided to maintain my original score.
Claims And Evidence: The authors present a bi-level decomposition of counterfactual effects into reachability (r-SSE) and individual contribution (ICC) terms and formalize this using Structural Causal Models in MMDP settings.
Clear theoretical definitions and mathematical formulation have been shown, and empirical evaluation in environments previously used in causal multi-agent studies (e.g., Sepsis), shows that the decomposition captures interpretable trends across different trust parameters.
However, one caveat is that some claims about practical applicability (e.g., potential use in real-world accountability systems) remain speculative since the experiments are limited to simulated domains, and assumptions such as noise monotonicity may not always hold in practice.
Methods And Evaluation Criteria: The method builds on Structural Causal Models over MMDPs, and introduces a bi-level decomposition into r-SSE (state reachability) and ICC (individual causal contribution). This method is conceptually grounded in prior causal inference frameworks and is logically appropriate for the multi-agent sequential setting.
For evaluation, the authors use two established benchmark environments — Graph and Sepsis — which are well-suited for modeling agent interdependencies and measuring counterfactual effects. Especially, the Sepsis environment (AI-clinician trust model) is directly relevant to applications in human-AI collaboration and accountability.
The limitation is that the experimental evaluation is focused only on simulated environments, and does not include other complex multi-agent RL benchmarks.
Theoretical Claims: The introduced theoretical claims appear valid and aligned with established causal inference literature.
Experimental Designs Or Analyses: The analysis includes variation across noise models, averaging over rollouts, and decomposition term breakdowns. The error bars and trend lines are presented clearly. In Figure 4, the authors demonstrate how ICC and reachability vary independently, which supports their claim that both aspects are necessary for full interpretation.
However, the environments are simulated and relatively low-dimensional. While the designs are sound for demonstrating the proposed decomposition, it remains unclear how well the method would generalize to high-dimensional and/or real-world domains.
Supplementary Material: The supplementary material shows the detailed proofs of Theorems 4.1 through 4.3, the pseudocode for the r-SSE and ICC estimation algorithms, and experimental details & additional results.
Relation To Broader Scientific Literature: This work extends Agent-Specific Effects (ASE) and counterfactual ASE (cf-ASE), introduced in the authors’ prior work (ICML 2024). The current paper decomposes the previously aggregated cf-ASE into two interpretable components, r-SSE (state reachability) and ICC (individual causal contribution), enhancing the interpretability of causal influence.
This paper leverages SCM formalism applied to sequential settings, building on prior causal RL literature, which uses SCMs to define interventional queries in dynamic systems. However, previous works typically focus on single-agent or non-decomposed effects, whereas this work targets multi-agent interactions with decomposition.
The decomposition proposed in this paper contributes to the growing interest in interpretable multi-agent RL and causal accountability, allowing system designers to trace which agents are causally responsible for which parts of an outcome, a key issue in safety-critical or social contexts.
Essential References Not Discussed: None
Other Strengths And Weaknesses: + Strengths
> The paper builds upon prior work (cf-ASE) and introduces a bi-level decomposition that adds interpretability and granularity to multi-agent causal analysis. While incremental, the conceptual refinement is practically meaningful.
+ Weaknesses
> The evaluation is restricted to relatively simple domains (e.g., Sepsis, Graph), and the decomposition is not tested in complex multi-agent RL benchmarks or real-world tasks. This limits the generalizability of the conclusions.
> Although well-executed, the core idea is a decomposition of a quantity (cf-ASE) already introduced by the same authors. It seems to be regarded as an incremental extension.
> The paper may be difficult to fully appreciate without reading their ICML 2024 paper, as many definitions and motivations are tightly linked to cf-ASE. This slightly affects self-containment.
Other Comments Or Suggestions: Some key notations appear before being fully explained. Consider adding a notation table or briefly introducing them earlier for clarity.
Questions For Authors: While the appendix discusses the computational complexity with the increasing number of agents N, are there other challenges (beyond computation) that may arise when scaling the decomposition to large multi-agent systems?
Your proposed decomposition allows for quantitatively separating agent or state influence into r-SSE and ICC. Could the authors provide evidence or examples showing that this separation leads to different decisions or policy adjustments, compared to using cf-ASE alone?
The method assumes access to an accurate SCM or simulator, but real-world models may be misspecified. How robust are r-SSE and ICC to such inaccuracies in structural functions?
While it is common to treat experimental control variables (like the trust level in Sepsis) as fixed scalars, some recent work considers them as a dynamic or latent quantity that evolves over time. Do the authors see the potential for extending their decomposition framework to handle such dynamic control variables, possibly by integrating them into the SCM or policy structure?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We are glad to see that you find our proposed method clear, logically appropriate and enhancing the interpretability of decision-making outcomes. We are also happy to see that you find our experimental testbed well-suited. Please find below our response to your comments and questions.
## Response to Comments and Questions
**Scalability and Generalizability.** Since Reviewer LcKu had a similar comment, we would like to kindly point you to our response to their review.
**Comparison to ASE.** Our approach to decomposing tot-ASE, termed ASE-SV, builds on the notion of agent-specific effects (ASE) introduced by (Triantafyllou et al., 2024). Fortunately, ASE is already well-defined in the MMDP setting, allowing us to leverage this concept directly. However, it is crucial to clarify that simply summing the agent-specific effects of individual agents does not yield tot-ASE. In fact, our experiments in the Sepsis environment reveal discrepancies of up to 95% in certain scenarios. To derive an *efficient* attribution method, we formulated the problem as a *cooperative game*. This formulation enabled us to define Shapley value in the setting of agent-specific effects. Furthermore, in Section 5 and Appendix F, we explain and formally define in this context a set of well-known desirable properties uniquely satisfied by Shapley value.
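As an illustrative aside (not part of the original rebuttal): a generic exact Shapley-value computation for a set-valued cooperative game, demonstrating the *efficiency* property invoked above (the attributions sum to the grand-coalition value). The characteristic function below is a toy additive game, not the agent-specific-effect game from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: phi_j = sum over subsets S of the other players of
    |S|! * (n - |S| - 1)! / n! * (value(S ∪ {j}) - value(S))."""
    n = len(players)
    phi = {}
    for j in players:
        others = [p for p in players if p != j]
        total = 0.0
        for r in range(len(others) + 1):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            for subset in combinations(others, r):
                s = frozenset(subset)
                total += weight * (value(s | {j}) - value(s))
        phi[j] = total
    return phi
```

For an additive game the Shapley value simply recovers each player's own contribution; in general, the values always sum to the value of the grand coalition, which is the efficiency property an attribution method needs in order to account for the full effect.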
Crucially, our work addresses a newly posed and complex problem that is clearly distinct, though related, from the focus of (Triantafyllou et al., 2024). Our key contribution lies in integrating concepts from a wide range of fields, including multi-agent systems (MMDPs), counterfactual reasoning (ASE), mediation analysis (Theorem 3.3), game theory (ASE-SV), information theory (r-SSE-ICC) and more, to tackle this novel challenge.
**Notation Table.** Thank you for your suggestion. We would like to kindly point out that there already exists a notation table in Appendix B of our paper.
**Q1.** While we have not explicitly tested how increasing the number of agents affects the practical performance of our approach, we speculate that scaling to larger multi-agent systems may require drawing more samples from the posterior distribution to ensure reliable counterfactual inference. This is because a larger number of agents increases the dimensionality of the joint action space, which in turn could introduce greater variability in our counterfactual estimates.
**Q2.** Appendix J.2 of our paper demonstrates how the proposed decomposition can be applied to accountable decision making, specifically in the context of *blame attribution* in multi-agent systems. Revisiting the Sepsis scenario from Section 1, we illustrate how our method can be used to determine the proportion of blame attributable to the clinician for their action at time-step 10. Note that this level of fine-grained accountability assessment cannot be achieved by solely examining the total effect and ASE.
**Q3.** First, we would like to clarify that in our Sepsis experiment, we do **not** assume access to the underlying causal model. Instead, our method relies solely on observational distributions. To enable counterfactual inference in this setting, we assume that noise monotonicity holds w.r.t. a chosen set of total orderings. To assess the robustness of our findings in the presence of potential violations of this assumption, we repeated the experiment from Section 6.2 across 5 randomly selected additional total orderings. The full results of this analysis are reported in Appendix N.
These results suggest that while the accuracy of individual counterfactual estimates can degrade when our causal assumptions are violated, the overall conclusions drawn from our analysis in Section 6.2 remain largely robust. This is encouraging, as it provides evidence supporting that our causal explanation formula (including r-SSE and ICC) can yield valuable insights in simulated domains where: (a) effect sizes and randomness reflect real-world settings, and (b) our theoretical assumptions might not hold.
**Q4.** This is an intriguing question, and one we have actually also considered ourselves. The short answer is yes: our effect decomposition framework can be extended to handle dynamic control variables by integrating them into the SCM. One way to do this is by modeling the underlying MMDP as a *mechanised SCM* (Kenton et al., 2023). In this formulation, dynamic control variables can be represented as *mechanism variables* controlling the functional behavior of the MMDP variables. For example, in the Sepsis environment, the clinician's trust level could be modeled not only as evolving over time, but also as being influenced by changes in the AI agent's policy.
Kenton, Zachary, et al. "Discovering agents." Artificial Intelligence. 2023.
## Conclusion
Thank you again for your comments and questions. We would be happy to answer anything else in addition. | null | null | null | null | null | null |
A Proximal Operator for Inducing 2:4-Sparsity | Reject | Summary: This paper proposes to get 2:4 weight sparsity gradually using the proximal gradient method on a per-matrix level for pruning large language models (LLMs) after pretraining.
1) Firstly, a special regularizer with 2:4 sparsity null space is proposed, and the authors show that we can control the structured sparsity of its corresponding proximal operator continuously by changing a hyper-parameter $\lambda$, both theoretically (see Lemma 10) and experimentally (see Figure 1).
2) Secondly, the authors propose to solve the proposed non-convex proximal operator efficiently by dividing it into three convex subproblems (see Theorem 7), based on the optimal point classification lemma (see Lemma 4) and the convexity of the space composed of points with positive semi-definite hessian matrices (see Lemma 5).
3) Thirdly, the masked-gradient update is proposed to partially compensate for the pruning loss, which can be adopted by all the methods for post-training pruning.
4) Lastly, experiments in toy settings and real LLM pruning tasks demonstrate the proposed method can achieve state of the art performance.
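For readers less familiar with the format: hard 2:4 pruning keeps the two largest-magnitude weights in every contiguous group of four. A minimal reference sketch of the one-shot magnitude baseline (our own NumPy illustration, not the paper's proximal method):

```python
import numpy as np

def prune_2_4(w):
    # Keep the 2 largest-magnitude entries in every group of 4 (2:4 sparsity),
    # zeroing the other two -- the hard masking that the paper's regularizer
    # relaxes into a gradual, continuous process.
    groups = w.reshape(-1, 4)
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]  # 2 smallest per group
    out = groups.copy()
    np.put_along_axis(out, drop, 0.0, axis=1)
    return out.reshape(w.shape)
```

Each group of four independently ends up with exactly two nonzero entries, which is the hardware-supported pattern.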
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence, though the improvement seems marginal in LLM pruning experiments.
Methods And Evaluation Criteria: The proposed method is instructive for post-training structured sparsity, though its application is limited to N:M sparsity for N < 3.
Theoretical Claims: 1) The proofs of Theorem 1, Fact 2, and Lemmas 3-5 are correct. Corollary 6 seems trivial and a little confusing, because the authors do not clearly point out which targets have linear constraints. A better implementation of Algorithm 1 may be to reduce the dimension when negative numbers appear during the gradient descent, according to the technique used to prove Lemma 3. I did not check the correctness of Fact 8 or the rationality of Conjecture 9.
2) There is a minor mistake in line 748: $\partial_if(w)<0$ is false, it should be $\partial_if(w)>0$, and this does not affect the validity of the whole proof.
Experimental Designs Or Analyses: The experimental designs are sound in general, yet with some omissions:
1) In Table 1, results of prox without GD are not provided.
2) There is no ablation study on the number of calibration samples (see Table 17 in [1]).
[1] A simple and effective pruning approach for large language models. In ICLR, 2024.
Supplementary Material: None.
Relation To Broader Scientific Literature: This paper explores regularizers for 2:4 sparsity, and develops a special proximal gradient method to enhance the sparsity of weights gradually. The squared loss can be found in formula (1) of [2].
[2] Sparsegpt: Massive language models can be accurately pruned in one-shot. In ICML, 2023.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: 1) Table 1 shows that the performance improvement achieved by the proposed method is marginal on models with fewer than 8B parameters, and absent on models with more than 8B parameters.
2) The masked-gradient update seems useless for wanda when pruning models with more than 8B parameters (see the validation perplexity on Wiki in Table 1).
3) The proposed method is more time-consuming than the previous approaches. Further, it requires an extra hyper-parameter schedule, and there is no theory to guide its implementation.
4) This paper is well-written and easy to follow, though in my view, Section 3.3 (Solution of the 2:4-Proximal Operator) needs to be reorganized for better understanding (see Theoretical Claims).
5) There are some inappropriate marks and layouts used in this paper: $w\in W$ in Equation (8) (see line 190) is incompatible with Equation (6) (see line 210); the position of Figure 1 is inappropriate since it appears on the top of page 2 and is quoted by Section 3 on page 4.
Other Comments Or Suggestions: None.
Questions For Authors: 1) Why do you adopt exponential schedules for $\lambda$ instead of others?
2) Figure 2 shows that the proposed method is more effective when the correlation between input features is high. What does such correlation look like for LLMs? Is there any real-world task to prove your proposed method has a prominent advantage?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for thoroughly checking our theoretical claims and helping to improve our presentation. We will carefully take your considerations into account and will revise Section 3.3 around Corollary 6 to ensure it is easier to follow.
We want to briefly reply to some of your comments and hope that this helps for clarification. In particular we ran a new ablation study on the relevance of calibration samples that we will add to the paper.
> There is no ablation study on the number of calibration samples
Thank you, this is an important point and was also raised by reviewer 5gi2. We ran a similar ablation study to wanda on the 3B model. Except when using only a single sample, our proposed prox+GD outperforms all other methods when using the same amount of calibration data. Please check our reply to 5gi2 for the full results.
> In table 1, results of prox without GD do not provided.
Indeed this is on purpose, we do not want to propose prox without masked gradient as a method. This would mean that the optimization stops as soon as we have a 2:4 mask, without allowing enough iterations for the remaining weights to optimally adjust to this mask.
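To illustrate what such masked gradient steps do once a mask exists, here is a minimal sketch (our own simplification: plain gradient descent on the local squared loss; the function name, `lr`, and the NumPy framing are illustrative, not the authors' implementation):

```python
import numpy as np

def masked_gd_step(W, mask, X, W_dense, lr=1e-2):
    # One gradient step on the local squared loss ||W X - W_dense X||_F^2,
    # projected back onto the mask so only the unpruned entries (mask == 1)
    # keep adjusting to compensate for the pruned ones.
    grad = 2.0 * ((W @ X) - (W_dense @ X)) @ X.T
    return (W - lr * grad) * mask
```

Iterating such steps with the mask held fixed lets the surviving weights re-fit the dense layer's outputs.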
> Why do you adopt exponential schedules for $\lambda$ instead of others?
First note (Section 3.2) that having a regularizer that grows exponentially guarantees that the regularizer eventually is large enough such that the solution to the proximal operator will be exactly 2:4 sparse, which we need to ensure finite runtime. In other words: without making assumptions about the problem space, the regularizer eventually needs to be scheduled towards infinity. Furthermore, as we can empirically see from Figure 5 in the appendix our exponential schedule leads to a good balance of runtime and accuracy. We believe that at the initial steps of the algorithm it is important to not increase the regularizer too quickly, as this results in committing to a certain sparsity pattern too early. On the other hand once the sparsity pattern is mostly established we can drive the sparsification faster. This is well done by an exponential schedule.
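As an illustration of this reasoning, a minimal sketch of an exponential schedule (the constants `lam0` and `growth` are hypothetical placeholders, not the values used in the paper):

```python
def exponential_lambda_schedule(step, lam0=1e-4, growth=1.1):
    # Grow the regularizer strength geometrically: slow at first, so the
    # sparsity pattern is not committed to too early, then fast once the
    # pattern is mostly established. Driving lambda towards infinity
    # guarantees the proximal solution eventually becomes exactly 2:4 sparse.
    return lam0 * growth ** step
```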
> Figure 2 shows that the proposed method is more effective when the correlation between input features is high. What does such correlation look like for LLMs? Is there any real-world task to prove your proposed method has a prominent advantage?
Please also see our reply to reviewer fgi2 ("Performance on 70B scale"). We are in fact struggling to fully nail down what exactly it is that changes between the model scales. On small models using off-diagonal information seems to be relevant, whilst on large models it does not help significantly. Whilst currently the research community (and us) are focusing on pruning open source Llama models, it is conceivable that on other future models, the off-diagonal information will again be more relevant even at larger scales.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. The ablation study on the number of calibration samples for pruning the 3B model is useful, and your explanation of using exponential schedules makes sense.
I agree with treating the prox + masked gradient update as a complete method, yet I still think that prox without GD, or prox with different GD steps, can be an important ablation experiment for your paper.
To better characterize the properties of the matrix from real-world LLMs and analyse the effectiveness of your proposed method, I think references on Basis Pursuit for Compressed Sensing will be helpful. You can also check the Kruskal Rank, Mutual Coherence, Restricted Isometry Property of LLM matrices with different sizes.
---
Reply to Comment 1.1.1:
Comment: Thank you for the timely reply and the opportunity to follow up once more.
> I still think that prox without GD, or prox with different GD steps can be an important ablation experiment for your paper.
Indeed, as an ablation to gain further insight into the prox+GD method, it helps to study the impact of the number of GD steps for prox as well. We have now run this for OpenLlama3B, present the results in the table below, and created a plot here: https://ibb.co/Nnd8JMhN . Aside from the number of GD steps, the setup is identical to the one used in the main paper.
Our main observations: First, even with zero GD steps after masking, prox outperforms wanda with 1000 GD steps (c4: 18.23, wiki: 33.05 | Table 1 of the submission). This confirms that prox indeed finds a better mask for this model. Second, we observe that the masked gradient steps further improve the perplexities; however, they converge quite quickly. These findings are aligned with the GD-step ablations we did for wanda on the 13B model (see Figure 5).
We will streamline those additional ablations in the final version of the paper and discuss the insights in the main part.
| steps after masking | 0 | 50 | 100 | 200 | 400 | 800 | 1000 | 1600 |
|---------------------|------:|------:|------:|------:|------:|------:|------:|------:|
| c4 | 16.92 | 16.70 | 16.56 | 16.41 | 16.31 | 16.27 | 16.27 | 16.26 |
| wiki | 30.63 | 29.99 | 29.58 | 29.12 | 28.84 | 28.66 | 28.62 | 28.63 |
> To better characterize the properties of the matrix from real-world LLMs and analyse the effectiveness of your proposed method, I think references on Basis Pursuit for Compressed Sensing will be helpful. You can also check the Kruskal Rank, Mutual Coherence, Restricted Isometry Property of LLM matrices with different sizes.
Thank you for the additional pointers. We already tried to gain further understanding by looking, for example, at the effective rank, or at the relation of weight mass on the diagonal vs. off-diagonal entries (of the Hessian). We will look into the additional references you provided and will include them in the discussion. We hope to gain some further insights here -- again, those are equally important for understanding the existing methods Wanda and SparseGPT. However, at the time being, we can of course not *promise* anything in this regard.
One simple hypothesis we have ruled out is that the Hessian is more diagonal in larger models. If that were the case, Theorem 1 would imply that Wanda performs well and that GD steps have limited effect. However, our empirical measurements do not support this—the Hessian is not noticeably more diagonal in larger models than in smaller ones. We believe that the true cause might be hidden in the interplay of weight and activations, which makes the analysis more challenging.
Thank you for the constructive feedback—we hope this clarifies our contributions and that you will consider supporting our paper. | Summary: The proposed method solves the complex proximal operator for 2:4 sparsity by optimized masked gradient updating. The theoretical analysis provides clear support for the mechanism of the proposed method. The authors have conduct extensive experiments to validate the effectivenss of the proposed method.
Claims And Evidence: Yes, the authors provide clear therotical and experimental results to prove their claim.
Methods And Evaluation Criteria: The authors benchmark the proposed method for sparsifying representative LLMs, particularly demonstrating its effectiveness in improving the performance of existing 2:4 sparsity methods. The evaluation criteria are generally sound. The authors further provide more context on speedups, with code that showcases a 2:4 sparsity benchmark.
Theoretical Claims: I have checked the correctness of all proofs for theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses are sound. The authors provide both toy experiments to illustrate the mechanism of the propose method and extensive comparative results against existing LLM pruning approaches.
Supplementary Material: I have reviewed the supplementary material, which provide detailed proofs for the theorem in the manuscript and more implementation details of the proposed method
Relation To Broader Scientific Literature: This paper offers fresh insights for 2:4-Sparsity, particularly for its application to LLMs.
Essential References Not Discussed: No. All necessary related work is cited and discussed.
Other Strengths And Weaknesses: Strengths:
* This paper presents a complete review of the proximal operator's solution, demonstrating its effective resolution in the context of 2:4 sparsity, providing substantial theoretical support for future studies.
* The method can be directly applied to large language model compression and optimization in resource-constrained environments, considerably enhancing inference efficiency.
Weaknesses:
* Although the proposed method is theoretically valid, its complexity may cause computational and implementation burdens, particularly with LLMs.
Other Comments Or Suggestions: It would be better to show some real speedup for 2:4 sparsity in LLMs.
Questions For Authors: Minor typo: Is there some specific reasons to use 'WandA' instead of 'Wanda'?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for checking the correctness of all proofs for our theoretical claims and your positive assessment. We want to briefly answer two comments:
> It would be better to show some real speedup for 2:4 sparsity in LLMs.
We kindly ask you to check our response to reviewer 8YCw where we provide more context on speedups -- the code for our benchmark is attached below. We believe that realizing the full speedup of 2:4 is an important effort orthogonal to our contribution which focuses on the accuracy.
> Minor typo: Is there some specific reasons to use 'WandA' instead of 'Wanda'?
Yes, we thought it is more instructive to use **W**eights **and** **A**ctivations as abbreviation. However, we realized that the authors themselves used Wanda / wanda. So we will change it to their notation for consistency.
We are happy to answer any further questions that might come up.
---
---
### 2:4 sparsity benchmark with pytorch
```python
import pandas as pd
import torch
import time
from torch.sparse import to_sparse_semi_structured

shapes = {
    "8b_qkv_proj": (4096 + 1024 + 1024, 4096),  # assuming fused QKV
    "8b_o_proj": (4096, 4096),
    "8b_up_gate_proj": (14336 * 2, 4096),  # assuming fused Up/Gate
    "8b_down_proj": (4096, 14336),
    "70b_qkv_proj": (8192 + 1024 + 1024, 8192),  # assuming fused QKV
    "70b_o_proj": (8192, 8192),
    "70b_up_gate_proj": (28672 * 2, 8192),  # assuming fused Up/Gate
    "70b_down_proj": (8192, 28672),
}

REPEATS = 10
WARMUPS = 10
BATCH_SIZES = [1, 16, 32, 8192, 8192 * 2, 8192 * 4]

results_df = pd.DataFrame(columns=["Layer", "Batch Size", "Speedup"])


# Function to benchmark
def benchmark_matmul(weight, x):
    total_time = 0
    torch.cuda.synchronize()
    for _ in range(WARMUPS):
        y = torch.mm(weight, x)
    for _ in range(REPEATS):
        torch.cuda.synchronize()
        start = time.time()
        y = torch.mm(weight, x)
        torch.cuda.synchronize()
        end = time.time()
        total_time += (end - start)
    return total_time / REPEATS


with torch.no_grad():
    for matrix in shapes:
        for bs in BATCH_SIZES:
            out_features, in_features = shapes[matrix]
            # Create input and dense weight
            dense_weight = torch.Tensor([0, 0, 1, 1]).tile((out_features, in_features // 4)).half().cuda()
            x = torch.rand(in_features, bs).half().cuda()
            sparse_weight = to_sparse_semi_structured(dense_weight)

            # Run benchmarks
            dense_time = benchmark_matmul(dense_weight, x)
            sparse_time = benchmark_matmul(sparse_weight, x)
            print(f"Matrix: {matrix}, bs: {bs}, Speedup: {dense_time / sparse_time:.2f}x")
            results_df = pd.concat(
                [results_df, pd.DataFrame({"Layer": matrix, "Batch Size": bs, "Speedup": dense_time / sparse_time}, index=[0])],
                ignore_index=True,
            )

results_df.to_csv("sparse_matmult_results.csv", index=False)
```
---
Rebuttal Comment 1.1:
Comment: I acknowledge and appreciate the authors' efforts in providing the speedup results of 2:4 sparsity. I also agree with the authors that practical speedup doesn't constitute a key limitation of this individual paper, as realizing it requires collective progress across the field. I maintain my positive assessment of this work.
---
Reply to Comment 1.1.1:
Comment: Thank you for the explicit response to our rebuttal and for the positive assessment. | Summary: This paper proposes a proximal operator to improve the one-shot N:M weight pruning for large language models. The paper finds better sparsity masks in trained models by minimizing a regularizer jointly with local squared loss though deriving the proximal operator. Besides the algorithm for better masks, the paper introduces masked gradient updates to further minimize the local squared loss given a mask. These techniques can improve existing pruning methods with 2:4 sparsity on up to 13B models. On 70B models, the performance is on par. The authors also illustrate their method on toy problems.
Claims And Evidence: The claims on the effectiveness of gradient descent and proximal operator made in the submission are supported by clear and convincing evidence
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem: inducing 2:4-sparsity in a one-shot manner and evaluating the perplexity.
Theoretical Claims: I have checked the correctness of proofs for these theoretical claims:
Proof B.1. for Theorem 1
Proof B.2. for Fact 2
Proof B.3. for Lemma 3
Proof B.4. for Lemma 4
Proof B.5. for Lemma 5
Proof B.6. for Corollary 6
Proof B.7. for Theorem 7
Proof B.8. for Fact 8
I also did not find counterexamples for Conjecture 9
Experimental Designs Or Analyses: I have checked the validity of the experimental designs and analyses.
Experiments are valid:
1. Toy experiments in Section 4.1 illustrate that prox can better utilize the correlation between input features.
2. Table 1 tests the dense models, state of the art sparse models with and without gradient updates, and prox with gradient updates. The result can support the claims: “On models up to 13B we improve over previous state of the art algorithms, whilst on 70B models we match their performance”.
Supplementary Material: There is no supplementary material to review
Relation To Broader Scientific Literature: Although 2:4-sparsity is an important hardware feature provided by NVIDIA GPUs, it is unclear how to tailor LLMs to such structural sparsity without sacrificing model performance. This paper proposes an effective method for inducing 2:4-sparsity.
Essential References Not Discussed: I didn’t identify essential references not discussed in the paper.
Other Strengths And Weaknesses: The proximal operator is effective for small and medium models but there is no comparison of the inference latency between dense and sparse models.
Other Comments Or Suggestions: I don’t have other comments or suggestions.
Questions For Authors: Are there better optimization objectives than the squared loss (Equation (1)) given that perplexity is the ultimate objective?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for checking all our theoretical statements and the review of our paper. Below we provide a discussion on other optimization objectives as suggested in your review and some data and considerations on speedups.
### Optimization objectives beyond squared loss
First, we recall the motivation for using the squared loss. This builds on Hassibi & Stork (1992), "Second order derivatives for network pruning: Optimal Brain Surgeon", Section 2. If Assumption A1 ("the model is trained to convergence") holds, then by a Taylor expansion of the loss, a squared loss with the "global" Hessian is the first relevant order.
Note, however, that this global Hessian scales quadratically with the number of total parameters, which is completely impossible to handle for LLMs. Now we can make a second assumption A2: "suppose that the Hessian is *block-diagonal*, where each block contains the parameters of a single linear layer."
Combining assumptions A1 & A2 we have that in first relevant order the *local squared loss* is the best objective to optimize -- which is what we consider. Furthermore, a linear squared loss allows for efficient optimization with theoretical guarantees. It would be non-trivial to (efficiently) apply our proximal optimization to other objectives.
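For concreteness, the chain of reasoning can be sketched as follows (our notation, consistent with the Optimal Brain Surgeon expansion cited above). With a pruning perturbation $\delta w = \hat w - w$ and a vanishing gradient at convergence (A1), the loss change is, to first relevant order,

$$\Delta \mathcal{L} \;\approx\; \tfrac{1}{2}\, \delta w^{\top} H \, \delta w .$$

Under the block-diagonal assumption (A2), the block of $H$ belonging to one linear layer with calibration inputs $X$ is $X X^{\top}$ (up to scaling), so minimizing the loss change per layer reduces to the local squared loss

$$\min_{\hat W \ \text{2:4-sparse}} \; \bigl\| \hat W X - W X \bigr\|_F^2 ,$$

which matches the objective considered in the paper.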
That said, neither assumption A1 nor A2 is fully met in practice.
Recent work of Dong et al. (ICML'24), "Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models", makes a systematic study of which information is relevant for pruning. However, their approach requires significantly more computational resources, as it relies on end-to-end gradient information. Furthermore, they consider only a static pruning approach, i.e., no gradual weight updates during pruning. Unfortunately, their codebase does not support the Llama-3 series of models, so we were unable to compare directly.
### Speedup
NVIDIA reports 1.1x to 1.2x speedups with 2:4 sparsity on ResNext-101 model (https://tinyurl.com/uhd2rhvb). However, we are not able to independently verify their results.
Unfortunately, we were also unable to measure realized speedups using the current vLLM package with a 2:4 sparse checkpoint. We ran a PyTorch benchmark on an A100 GPU. Note that PyTorch support is in itself experimental. Notably, we observed significant slowdowns in many memory-bound workloads.
However, we do see speedups up to 1.6x on some shapes which is consistent with what wanda reported (their Table 5). We anticipate that as the hardware and software stack matures, these speedups will improve.
We hope this is not considered a shortcoming of our work. Fully realizing the speedup is an important effort orthogonal to our contribution.
[Benchmark script in the response to reviewer fybz]
| Layer | Batch Size | Speedup |
|------------------|------------|---------|
| 8b_qkv_proj | 1 | 0.07 |
| 8b_qkv_proj | 16 | 0.08 |
| 8b_qkv_proj | 32 | 0.08 |
| 8b_qkv_proj | 8192 | 0.92 |
| 8b_qkv_proj | 16384 | 1.22 |
| 8b_qkv_proj | 32768 | 1.32 |
| 8b_o_proj | 1 | 0.05 |
| 8b_o_proj | 16 | 0.06 |
| 8b_o_proj | 32 | 0.06 |
| 8b_o_proj | 8192 | 0.75 |
| 8b_o_proj | 16384 | 1.08 |
| 8b_o_proj | 32768 | 1.26 |
| 8b_up_gate_proj | 1 | 0.20 |
| 8b_up_gate_proj | 16 | 0.23 |
| 8b_up_gate_proj | 32 | 0.24 |
| 8b_up_gate_proj | 8192 | 0.83 |
| 8b_up_gate_proj | 16384 | 0.90 |
| 8b_up_gate_proj | 32768 | 0.88 |
| 8b_down_proj | 1 | 0.11 |
| 8b_down_proj | 16 | 0.12 |
| 8b_down_proj | 32 | 0.12 |
| 8b_down_proj | 8192 | 1.29 |
| 8b_down_proj | 16384 | 1.51 |
| 8b_down_proj | 32768 | 1.52 |
| 70b_qkv_proj | 1 | 0.15 |
| 70b_qkv_proj | 16 | 0.18 |
| 70b_qkv_proj | 32 | 0.18 |
| 70b_qkv_proj | 8192 | 1.28 |
| 70b_qkv_proj | 16384 | 1.41 |
| 70b_qkv_proj | 32768 | 1.47 |
| 70b_o_proj | 1 | 0.13 |
| 70b_o_proj | 16 | 0.15 |
| 70b_o_proj | 32 | 0.15 |
| 70b_o_proj | 8192 | 1.26 |
| 70b_o_proj | 16384 | 1.43 |
| 70b_o_proj | 32768 | 1.44 |
| 70b_up_gate_proj | 1 | 0.53 |
| 70b_up_gate_proj | 16 | 0.56 |
| 70b_up_gate_proj | 32 | 0.56 |
| 70b_up_gate_proj | 8192 | 0.91 |
| 70b_up_gate_proj | 16384 | 0.90 |
| 70b_up_gate_proj | 32768 | 0.93 |
| 70b_down_proj | 1 | 0.30 |
| 70b_down_proj | 16 | 0.32 |
| 70b_down_proj | 32 | 0.33 |
| 70b_down_proj | 8192 | 1.53 |
| 70b_down_proj | 16384 | 1.61 |
| 70b_down_proj | 32768 | 1.53 | | Summary: The paper presents a post-training pruning method to induce N:M sparsity. The two main innovations of this article are:
1) It proposed a Gradient Descent approach that can be implemented to any post-training pruning method to compensate the pruning loss.
2) Combining with the proposed Regularization, this article is able to induce N:M sparsity gradually.
Claims And Evidence: Most of the claims made in the submission are supported by clear and convincing evidence.
However, the authors claim: "Our approach will consume more compute and time, but this is well invested. Compared against the cost of training the models and the inference cost, the additional cost to find a better mask and model is well invested." Yet on large models, such as LLaMA3.1-70B, the proposed method does not show an advantage over Wanda alone on the wikitext2 dataset. Also, the running time of the proposed method on the large models is not reported.
Methods And Evaluation Criteria: The proposed methods make sense for the problem.
Theoretical Claims: I checked the theoretical claims and I didn't find mistakes.
Experimental Designs Or Analyses: I have checked the experimental designs
Supplementary Material: I have reviewed all the supplementary material.
Relation To Broader Scientific Literature: This article proposed a gradient descent method that can be implemented to all the existing post-training pruning methods.
Essential References Not Discussed: The related works are well discussed in this article.
Other Strengths And Weaknesses: **Weakness:**
1. The proposed method does not provide an advantage for large models. The authors should investigate why it fails to improve performance at larger scales.
2. The proposed method requires 1,000 calibration samples, which is significantly more than those used in other post-training pruning methods such as Wanda and SparseGPT. The paper should also examine how the number of calibration samples impacts the effectiveness of their approach.
3. My concern is that, despite using a large number of calibration samples and having a long runtime, the proposed method still fails to sufficiently bridge the performance gap caused by N:M pruning.
As the article says, the pruned 13B/7B models fall clearly behind the dense 7B/3B models. Therefore, I do not see the potential of the proposed method.
Other Comments Or Suggestions: Please see the questions.
Questions For Authors: 1. The article claims the proposed method will "consume more compute and time, but this is well invested." Could you also report the runtime for the 70B model?
2. What is the number of calibration samples used in SparseGPT and Wanda for comparison?
3. Could you perform a sensitivity analysis on the number of calibration samples to assess its impact on performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the detailed review and confirming that our theoretical claims and the proposed method make sense. We understand that your reluctance comes from missing ablations on *calibration samples* as well as the unconvincing *performance* on the current 70B scale Llama models. We provide a new ablation on calibration samples below and discuss the 70B performance.
We hope that our conceptual contributions as well as the strong empirical improvements over existing SOTA on smaller models and across calibration samples can offset your concerns on the 70B model. Please let us know if there is anything else we can elaborate on.
### **Ablation study on the effect of calibration samples:**
First let us clarify that for our experiments in the submission we were using the exact same number of calibration samples for all methods (we will clarify this at the start of Section 4.3). Additionally we now ran a new ablation on the calibration samples for the 3B model (we consider the same number of samples as in Table 17 of the wanda paper).
We make three main observations:
- a/ wanda (which uses only the diagonal of the Hessian) converges quickly as known from prior work.
- b/ methods that use masked gradient updates can consistently improve their validation perplexity when using more calibration data, albeit with diminishing returns (note that the number of samples is increased exponentially here).
- c/ Except when using only a single sample, our proposed prox+GD outperforms all other methods when using the same amount of calibration data.
We are thankful for this suggestion and include those results in the updated paper, as they provide valuable further insights.
| calibration samples | 1 | 1 | 16 | 16 | 32 | 32 | 64 | 64 | 128 | 128 | 256 | 256 | 512 | 512 | 1024 | 1024 | 2048 | 2048 |
|---------------------|:------:|:--------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| Pruning Method | C4 | Wiki | C4 | Wiki | C4 | Wiki | C4 | Wiki | C4 | Wiki | C4 | Wiki | C4 | Wiki | C4 | Wiki | C4 | Wiki |
| wanda | 32.11 | 72.89 | 30.20 | 63.03 | 29.67 | 60.16 | 29.70 | 61.63 | 29.66 | 61.56 | 29.58 | 61.51 | 29.57 | 60.82 | 29.52 | 60.47 | 29.50 | 60.54 |
| wanda + GD | 35.72 | 100.41 | 21.01 | 40.92 | 19.51 | 36.43 | 18.99 | 34.86 | 18.70 | 33.96 | 18.50 | 33.71 | 18.36 | 33.55 | 18.23 | 33.05 | 18.16 | 32.88 |
| sparsegpt | 557.04 | 2,859.53 | 21.30 | 41.01 | 19.92 | 35.97 | 19.31 | 33.92 | 19.23 | 33.60 | 19.01 | 35.03 | 18.79 | 33.83 | 18.80 | 33.95 | 18.64 | 33.38 |
| sparsegpt + GD | 302.03 | 1,675.12 | 19.77 | 38.06 | 18.01 | 32.14 | 17.44 | 30.88 | 17.12 | 30.13 | 16.89 | 30.21 | 16.81 | 30.13 | 16.74 | 29.56 | 16.62 | 29.32 |
| prox +GD | 70.96 | 149.96 | 18.85 | 35.80 | 17.53 | 30.32 | 17.11 | 29.70 | 16.73 | 29.19 | 16.38 | 28.49 | 16.34 | 28.51 | 16.28 | 28.58 | 16.19 | 28.09 |
---
---
### **Performance on 70B scale:**
We fully agree with your concern that for the Llama-3.1 70B models our method despite using more resources does not improve performance. *We thus would recommend to use wanda for those models*. While we aimed to be transparent about this, we will make it even clearer to not misguide researchers interested in this model particularly.
Now for the question of *why* it does not work well on 70B models:
Notice that we can qualitatively define a hierarchy between the methods by how much information of the Hessian matrix they use. Wanda uses only the diagonal entries (see also Theorem 1), SparseGPT already uses the full Hessian, and prox makes even heavier use of the Hessian (see our toy experiment of Figure 2).
We observe that on the smaller models, SparseGPT quite consistently outperforms Wanda --> it seems that the off-diagonal elements of the Hessian are relevant. It is in those cases that doubling down on using Hessian information (through prox or masked gradient updates) further improves performance. On the other hand, at the 70B scale, plain Wanda seems close to optimal -- neither SparseGPT, prox, nor masked gradient updates yield relevant improvements there. This seems to indicate that the behavior at this scale is largely determined by the diagonal entries of the Hessian matrix -- in which case Wanda (by our Theorem 1) is optimal.
This is also an interesting discussion, and we will add it to the paper as well.
Elucidating the Design Space of Multimodal Protein Language Models | Accept (spotlight poster) | Summary: The paper discusses the design space of multimodal protein language models through the lens of discretizing protein structures. The paper attempts to address a key challenge with information loss of fine-grained structural details upon tokenization. Multimer structures are considered, and BIT-based modeling is used to improve structural tokenization and generation from multimodal models.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: Well contextualized and motivated with respect to challenges in the field, specifically MultiFlow and ESM3.
Essential References Not Discussed: May be of interest to the authors:
Lu, Amy X., et al. "Compressing the Latent Space of Single-Sequence Protein Predictors for Multimodal Generation." ICML 2024 Workshop on Efficient and Accessible Foundation Models for Biological Discovery.
Other Strengths And Weaknesses: Considering representations of multimeric systems is an important problem in protein design and progress in this area could have significant downstream applications. Addressing limitations in current structure tokenization methods is similarly important and well-motivated.
Can the authors comment on any simple baselines or alternative approaches they pursued to resolve the identified problems (O1, O2, and O3)?
The paper discusses SaProt but does not explicitly compare to it or other related "structure-aware" pLMs beyond ESM3. Can the authors comment on their reasoning for including/not including specific comparisons?
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for your comments, which have greatly improved our manuscript! We address your concerns below. We sincerely thank you once again and welcome any further feedback!
> Q1: Can the authors comment on any simple baselines or alternative approaches they pursued to resolve the identified problems (O1, O2, and O3)?
We recap our approaches to address O1-O3 and discuss potential alternative approaches below.
**O1: Structure tokenization results in information loss**
**Our approach:** We train an additional continuous diffusion module ResDiff to recover the lost residuals of discrete tokens.
**Alternatives:** Training DPLM-2 on continuous tokens. However, it's unclear whether this approach would work for joint modeling of structure and sequence, since sequence is inherently discrete.
**O2: High reconstruction accuracy of structure tokenizer does not ensure better structural generative results.**
**Our approach:** We primarily improve the designs of latent structure modeling, including structure-aware generative modeling, architectural designs, and representation alignments.
**Alternatives:** Adopt direct modeling in the data space following a similar manner as Proteina [1]. Similarly, it's unclear if direct modeling is robust for joint modeling of discrete sequences and continuous 3D structures.
**O3: Multimodal PLM gets index-based structure token prediction miserably wrong.**
Small bit-wise changes could result in a dramatically different index label. This challenge intensifies as codebook size grows, making direct index prediction even more difficult.
**Our approach:** Finer-grained token prediction might resolve this challenge. We achieve finer-grained prediction at the "dimension level" by using bit-based labels in contrast to index-based labels.
**Alternatives:** Another potential direction is to hierarchically tokenize the structures to obtain fine-grained tokens at the "resolution" level, following methods like RQ-VQVAE [2] and VAR [3]. However, it remains unclear how to naturally define the "resolution" of proteins.
> Q2: The paper discusses SaProt but does not explicitly compare to it or other related "structure-aware" pLMs beyond ESM3. Can the authors comment on their reasoning for including/not including specific comparisons?
Thanks for your question. In our submitted version, we focus on the generative capabilities of protein structure (e.g., folding). However, SaProt focuses on the protein representation learning ability, which utilizes structure tokens as the representation for structure input and is not capable of generating structure.
In the table below, we also provide the results of structure-aware protein representation learning, following the setting of SaProt. Due to the limited time, we only evaluate on the HumanPPI and DeepLoc Subcellular tasks, and we will provide results of more protein predictive tasks in the final version.
As seen, the proposed bit-based modeling is able to further improve the representation learning performance of DPLM-2. We hypothesize that the bit-based modeling, which is finer-grained, is more suitable to learn and enables the multimodal PLM to capture the structural pattern in latent space more effectively, thus enhancing structure-aware representation learning.
| Model | HumanPPI | DeepLoc Subcellular |
|---|---|---|
| | Acc (%)↑ | Acc (%)↑ |
| SaProt | 86.41 | 85.57 |
| DPLM-2 | 84.44 | 82.98 |
| DPLM-2 Bit | 88.89 | 83.39 |
[1] Geffner et al. Proteina. ICLR 2025.
[2] Lee et al. RQ-VAE. CVPR 2022.
[3] Tian et al. VAR. NeurIPS 2024.
> Essential References Not Discussed: Lu, Amy X., et al. "Compressing the Latent Space of Single-Sequence Protein Predictors for Multimodal Generation."
Thanks for pointing this out to us, and we will elaborate on our discussion below in the final version.
This work compresses the latent space of ESMFold to form a joint latent space of protein sequence and structure, and learns a continuous generative model to sample compressed latent embeddings, which are then decoded by the ESMFold structure head and a learned sequence head. On the other hand, multimodal protein language models like DPLM-2 use a discrete diffusion generative framework to generate discrete structure tokens, which are then decoded by the learned structure detokenizer.
The CHEAP embedding introduced by this work is an effective compression of ESMFold latent space. Integrating the CHEAP embedding with multimodal protein language models holds great potential for effective structural modeling. This is a very exciting direction and we will leave it for future work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. As I recommended acceptance in my initial review, I will maintain that recommendation and score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer RSLA,
Thank you for taking the time to carefully review our work and for your thoughtful feedback. We sincerely appreciate your feedback and your efforts in engaging with our rebuttal, and we are grateful for the opportunity to address your concerns.
Best,
Authors | Summary: The manuscript systematically explores the design space of multimodal PLMs, aiming to identify and address their existing limitations. The authors highlight tokenization loss and inaccurate structure token predictions as major bottlenecks in structure prediction performance. To mitigate these issues, they propose multiple strategies, including bit-wise prediction, quantization residue modeling, and a hybrid generative approach that integrates the pretrained encoder-decoder with the PLM. The study is supported by comprehensive analyses and experiments, which empirically validate the conclusions and demonstrate notable performance improvements.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The claims in the manuscript are supported by empirical results.
Experimental Designs Or Analyses: Experiments are well designed and analyses are logically reasonable.
Supplementary Material: I reviewed all the supplementary material.
Relation To Broader Scientific Literature: Multimodal protein language models have been shown in previous research to be highly effective for protein representation. However, their performance in structure prediction remains suboptimal. This paper makes key contributions by identifying the major limitations that restrict performance and proposing targeted strategies to overcome them, leading to significant improvements in folding prediction. Enhancing structure prediction not only increases modeling accuracy but also improves structure generation, ultimately advancing protein design.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
The paper is well-written, with a clear structure and comprehensive experiments.
Weaknesses:
1. As mentioned in the supplementary material, the manuscript does not currently address structure-conditioned protein generation or structure-aware protein predictive tasks, which are crucial for functional protein design.
2. The performance on multimer data is particularly intriguing. When trained on SwissProt without PDB multimer training data, the TM-score is higher than when fine-tuned with PDB multimer data, yet the RMSD is worse. A more in-depth analysis of this phenomenon would provide valuable insights.
Other Comments Or Suggestions: No other comments.
Questions For Authors: Will the hybrid generative approach significantly increase training time? Providing detailed training time measurements would be helpful for assessing its feasibility and computational cost.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for your comments, which have greatly improved our manuscript! We address your concerns below. We sincerely thank you once again and welcome any further feedback!
> W1: The current manuscript does not address structure-conditioned protein generation or structure-aware protein predictive tasks.
Thanks for your suggestion. We provide additional results on structure-conditioned protein generation and protein predictive tasks below.
**(1) Inverse Folding (structure-conditioned sequence generation)**.
We report the amino acid recovery (AAR) and self-consistency TMscore on the Cameo dataset. Our results show that the 650M-parameter DPLM-2 variant, with bit-based modeling, outperforms the 3B-parameter baseline. Incorporating geometric architectural designs and representation alignments leads to further improvements. These findings indicate that these design methods contribute positively beyond structure generation.
| | Inverse Folding - Cameo 2022 | |
|---|---|---|
| | AAR↑ | TMscore↑ |
| DPLM-2 650M | 0.4962 | 0.8816 |
| DPLM-2 3B | 0.5236 | 0.8900 |
| DPLM-2 Bit | 0.5586 | 0.8907 |
| Geo + Bit | 0.5665 | 0.8886 |
| Geo + Bit + REPA | **0.5681** | **0.8909** |
**(2) Structure-aware protein representation learning.**
In the table below, we provide the results of structure-aware protein representation learning, following the setting of SaProt. Due to the limited time, we only evaluate on the HumanPPI and DeepLoc Subcellular tasks, and we will provide results of more protein predictive tasks in the final version.
As seen, the proposed bit-based modeling is able to further improve the representation learning performance of DPLM-2. We hypothesize that the bit-based modeling, which is finer-grained, is more suitable to learn and enables the multimodal PLM to capture the structural pattern in latent space more effectively, thus enhancing structure-aware representation learning.
| Model | HumanPPI | DeepLoc Subcellular |
|---|---|---|
| | Acc (%)↑ | Acc (%)↑ |
| SaProt | 86.41 | 85.57 |
| DPLM-2 | 84.44 | 82.98 |
| DPLM-2 Bit | 88.89 | 83.39 |
> W2: The model fine-tuned without PDB multimer data achieved a higher TM-score but a worse (higher) RMSD. A more in-depth analysis would provide valuable insights.
Thanks for bringing up this discussion. RMSD captures local atom deviations, while TM-score measures global structural similarity. In multimers, chains are typically spaced farther apart than individual connecting residues, leading to a higher RMSD for models trained on only monomers. This highlights the importance of finetuning with multimer data to reduce RMSD.
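This divergence between the two metrics can be reproduced on a toy two-chain complex (our illustration; for simplicity no optimal superposition is performed, unlike the real TM-score, and the standard length-dependent scale d0 is assumed): rigidly displacing one chain inflates RMSD roughly linearly with the displacement, while each displaced residue's TM-score term 1/(1+(d_i/d0)^2) saturates.

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation over paired coordinates."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=-1)))

def tm_score(a, b):
    """TM-score formula with length-dependent d0 (no superposition step)."""
    n = len(a)
    d0 = 1.24 * (n - 15) ** (1 / 3) - 1.8
    d = np.linalg.norm(a - b, axis=-1)
    return np.mean(1.0 / (1.0 + (d / d0) ** 2))

L = 100                                    # residues per chain
native = np.zeros((2 * L, 3))              # idealized two-chain complex

# Predicted complex: chain A perfect, chain B rigidly displaced by 8 Angstroms
pred = native.copy()
pred[L:, 0] += 8.0

print(f"RMSD: {rmsd(native, pred):.2f}")        # large: half the residues off by 8
print(f"TM-score: {tm_score(native, pred):.3f}")  # still in the "same fold" range
```

The displaced chain drives RMSD to about 5.7 Å while the TM-score stays around 0.65, illustrating why monomer-only training can hurt RMSD on multimers much more than TM-score.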
Meanwhile, fine-tuning on Swissprot (200K) helps learn the global protein structures better due to its larger dataset compared to PDB-Multimer (3.5K). As shown in Table 10 of our paper, incorporating Swissprot consistently achieves higher TM-score on both PDB-Multimer and Cameo.
> Q1: Does the hybrid generative approach with flow matching significantly increase training time?
We report the training time of the following DPLM-2 variants on 16 H100s for 300k training steps.
The increased training time of flow matching (FM) primarily comes from the on-the-fly computation of noisy structure $\mathbf{x}_t$ using structure encoder, as we can precompute the discrete structure tokens when FM is not applied. FM remains highly effective when inference efficiency is a priority, as it accelerates the sampling process by 10x by requiring fewer sampling steps.
| | # Sampling steps | Training Time (300k steps) |
|---|---|---|
| w/o FM | 100 | 46 hrs |
| w/ FM | 10 | 81 hrs | | Summary: This paper performs an exploration of how to improve multimodal protein language models that jointly model both protein sequences and structures. The paper identifies limitations in the existing literature of token-based multimodal PLMs and propose (and explore) many design choices for such PLMs. The paper identifies two issues in token-based multimodal PLMs: (i) information loss when converting continuous 3D structures into discrete tokens and (ii) difficulty in accurate structure token prediction. They develop several design choices to address this and show that their design methods improve the structural modeling capabilities of multimodal PLMs.
Claims And Evidence: The paper explores many design choices and evaluates them empirically. Most claims and design choices are given appropriate empirical evaluation which seems to be well supported. I'll provide three examples of this. First, bit-wise modeling seems to improve structure prediction accuracy as showcased in Table 3 and Table 4. Second, representation alignment does seem to enhance model performance (shown in Table 6). Third, position offset and chain linkers help in multimer modeling (Table 9, Table 10).
The overall analysis is done on relatively small PLMs (up to 3B). I don't see this as an issue to the claims and I understand it's hard to scale up the model size if compute is limited, but it would be great to acknowledge more explicitly that the tests are done on smaller PLMs and that there is uncertainty in how it scales. This is especially true because there are no evaluations about how this performance and gains scale with the size of the PLMs.
Methods And Evaluation Criteria: The paper's methods for evaluating different design choices is nice and convincing. It provides limitations of existing token-based multimodal PLMs (especially information loss from structure tokenization). The proposed methods are quite broad, they target multiple aspects of the problem at once (tokenization, architecture, representation learning, data) with the experiments showing improvements from the design choices. Their methods primarily build on DPLM-2.
Given that this is a protein structure prediction task, it seems some benchmarks are missing (RoseTTAFold, OmegaFold, AlphaFold3). I'm happy for the authors to generally claim that your goal is to work on protein language models that might differ from the nature of, say, AlphaFold3, and that your model sizes are different, such an analysis would still be useful to see by how far off the existing research is.
Theoretical Claims: There are no proofs. The formulations of diffusion process, learning objectives for DPLM-2 and residual diffusion models, are clear.
Experimental Designs Or Analyses: There are quite a few experimental designs in this paper (it's even a bit hard to follow all of them). Two main questions.
First, on structure tokenization analysis. The bit-level and index-level evaluation is really stark. How come? Is there any broader intuition behind this that you could check experimentally?
Second, the ablation study comparing different improved methods is nice and well presented. For some improvements (e.g. in Table 4), it's hard to know what's a genuine improvement because there are no uncertainty intervals (e.g. standard deviations for splits).
Supplementary Material: No
Relation To Broader Scientific Literature: The paper fits into several areas. First, the paper identifies information loss during structure tokenization and propose ways to deal with this by connecting it to a few different literature areas: (i) vector quantization literature; (ii) latent diffusion model literature; and (iii) the more recent bit-wise discrete modeling literature, connecting it to image synthesis by Han et al. (2024). Second, the geometric structure-aware architecture design uses many ideas from AlphaFold2 and AlphaFold3. Third, the representation alignment connects well with knowledge distillation.
Overall, this paper aims to bridge the gap between sequence-based and structure-based models, and demonstrates how techniques from various fields (e.g. computer vision) can be adapted to protein modeling.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other strengths: Many different design choices are proposed as well as evaluated.
Other weaknesses: In some cases, the design choices feel somewhat ad-hoc (i.e. why did you exactly decide to choose to test this design choice as opposed to another one?). It is quite difficult to follow structure-wise and could benefit from greater clarity in terms of the presentation structure (e.g. a summary table linking the whole design choices). Lastly, it would benefit to have more uncertainty estimates in the tables, especially because some of the values highlighted seem to have small gains.
It would also be beneficial if you could provide a greater discussion about whether and how the design space exploration is novel relative to other work (what other work has explicitly done/explored and what is completely novel), especially within a summary table.
Other Comments Or Suggestions: N/A
Questions For Authors: Described above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for your comments, which have greatly improved our manuscript! We address your concerns below. We sincerely thank you once again and welcome any further feedback!
> The bit-level and index-level evaluation is really stark. Is there any broader intuition behind this that you could check experimentally?
Index-level prediction is highly challenging because small changes at the bit level can result in drastically different indices, as shown in the example below. This issue becomes even more problematic as the codebook size increases, further exacerbating the difficulty of direct index prediction.
| Continous struct token | Quantized struct token (bit level) | Index |
|---|---|---|
| **+0.1**, -1.5, -3.2, +0.7 | **+1**, -1, -1, +1 | **9** |
| **-0.1**, -1.5, -3.2, +0.7 | **-1**, -1, -1, +1 | **1** |
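The effect in the table can be reproduced with a short sketch (our illustration of sign-based binary quantization, assuming the first dimension maps to the most significant bit):

```python
def quantize_bits(z):
    """Sign-quantize each dimension of a continuous token to +1/-1."""
    return [1 if v >= 0 else -1 for v in z]

def bits_to_index(bits):
    """Map a +1/-1 bit pattern to its codebook index (first bit = MSB)."""
    return sum((b + 1) // 2 << (len(bits) - 1 - i) for i, b in enumerate(bits))

a = [+0.1, -1.5, -3.2, +0.7]
b = [-0.1, -1.5, -3.2, +0.7]   # tiny perturbation of the first dimension

print(bits_to_index(quantize_bits(a)))  # 9
print(bits_to_index(quantize_bits(b)))  # 1: one bit flip, distant index
```

Under an index-level cross entropy these two tokens are unrelated classes, whereas bit-level supervision sees them as one-bit neighbors, which is the intuition behind the bit-based modeling.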
> W1: Some design choices feel somewhat ad-hoc due to the lack of discussions on motivation. It could benefit from greater clarity with a summary table linking the whole design choices.
Thank you for this comment. To make our presentation more structured, we provide a summary [TABLE](https://anonymous.4open.science/r/icml-BF61/Table1.pdf) with greater discussion on all the design choices, traditional methods, motivations, and findings.
> W2: It would benefit to have more uncertainty estimates in the tables, especially because some of the values highlighted seem to have small gains.
Thank you for your comment. We are actively launching three more independent runs of the following selected models to provide uncertainty estimates using the mean and the 95% confidence interval (CI). We will include the uncertainty estimates of other variants in the final version.
| | Cameo2022 | | PDB Date | |
|---|---|---|---|---|
| | RMSD↓ | TMScore↑ | RMSD↓ | TMScore↑ |
| Bit + FM | 6.210 ± 0.100 | 0.840 ± 0.005 | 2.811 ± 0.052 | 0.914 ± 0.010 |
| Bit + FM + ResiDiff | 6.088 ± 0.085 | 0.844 ± 0.001 | 2.777 ± 0.161 | 0.916 ± 0.005 |
> W3: It would be beneficial to provide a summary table with greater discussion about the novelty of the design space.
Thank you for your suggestion. We provide the summary table above in our response to W1.
> The overall analysis is done on relatively small PLMs (up to 3B). I don't see this as an issue and I understand the constraints of computation resources, but it would be great to acknowledge more explicitly.
Thanks! We acknowledge this statement and will make it explicit in our final version as you suggested. While scaling is effective as shown in DPLM-2, we'd like to reassert that correct design choices are critical. In our work, we show that our 650M multimodal PLM achieves on-par results with the 3B specialized ESMFold on structure folding, showing high parameter efficiency.
> Some missing benchmarks (RoseTTAFold, OmegaFold, and AlphaFold3) on structure prediction tasks could be useful, even though I'm happy for the authors to claim that the focus is on protein language models, which differ from the nature of these benchmarks.
Thanks for your suggestions. We will make a clear claim on our focus on PLMs and will add these benchmarks to the table in the final version. | Summary: This paper aims to systematically improve current multimodal protein language models in the following respects: (1) generative modeling, where the authors argue that structural information loss caused by index-based structure tokenization cannot be resolved by improving reconstruction accuracy, and opt for a finer-grained supervision through bit-wise cross entropy loss; (2) geometry-aware architecture and representation alignment to improve higher-order structural modeling, (3) extended data coverage on multimer besides monomer. By doing so, the authors show that the enhanced model built upon DPLM-2 achieves better structural modeling abilities, e.g. reducing the RMSD from 5.52 to 2.36 on PDB test set.
## update after rebuttal
The authors addressed all my concerns. I raised my score to 4.
Claims And Evidence: The claims in this paper are supported by convincing evidence.
Methods And Evaluation Criteria: This paper focuses on structure modeling ability of multimodal PLMs, so I think evaluation on structure prediction in RMSD / TMscore suits it fine.
Theoretical Claims: N/A (no theoretical claims in this paper)
Experimental Designs Or Analyses: I have checked the experimental designs, but some settings are unclear to me. Please see Q2.
Supplementary Material: N/A (no supplementary material is provided)
Relation To Broader Scientific Literature: The work is a meaningful attempt at extending multimodal diffusion protein language models. Previous work DPLM-2 incorporates structural tokens, where discretization brings the benefit of reduced noise and the potential of scaling. This paper builds on top of DPLM-2 and proposes several improvements to advance its structure modeling ability.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength:
- This paper explores a range of design choices for multimodal protein language models, specifically DPLM-2, which is informative and complementary to the previous study.
- The authors conduct an in-depth preliminary study on structure tokenization and prediction by LM.
- Detailed and insightful ablation studies.
Weakness:
- This paper would benefit from a more structured organization of its contributions to the design space of multimodal protein language models. Rather than presenting incremental improvements in a fragmented manner (as "DPLM-2 + X" throughout different sections), the authors should consider providing a comprehensive taxonomy of the design choices explored, highlighting how these choices are orthogonal and can be composable, and positions these innovations within the broader context of multimodal protein models. Given the authors' ambitious goal of elucidating design spaces for multimodal PLMs, a more systematic synthesis of findings would substantially enhance the paper's impact and utility to the community.
Other Comments Or Suggestions: - L92 "DPLM-2: An multimodal extension of DPLM" -> "A multimodal"
- Table 4 caption is a bit confusing.
Questions For Authors: 1. What is the final version of the recommended design choice? It seems to me that the authors separately propose specific improvements to DPLM-2, and the entire paper seems like an extensive ablation study for it. But if all the designs are modular and complementary to each other, why didn't the authors show the overall performance of combining GeoDPLM + FM + RESDiff + folding SFT + REPA altogether?
2. I did not quite understand the setting in Table 4. As far as I know, DPLM-2 already incorporates a diffusion LM in it, so did "+RESDIFF" add another continuous diffusion to it? And how about "+FM"? L240 states that the authors "finetune such a model with flow matching", does that (DPLM-2 + FM) mean the original diffusion LM is finetuned by FM generative objective (both seq and struct), and "w/ folding SFT" denotes the generative objective of structure only (i.e. denoise structure given sequence)?
3. Following this, the so-called "hybrid structure modeling method" seems to rescue the skewed structure modeling ability of DPLM-2, which adopts the pre-trained sequence model DPLM and finetunes for additional structure tokens. This raises the question that the direct fine-tuning approach on DPLM may be suboptimal - why didn't the authors train from scratch instead of inheriting DPLM-2 parameters? Wouldn't it yield even better performance if both modalities were jointly considered throughout training?
4. I am curious about how the representation alignment to folding model boosts structure diversity as well as accuracy. Can the authors elaborate on that? Does this have something to do with the potentially unbalanced training set for original DPLM in terms of protein sequence length?
I would consider raising my score if the authors solve these questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for your comments, which have greatly improved our manuscript! We address your concerns below. We sincerely thank you once again and welcome any further feedback!
> W1: the paper would benefit from a more structured presentation of its contributions (e.g. a taxonomy)
Thank you for your suggestion. We present a comprehensive taxonomy of design choices in a table and discuss their orthogonality and synthesis, as compiled in this [TABLE](https://anonymous.4open.science/r/icml-BF61/Table1.pdf).
> Q1: What is the final recommended design choice? (e.g., combining GeoDPLM + FM + RESDiff + folding SFT + REPA altogether.)
We structured the paper to clearly present the impacts of each design. To provide a complete picture, we now include **new results** that combine all design methods.
| | PDB Date | | Cameo2022 | | Uncond. Gen. |
|---|---|---|---|---|---|
| | RMSD↓ | TM↑ | RMSD↓ | TM↑ | Diversity↑ |
| DPLM-2 650m | 5.307 | .8306 | 7.703 | .7936 | 0.700 |
| Bit | 3.221 | .9043 | 6.403 | .8380 | 0.825 |
| Bit + FM | 2.870 | .9099 | 6.183 | .8418 | 0.525 |
| Bit + FM + ResDiff (w/o SFT) | 2.788 | .9146 | 6.077 | .8456 | 0.525 |
| Bit + FM + SFT + ResDiff | 2.370 | .9270 | 5.847 | .8442 | N/A |
| New results | | | | | |
| Geo + Bit | 2.551 | .9254 | 5.955 | .8520 | 0.900 |
| Geo + Bit + FM | 2.443 | .9261 | 6.172 | .8404 | 0.575 |
| Geo + Bit + REPA | 2.507 | .9264 | 6.192 | .8412 | 0.875 |
| Geo + Bit + REPA + SFT | 2.404 | .9322 | 5.754 | .8424 | N/A |
| Geo + Bit + FM + SFT + REPA + ResDiff | 2.379 | .9297 | 6.200 | .8398 | N/A |
**SFT**: Folding SFT improves the structure folding but sacrifices the model's ability for multimodal co-generation, as it is fine-tuned specifically for the folding task. For models finetuned with the folding SFT objective, we skipped the evaluation of unconditional co-generation.
**The recommended setting** — Geo + Bitwise. Geometric architectures are compatible with bitwise modeling and their combinations achieve comparable results with models finetuned with folding SFT on structure folding, and further obtains an effective improvement on unconditional generation quality & diversity. This setting is also effective in terms of training efficiency as it avoids additional computational overhead from other methods like FM and REPA.
**REPA and bit-based modeling**. Both REPA and bit-based modeling enhance structure folding and generation diversity. Meanwhile, as shown in the table, their combinations do not lead to further improvements. We suggest that this is because REPA and bit-based modeling both help through enabling smooth and high-dimensional learning signals compared to index-based discrete tokens, hence their non-orthogonal effects.
**Hybrid modeling (w/ FM) and geo module**. FM effectively improves folding, but the benefits diminish with Geo modules, and can reduce generation diversity due to its ODE nature. However, benefitting from the same nature, FM accelerates the sampling process by requiring 10x fewer sampling steps.
**ResDiff**. Similar to the results in the paper, ResDiff does not bring a significant boost to folding metrics. The major benefit of ResDiff is to provide a finer local structure as discussed in the Fig. 7 of our Appendix.
> Q2: Clarifications on experimental configurations in Table 4 (+ResDiff, +FM, and w/SFT).
We elaborate on our experiment settings, as compiled in this [TABLE](https://anonymous.4open.science/r/icml-BF61/Table2.pdf)
> Q3: Hybrid structure modeling method seems to rescue the skewed structure modeling ability of DPLM-2, which adopts the pre-trained sequence model DPLM and may be suboptimal compared to training from scratch.
Thanks for raising this insightful discussion. We conducted experiments as you suggested:
| | Cameo2022 | | PDB Date | |
|---|---|---|---|---|
| | RMSD↓ | TMScore↑ | RMSD↓ | TMScore↑ |
| Bit & FM from pretrained DPLM-2 | 6.1825 | 0.8414 | 2.8697 | 0.9099 |
| Bit & FM from scratch | 10.9815 | 0.7090 | 6.4453 | 0.8070 |
As shown, DPLM-2 initialization outperforms training from scratch (see table), likely because: (1) sequence pretraining improves structural modeling (cf. ESMFold); (2) Sequence data vastly outnumbers experimental structures, making pretraining more effective.
> Q4: On REPA boosting diversity and accuracy? Is it related to the unbalanced training set for the original DPLM?
We agree that the unbalanced DPLM training set may contribute to REPA's benefits. REPA further aids structure generation by:
- Overcoming discrete token limitations: Quantization loses finer details (Table 1), while representation alignment helps preserve structural nuances via smooth, informative and high-dimensional learning signals.
- On learning the diffusion model: the REPA paper points out that high-quality representations are key to diffusion models; we bridge this gap by transferring structural features from folding models.
---
Rebuttal Comment 1.1:
Comment: I really appreciate the authors' efforts in solving my concerns. I will raise my score to 4 in hopes that this paper gets accepted, and I recommend the authors to include these important clarifications in their revision.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 4dFc,
Thank you for reading our rebuttal and we sincerely appreciate your supportive feedback and the increased rating! We would like to once again thank you for your insightful comments, and we will surely make more efforts to include these clarifications, discussions and new results in the final version.
Best,
Authors | null | null | null | null | null | null |
Direct Density Ratio Optimization: A Statistically Consistent Approach to Aligning Large Language Models | Accept (poster) | Summary: This paper introduces a new offline reinforcement learning approach for LLM alignment, motivated by the lack of statistical consistency in the traditional BT model, which underpins popular offline algorithms such as DPO. The author proposes directly optimizing the density ratio using a Bregman divergence loss and proves that the estimated parameters exhibit good statistical consistency under certain assumptions. Experiments conducted on Ultrafeedback datasets show marginal improvements over existing offline RL methods.
Claims And Evidence: The primary claim is well supported by the theoretical analysis.
Methods And Evaluation Criteria: The proposed method is theoretically sound, and the empirical studies are well-designed. However, my concern lies in its performance. While the work establishes a solid theoretical foundation for offline RL methods, the proposed approach does not demonstrate compelling advantages over existing models, which significantly limits its practical applicability.
Theoretical Claims: Yes, I checked the proof shown in Appendix.
Experimental Designs Or Analyses: The experiments are well-designed, but the performance of the method suggests that the work is incremental relative to existing alignment research.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The proposed model could be a valuable addition to the family of offline RL methods. The authors clearly establish connections with existing approaches and provide solid theoretical analysis, which is often lacking in prior work.
Essential References Not Discussed: Most of the related papers are well cited.
Other Strengths And Weaknesses: see my comments above
Other Comments Or Suggestions: The draft needs further proof-reading.
For example:
Line 165: Here we consider a situation where he data...-> `he' must be `the'
Line 358: the citation for MMLU is a '?'
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review and feedback on our paper. We address your points below.
1. **Performance concerns:**
The reviewer expressed concern that DDRO does not demonstrate compelling advantages over existing models, potentially limiting its practical applicability. Regarding the magnitude of improvement, it's important to evaluate this within the context of single-iteration preference optimization. We believe that achieving dramatic performance gains from just one optimization step is challenging. In fact, this tendency for incremental progress is common in the field (see works such as ORPO[1], CPO[2], and SPPO[3], which often report similar levels of improvement after one optimization step).
Considering this background – where substantial leaps are not always expected from a single iteration – we believe DDRO's results are noteworthy. We highlight three key points demonstrating its effectiveness:
- Broad Consistency: DDRO achieves the best performance on the majority of benchmarks and model sizes tested. This suggests that explicitly modeling preferences can fail to capture the preferences required by many benchmarks, limiting strong performance to those benchmarks where the assumed model happens to match the required preferences.
- Significant Gains on Specific Tasks: On specific benchmarks like GSM8K with Llama-7b, DDRO shows substantial improvements over KTO (over 50\% relative gain) and BCO (over 20\% relative gain), which we believe are non-trivial, especially in the context where large gaps are not always expected after one iteration of alignment.
- Effectiveness in Paired Settings: DDRO achieves performance better than DPO in the paired setting, despite discarding the comparison information that DPO utilizes. This underscores the effectiveness of DDRO.
References:
- [1] Hong, J., Lee, N., & Thorne, J. (2024). ORPO: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691.
- [2] Xu, H., Sharaf, A., Chen, Y., Tan, W., Shen, L., Van Durme, B., ... & Kim, Y. J. (2024). Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation. arXiv preprint arXiv:2401.08417.
- [3] Wu, Y., Sun, Z., Yuan, H., Ji, K., Yang, Y., & Gu, Q. (2024). Self-play preference optimization for language model alignment. arXiv preprint arXiv:2405.00675.
2. **Proofreading errors:**
Thank you for pointing out the specific typo ("he" -> "the" in Line 165) and the broken MMLU citation (Line 358). We apologize for these oversights. We will meticulously proofread the manuscript, correcting these and any other errors, before submitting the camera-ready version.
We appreciate your feedback and hope these clarifications address your concerns about DDRO's performance and practical relevance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I would like to maintain my rating. One additional suggestion: it would be better to present the evaluation results in tables. It is very difficult to judge the performance (and thus the practical merits of the proposed method) from the bar charts.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and the additional constructive suggestion regarding the presentation of our results.
We completely agree that presenting the evaluation results in tables will improve clarity and make it easier to assess the performance differences and practical merits of our proposed method compared to the bar charts. We appreciate this feedback and will incorporate tables in the revised version as suggested.
We believe this clearer tabular format will help demonstrate the significance of the improvements achieved by DDRO.
To illustrate this, here is an excerpt of how the results will be presented.
| Base Model | TruthfulQA KTO | TruthfulQA BCO | TruthfulQA DDRO | GSM8K KTO | GSM8K BCO | GSM8K DDRO |
|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| Pythia 1.4B | 0.1958 | 0.2228 | **0.2326** | **0.0190** | 0.0159 | 0.0167 |
| Pythia 2.8B | 0.1946 | 0.2032 | **0.2277** | 0.0235 | 0.0212 | **0.0265** |
| Pythia 6.9B | 0.1885 | 0.2301 | **0.2338** | 0.0212 | 0.0220 | **0.0265** |
| LLaMA 7B | 0.1701 | 0.2166 | **0.2203** | 0.0591 | 0.0819 | **0.0993** |
As the table demonstrates, DDRO achieves consistent improvements over the compared methods.
Based on calculations from the table, DDRO achieves a 1.050x improvement (average of 1.044x, 1.120x, 1.016x, 1.017x) over the second-best method on TruthfulQA and a 1.107x improvement (average of 0.88x, 1.129x, 1.207x, 1.212x) over the second-best performing method on GSM8K. We believe these average gains are significant, especially considering that the improvement reached as high as 1.212x in the best-performing case compared to the second-best method.
Furthermore, we understand your concern regarding practical applicability stemming from the perceived magnitude of improvement. We would like to place these results in the context of the highly competitive and challenging field of single-iteration preference optimization. Incremental gains are often the standard, even in state-of-the-art research recognized by the community:
1. **DiscoPOP (NeurIPS 2024):** When training the zephyr-7b-gemma-sft model, DiscoPOP achieved an LC winrate of 65.18, representing a 1.029x improvement compared to training the same base model with DPO (which achieved 63.34).
2. **Iterative RPO (NeurIPS 2024):** Similarly, training Llama-2-70b-chat with Iterative RPO resulted in an ARC-Challenge score of 84.8, a 1.024x improvement over training the same base model with DPO (which scored 82.8).
3. **XPO (implemented in the popular trl library):** Comparing the first iteration of training the Llama-3-8B-Flow-SFT model, XPO-iter1 (score: 63.1) showed a 1.0014x gain on MMLU compared to DPO-iter1 (score: 63.01) applied to the same base model. On other benchmarks for the same comparison, XPO-iter1 performed slightly worse than DPO-iter1 (GSM8K: 75.97 vs 77.79, 0.977x; LC winrate: 22.14 vs 23.27, 0.952x).
These examples demonstrate that the improvements achieved by DDRO are not only comparable to, but in some cases exceed, the gains reported in highly-regarded works accepted at top venues (NeurIPS 2024) or integrated into widely-used libraries (trl). This strongly suggests that such improvements are considered significant contributions within the research community and are indicative of practical value, especially given the inherent difficulty of achieving large leaps in performance with single-iteration alignment techniques.
We hope that this clearer tabular presentation, combined with the provided context comparing DDRO's gains to recognized SOTA methods, will facilitate a positive reassessment of the practical merits and significance of our work.
Thank you once again for your thorough review and valuable feedback, which are helping us strengthen the paper.
References:
1. Lu, C., Holt, S., Fanconi, C., Chan, A., Foerster, J., van der Schaar, M., & Lange, R. (2024). Discovering preference optimization algorithms with and for large language models. *Advances in Neural Information Processing Systems, 37*, 86528-86573.
2. Pang, R. Y., Yuan, W., He, H., Cho, K., Sukhbaatar, S., & Weston, J. (2024). Iterative reasoning preference optimization. *Advances in Neural Information Processing Systems, 37*, 116617-116637.
3. Xie, T., Foster, D. J., Krishnamurthy, A., Rosset, C., Awadallah, A., & Rakhlin, A. (2024). Exploratory preference optimization: Harnessing implicit q*-approximation for sample-efficient rlhf. *arXiv preprint arXiv:2405.21046*. | Summary: This paper introduces Direct Density Ratio Optimization (DDRO), a novel alignment method for large language models (LLMs) that addresses the statistical inconsistency of existing approaches reliant on restrictive preference models (e.g., Bradley-Terry). By directly estimating the density ratio between preferred and unpreferred output distributions, DDRO bypasses explicit preference modeling, ensuring statistical consistency—guaranteed convergence to true human preferences as data scales, regardless of the underlying preference structure. Theoretical analysis validates this property, while experiments across some benchmarks demonstrate the effectiveness of DDRO.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: This paper studies the problem of direct alignment algorithms for unpaired data which is practical and necessary.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### Strength
1. The paper is well written and easy to follow.
2. This paper studies a necessary problem of direct alignment algorithms on unpaired data.
3. The proposed DDRO is reasonable and provide a novel perspective.
### Weakness
1. The author compares their empirical results of DDRO with KTO. The improvement seems limited. Can authors provide average results on all datasets to better demonstrate the effectiveness of DDRO.
2. [1] also discusses the density ratio estimation. Can authors discuss the main difference and advantage of their method?
3. In the experiment section, it's better for the author to compare with a state-of-the-art method like SPPO to further verify the effectiveness of DDRO.
[1] How to Leverage Demonstration Data in Alignment for Large Language Model? A Self-Imitation Learning Perspective.
Other Comments Or Suggestions: Please refer to Strengths and Weakness part.
Questions For Authors: Please refer to Strengths and Weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and providing valuable feedback. We address your points below.
1. **Limited improvement and average results:**
The reviewer observed that the empirical improvement of DDRO over KTO seems limited and suggested providing average results across datasets. Regarding the magnitude of improvement, we would like to place this in the context of one-iteration preference optimization. It is generally understood that achieving dramatically large performance gains from a single iteration of alignment can be challenging. This characteristic of incremental progress is commonly observed in the field, as evidenced by recent works like ORPO, CPO, and SPPO, which also often report performance improvements of a similar nature after one optimization step. Viewed within this context – where substantial leaps are not always expected from one iteration – we believe DDRO demonstrates notable effectiveness. We wish to highlight three key points:
- Consistency: DDRO achieves the best performance on the majority of benchmarks and model sizes tested.
- Significant Gains: On specific benchmarks like GSM8K with Llama-7b, DDRO shows substantial improvements over KTO (over 50\% relative gain) and BCO (over 20\% relative gain), which we believe are non-trivial, especially in the context where large gaps are not always expected after one iteration of alignment.
- Paired Setting: DDRO achieves performance better than DPO in the paired setting, despite discarding the comparison information that DPO utilizes. This underscores the effectiveness of DDRO.
Regarding averaging results: While technically feasible, averaging scores across different metrics and scales can be misleading. Metrics with smaller variance or lower chance rates might be unjustly de-emphasized. We believe presenting the full suite of results, as in Figure 1, provides a more transparent and informative picture of DDRO's broad effectiveness, rather than potentially obscuring performance variations through averaging.
2. **Comparison with GSIL [1]:**
Thank you for your comment regarding GSIL. While both GSIL and our method use density ratios, their objectives and approaches are fundamentally different.
First, the construction of the objective differs. GSIL aims to minimize $\mathrm{KL}(\pi_{\text{data}} \| \pi_\theta)$, effectively pushing the density ratio $\pi_{\text{data}} / \pi_\theta$ toward 1. Our method, however, aims to match the policy density ratio $p_\theta / p_{\text{ref}}$ to the true density ratio $p_+ / p_{\text{ref}}$. Thus, our core objectives differ significantly.
Second, GSIL requires multiple iterations to estimate $\pi_{\text{data}}$, and its theoretical convergence is not guaranteed. In contrast, our method runs a single iteration, offering advantages in efficiency and theoretical tractability.
We hope this clarifies the distinction.
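To make the density-ratio-matching perspective concrete, below is a minimal numerical sketch. It uses a squared (uLSIF-style) loss, one standard instance of the Bregman-divergence family; the exact loss used in the paper may differ, and the discrete toy distributions here are illustrative assumptions only:

```python
import numpy as np

# Toy discrete setting: outputs y in {0, 1, 2}
p_ref = np.array([0.5, 0.3, 0.2])   # reference distribution p_ref
p_pos = np.array([0.2, 0.3, 0.5])   # "preferred" distribution p_+
true_ratio = p_pos / p_ref          # target: p_+ / p_ref

# Squared (uLSIF-style) Bregman loss, written with exact expectations:
#   J(r) = 0.5 * E_ref[r(y)^2] - E_+[r(y)]
# Its minimizer over all functions r is exactly p_+ / p_ref.
def loss(r):
    return 0.5 * np.sum(p_ref * r**2) - np.sum(p_pos * r)

# Minimize by gradient descent over a tabular ratio model r
r = np.ones(3)
for _ in range(2000):
    grad = p_ref * r - p_pos        # dJ/dr
    r -= 0.5 * grad

print(r)            # ≈ [0.4, 1.0, 2.5]
print(true_ratio)   # [0.4, 1.0, 2.5]
```

At the minimizer, the tabular model recovers $p_+ / p_{\text{ref}}$ exactly, which is the sense in which a Bregman-divergence loss yields a consistent density-ratio estimator without any explicit preference model.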
3. **Comparison with state-of-the-art methods like SPPO:**
We appreciate the suggestion to compare against state-of-the-art methods. However, our work primarily focuses on the setting where offline, binary feedback labels are available (i.e., unpaired preference data). Methods like SPPO operate under a different, richer setting. SPPO typically requires access to a pre-trained reward model, allowing it to use continuous reward scores, and often involves online interaction or scoring during training. Comparing DDRO directly with methods designed for such richer information settings might not be entirely appropriate, as the available data and feedback mechanisms are fundamentally different. Our goal was to develop a theoretically sound and practically effective method specifically for the common scenario of offline binary feedback, where methods like SPPO might not be directly applicable or would require significant adaptation.
Thank you again for your comments, which encourage us to clarify the positioning and strengths of DDRO. | Summary: This paper introduces Direct Density Ratio Optimization (DDRO), a method for aligning LLMs by directly estimating the density ratio between preferred and unpreferred output distributions. DDRO minimizes a Bregman divergence-based loss, eliminating the need for explicit preference modeling while ensuring statistical consistency. Empirical results show that DDRO performs on par or outperforms existing alignment methods in two settings: (1) unpaired preference data, compared to KTO and BCO, and (2) paired preference data, compared to DPO.
Claims And Evidence: The claims made in Section 1 of the paper are well supported by the methodology design, theoretical analysis, and empirical results.
Methods And Evaluation Criteria: **Strengths**
1. Direct density ratio estimation is well-motivated for LLM alignment, as it eliminates the need for an explicit preference model which may not accurately reflect human preferences. It also eliminates the reliance on paired preference data, which can be costly to annotate.
2. The formulation of DDRO is well-structured, and its derivation is clear and easy to follow.
3. DDRO models are trained using UltraFeedback, a widely used public preference dataset for LLM alignment. The aligned LLMs are evaluated on five diverse benchmarks covering language understanding and generation tasks, making the evaluation setup comprehensive.
**Questions**
1. In more realistic settings, when human labelers annotate single generations without direct comparison, many responses may be classified as neutral—neither strongly preferred nor unpreferred. Could DDRO handle such neutral data?
2. DDRO assumes a constant $p(+ | x)$. What would be a good way to determine $p(+ | x)$? How robust is DDRO to policies with different $p(+ | x)$?
Theoretical Claims: I briefly checked the proof of Theorem 4.1 in Appendix A without verifying all details. The theorem’s insights on the bias-variance trade-off are interesting.
I have a question regarding the bias term—could you provide some intuitive examples of scenarios where the model $g_\theta$ might be misspecified? Given that data are generated from the reference model and subsequently annotated by humans, and that $g_\theta$ is initialized from the same reference policy, what are the cases where $g_\theta$ might fail to capture the true preference distribution? Additionally, are there ways to mitigate such misspecification through strategies like prompt selection?
Experimental Designs Or Analyses: **Strengths**
1. The evaluation across five benchmarks covering different language tasks is comprehensive.
2. The experiment results are generally strong and support the claim that DDRO performs on par with or outperforms baseline alignment methods on both unpaired and paired preference data.
**Questions**
1. In Section 5.2, Figure 1 suggests that DDRO outperforms KTO on GSM8K for most model sizes, which appears inconsistent with the description in Section 5.2. Additionally, DDRO underperforms KTO across all model sizes on AlpacaEval. Could you provide insights into why DDRO might specifically underperform on AlpacaEval?
Supplementary Material: Yes, mainly Appendix A and B.
Relation To Broader Scientific Literature: LLM alignment is an important research direction for ensuring the broad utility and safe deployment of LLMs. This paper contributes to this field by introducing a preference-modeling-free approach, which avoids making parametric assumptions about human preferences. By directly inferring the preference distribution from data, the proposed method alleviates the need for explicitly paired preference data, reducing annotation costs and improving scalability. This provides a more generalizable and flexible framework for LLM alignment for broader applications.
Essential References Not Discussed: The paper discusses and compares most key methods in LLM alignment, as summarized in Table 1. One widely adopted preference optimization method, ORPO, is not cited. While ORPO is a reference-model-free approach and may not be directly comparable to DDRO, discussing its relevance could potentially provide additional context.
Hong, J., Lee, N., & Thorne, J. (2024). Orpo: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691.
Other Strengths And Weaknesses: Please refer to previous comments.
Other Comments Or Suggestions: 1. In Equation 6, the KL term for the preferred examples appears to be mistakenly written as the KL term for the unpreferred ones.
2. In Section 5.1, the reference to the MMLU benchmark is broken.
Questions For Authors: Please refer to previous comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and questions regarding our work on DDRO. We appreciate the careful reading and valuable feedback.
1. **Handling neutral data:**
The reviewer asked if DDRO can handle "neutral" labels, which might arise when annotators evaluate single responses without direct comparison. Currently, DDRO is formulated to utilize binary preference signals (preferred/unpreferred) and it cannot directly incorporate a distinct "neutral" category. However, we believe that binary feedback collection is a prevalent and practical scenario. Major LLM services like Gemini, Claude, and ChatGPT often employ simple thumbs-up/thumbs-down mechanisms to collect data, which correspond directly to the binary setting DDRO addresses. Therefore, we consider the setting DDRO operates in to be sufficiently realistic and widely applicable.
2. **Determining $p(+|x)$ and robustness:**
The reviewer inquired about determining the constant $p(+|x)$ and DDRO's robustness to its value. We assumed $p(+|x)$ to be constant (denoted as $t$) for simplicity in our analysis. This constant $t$ can be empirically estimated by sampling responses from the reference policy $p_{\text{ref}}$ for a set of prompts $x$ and calculating the proportion of responses deemed "preferred". However, our empirical investigations (which we can add details of in the final version) showed that the training dynamics and final performance of DDRO were not highly sensitive to the specific value chosen for $t$. Using a standard default value like $t = 0.5$ generally works well in practice, suggesting that precise estimation of $p(+|x)$ is often unnecessary.
3. **Scenarios for $g_\theta$ misspecification and mitigation:**
The reviewer asked for intuitive examples where the learned density ratio model $g_\theta$ might be misspecified, even when initialized from the reference policy. Misspecification can occur if the true preferred distribution $p^+(y|x)$ is too complex to be accurately represented by the chosen model architecture, given its parameterization (e.g., the LLM has insufficient capacity). In that case, the model might not be able to capture the true function even with infinite data. The bias term in our analysis (Section 3.2) relates directly to this discrepancy between the best possible model within the chosen hypothesis space and the true function; it depends on the model class, not the data distribution. Strategies to mitigate this bias primarily involve using model architectures known for their expressive power and ensuring the model has sufficient capacity (e.g., using larger models or architectures proven effective). Therefore, data processing (including prompt selection strategies) does not reduce the model bias, although it influences the data distribution, which affects the variance term.
4. **Inconsistency between Figure 1 and Section 5.2:**
Thank you for pointing out this discrepancy. As you mentioned, the "GSM8K" in the text describing Figure 1 should have referred to "AlpacaEval LC Winrate". We apologize for this mistake and will correct it in the camera-ready version.
5. **DDRO underperformance on AlpacaEval:**
Following the correction above, the reviewer asked for insights into why DDRO might specifically underperform KTO on AlpacaEval LC Winrate. We hypothesize that KTO's specific preference model assumptions might align particularly well with the nature of the AlpacaEval benchmark. This could lead to strong performance on this specific benchmark, potentially at the expense of broader generalization. DDRO, lacking such model-specific biases, achieves strong results across a wider array of benchmarks.
6. **Error in Equation (6) and MMLU reference:**
Thank you for identifying the potential error in the KL term in Equation (6) and the broken MMLU reference in Section 5.1. We will carefully review Equation (6) to ensure correctness regarding the preferred/unpreferred KL terms and fix the MMLU citation. We will perform a thorough proofread to catch any similar issues.
We appreciate the reviewer's constructive feedback, which will help enhance the clarity and correctness of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarifications. I will maintain my score. | Summary: This paper focuses on LLM alignment, aiming to address two key limitations of existing direct preference learning methods such as DPO: (1) Existing work requires a preference model (e.g., the Bradley-Terry model), which may not generalize well to capture complex human preferences that do not fit these models; (2) Existing work lacks statistical consistency guarantees, indicating they do not ensure convergence to an optimal model. To handle these challenges, the authors propose Direct Density Ratio Optimization (DDRO), demonstrating that the ground-truth reward can be represented as the density ratio between positive and negative response distributions. They develop a learning approach based on Bregman divergence to estimate this ratio. Theoretically, they prove that DDRO guarantees statistical consistency, and empirically, they demonstrate that DDRO performs on par with or better than existing baselines.
Claims And Evidence: This paper makes two key claims:
1. Existing SFT methods lack statistical consistency guarantees, indicating that even with sufficient training data, they do not necessarily converge to the optimal model. The authors provide a theoretical proof in Sec. 4.1 and partially justify it empirically in Sec. 5, where DDRO achieves comparable (or slightly better) results.
2. Current methods rely on predefined preference models, such as the Bradley-Terry or Plackett-Luce models, making them unsuitable for capturing more complex human preferences. The authors derive DDRO theoretically in Sec. 3, demonstrating that it does not require such an explicit preference model.
However, while both claims are reasonable and well-motivated, they are not well-supported empirically.
* For Claim 1, although DDRO does not require an explicit preference model, could the density ratio g = p- / p+ itself be regarded as an implicit preference model? Besides, in Fig. 1, DDRO—which is theoretically statistically consistent—does not achieve a significant advantage over other unpaired baselines, such as KTO and BCO. According to line 27 (right part), statistical consistency is defined as "as the amount of data increases, the learned model converges to the true optimal model." A straightforward way to validate this would be to train DDRO and the baselines with varying dataset sizes and compare its convergence speed to KTO, demonstrating both that statistical inconsistency is an issue in practice and that the proposed method effectively addresses it.
* For Claim 2, the authors state in line 24 (right part) that "This assumption may not accurately capture the complexity of real human preferences," which is intuitively plausible but not well justified. Experimentally, preference-model-based methods like KTO and DPO do not perform significantly worse than DDRO. To support this claim, the authors should evaluate their method on datasets with more complex human preferences and empirically demonstrate the benefits of not relying on a predefined preference model.
Generally, I believe this paper makes sufficient theoretical contributions, and the use of density ratio is also interesting. I will raise my score if the authors can address the concerns above.
Methods And Evaluation Criteria: As discussed above, the two claims are well-motivated and intuitively reasonable through the proposed method, with the authors providing theoretical proofs for some aspects. However, since this work is not purely theoretical, additional experimental results are needed to empirically validate these claims. See [Claims and Evidence].
Theoretical Claims: The authors present theoretical claims in Sec. 3, Proposition 3.3, and Theorem 4.1. However, I could not find the proof for Proposition 3.3. I reviewed the derivations in Sec. 3 and did not notice obvious issues. I also briefly examined the proof of Theorem 4.1 and did not identify clear errors. However, as I am not an expert in learning theory, I cannot guarantee the correctness of the proofs.
Experimental Designs Or Analyses: The experimental design in this paper does not fully support its claims (see [Claims and Evidence]). Additionally, several existing density ratio-like LLM alignment methods have not been cited. The authors should cite these works and discuss how DDRO compares to them. Ideally, they should include the comparison with at least one of these related methods. For relevant papers, see [Essential References Not Discussed].
Supplementary Material: I have read the Appendix.
Relation To Broader Scientific Literature: As far as I know, the two claims made in this paper have not been explicitly discussed in other LLM alignment research. However, several papers have already explored unifying different DPO variants within a unified framework, making this work closely related to those efforts. Specifically:
* Tang et al., Generalized Preference Optimization: A Unified Approach to Offline Alignment. 2024.
* Han et al., f-PO: Generalizing Preference Optimization with $f$-divergence Minimization. 2024.
Essential References Not Discussed: There are already some ratio-like loss papers. The authors should cite and discuss them. Specifically:
* Zeng et al., Token-level Direct Preference Optimization. 2024.
* Hong et al., ORPO: Monolithic Preference Optimization without Reference Model. 2024.
* Xu et al., Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation. 2024.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: 1. Line 165, left part, he -> the
2. Line 357, right part, reference missing in MMLU
Questions For Authors: 1. In the unpaired ultrafeedback-gpt-3.5-turbo-helpfulness dataset, how are the unpaired negative samples used in Eq. (6) generated?
2. In Fig. 1, DDRO outperforms KTO in AlpacaEval LC Winrate. How should this result be explained?
3. In Eq. (6), is there a possibility that some positive samples have almost zero probability mass in the LLM’s space? If so, how can this issue be mitigated?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive comments on our paper. We appreciate the opportunity to address your concerns and clarify aspects of our work.
1. **Is the density ratio $g$ an implicit preference model?**
We thank the reviewer for this insightful question. The density ratio $g = p^- / p^+$ reflects inherent preferences derived directly from the true distributions ($p^+, p^-$) without assuming a specific parametric structure (like the Bradley-Terry model used implicitly by methods like DPO or KTO). Therefore, $g$ represents the data's preference information itself, not a constructed model, allowing DDRO to avoid explicit preference modeling.
2. **Performance comparison and statistical consistency:**
We appreciate your suggestion. However, our primary theoretical claim regarding statistical consistency focuses on the convergence target – ensuring the learned model converges to the true optimal distribution under certain conditions – rather than the rate of convergence. Our experiments aim to show DDRO achieves competitive or superior performance on various benchmarks at a fixed data size, implicitly supporting the benefit of a sound convergence target.
3. **Justification for avoiding explicit preference models:**
We note that the very fact that our method demonstrates advantages on the majority of benchmarks, despite underperforming on a single benchmark, suggests that preference-modeling methods may have become overly specialized to the particular types of preferences required only by certain benchmarks.
Also, we believe that demonstrating the limitation of specific models (like the Bradley-Terry model assumed in DPO) does not necessarily require datasets with highly complex preferences. It is sufficient to show scenarios where preferences, even simple ones, deviate from the assumed model structure.
For example, the Bradley-Terry model cannot capture a non-transitive relationship such as the one in rock-paper-scissors.
We acknowledge that including a demonstration of such cases would strengthen our argument and therefore plan to add a discussion or a simple illustrative example to the camera-ready version.
- [1] Pan, Y., Tsang, I. W., Chen, W., Niu, G., \& Sugiyama, M. (2022). Fast and robust rank aggregation against model misspecification. Journal of Machine Learning Research, 23(23), 1-35.
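The rock-paper-scissors case can also be checked mechanically (an illustrative sketch added here, not code from the paper): under the Bradley-Terry model, $P(a \text{ beats } b) = \sigma(s_a - s_b)$, so a cycle of win probabilities all above 1/2 would require $s_r > s_s > s_p > s_r$, which no scalar scores can satisfy.

```python
import itertools, math

def bt_prob(s_a, s_b):
    # Bradley-Terry win probability from scalar scores.
    return 1.0 / (1.0 + math.exp(-(s_a - s_b)))

# Search a grid of scores for any assignment reproducing the cycle:
# rock beats scissors, scissors beats paper, paper beats rock (each with prob > 0.5).
grid = [i * 0.5 for i in range(-10, 11)]
feasible = [
    (r, p, s) for r, p, s in itertools.product(grid, repeat=3)
    if bt_prob(r, s) > 0.5 and bt_prob(s, p) > 0.5 and bt_prob(p, r) > 0.5
]
print(len(feasible))  # 0: no score assignment expresses the non-transitive preference
```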
4. **Proof for Proposition 3.3:**
We apologize for the omission. We had intended the paragraph preceding Proposition 3.3 to serve as a justification. We will provide a formal proof for Proposition 3.3 in the appendix of the camera-ready version.
5. **Missing citations for related work:**
Thank you for bringing these relevant papers to our attention. We have reviewed them carefully.
We acknowledge these papers implicitly involve density ratios through regularization (like odds ratios) or KL constraints.
However, these are primarily for stabilization or performance enhancement, and they do not appear to directly estimate or optimize the density ratio itself as the main target.
In contrast, the central contribution of our paper is the introduction of a consistent theoretical framework built directly upon the perspective of direct density ratio matching.
Therefore, while valuable contributions, we believe their direct relevance to our paper's central novelty is limited.
6. **Typos and missing references:**
We will correct these errors and conduct a thorough proofreading using automated tools and careful manual checks to identify and fix any other potential issues in the final version.
7. **Generation of negative samples in unpaired Ultrafeedback:**
Unpaired datasets like Ultrafeedback typically consist of triplets: a prompt, a generated response, and a binary label (e.g., "good" or "bad") assigned by human annotators. Therefore, responses labeled as "bad" constitute the set $D^-$.
8. **DDRO vs. KTO on AlpacaEval LC Winrate:**
We interpret this result as potentially stemming from KTO's underlying preference model (based on prospect theory) aligning particularly well with the AlpacaEval benchmark. This focused alignment might come at the cost of performance on other, broader benchmarks. In contrast, DDRO, by directly learning from the preference signals without an intermediate model, avoids such model-induced biases, leading to more robust and generally strong performance across a wider range of benchmarks, as observed in our experiments (e.g., BBH, GSM8K, MMLU, TruthfulQA).
9. **Potential for zero probability mass in Eq. (6):**
As mentioned in the paragraph just before Eq. (6), this can indeed occur and potentially lead to gradient spikes. To mitigate this, we introduced the function $S(\cdot)$ which shapes the loss landscape to be more benign.
Thank you again for your detailed feedback, which will help us improve the final version of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the responses. I think my minor concerns (e.g., the proof of Proposition 3.3) have been addressed, but the major two, (1) empirical benefits of statistical consistency and (2) justification of Claim 2, still remain.
I fully acknowledge this work's theoretical contribution in ensuring convergence to a sound target— this is why I'm leaning toward accept. However, as mentioned before, the authors didn't frame the paper as purely theoretical work, and hence empirical evidence is important. It’s still unclear whether such theoretical benefits would translate into practical improvement. Regarding my concern on Claim 2, the non-transitive preference example is somewhat helpful but not fully convincing, as the influence of non-transitivity on performance remains uncertain.
Therefore, I maintain my current score, though I would raise it to 3.5 if there is such an option. | null | null | null | null | null | null |
Distillation of Discrete Diffusion through Dimensional Correlations | Accept (poster) | Summary: This paper focuses on an important research question of distilling discrete diffusion models, which poses unique challenges due to the necessity of modeling the joint distribution of multiple discrete states grows with a combinational complexity. This paper proposes Di4C, a principled model agnostic approach for distilling discrete diffusion models. Specifically, a mixture model with enough expressivity is employed as the student model and is learned with a consistency trajectory distillation fashion loss, along with several auxiliary losses and a designed control variate. Experimental results demonstrate the effectiveness of the proposed method on both image generation and text generation.
Claims And Evidence: The claims are well supported with mathematical derivation and empirical evidence.
Methods And Evaluation Criteria: The proposed method is reasonable with comprehensive evaluation across several standard benchmarks.
Theoretical Claims: Though I didn't go through every detail in the proof, the conclusion makes sense to me.
Experimental Designs Or Analyses: The experimental designs for image generation seem nice to me.
For the experiment part of text generation, I have several questions:
* Why do you opt to apply your method on top of another specific distilled model? Does the proposed method work well on top of a vanilla teacher model compared to the other distillation method?
* It is known that the generative perplexity has crucial flaws [1,2], so it would be great to consider additional metrics for the text generation task.
[1] Shi, Jiaxin, et al. "Simplified and generalized masked diffusion for discrete data." (NeurIPS 2024)
[2] Zheng, Kaiwen, et al. "Masked diffusion models are secretly time-agnostic masked models and exploit inaccurate categorical sampling." (ICLR 2025).
Supplementary Material: I reviewed Appendix A, E, and F in detail.
Relation To Broader Scientific Literature: Modeling complex correlations / joint distribution of multiple high-dimensional random variables is an important research question and this paper presents several nice ideas to tackle this.
Essential References Not Discussed: Several papers discussing the evaluation criteria of discrete diffusion models for text generation (e.g., [1]) and developing fast samplers (e.g., [2]) should be discussed.
[1] Shi, Jiaxin, et al. "Simplified and generalized masked diffusion for discrete data." (NeurIPS 2024)
[2] Zheng, Kaiwen, et al. "Masked diffusion models are secretly time-agnostic masked models and exploit inaccurate categorical sampling." (ICLR 2025).
Other Strengths And Weaknesses: ## Strength
* This paper studies an important and underexplored research question of distilling discrete diffusion models.
* This paper is well executed on both technical and empirical aspects.
* The writing is easy to follow.
## Weakness
* The primary weakness of the current manuscript is the presentation of the main text, likely due to space constraints. As a result, readers must consult the supplementary material to fully grasp the paper's core technical contributions. Additionally, for clarity, a formal description of the sampling scheme should be included in the main text.
* Please refer to the "Experimental Designs Or Analyses" part for my question about the metric and the teacher model used for text generation.
Other Comments Or Suggestions: * In Eqn. (4), should the expectation be taken over $x_T$?
* On the top of Figure 1 (right), should the second term be $p^1(x; \beta)p^2(y; \beta)$?
Questions For Authors: * How does the proposed method compare to advanced samplers for discrete diffusion models such as [1]?
* Does the time step schedule (i.e., the choice of $s, u, t$) play an important role as in the consistency models [2,3]?
[1] Zheng, Kaiwen, et al. "Masked diffusion models are secretly time-agnostic masked models and exploit inaccurate categorical sampling." (ICLR 2025).
[2] Song, Yang, and Prafulla Dhariwal. "Improved techniques for training consistency models." (ICLR 2024)
[3] Geng, Zhengyang, et al. "Consistency models made easy." (ICLR 2025).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of the paper. Let us answer your questions. We will also correct typos you have pointed out.
### [q-1] Why we chose SDTT model as teacher
> Why do you opt to apply your method on top of another specific distilled model? Does the proposed method work well on top of a vanilla teacher model compared to the other distillation method?
We chose the SDTT model as the teacher for the following reasons.
Firstly, we wanted to see if our method works even on top of a well-distilled model. While the teacher models (sdtt-6 / sdtt-7) went through many rounds of distillation, their modeling remains dimensionally independent. So, we wanted to confirm that there is still room for improvement by introducing dimensional correlations.
Secondly, we were not aware of any other distillation methods for discrete diffusion (even though SDTT is just for masked diffusions) with which to compare our method. By examining their performance gain (i.e., sdtt-6 -> sdtt-7), we could see how our method performs compared to another distillation method at the same time (please also see [q-2] for the actual comparison).
### [q-2] Generative perplexity and diversity
> It is known that the generative perplexity has crucial flaws [1,2], so it would be great to consider additional metrics for the text generation task.
Generative perplexity can indeed be easily hacked as it does not consider the diversity of the generated samples. Following our teacher model's work, we measured the Self-BLEU metric (similar to the sentence entropy used in [2] from your reference list, but lower is better) in addition to generative perplexity and plotted their trade-off curve in Figure 4(b). Our finding is that our distillation method (sdtt-6 -> sdtt-6+di4c(^2)) does not worsen the Self-BLEU compared to a round of SDTT (sdtt-6 -> sdtt-7).
### [q-3] Relation to an advanced fast sampler
> How does the proposed method compare to advanced samplers for discrete diffusion models such as [1]?
Thank you for pointing out this relevant work. We understand that the fast sampling method in the reference [1] (from your reference list) suggests that, in masked diffusion models, we can reduce the size of categorical sampling by first choosing the index (or indices in the parallel decoding variant) to unmask. In their parallel decoding variant, they do not model the dimensional correlations, so our mixture modeling can be combined to further improve sampling quality.
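To illustrate why a mixture over products helps (a toy sketch we add for illustration; the two-bit example is not from the paper): on two perfectly correlated bits, the best product of per-dimension marginals stays at total variation distance 0.5 from the target, while a two-component mixture of products matches it exactly.

```python
# Target joint over {0,1}^2: the two bits are perfectly correlated.
target = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}

# Dimensionally independent model: product of the true marginals (each bit Bernoulli(0.5)).
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}

# Two-component mixture of products, indexed by a latent mixing variable.
def mixture(x, y):
    comp0 = (1 - x) * (1 - y)  # product with both bits Bernoulli(0): mass on (0, 0)
    comp1 = x * y              # product with both bits Bernoulli(1): mass on (1, 1)
    return 0.5 * comp0 + 0.5 * comp1

tv_indep = 0.5 * sum(abs(independent[k] - target[k]) for k in target)
tv_mix = 0.5 * sum(abs(mixture(*k) - target[k]) for k in target)
print(tv_indep, tv_mix)  # 0.5 0.0
```

The same gap widens combinatorially with dimension, which is why capturing dimensional correlations matters for few-step sampling.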
### [q-4] Time step scheduling
> Does the time step schedule (i.e., the choice of $s,u,t$) play an important role as in the consistency models [2,3]?
We suppose so, as the quality of our distillation target can vary depending on the time step scheduling. While we did not optimize the teacher sampling steps in our work, there is research optimizing the time step schedules in discrete diffusion [Par25], which can be combined with our method. With this approach, we can also optimize the step schedule after training the model (i.e., the step schedule in few-step sampling).
### References
[Des25] Deschenaux and Gulcehre. Beyond Autoregression: Fast LLMs via Self-Distillation Through Time. ICLR 2025.
[Par25] Park et al. Jump your steps: Optimizing sampling schedule of discrete diffusion models. ICLR 2025. | Summary: This paper studies the distillation problem for discrete diffusion models. The authors identify a key challenge in capturing dimensional dependencies and provide theoretical analyses to support their findings. To address this, they propose a mixture student model with tailored loss functions to facilitate distillation. The proposed method is demonstrated to be effective in both vision and language domains.
Claims And Evidence: Yes, the claims are well supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria are mostly appropriate for the problem. However, I find one requirement somewhat restrictive in practical applications. Specifically, the method relies on reference distributions for $r_\delta$ and $r_t$, which must either be sampled from real training data or approximated using multiple teacher steps. The first approach requires access to real data, which is not always feasible—especially when only a pretrained model is available, e.g., due to copyright restrictions. The second approach, while avoiding this issue, may introduce significant computational overhead (e.g., when approximating $r_\delta$ with $\delta \ll 1$), making the method less practical for large-scale applications.
Theoretical Claims: I reviewed the main theorems and their proofs and found no issues.
Experimental Designs Or Analyses: I reviewed the experimental design and discussion and found no major issues. However, I did found parts of the experiments could be further improved:
1. Table 1 only provides the evaluation metrics of 10,20,40 inference steps. It would be more informative to include results of other steps in a graph similar to Figure 3. For example, I would like to see how many times of acceleration the proposed distillation method can achieve to maintain a similar performance (in FID/IS) to the teacher model using 20 inference steps. Additionally, could the authors explain how the results were obtained? Based on Campbell et al. [1] Figure 4, CIFAR-10 FID of $\tau\text{LDR-0}$ ("teacher model" here) does not drop near 8 until over 256 NFEs. Why is the result of the teacher model using 40 steps in Table 1 already close to 8?
2. Additionally, in [1], with additional corrector steps, the results of $\tau\text{LDR}$ can be significantly improved--3.74 in FID and 9.49 in IS for $\tau\text{LDR-10}$. However, it seems that the authors did not compare their results with it. Does it indicate that the proposed distillation method is not compatible with the predictor-corrector sampler? Otherwise, I would suggest including the results combining the student model with the predictor-corrector sampling strategy.
3. Could the authors also include the sampling results with one inference step? Considering that distillation methods for continuous diffusion models can already achieve performance on par with their teacher models using just one step sampling, I feel there is still a huge gap between the proposed method and its counterpart for continuous diffusion. While the proposed distillation method does open a new door for accelerating discrete diffusion sampling, I don't find the proposed method especially impressive and convincing based on these experimental results.
[1] Campbell, Andrew, et al. "A continuous time framework for discrete denoising models." *Advances in Neural Information Processing Systems* 35 (2022): 28266-28279.
Supplementary Material: I found the codebase in the supplementary material but did not analyze it in detail.
Relation To Broader Scientific Literature: The paper makes important contributions to discrete diffusion distillation through both theoretical analysis and practical algorithm design.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is well-written with solid theoretical grounding and mathematical details. The proposed algorithm is flexible, practical, and applicable across different model architectures and tasks.
Other Comments Or Suggestions: None.
Questions For Authors: How should the sampling distribution of $\lambda$ be chosen? Have the authors ever considered other common distributions than uniform? How sensitive is model performance to this choice?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback and positive comments. Let us respond to your questions.
### [d-1] On the reference distributions
> the method relies on reference distributions for $r_\delta$ and $r_t$, which must either be sampled from real training data or approximated using multiple teacher steps.
It is a valid point that our method has limitations on the source of reference samples. However, in the relatively large-scale experiments (masked image/language modeling), we used only a small portion of the original dataset. For example, we used 200K out of 9M samples in OpenWebText (similarly for ImageNet). Additionally, in our current implementation, each sample $x_0$ generates only a single $x_t$ (with a single $t$) throughout the training run. This suggests that more efficient batch designs could reduce the essential number of training samples. That said, we acknowledge that accessing or constructing reference samples is not always straightforward. Future research should explore the use of partial samples or samples from alternative distributions to address this limitation.
### [d-2] Regarding $\tau$LDR models
> Why is the result of the teacher model using 40 steps in Table 1 already close to 8?
This discrepancy arises from the different sampling methods used. As detailed in Table 3 in Section F.2.2, the $\tau$-leaping sampler does not work well with 40 steps. Instead, we used the analytical sampler to evaluate the teacher model. We suppose this difference stems from the non-time-homogeneous nature of the forward diffusion. Specifically, $\tau$-leaping approximates the transition rate using a constant matrix over a time interval of length $\tau$, which fails to accurately capture the actual transition rates when $\tau$ is large and the rates vary significantly within the interval. In contrast, the analytical sampler avoids this issue, although it still does not account for dimensional correlations.
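The constant-rate approximation error behind this can be seen in a toy one-dimensional example (illustrative only; the rate function $r(t) = t$ and the numbers are our own choices, not from the paper): $\tau$-leaping freezes the jump rate at the start of the interval, so its error grows quickly once the rate varies noticeably within the interval.

```python
import math

# Toy time-inhomogeneous flip rate r(t) = t for a single binary state.
def exact_flip_prob(t0, tau):
    # P(at least one jump in [t0, t0 + tau]) = 1 - exp(-integral of r(t) dt)
    integral = ((t0 + tau) ** 2 - t0 ** 2) / 2.0
    return 1.0 - math.exp(-integral)

def tau_leaping_flip_prob(t0, tau):
    # tau-leaping approximates the rate as constant at its value r(t0).
    return 1.0 - math.exp(-t0 * tau)

errors = [abs(exact_flip_prob(1.0, tau) - tau_leaping_flip_prob(1.0, tau))
          for tau in (0.01, 0.1, 1.0)]
print(errors)  # the error grows rapidly as the step size tau increases
```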
> Does it indicate that the proposed distillation method is not compatible with the predictor-corrector sampler?
We can use predictor-corrector (PC). However, PC requires additional network evaluations (NFE; each as expensive as one step of denoising), and does not perform well in the small NFE regime [Cam22, Fig 4]. Below are the FIDs (based on 10K images; IS omitted due to the character limit) for different PC settings under total NFE of 20. 'n+2*m' means we used m corrector steps before each of the final 2 out of n denoising steps (imitating [Cam22]).
|NFE|6+2*7|10+2*5|14+2*3|20+2*0|
|-|-|-|-|-|
|teacher|57.57|32.74|21.32|14.42|
|student|44.57|22.63|14.25|11.81|
### [d-3] Inference of other number of steps
> It would be more informative to include results of other steps in a graph similar to Figure 3. For example, I would like to see how many times of acceleration the proposed distillation method can achieve to maintain a similar performance (in FID/IS) to the teacher model using 20 inference steps.
Here, we have computed additional FID/IS for our CIFAR-10 experiment using 10K samples (fewer than the 50K samples used in the paper, so the numbers may be slightly worse) for 2-20 steps:
FID:
|model|2|4|6|8|10|12|14|16|18|20|
|-|-|-|-|-|-|-|-|-|-|-|
| teacher | 392.24 | 173.29 | 78.24 | 49.44 | 34.70 | 26.33 | 21.47 | 18.05 | 15.80 | 14.42 |
| student | 411.70 | 147.67 | 59.62 | 33.85 | 22.57 | 17.46 | 14.37 | 12.86 | 12.28 | 11.81 |
IS:
|model|2|4|6|8|10|12|14|16|18|20|
|-|-|-|-|-|-|-|-|-|-|-|
| teacher | 1.17 | 2.99 | 5.90 | 7.01 | 7.45 | 7.82 | 7.96 | 8.16 | 8.35 | 8.46 |
| student | 1.25 | 3.48 | 6.71 | 7.68 | 8.17 | 8.33 | 8.37 | 8.50 | 8.38 | 8.39 |
In terms of FID, our method achieves approximately 1.4 times acceleration in the 10–20 step range. However, it does not perform well in very few steps (e.g., 2–4 steps).
> I feel there is still a huge gap between the proposed method and its counterpart for continuous diffusion.
To further reduce the number of steps, it is necessary to model high-dimensional discrete distributions more efficiently, as we lack a deterministic formulation like probability-flow ODEs (available in the continuous case). While mixture modeling provides some improvement, further optimization is required, including adjustments to the architecture, initial distribution, and dimensionality of $\lambda$. As you noted, our work represents a first step in building the theoretical foundation for this goal.
### [d-4] Sampling distribution of $\lambda$
> How should the sampling distribution of $\lambda$ be chosen? Have the authors ever considered other common distributions than uniform?
We prioritized ablations of different components over exploring alternative distributions for $\lambda$. As noted at the end of [d-3], this remains an area for future investigation.
### References
[Sah24] Sahoo et al. Simple and effective masked diffusion language models. NeurIPS 2024.
[Cam22] Campbell et al. A continuous time framework for discrete denoising models. NeurIPS 2022. | Summary: The paper proposed a improved model for distilling discrete diffusion models(DDMs). The key idea is that traditional DDMs break apart the latent distributions into product of marginal distributions, while the proposed model represent the latent distributions into products of bi-dimension distribution pairs, which is able to approximate the ground-truth distribution in fewer de-noising steps with accuracy.
Claims And Evidence: The authors provided FID measurements as well as image samples to show that the distilled model is consistent with the teacher model. They claim that the distilled model with 4 steps reaches performance similar to that of the teacher model with 8 steps, while the overhead of each step is minimal (according to supp. F.1). Although this claim is rational, it should be noted that this can be frequently observed for distillation models because they benefit from the hints of the teacher model.
Methods And Evaluation Criteria: The evaluation criteria make sense but are rough. FID/IS is a limited metric for image generation quality: it mainly focuses on the average distribution of Inception features, without considering the diversity and detailed fidelity of the images.
Theoretical Claims: The authors provided a theoretical proof that the total variation distance decreases with increasing training steps.
Experimental Designs Or Analyses: Firstly, as discussed above, the authors mainly compare their model with the teacher model using FID/IS scores, which is inherently limited. Also, they made this comparison on the CIFAR-10 and ImageNet datasets, where the FID/IS scores converge for each model in very few steps; this cannot justify the benefit of the method, because the DDM is not slow in this case. The authors did introduce other metrics (e.g., PPL and MAUVE score) in conditional generation, but each time only a single metric is used for one dataset, and no statistical variance analysis was given. Third and most importantly, the method is not compared against other distillation methods.
Supplementary Material: I have checked the experimental results in the supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength: The authors actually proposed an improvement for DDM.
Weakness: If this improved structure can only be applied to distilling pre-trained DDMs, the benefit is very limited.
Other Comments Or Suggestions: N/A
Questions For Authors: It is confusing why the authors insist on distilling DDM, which is already a fast approximate model, rather than slow high-quality models? Is it possible to use the model for distilling original or modified diffusion models of higher quality, or training from a scratch?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review. Let us reply to your comments.
### [1-1] Added metrics beyond FID/IS
> the authors mainly compare their model with the teacher model using FID/IS scores, which is inherited limited
We have additionally computed precision and recall metrics for the ImageNet experiment (see the table below), following established practices in the current literature (e.g., [Tian24]). Due to character limits, we present the results for each model at the classifier-free guidance (CFG) coefficient that achieves the best FID (see Table 6). We can see di4c models achieve better scores than the teacher model with the same number of steps.
|Model (#steps)|Precision|Recall|
|-|-|-|
|teacher (4)|0.7737|0.5057|
|di4c (4)|0.7910|0.5363|
|di4c-d (4)|0.7866|0.5391|
|teacher (8)|0.7939|0.5499|
### [1-2] DDMs are not "fast and approximate" in general
> It is confusing why the authors insist on distilling DDM, which is already a fast approximate model, rather than slow high-quality models? Is it possible to use the model for distilling original or modified diffusion models of higher quality, or training from a scratch?
The characterization of DDMs as approximations of 'original diffusion models' (as implied in your latter comment) is generally not accurate. Indeed, DDMs are neither "approximate" nor "fast" in general.
- **Not Approximate:** The teacher DDMs were generally trained from scratch (except for SDTT models we used in language modeling) without relying on any other diffusion models. While the parallel decoding (analytical sampling) in teacher DDM is indeed an approximate inference (with guarantees given in Theorem 1), a similar approximation (unimodal Gaussian approximation) is commonly employed in continuous diffusion models, as noted in [Li24]. Therefore, DDMs are not particularly "approximate" compared to continuous diffusion models.
- **Not Fast:** DDMs typically require tens to hundreds of sampling steps, depending on the complexity of model/data. For the experiment on Discretized Gaussian Diffusion, the teacher model requires 30-40 steps to reach FID<10, which we believe is no longer "very few steps". For comparison, in the continuous case, while EDM on CIFAR-10 requires 35 steps [Kar22], notable distillation methods [Son23] start from EDM and distill it into a few-step model. MaskGIT(-pytorch) is one of few examples that achieves a small number of steps (8 steps to reach FID~7.0), enabled by some heuristics including confidence-based sampling and CFG. We tested our method in this scenario because we believe it is worth investigating if our distillation methods can also work in combination with such heuristics.
Thus, we believe our method can be applied to "original or modified diffusion models". Learning dimensional correlations in DDM from scratch by incorporating the consistency training of [Son23] into our method is an interesting future work.
### [1-3] We mostly compute multiple metrics for each dataset
> each time only single metric is used for one dataset, and no statistical variance analysis was given.
It is not accurate to state that "each time only single metric is used for one dataset". In fact, we computed both FID and IS for CIFAR-10 and ImageNet experiments. For the OpenWebText experiment, we computed Gen. PPL for unconditional generation, and three different metrics (Gen. PPL, Self-BLEU, and MAUVE) for conditionally generated texts with prompts from the WebText dataset.
Regarding statistical variance analysis, we provided variance values for the latency analysis in Table 2. While it is ideal to retrain and resample everything 5-10 times, the large number of training checkpoints and experimental data points makes it common practice in the literature to report numbers from a single run (as seen in most references cited in this reply).
### [1-4] We compare our method with SDTT distillation
> most important, the method is not compared against other distilling methods.
While there are few distillation methods for discrete diffusion available (except for some concurrent works), we did compare our method with one round of distillation by SDTT [Des25] in the language modeling experiment (Figure 6). Specifically, "sdtt-7" is obtained after one round of SDTT distillation upon "sdtt-6", while "sdtt-6 + di4c" (or di4c^2) is obtained by Di4C training using the same teacher (sdtt-6). We explicitly compare them in Section 5.3.
### References
[Tian24] Tian et al. Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction. NeurIPS 2024.
[Li24] Li et al. Soft mixture denoising: Beyond the expressive bottleneck of diffusion models. ICLR 2024.
[Kar22] Karras et al. Elucidating the design space of diffusion-based generative models. NeurIPS 2022.
[Son23] Song et al. Consistency models. ICML 2023.
[Des25] Deschenaux and Gulcehre. Beyond Autoregression: Fast LLMs via Self-Distillation Through Time. ICLR 2025. | null | null | null | null | null | null | null | null |
WorldSimBench: Towards Video Generation Models as World Simulators | Accept (poster) | Summary: This paper introduces WorldSimBench, a benchmark for evaluating video predictive models from an embodied perspective, in contrast to previous approaches focused on generative evaluation. It features Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, combining human preference assessments from a visual standpoint with action-level evaluations in embodied tasks. WorldSimBench covers three key embodied scenarios: Open-Ended Embodied Environments, Autonomous Driving, and Robot Manipulation. Experiments on multiple existing models reveal the strengths and limitations of current world simulators. Additionally, it provides the HF-Embodied Dataset, comprising 35,701 entries, to further support research in this domain.
Claims And Evidence: This is a benchmark paper and does not make traditional claims.
The authors propose:
1. Evaluating the capability of predictive models from a hierarchical perspective.
2. Assessing models from an embodied perspective by leveraging video-action models to evaluate driving and manipulation tasks, revealing significant shortcomings in existing models.
Overall, evaluating from an embodied perspective is a valuable approach, as current generative models are clearly insufficient. However, regarding the first point, I have some doubts. As the authors mentioned, prediction can occur across different modalities with varying focuses, but it does not necessarily imply a clear hierarchical structure.
Methods And Evaluation Criteria: The authors evaluate video prediction models from explicit perceptual and implicit manipulative perspectives. They assess models based on visual quality, condition consistency, and embodiment, which is a reasonable approach. However, the work lacks significant innovation, serving more as an integrative evaluation of previous research.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: 1. Running only 20 trials on CALVIN is not reasonable, as the standard evaluation setting typically involves 1,000 trials.
2. The evaluation of AD also feels somewhat problematic. It essentially involves predefining trajectories as instructions, using those instructions to guide video generation, and then converting them into corresponding actions. Throughout this process, information loss occurs at each transformation step.
Supplementary Material: Yes, the supplementary materials provide related works, additional experiments, and implementation details.
Relation To Broader Scientific Literature: This work attempts to evaluate the capabilities of existing video generation models from an embodied perspective, aiming to make them more practically applicable.
Essential References Not Discussed: Predictive models have been widely explored in both autonomous driving and robotics. For instance, in autonomous driving, works such as GAIA-1, Drive-WM, and DriveDreamer focus on world models. However, this paper's related works section lacks discussion on these contributions.
Other Strengths And Weaknesses: I appreciate the authors' contributions, as they have conducted extensive experiments across gaming, driving, and robotics. Although some metric evaluations may not be entirely reasonable, their work still contributes to the advancement of the field.
However, the evaluation lacks strong persuasiveness and innovation, as it primarily follows existing methods to assess current models. Additionally, video-to-action approaches are not universally applicable across different domains.
Other Comments Or Suggestions: 1. How to use the human feedback annotations for further improvement?
2. Line84-85, duplicate “output modality”
3. Line1238, cite PVDM
Questions For Authors: 1. This paper evaluates multiple video generation methods. What are the key conclusions and insights? How can the findings guide improvements in model design and methodology?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive and thoughtful comments. They were indeed helpful in improving the paper. We take this opportunity to address your concerns:
>Hierarchical Structure of Modalities
The hierarchical structure we propose is not based on semantic abstraction, but on the practical difficulty of turning different predicted modalities into real-world executable actions—a key consideration for evaluating video generation models as world simulators. Language is high-level but often ambiguous and under-specified for action planning; images are more concrete but lack temporal and causal context; videos, while potentially modeling transitions, may produce physically inconsistent content. To address this, we introduce the notion of actionable video prediction—predicted videos that are both instruction-aligned and physically plausible, making them suitable for downstream decision-making in embodied agents. This task-driven, grounded perspective is essential for advancing video generation toward real-world utility.
---
>CALVIN Evaluation Trials
While the main text initially reported 20-trial results on the CALVIN benchmark, we also conducted 100-trial evaluations following prior works such as UniPi and SuSIE (Appendix Table 15). In response to your suggestion, we have now further extended our evaluation to **1,000 trials** to improve statistical reliability.
#### Task completed in a row (%) ↑ for (1,2,3,4,5)
| Method| 1| 2| 3| 4| 5| Avg. Len. ↑ |
|-------------------|-------|-------|-------|-------|-------|-------------|
| **Open-Sora-Plan**|91.8|73.2|61.2|35.3|32.2| 3.06|
| **DynamiCrafter**|92.4|69.2|50.1|29.3|18.7| 2.71|
| **EasyAnimate**|88.3|57.3|33.9|17.3|12.2| 2.05|
---
>AD Evaluation Setup
We would like to clarify that our AD evaluation does not rely on predefined trajectories. Instead, the agent receives language instructions, and a video generation model predicts future frames accordingly. These are passed to a fixed video2action policy—trained only on ground-truth data and shared across all models—to generate control actions. As this policy is not trained on generated videos, task success rates directly reflect the quality and physical plausibility of the predictions. Our setup is designed to evaluate whether models can produce instruction-aligned, physically grounded outputs, rather than reproduce preset trajectories.
---
>Related Work in Autonomous Driving
We will revise the Related Work section to include these approaches.
---
>Evaluation Persuasiveness and Domain Generality
We respectfully clarify that our evaluation framework is tailored for embodied settings, where video predictions must be both visually plausible and physically actionable. Unlike prior work focused on perceptual quality, our task-driven protocol assesses a model’s ability to support real-world decision-making by using predicted videos to control agents via a shared video-to-action policy. We introduce the concept of actionable video prediction—emphasizing instruction alignment and physical feasibility—and combine perceptual and task-based evaluations. While video-to-action may not apply universally, it is well-suited for embodied tasks like robotics and autonomous driving, where grounding in action space is essential. This embodiment-centered perspective brings both novelty and practical value to evaluating generative world models.
---
>Suggestions #1
We address this point in **Appendix C.1**, where we explore how human feedback annotations can be leveraged to further improve the video model’s capabilities through reinforcement learning. Specifically, we conducted experiments using the Open-SORA-Plan framework to fine-tune the model based on human feedback. The results of these experiments are reported in **Appendix Tab.5**, which demonstrate that incorporating human feedback can lead to measurable improvements in planning and control performance.
---
>Suggestions #2, #3
We will fix the typos in the revised version.
---
>Questions: Conclusions and Insights
Our paper highlights several key insights across different sections. In **Section 4.3**, we analyze limitations of current video generation models as world simulators with several physical aspects. **Appendix C.1** shows how human feedback from the HF-Embodied dataset can enhance video quality via reinforcement learning, offering a path to integrate human preferences into training. **Appendix H** further demonstrates how generated videos and our benchmark can benefit VLA methods, revealing broader utility in downstream embodied tasks. Together, these findings provide actionable guidance for future model design, including better multi-modal supervision, improved control fidelity, and the use of synthetic data for generalization.
---
Thank you for the constructive feedback. We will incorporate the above clarifications, examples, and additional experiments into the paper, and we hope these revisions address your concerns. | Summary: This paper proposes a framework that can be used to evaluate world simulators, such as video generation models. The framework is divided into two components: a perceptual evaluation and an embodied evaluation. The perceptual one uses a model trained on human-collected data to score the results. The embodied one measures task success by combining the simulator with a trained video-to-action model.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: n/a
Experimental Designs Or Analyses: yes
Supplementary Material: no
Relation To Broader Scientific Literature: I believe this paper will have some impact to a sizable audience in computer vision and ML communities
Essential References Not Discussed: references are adequate
Other Strengths And Weaknesses: 1. there are limited qualitative results. this makes it very difficult to evaluate the performance
2. can the generated video be applied to improve vision-language-action models?
3. I feel that the proposed method is more like a survey paper, which means that the novelty is limited.
Other Comments Or Suggestions: n/a
Questions For Authors: please see the comments above
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your recognition of our paper:
- **Inspiring for the Community**: *"I believe this paper will have some impact to a sizable audience in computer vision and ML communities."*
- **Comprehensive Literature Review**: *"References are adequate."*
---
>Supplementary Material
Our supplementary material is included in the same PDF file, appended to the end of the main paper.
---
>Qualitative Experimental Results
We appreciate your suggestion regarding the presentation of results. **D.2. Qualitative Results** and **Figure 7** in the supplementary material show some of the qualitative experimental results, and we have also provided corresponding analyses for these results. We have also created an **[Anonymous Website](https://sites.google.com/view/worldsimbench)** to showcase more visualized results, which we believe will enhance the clarity and impact of our paper. We hope this will provide a more comprehensive understanding of the performance of different models on our benchmark.
---
>Discussion of VLA
We appreciate the reviewer for bringing this up—connecting our work to Vision-Language-Action (VLA) models is indeed a valuable perspective. We would like to clarify that we have already discussed this in the supplementary material, specifically in **Section H. Discussion of Vision-Language-Action Models.**
As summarized there, the generated videos can contribute to VLA models in two key ways:
1. As a source of **diverse and scalable training data for imitation learning**, including through hindsight relabeling, which helps models learn from generated successful trajectories.
2. As a means of **reward generation in reinforcement learning settings**, where generated videos can simulate goal states or desirable behaviors, enabling the creation of dense and context-aware reward signals.
This is particularly beneficial for bridging the sim-to-real gap in real-world tasks where explicit reward specification is often challenging.
---
>Survey Paper Concern
We respectfully disagree with the assessment that our paper resembles a survey. While our work does provide a structured taxonomy and draws insights from multiple perceptual and task-based settings, these components are not retrospective summaries, but rather integral parts of a novel and operational evaluation framework.
Specifically, we make the following contributions that go beyond a survey:
- We design and implement a **novel evaluation framework** tailored for assessing video generation models as world simulators—a key open challenge in the community. This framework is not merely a collection of metrics; it includes:
- A **human-aligned perceptual evaluation protocol** that maps cognitive dimensions to measurable factors, supported by the **HF-Embodied dataset** with costly human preference annotations that enrich the quality and depth of the evaluation.
- A **physics-driven embodied task evaluation**, which leverages embodied agents in simulated environments to test the rationality and functional validity of generated videos—something traditional metrics and human studies often overlook.
- We introduce and apply a **taxonomy of perceptual attributes** to systematically assess different facets of human judgment. This taxonomy is used to guide newly designed human studies, not to summarize existing ones.
- We conduct **extensive experiments** across diverse tasks and settings, showcasing the practical utility and diagnostic power of our framework. Our results not only highlight model behaviors but also reveal consistent patterns between perceptual and task-level evaluations—demonstrating the framework’s effectiveness and explanatory value.
In summary, rather than surveying existing work, our paper offers a principled and implementable approach to a previously underexplored problem. We believe this constitutes a significant and novel contribution to the evaluation of video generation systems, and we hope the reviewer will reconsider their assessment in this light.
---
Thank you for the constructive feedback. We will incorporate the above clarifications, examples, and additional experiments into the paper, and we hope these revisions address your concerns. | Summary: The lack of categorization based on inherent characteristics hinders predictive model development, and existing benchmarks fail to evaluate highly embodied models effectively. To address this, this paper introduces WorldSimBench, a dual evaluation framework for World Simulators. It includes Explicit Perceptual Evaluation, using the HF-Embodied Dataset and a Human Preference Evaluator for visual fidelity assessment, and Implicit Manipulative Evaluation, measuring video-action consistency in dynamic environments. Authors evaluate across three embodied scenarios: open-ended environments, autonomous driving, and robot manipulation. It provides key insights to advance video generation models and strengthen their role in embodied AI.
Claims And Evidence: As video generation models continue to advance, we lack a more powerful and comprehensive benchmark. I completely agree with this point, especially since earlier evaluation metrics focused too much on video quality rather than its practical usability—particularly whether the generated videos can serve as a world simulator.
This viewpoint is somewhat obvious to practitioners in the field. Therefore, the key to evaluating this paper lies in whether the benchmark it provides aligns with researchers' needs. Its contribution ultimately depends on the novelty and relevance of the new dataset and evaluation metrics.
Methods And Evaluation Criteria: WorldSimBench evaluates World Simulators at two levels: Explicit Perceptual Evaluation, assessing human-perceived quality, and Implicit Manipulative Evaluation, testing video-to-control translation in closed-loop tasks. It covers three key embodied scenarios: Open-Ended Environments (OE) using Minecraft for complex tasks, Autonomous Driving (AD) for stability in dynamic conditions, and Robot Manipulation (RM) for precise control in physical interactions. These benchmarks provide a comprehensive assessment of a simulator’s effectiveness in real-world tasks.
Theoretical Claims: No theoretical claims are required for review in this work.
Experimental Designs Or Analyses: 8 popular video generation models were evaluated, including Open-Sora-Plan, Lavie, ModelScope, Open-Sora, AnimateDiff, Open-Sora-Plan, Dynamicrafter, and EasyAnimate. These models are assessed through both Explicit Perceptual Evaluation and Implicit Manipulative Evaluation across three scenarios: Open-Ended Embodied Environment, Autonomous Driving, and Robot Manipulation. Each model is fine-tuned on specific datasets corresponding to these embodied scenarios for a comprehensive evaluation.
Supplementary Material: Yes, I’ve gone through it. However, I feel that the authors could present the results of different models on their new benchmark more clearly. Providing more visualized results on the website would be essential for increasing the paper’s impact.
Relation To Broader Scientific Literature: A strong benchmark should highlight the shortcomings of current models and indicate future directions for improvement. Right now, I think the paper could do a better job of making these aspects more visually explicit.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Overall, the evaluation of this work should focus on 3 key aspects:
1. Data Contribution – The novelty and usefulness of the proposed dataset.
2. Metrics and evaluation – Whether the evaluation metrics effectively capture the critical aspects of video generation. How well the benchmark reveals the limitations of current models.
3. Guiding Future Research – Whether it provides clear insights into the next steps for advancing video generation models.
**Data contribution**: HF-Embodied Dataset is useful for constructing evaluation metrics. Large-scale embodied videos sourced from the internet, paired with captions, are used to train data generation models. Fine-grained human feedback annotation is then applied based on the corresponding Task Instruction Prompt List, ensuring coverage across multiple embodied dimensions.
Additionally, instructions for video generation are expanded using GPT and manually verified to construct a comprehensive Task Instruction Prompt List, which serves as a guideline for both data generation and evaluation.
I think this aspect of the contribution is positive, but there is still room to expand the scope of the dataset. For example, since the paper emphasizes embodiment, it would be beneficial to include additional annotated information, such as camera poses and other egomotion data. This could further enhance the dataset’s utility. This is just a suggestion for the authors to consider in future iterations.
**Metrics and evaluation**: The authors correctly focus on instruction alignment and train a Human Preference Evaluator using the HF-Embodied Dataset to evaluate eight models. They then observe that these models struggle with instruction alignment. However, I have two concerns:
1. Effectiveness of Fine-Tuning: The authors fine-tune these models on the new dataset, but different models can exhibit significantly different behaviors depending on how they are fine-tuned. Whether SFT, LoRA, or other techniques are used can also impact the results. How do the authors ensure that their fine-tuning is effective and comparable across models?
2. Choice of Models: The models used in the paper seem somewhat outdated. It would be much more compelling if the evaluation included newer models like CogVideoX, as this would provide a better assessment of current model capabilities.
**Future Direction**: I think this section could be further improved. I would like to see more failure cases of the Human Preference Evaluator, as well as a deeper analysis of when and why current video generation models fail.
The paper could further categorize different prompts to identify specific scenarios where models are more prone to failure. This would provide a more detailed and actionable understanding of instruction alignment challenges in video generation.
Other Comments Or Suggestions: NA
Questions For Authors: I have mentioned this in the previous sections
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your recognition of our paper:
- **Inspiring and Novelty**: *"It provides key insights to advance video generation models and strengthen their role in embodied AI."*
- **Reasonable Evaluation Criteria**: *"These benchmarks provide a comprehensive assessment of a simulator’s effectiveness in real-world tasks."*
- **Useful Dataset**: *"HF-Embodied Dataset is useful for constructing evaluation metrics."*
---
>Qualitative Experimental Results
We appreciate your suggestion regarding the presentation of results. To address this, we have created an **[Anonymous Website](https://sites.google.com/view/worldsimbench)** to showcase more visualized results, which we believe will enhance the clarity and impact of our paper. We hope this will provide a more comprehensive understanding of the performance of different models on our benchmark.
---
>Broader Scientific Literature
We agree that a strong benchmark should clearly expose the limitations of current models and point toward directions for future improvement. While our paper already discusses these aspects—for example, in **Section 4.3** we analyze the physical limitations of current video generation models as world simulators; **Appendix C.1** demonstrates how human feedback from the HF-Embodied dataset can improve video quality via reinforcement learning; and **Appendix H** shows how generated videos can benefit VLA methods and downstream embodied tasks—we acknowledge that these insights could be presented more intuitively. In the revised version, we will make these findings more visually explicit and accessible, helping readers better grasp the limitations of current approaches and the future opportunities they reveal.
---
>Suggestion of Dataset Expansion
Thank you for your thoughtful and constructive feedback. We’re glad to hear that you find the HF-Embodied Dataset valuable, especially in terms of evaluation and task diversity. We appreciate your suggestion regarding the inclusion of additional annotations such as camera poses and egomotion data. This is indeed a valuable direction, and we agree that such information could further enrich the dataset and benefit downstream embodied tasks.
Due to the diverse sources and large scale of our current dataset, obtaining accurate camera pose and egomotion annotations presents certain challenges. However, we recognize their importance and are actively exploring methods—both automated and semi-automated—to incorporate such annotations in future versions of the dataset. We believe this will significantly boost its applicability in more complex embodied scenarios.
Thank you again for the suggestion, it provides a meaningful direction for our future work.
---
>Effectiveness of Fine-Tuning
Thank you for your insightful comment. To ensure the effectiveness and comparability of our fine-tuning process across different models, we used the same dataset for training all models. In order to maintain fairness and optimize performance, we followed the fine-tuning strategies recommended by the authors of each respective model. This approach allows us to ensure that each model is trained in the most optimal way according to its specific architecture and design.
By adhering to the suggested fine-tuning protocols, we aim to eliminate potential biases and ensure that the models' performances are evaluated under comparable conditions. We hope this approach addresses your concern regarding the impact of different fine-tuning methods on the results. Thank you again for raising this important point!
---
>Choice of Models
We have added evaluations on HunyuanVideo, Cosmos Diffusion 7B, and CogVideoX1.5-5B. Due to space limits, please see references in our reply to Reviewer yx3f **More Video Models**.
---
>Future Direction
We appreciate the reviewer’s insightful suggestions regarding failure analysis and prompt categorization. We fully agree that understanding the limitations of the Human Preference Evaluator and identifying failure modes in instruction alignment are meaningful directions for future research.
Specifically, analyzing when and why video generation models fail, and categorizing prompts to uncover scenario-specific weaknesses, could offer valuable insights into the robustness and generalization capabilities of current models. These efforts would require dedicated experiments and analysis, which we believe merit a standalone investigation.
We will incorporate these ideas into the **Future Work** section of the paper to inspire and guide follow-up research in this important area. Thank you again for highlighting this valuable direction.
---
Thank you for your thoughtful and constructive feedback—it has been instrumental in helping us improve the paper. We will incorporate the above clarifications, examples, and additional experiments into the paper, and we hope these revisions address your concerns, and we’d be glad to continue the discussion if you have any further questions or suggestions.
---
Rebuttal Comment 1.1:
Comment: My concerns are well addressed. Therefore I will keep my original rating and recommend this work for accepting. | Summary: This paper proposes WorldSimBench, a benchmark used to evaluate the world simulation performance of video generative models. This paper investigates several video valuation benchmarks and introduces a hierarchy for classifying video models. This paper evaluates several video generative models through proposed Explicit Perceptual Evaluation and Implicit Manipulative Evaluation.
Claims And Evidence: Yes. This paper includes extensive empirical results to support the main claim made in this paper.
Methods And Evaluation Criteria: This paper introduces a Human Preference Evaluator, which can be quite interesting and applicable. Other metrics introduced in this paper also make sense.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes. However, only quantitative experimental results are included in this paper, while qualitative results are missing.
Supplementary Material: No
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: I encourage the authors to include more advanced open-sourced video models for evaluation, such as Hunyuan and Cosmos.
Other Strengths And Weaknesses: ## Strengths
- This paper proposes a new benchmark to evaluate video-based world models, which can be valuable for future research in this area.
- This paper introduces a human preference evaluator and video-to-action evaluation metrics to evaluate the world simulation abilities of video models, which are interesting.
- This paper investigates the world simulation performance of several video generative models.
## Weaknesses
- Only quantitative experimental results are included in this paper, while qualitative results are missing. Including some qualitative visualization can let readers know whether your benchmark is challenging or easy for nowadays video models.
- Some world foundation models instead of traditional video generative models should be evaluated by the proposed benchmark, such as the newly introduced Cosmos.
Other Comments Or Suggestions: NA
Questions For Authors: - Since you mainly evaluate text-based video models, I am curious whether these models exhibit superior world simulation ability in embodied tasks.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your recognition of our paper:
- **Solid Experiment**: *"This paper includes extensive empirical results to support the main claim made in this paper."*
- **Novelty and Soundness**: *"This paper introduces a human preference evaluator and video-to-action evaluation metrics to evaluate the world simulation abilities of video models, which are interesting."*
- **Reasonable Evaluation Criteria**: *"Other metrics introduced in this paper also make sense."*
- **Inspiring for the Community**: *"This paper proposes a new benchmark to evaluate video-based world models, which can be valuable for future research in this area."*
---
>More Video Models
We have added evaluations on **HunyuanVideo**, **Cosmos Diffusion 7B**, and **CogVideoX1.5-5B**:
Evaluation results in **OE**, which can be incorporated into **Table 6** of the supplementary material. The abbreviations are listed in Table 2.
| Model | BC | FC | IA | SA | VC | TJ | EI | Overall |
|---------------|------|------|------|------|------|------|------|---------|
| HunyuanVideo | 1.95 | 2.0 | 2.0 | 1.6 | 2.0 | 2.0 | 1.65 | 1.89 |
| Cosmos | 1.9 | 2.0 | 2.0 | 1.8 | 2.0 | 2.0 | 1.8 | 1.93 |
| CogVideoX | 1.95 | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 | 1.55 | 1.93 |
Evaluation results in **AD**, which can be incorporated into **Table 7** of the supplementary material. The abbreviations are listed in Table 2.
| Model | AE | IA | PV | TJ | KE | SF | Overall |
|---------------|------|------|------|------|------|------|---------|
| HunyuanVideo | 3.75 | 5.0 | 4.1 | 4.8 | 3.4 | 5.0 | 4.34 |
| Cosmos | 4.15 | 5.0 | 4.6 | 4.9 | 3.65 | 5.0 | 4.55 |
| CogVideoX | 3.45 | 5.0 | 3.9 | 4.8 | 3.0 | 5.0 | 4.20 |
Evaluation results in **RM**, which can be incorporated into **Table 8** of the supplementary material. The abbreviations are listed in Table 2.
| Model | AE | BC | FC | IA | PV | TJ | EI | Overall |
|---------------|------|------|------|------|------|------|------|---------|
| HunyuanVideo | 4.0 | 3.9 | 4.0 | 1.9 | 5.0 | 5.0 | 4.1 | 3.99 |
| Cosmos | 4.2 | 4.26 | 4.0 | 2.8 | 5.0 | 5.0 | 4.44 | 4.24 |
| CogVideoX | 3.85 | 4.0 | 4.0 | 2.2 | 5.0 | 5.0 | 4.23 | 4.04 |
---
>Qualitative Experimental Results
**D.2. Qualitative Results and Figure 7** in the supplementary material show some of the qualitative experimental results, and we have also provided corresponding analyses for these results. Additionally, we have showcased more visual results on an **[Anonymous Website](https://sites.google.com/view/worldsimbench)** to address your concerns.
---
>Questions
There are already quite a few papers and projects based on video models for specific tasks. Methods [1, 2, 3, 4] have demonstrated impressive results in robot manipulation and navigation tasks. Furthermore, future video prediction models can leverage Internet-scale pretraining and visual understanding to guide low-level goal-conditioned policies, potentially showing better generalization in new scenarios. However, during the inference phase, these methods tend to incur additional computational overhead, making them slower compared to traditional methods like imitation learning. Nevertheless, with the continued development of models and acceleration techniques, these approaches may hold significant potential and be worth further exploration.
---
Thank you for your constructive and thoughtful comments. They were indeed helpful in improving the paper. We will incorporate the above clarifications, examples, and additional experiments into the paper, and we hope these revisions address your concerns. If you have any further questions, please do not hesitate to continue the discussion with us. We hope this will provide a more comprehensive understanding of the performance of different models on our benchmark.
---
**References**
[1] Learning Universal Policies via Text-Guided Video Generation
[2] Compositional Foundation Models for Hierarchical Planning
[3] VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation
[4] Grounding Video Models to Actions through Goal Conditioned Exploration
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for adding additional experiments to address my concerns. Since my main concerns have been addressed adequately, I decided to raise my score to 4 to support this work. | null | null | null | null | null | null |
History-Guided Video Diffusion | Accept (poster) | Summary: This paper delves into video diffusion models, aiming to extend classifier-free guidance (CFG) to video diffusion with variable-length history frames. The authors identify two key challenges: architectures supporting only fixed-size conditioning and poor performance of CFG-style history dropout. To tackle these, they propose the Diffusion Forcing Transformer (DFoT). DFoT is a video diffusion architecture with a theoretically grounded training objective, enabling flexible conditioning on a varying number of history frames. It expands the “noising-as-masking” paradigm to non-causal transformers and is compatible with existing architectures.
Claims And Evidence: DFoT's Performance: The claim that DFoT surpasses baselines in video generation is supported by experiments on datasets like Kinetics 600. Comparing DFoT with standard diffusion (SD), binary-dropout diffusion (BD), and full-sequence diffusion with reconstruction guidance (FS), the authors show that DFoT has better Fréchet Video Distance (FVD) scores and generates higher-quality samples.
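For readers unfamiliar with the metric cited above: FVD is the Fréchet distance between Gaussian fits of deep video features computed on real vs. generated clips (the original metric uses I3D features and full covariance matrices with a matrix square root). Below is a minimal sketch of the underlying formula, assuming diagonal covariances for simplicity; `frechet_distance_diag` is a hypothetical helper name, not code from the paper.

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    # Fréchet distance between N(mu1, diag(var1)) and N(mu2, diag(var2)):
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}), which for diagonal
    # covariances reduces to elementwise sums. Lower scores mean the
    # generated feature distribution is closer to the real one.
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return float(mean_term + cov_term)

# Identical feature statistics give distance 0; shifting one mean adds
# its squared norm to the score.
print(frechet_distance_diag([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(frechet_distance_diag([1.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 1.0
```

The full metric applies a matrix square root (e.g. `scipy.linalg.sqrtm`) to the covariance product; the diagonal case here only illustrates why matched means and variances drive the score to zero.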
History Guidance: The effectiveness of History Guidance methods is well-evidenced. For instance, on Kinetics-600, vanilla history guidance improves frame quality and consistency, and fractional history guidance addresses the issue of static videos. In tasks like handling out-of-distribution history on RealEstate10K, DFoT with temporal history guidance outperforms baselines.
Long-Video Generation: DFoT's ability to generate ultra-long videos is demonstrated by creating an 862-frame navigation video on RealEstate10K, far exceeding the capabilities of prior methods.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The authors provide a theoretical justification for the training objective through a variational lower bound. They derive an Evidence Lower Bound (ELBO) corresponding to the DFoT training objective, showing that it optimizes a reweighting of the ELBO on the expected log-likelihoods. This theoretical foundation adds credibility to the DFoT architecture and its training process. However, further discussion on the practical implications of these theoretical results, such as how the reweighting affects DFoT's performance in different video generation tasks, would be beneficial.
Experimental Designs Or Analyses: Experimental Designs: The experimental designs are sound. The authors conduct experiments on multiple datasets with diverse characteristics, demonstrating the generality of their methods. For example, Kinetics-600 is used for benchmarking and quantitative comparisons, while RealEstate10K, Minecraft, and Fruit Swapping are used to study new applications. The use of sliding window rollout in experiments on Kinetics-600 to test video generation consistency is a valid approach.
Analyses: The analyses of the experimental results are comprehensive. The authors report numerical results of evaluation metrics and provide qualitative analyses, such as visualizations of generated videos. This helps readers better understand the performance of DFoT and History Guidance methods. However, in some experiments, like the long-context generation in Minecraft, a more in-depth analysis of the trade-offs between different factors, such as the balance between long-term memory and robustness to out-of-distribution history, would be valuable.
Supplementary Material: Yes. I checked the videos.
Relation To Broader Scientific Literature: Video Diffusion Models: In the field of video diffusion models, the paper addresses the limitations of existing architectures that can only support fixed-length conditioning. It also improves upon the CFG-style history dropout approach, which has been shown to be suboptimal. The proposed methods offer new ways to condition on history frames, enhancing the performance of video diffusion models in terms of quality, consistency, and the ability to generate long videos.
Essential References Not Discussed: There are no obvious essential references that are not discussed in the paper.
Other Strengths And Weaknesses: Strengths:
1. **Significance**: The research has significant implications. Improving the quality and consistency of video generation, as well as enabling the generation of ultra-long videos, has potential applications in many fields, such as robotics, virtual reality, and video production.
2. **Effective History Guidance Methods**: The History Guidance (HG) family of methods, enabled by DFoT, offers a powerful way to leverage history in video generation. Vanilla History Guidance (HG-v) alone significantly improves video quality and temporal consistency. More advanced methods, which combine Temporal History Guidance (HG-t) and Fractional History Guidance (HG-f), further enhance motion dynamics, enable generalization to out-of-distribution history, and can stably roll out extremely long videos.
Weaknesses:
1. **Video Quality and Diversity**: The generated videos have relatively low resolution and limited scene variety. For example, in RealEstate10K experiments, the navigation videos lack complexity and high-resolution details, restricting practical applications. The authors are encouraged to experiment with more diverse datasets or stronger pretrained models such as CogVideoX.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for finding our work significant and effective. Below, we address your concerns with further explanation, additional ablations, and results of fine-tuning a video foundation model into DFoT.
> ### **Q1. Practical implications of theoretical results (ELBO)**
Thank you for your insightful question. In practice, reweighting terms in the diffusion ELBO are often dropped for simplicity, as in the original DDPM paper. Therefore, it’s difficult to compare DFoT’s ELBO and vanilla diffusion’s ELBO in practice since both of them train on simplified objectives. However, our **Appendix C.2 - Diffusion** does provide some practical insights: higher noise levels mask out a greater portion of information from the original sequence. As noted in previous works [1, 2], and in our findings (Appendix C.2), a diffusion model's performance is highly sensitive to signal redundancy. Specifically, when the denoising task contains a lot of redundant signals, training should emphasize the objective at high noise levels. In DFoT, some tasks (e.g. unconditional, image-to-video) provide smaller amounts of conditioning frames and are thus less redundant, while other tasks (e.g. video-to-video) are the opposite. Thus, the latter benefit from higher emphasis on high noise levels. It is therefore expected that **as the reweighting $\omega$ biases towards high noise levels, DFoT's performance shifts towards tasks with more conditioning frames, and vice versa**. Accordingly, reweighting and noise schedules should be carefully selected based on this analysis, tailored to the downstream tasks.
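The reweighting discussed above can be written as the standard weighted diffusion objective (a generic sketch of the weighted $\epsilon$-prediction loss, not the paper's exact per-frame formulation; in DFoT the noise level $t$ is sampled independently per frame):

```latex
\mathcal{L}(\theta)
= \mathbb{E}_{x_0,\, t,\, \epsilon}\!\left[\, \omega(t)\, \big\lVert \epsilon - \epsilon_\theta(x_t, t) \big\rVert^2 \,\right],
\qquad
x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \epsilon
```

Setting $\omega(t) = 1$ recovers the simplified DDPM objective; biasing $\omega$ toward high noise levels (small $\bar\alpha_t$) emphasizes the low-redundancy regime discussed above.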
> ### **Q2. Trade-offs in long-context generation**
We present an ablation studying how HG-t trades off long-term memory and quality. We conducted the Minecraft long context generation experiments (**Section 6.4 - Task 2**) with varying weights of combining short and full history. We report the results in **Figure R3 on [our anonymous website](https://anon-dfot.github.io/)**. The Minecraft dataset was specifically designed by TECO [5] to be deterministic under an action-conditioned setting. We, therefore, directly use a deterministic metric LPIPS to measure long-context consistency. In addition, we use frame-wise FID score as a measure for image-only quality, disregarding consistency. The resulting plot shows a clear reverse trend between quality and long-context memory as the weight for full-context increases, indicating that HG-t can trade off long-horizon memory for improved quality robustness by mixing a short-history model.
> ### **Q3. Video Quality and Diversity**
We are excited to report enhanced video generation results using a larger DFoT model at higher resolution, featuring complex and diverse scenes. Please see **Figure R1 on [our anonymous website](https://anon-dfot.github.io/)** for the results. We fine-tuned the 1.3B T2V foundation model [3] to DFoT on a subset of the Panda-70M dataset [4] for 20K steps. The base model is text-to-video only, so our reported image-to-video results are largely due to DFoT, not the base model. Here is a summary of the results:
- **Improved video quality, diversity**: The generated videos have a higher resolution (832x480) and feature more dynamic, complex scenes with enhanced quality. Although the original Wan2.1 model was not trained to condition on images and our fine-tuning dataset is limited to human-action videos, our model effectively generalizes to generate diverse scenes from random web images, ranging from human and animal actions to nature scenes and moving objects.
- **Scalability of DFoT and History Guidance**: Our method effectively scales to larger models and datasets, while maintaining its advantages. It handles flexible-length history (Figure R1a: conditioning on a single frame or on the previous 13 frames during rollout; Figure R1b: interpolation between the first and last frames). DFoT also stably generates long 217-frame videos via sliding window rollouts (Figure R1a), which is substantially longer than the 49-frame context window used during fine-tuning. Lastly, Figure R1b shows that the benefits of history guidance persist at larger scales, significantly enhancing video quality and consistency.
Overall, these results demonstrate that our method, beyond Re10K experiments, can effectively scale to more complex video generation tasks with superior performance, highlighting its practical applicability.
> ### References
- [1] Chen, T. "On the importance of noise scheduling for diffusion models." arXiv 2023.
- [2] Hoogeboom et al. "Simple diffusion: End-to-end diffusion for high resolution images." ICML 2023.
- [3] WanTeam et al. "Wan: Open and Advanced Large-Scale Video Generative Models." arXiv 2025.
- [4] Chen et al. "Panda-70m: Captioning 70m videos with multiple cross-modality teachers." CVPR 2024.
- [5] Yan et al., “Temporally Consistent Transformers for Video Generation” | Summary: The paper introduces the Diffusion Forcing Transformer (DFoT), a video diffusion architecture that extends diffusion forcing with a theoretically grounded training objective that enables conditioning on a flexible number of history frames. On top of diffusion forcing using transformer, the authors introduce History Guidance – a family of guidance strategies unique to their architecture where the combination of time and frequency guidance achieves the best in motion dynamics and long-term coherence.
## update after rebuttal
After the rebuttal, most of my concerns have been addressed except for the apparent artifacts in the additional results of model scalability. It is understandable that scaling on the 1.3B model within the rebuttal period with limited resources is hard. Therefore, the scalability of DFoT could be a future direction down the road. Still, this does not affect the fact that this is a strong ICML submission. In light of this, I will keep my original rating.
Claims And Evidence: Claim1: The most important claim is that the approach enables generating extremely long videos stably, far beyond prior art. This is enabled by rolling out DFoT with the proposed history guidance. The results on the website validate this claim by presenting in-domain visualization results.
Claim2: Compositional generation with long-context is another claim. This is validated on Minecraft video dataset and robotic fruit swapping trajectory.
Claim3: The paper claims DFoT with temporal guidance can handle OOD history inputs (extreme camera poses) that cause other models to fail. This is partially validated by a stress test on RealEstate10K where abrupt viewpoint changes are introduced between conditioning frames as OOD camera poses. Still, visual domain generalization is not validated.
Methods And Evaluation Criteria: The paper introduces the Diffusion Forcing Transformer (DFoT), a video diffusion architecture that extends diffusion forcing with a theoretically grounded training objective that enables conditioning on a flexible number of history frames. Instead of feeding a fixed set of past frames as conditioning, DFoT noises each frame independently during training. On top of diffusion forcing using transformer, the authors introduce History Guidance – a family of guidance strategies unique to their architecture where the combination of time and frequency guidance achieves the best in motion dynamics and long-term coherence.
The authors benchmark their base model against strong baselines on standard video generation metrics, i.e., FVD and VBench scores. For deterministic tasks such as next-frame prediction in robotics, LPIPS is reported. Overall, the evaluation is sufficient and solid.
Theoretical Claims: The authors claim that their DFoT training objective is theoretically well-founded, specifically that it optimizes a valid likelihood-based objective (rather than being a heuristic) as Theorem 4.1. In specific, the per-frame noise training loss optimizes a reweighting of an Evidence Lower Bound (ELBO) on the expected log-likelihoods.
Experimental Designs Or Analyses: Overall, the evaluation is sufficient and solid.
The authors benchmark their base model against strong baselines on standard video generation metrics, i.e., FVD and VBench scores. For deterministic tasks such as next-frame prediction in robotics, LPIPS is reported. Datasets are extensively explored, including Kinetics, RealEstate10K, Minecraft, and real-world robotic tasks. The analysis and ablation studies are also sound and satisfying.
Supplementary Material: I appreciate the efforts that the authors provide sufficient supplementary material including appendix and project website page for a more comprehensive understanding and evaluation of their work.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths are extensively discussed in previous sections. I would mention several weaknesses here:
1. One inherent downside of the history guidance approach is that sampling requires multiple model evaluations per timestep. For CFG (and thus HG-v), you already need two forward passes (conditional & unconditional). For HG-t or HG-f, you might need even more (e.g. one for each history window or each noise level setting) to then combine them. Treating them as a batch might mitigate this issue, but it still takes more computational budget.
2. The scalability of DFoT is not validated. All the datasets used for training are relatively limited in scale. The potential of DFoT in scaling up is not validated and might be worth exploring in the future.
3. Only in-domain (visual domain) results are presented for ultra-long video generation. I’m wondering if DFoT can reasonably generalize to arbitrary in-the-wild input.
4. The method is specialized to using visual history frames as conditioning. It does not experiment with text-to-video or audio-conditioned video, etc. While not exactly a weakness (the paper already has a broad scope), it means the benefits are demonstrated only in the context of visual conditioning. An open question is how DFoT would work if the model also had a text prompt – can it combine text and flexible frame guidance?
Other Comments Or Suggestions: NA
Questions For Authors: Please refer to the Strengths and Weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your positive comments on the strengths of our method, theoretical justification, and extensive analysis. Below, we address questions on sampling efficiency and scalability/generalizability of DFoT, including new results from fine-tuning a 1.3B text-to-video foundation model (see **[anonymous website](https://anon-dfot.github.io/)** for **Figures R1, R2**):
> ### **Q1. Sampling efficiency of History Guidance**
As you noted, history guidance requires multiple function evaluations (NFE) per timestep, but this doesn't mean a larger sampling budget is needed. Rather, **Figure R2 on [our anonymous website](https://anon-dfot.github.io/)** shows that **history guidance offers better sampling efficiency than sampling without it**.
- **Total sampling budget**: Total sampling cost depends not only on NFE per timestep but also on the number of timesteps: $\text{Total NFE} = \text{NFE/timestep} \times \text{\# timesteps}$
- **Figure R2**: We compared the performance with/without history guidance under the same total sampling budget (i.e. same total NFE). With the same total NFE, history guidance substantially outperforms sampling without it (mostly HG-f > HG-v > w/o HG). This shows that, despite requiring more NFE per timestep, history guidance can be performed efficiently by reducing the number of timesteps, thereby achieving better performance with the same total sampling cost.
- **Parallelization**: As you mentioned, even under the same total NFE, history guidance can be performed in parallel by stacking score estimations across batches for faster computation, as we do in our implementation (Appendix C.2 - Sampling).
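The batched guidance combination described above can be sketched as follows. This is a generic CFG-style sketch with hypothetical score values, not the authors' implementation: each row of the batch holds a score estimate under one conditioning variant (e.g. fully masked vs. full history), and the variants are combined with guidance weights in a single vectorized step.

```python
import numpy as np

def guided_score(scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine a batch of per-variant score estimates into one guided score.

    scores: shape (num_variants, ...), one score estimate per conditioning variant.
    weights: shape (num_variants,), guidance weights summing to 1.
    Returns sum_i weights[i] * scores[i].
    """
    return np.tensordot(weights, scores, axes=1)

# Two variants: fully-masked history (unconditional) and full history (conditional).
s_uncond = np.array([0.1, 0.2])  # hypothetical score estimate
s_cond = np.array([0.3, 0.6])    # hypothetical score estimate

# Classifier-free-guidance-style weights with guidance scale w:
# (1 - w) * s_uncond + w * s_cond  ==  s_uncond + w * (s_cond - s_uncond)
w = 2.0
weights = np.array([1.0 - w, w])

s = guided_score(np.stack([s_uncond, s_cond]), weights)  # -> [0.5, 1.0]
```

Because the combination is a single weighted sum over the batch dimension, all variant evaluations can be stacked and run through the model in one parallel forward pass, which is the batching strategy the rebuttal refers to.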
> ### **Q2. Scalability of DFoT**
We are pleased to report strong results of scaling up DFoT. We fine-tuned the Wan2.1 T2V-1.3B model [1] to DFoT on a subset of the Panda-70M dataset [2] for 20K steps. We note that this model is text-to-video only, so its history conditioning capability is entirely due to DFoT. Despite limited compute and data, **Figure R1 on [our anonymous website](https://anon-dfot.github.io/)** demonstrates the successful scaling while maintaining its advantages:
- **Handling flexible-length history**: The base Wan2.1 model can only generate videos given a text prompt, not being able to condition on history frames. After fine-tuning, DFoT can effectively handle different history-conditional video generation tasks. Specifically, Figure R1a uses a single frame as the initial conditioning frame and then conditions on the previous 13 frames during rollout, while Figure R1b shows interpolation between the first and last frames.
- **Long video generation**: Figure R1a shows DFoT's long video generation capability still holds at larger scales. We present 217-frame videos via sliding window rollouts, which is substantially longer than the 49-frame window used during fine-tuning.
- **History guidance**: Figure R1b compares frame interpolation results with/without history guidance, showing that history guidance improves video quality and consistency even at larger scales.
> ### **Q3. Generalizing to (visual) out-of-domain**
The video generation results in **Figure R1** demonstrate that DFoT can **generalize to arbitrary in-the-wild visual input**. We fine-tuned on a subset of Panda-70M filtered to include only 190K videos with human actions. Despite this, DFoT generalizes well to various in-the-wild visual inputs from random internet images (not part of the training set), such as animals, nature scenes, and moving objects. This capability is not merely due to the base model, which is text-to-video only; DFoT provides the image-conditioning ability.
> ### **Q4. Beyond visual conditioning (text prompt)**
Our fine-tuning results further demonstrate that **DFoT is highly compatible with other types of conditioning**, such as text prompts. With just 18 lines of code changes, the fine-tuned model can condition on varying-length history. All videos in **Figure R1** are generated by jointly conditioning using both text prompts and history frames, and adhering closely to both. Specifically, we utilize both text guidance and history guidance during sampling, with different guidance scales for each. While it is well-known that text guidance enhances generation results, ablation in **Figure R1b** shows that history guidance further improves the quality and consistency, when combined with text guidance. We believe DFoT's sliding window rollout can further synergize with text prompt conditioning by specifying different text prompts for each sliding window. Additionally, DFoT has already shown compatibility with camera pose conditioning in Re10K experiments, and we believe it can extend to other sequential conditioning (e.g. audio).
> ### References
- [1] WanTeam et al. "Wan: Open and Advanced Large-Scale Video Generative Models." arXiv 2025.
- [2] Chen, Tsai-Shien, et al. "Panda-70m: Captioning 70m videos with multiple cross-modality teachers." CVPR 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed response. Most of my concerns have been addressed except for the apparent artifacts in the additional results of model scalability. It is understandable that scaling on the 1.3B model within the rebuttal period with limited resources is hard. Therefore, the scalability of DFoT could be a future direction down the road. Still, this does not affect the fact that this is a strong ICML submission. In light of this, I will keep my original rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and appreciation!
Aside from the limited resources, we'd like to highlight that the base model was a 1.3B **text**-to-video model. Traditionally, a 1.3B model is not sufficient to do well on **image**-to-video generation, and its outputs would inevitably contain artifacts. For example, almost all pepper-cutting (R1-b-mid) results you see online are generated in a **text**-to-video setting or with a much bigger image-to-video model, while we are sampling in the harder **image**-to-video setting with only a smaller model. Therefore, we respectfully suggest that the observed artifacts be interpreted in this context.
If you have further questions, please let us know, and we will be happy to provide further explanations! | Summary: Classifier-free guidance (CFG) greatly improves conditional generation in diffusion models, but applying it to video diffusion—where the number of context frames can vary—introduces significant challenges. Existing architectures often restrict conditioning to a fixed size, and CFG-style history dropout is ineffective. In response, the Diffusion Forcing Transformer (DFoT) and its associated History Guidance methods enable flexible conditioning, substantially enhancing video quality and consistency over potentially very long sequences.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, I checked the theoretical claims of lower bound evidence.
Experimental Designs Or Analyses: Yes. Very detailed ablations regarding history guidance and the Diffusion Forcing Transformer.
Supplementary Material: Yes. The video results.
Relation To Broader Scientific Literature: Introducing a groundbreaking method of history guidance for extended video generation using diffusion models
Essential References Not Discussed: The reference part is good to me.
Other Strengths And Weaknesses: Strengths:
Proposes a novel strategy for producing extended or even infinite-length videos.
The idea of training independent noise levels for each frame in a video diffusion framework is innovative and warrants further investigation.
Weaknesses:
It would be valuable to see an ablation study comparing training with independent noise levels for the history frames versus using a uniform noise level for the generated frames.
The long video demonstrations primarily focus on static scenes, leaving open questions about performance on lengthy sequences of human actions (e.g., in Kinetics-600).
Although the approach seems straightforward to adapt to top-tier video generation models like CogVideo (through additional fine-tuning), I’m curious how well it would perform in creating long videos when combined with state-of-the-art model and the proposed training as well as guidance methods.
Other Comments Or Suggestions: Please see the above part.
Questions For Authors: Please see the above part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your positive comments on the novelty and theoretical support of our method, and extensive experimental results. Below, we address your questions by presenting enhanced long video generation achieved through fine-tuning a 1.3B text-to-video foundation model and providing an ablative comparison of the training objectives.
> ### **Q1. Long video generation on dynamic scenes, with SOTA foundation models**
While our original results were limited by our compute and data, we are pleased to report positive outcomes by fine-tuning a state-of-the-art video generative model with DFoT using limited compute. We fine-tuned the Wan2.1 T2V-1.3B model [1], a leading SOTA model, to DFoT for only 20K steps on a subset of the Panda-70M dataset [2]. This was achieved with just 18 lines of code changes, thanks to DFoT’s simplicity. Please see **Figure R1 on [our anonymous website](https://anon-dfot.github.io/)** for our enhanced long video generation results. Key findings include:
- **Scalability of DFoT and History Guidance**: Our method's strengths remain robust at larger scales and in real-world applications. First, we present long video generation results (**Figure R1a**), where DFoT stably generates long 217-frame videos via sliding window rollouts. This is substantially longer than the 49-frame context window used during fine-tuning. Additionally, DFoT handles flexible-length history effectively; Figure R1a shows videos generated by initially conditioning on a single frame and then on the previous 13 frames during rollout. Figure R1b demonstrates video generation by interpolating between the first and last frames. Lastly, we present a direct ablation study on the effectiveness of history guidance (**Figure R1b**). Similar to our paper results, history guidance significantly improves the video quality and consistency.
- **Dynamic scenes**: Unlike our Re10K demonstrations focusing on static scenes, the generated videos in **Figure R1** are dynamic scenes with human/animal actions and moving objects, at higher resolution. This demonstrates our method's capability to perform well in complex, realistic scenarios, beyond static ones.
> ### **Q2. Ablation on independent vs. uniform noise levels**
There could be multiple interpretations of your suggestion, so we address each of them below:
#### Interpretation 1: Independent vs. uniform noise levels
**Section 6.2**, **Table 1**, and **Figure 4** compare our independent noise level training objective against uniform noise levels. Our independent noise level approach outperforms these baselines using uniform noise levels, such as SD, FS, and BD, both quantitatively (Table 1) and qualitatively (Figure 4). This demonstrates that our proposed training objective offers advantages not only in flexibility (conditioning on arbitrary history frames) but also in performance.
#### Interpretation 2: Fully independent vs. Partially independent noise levels
Alternatively, your question might ask whether DFoT can benefit from training with independent noise levels for some frames and uniform noise levels for others.
To clarify, our problem statement defines history as **any** subset of the frames. While sampling distinguishes between history and generated frames, training does not, to encompass all possible history conditioning during sampling. Since training with partially independent noise levels would deviate from our problem statement, we discussed it in Appendix A.5 for clarity.
As detailed in **Appendix A.5**, if only a subset of history conditioning schemes is desired, fully independent noise may not be optimal due to unnecessary complexity as the number of frames increases. Therefore, we explored partially independent noise level training. Given the maximum length of history frames we aim to support, independent noise levels are only applied up to this maximum length, with uniform noise levels assigned to the remaining frames. Our Minecraft model, which processes 50 latent frames, used a simplified training objective with a maximum length of 25 latent frames. This approach improved training efficiency and video generation performance compared to fully independent noise levels. However, we also found that such efficiency gains are not noticeable for fewer frames. Since modern video foundation models [1] often feature around 20 latent frames, partially independent noise does not significantly increase efficiency, aligning with Diffusion Forcing [3]'s finding that temporal complexity is minor compared to visual complexity.
> ### References
- [1] WanTeam et al. "Wan: Open and Advanced Large-Scale Video Generative Models." arXiv 2025.
- [2] Chen et al. "Panda-70m: Captioning 70m videos with multiple cross-modality teachers." CVPR 2024.
- [3] Chen et al. “Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion.” NeurIPS 2024 | Summary: This paper regards the noise in the diffusion process as a form of masking, integrating history frames and generated frames into a unified Diffusion Forcing Transformer (DFoT) framework. By combining different masking strategies for history information, the paper proposes several History Guidance (HG) methods, enhancing generation quality, dynamics, and historical consistency. Experimental results show the advantages of the proposed methods in long video generation and many downstream tasks.
After the rebuttal, the authors addressed my concerns.
Claims And Evidence: This work innovatively introduces the Diffusion Forcing Transformer, integrating generated frames and history conditions within a unified framework. The authors also propose multiple History Guidance methods, with HG-v already showing performance improvements. Variants HG-t and HG-f further enhance model generalization and dynamic generation capabilities.
However, although the paper claims to support flexible-length history information, experiments do not clearly demonstrate the advantage of History Guidance in ultra-long video generation. The incremental extension method for "unlimited-length" videos achieves coherent transitions but lacks semantic continuity (e.g., supplementary material video i2v_dfot_long_1.mp4, from 11 to 24 seconds). Although HG-t shows potential in integrating long-term and short-term history contexts, its effectiveness has not been validated in general video generation tasks, limiting the method's practical applicability.
Methods And Evaluation Criteria: Yes. The paper proposes DFoT to address high-quality and consistency issues in long video generation. The experiments validate the model’s capabilities using the Kinetics-600, RealEstate10K, Minecraft, and Fruit Swapping datasets, demonstrating its effectiveness in long video generation tasks.
Theoretical Claims: This paper is not primarily a theoretical contribution. Nevertheless, it provides theoretical support for DFoT’s optimization objectives based on Evidence Lower Bound.
Experimental Designs Or Analyses: The flexibility and effectiveness of DFoT are evaluated on Kinetics-600, RealEstate10K, Minecraft, and Fruit Swapping datasets, as well as in tasks such as video prediction, frame interpolation, long video generation, and imitation learning.
Supplementary Material: This paper provides extensive supplementary materials with detailed evaluations of generation quality, implementation details, and results from various downstream tasks.
Relation To Broader Scientific Literature: This work provides insights into semantically consistent long video generation.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The supplementary materials are extensive, providing impressive experimental results.
Other Comments Or Suggestions: None
Questions For Authors: I am curious about the transition mechanisms between different segments in long video generation, particularly the role of user interactions (there seems to be significant discontinuity between different segments).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your positive comments on our method's innovation, theoretical support, and evaluation. Below, we address concerns on long video generation and present new results from fine-tuning a 1.3B text-to-video foundation model to DFoT.
> ### **Q1. Advantage of our method in ultra-long video generation**
Long video generation quality is measured by two factors: rollout stability and long-context semantic continuity. Here, we emphasize our method’s exceptional stability, while semantic continuity is discussed in Q2.
Stability refers to maintaining video quality as models condition on previously generated, potentially erroneous frames. Achieving stable rollouts via sliding windows has historically been challenging due to compounding errors [1], with even large-scale foundation models often failing. As noted in Appendix D.6, Re10K is a challenging dataset, and the previous state-of-the-art [2] was limited to 32 frames before failure. This underscores why our method's ability to generate nearly 1000 frames marks a significant improvement in long-rollout stability. Furthermore, Appendix D.6 and Figure 9 show DFoT and history guidance are essential for ultra-long video generation.
> ### **Q2a. Coherent transitions but lacking long-context semantic continuity**
While we acknowledge that our Re10K demos lack long-context semantic continuity, we emphasize that **this limitation is not inherent to our method, DFoT and History Guidance**, which perform well on Minecraft, designed to test long-context consistency. Instead, the issue arises from limited context window size due to computational constraints and the limited semantic length of the Re10K dataset:
- Limited compute: Our Re10K model is trained with a context window size of only 8 frames, as training with larger context windows becomes increasingly expensive. This results in an overlap of only 4 frames between successive sliding windows, insufficient for long-memory in 3D space.
- Limited training data: The Re10K dataset consists of semantically short video clips without loop closures, typically covering a single room/area (Appendix D.6). Thus, the model cannot learn to maintain semantic continuity over greater lengths (e.g., multiple rooms/areas).
We further argue that the **semantic continuity of the generated videos scales with the semantic length of the training data and the context window size**. Beyond strong performance on Minecraft, we are pleased to present new long video generation results in **Figure R1a on [our anonymous website](https://anon-dfot.github.io/)**. We fine-tuned the Wan2.1 T2V-1.3B model [3] to DFoT on a subset of the Panda-70M dataset [4]. With a larger context window size and longer semantic length of the training data, the results are more coherent and semantically consistent than those from Re10K. This demonstrates that the semantic continuity of generated videos using our method can be improved by simply scaling the training data and context window size.
> ### **Q2b. Transition mechanisms between different segments in long video generation**
As detailed in **Appendix C.9**, our Re10K videos are generated by user input of camera poses for each sliding window. For simplicity, we've implemented a basic transition mechanism where the camera pose changes at a constant rate within each sliding window, based on specified distance and rotation angles. This setup causes discontinuous transitions due to velocity changes between sliding windows. We clarify that the **discontinuous transitions stem from the simplicity of our current implementation of camera pose transitions, not a limitation of our method**. Smoother transitions could be achieved with a more advanced mechanism.
> ### **Q3. HG-t's effectiveness in general video generation tasks**
**Section 6.4 demonstrates the practical applicability of HG-t**. We show that HG-t addresses three critical challenges: OOD history, long context generation, and imitation learning. At the core of these tasks is the OOD nature of observed history frames. As noted in **Section 5**, the amount of required training data grows exponentially with the length of conditioning history. Consequently, the OOD problem frequently arises in tasks that require understanding long histories, such as video-to-video generation, and HG-t effectively addresses this core issue. Additionally, the history guidance framework's strength lies in its techniques (HG-v, HG-t, and HG-f) offering unique advantages, allowing the best technique to be chosen based on specific needs of general video generation tasks.
> ### References
- [1] Qiu et al. "Freenoise: Tuning-free longer video diffusion via noise rescheduling." ICLR 2024.
- [2] Watson et al. “Controlling space and time with diffusion models.” ICLR 2024.
- [3] WanTeam et al. "Wan: Open and Advanced Large-Scale Video Generative Models." arXiv 2025.
- [4] Chen et al. "Panda-70m: Captioning 70m videos with multiple cross-modality teachers." CVPR 2024. | null | null | null | null | null | null |
Causal Discovery from Conditionally Stationary Time Series | Accept (poster) | Summary: This paper extends the existing work in time series causal discovery from stationary data to conditionally stationary time series data, which is stationary if conditioned on the latent states. The authors propose a conditional summary graph to represent the causal structure and give the identifiability results. Empirical experiments on semi-synthetic and real-world data show the superior performance over the baselines.
## update after rebuttal
I will keep my score after reading the response and the other reviews.
Claims And Evidence: The claims made in the submission (e.g. establishing the identifiability for the conditional summary graph and related structural properties) are supported by the clear definition (Def. 4.1), reasonable assumptions, and rigorous theoretical results (Thm. 4.3). The experimental results, conducted under the authors’ specified settings, provide convincing empirical validation. However, providing more real-world examples for the three classified scenarios of states would improve the paper’s practical relevance.
Methods And Evaluation Criteria: Overall, the proposed State-Dependent Causal Inference (SDCI) framework makes sense for the problem of causal discovery in conditionally stationary time-series data. The probabilistic formulation, graph-based modeling, and variational inference provide a compelling approach. However, the clarity of the notations and the derivations of the formulations could be further improved.
Theoretical Claims: Overall, the proofs provided for the Thm. A.6, Thm. A.7, and Corollary A.8 seem correct. The reasoning effectively utilizes key assumptions to establish partial identifiability up to permutation equivalence. Some minor refinements could further enhance accessibility and readability.
1. The theorem appears to rely on several existing results. Providing clearer references or explanations in the appendix regarding the specific works used would enhance clarity and attribution.
2. Additionally, for the identifiability of finite Gaussian mixture, does it need any regularity assumptions? Please clarify these assumptions if you are going to use the results.
Experimental Designs Or Analyses: The experimental design is well-structured and appropriate, leveraging both semi-synthetic and real-world data. Careful modifications and adjustments have been applied to the datasets to ensure smooth comparisons. However, the evaluation criteria could be clarified further. For example, in Fig. 6, does the fixed graph correspond to the summary graph for the full-time graph, while the variable graph represents the conditional summary graph? Explicit clarification of these evaluation targets would enhance the interpretability of the results.
Supplementary Material: Please see the Theoretical Claims and the Experimental Designs or Analyses parts.
Relation To Broader Scientific Literature: This work is related to causal discovery for nonstationary time-series and multi-domain data (Yao, 2022; Varambally, 2024; Song, 2024; Huang, 2015, 2019, 2020; Gong, 2022; Zhang, 2017; Ghassami, 2018). The conditional summary graph proposed by the authors provides a more compact representation than the full-time and regime-dependent graphs. The identifiability results over hidden latent states are novel.
References:
1. Yao, W., Sun, Y., Ho, A., Sun, C., and Zhang, K. Learning temporally causal latent processes from general temporal data. In International Conference on Learning Representations, 2022.
2. Varambally, S., Ma, Y., and Yu, R. Discovering mixtures of structural causal models from time series data. Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 49171–49202. PMLR.
3. Huang, B., Zhang, K., and Scholkopf, B. Identification of time-dependent causal model: A gaussian process treatment. In Twenty-Fourth international joint conference on artificial intelligence, 2015.
4. Huang, B., Zhang, K., Gong, M., and Glymour, C. Causal discovery and forecasting in nonstationary environments with state-space models. In International Conference on Machine Learning, pp. 2901–2910. PMLR, 2019.
5. Huang, B., Zhang, K., Zhang, J., Ramsey, J. D., Sanchez-Romero, R., Glymour, C., and Scholkopf, B. Causal discovery from heterogeneous/nonstationary data. J. Mach. Learn. Res., 21(89):1–53, 2020.
6. Gong, W., Jennings, J., Zhang, C., and Pawlowski, N. Rhino: Deep causal temporal relationship learning with history-dependent noise. In NeurIPS 2022 Workshop on Causality for Real-world Impact, 2022.
7. Zhang, K., Huang, B., Zhang, J., Glymour, C., and Scholkopf, B. Causal discovery from nonstationary/heterogeneous data: Skeleton estimation and orientation determination. In IJCAI: Proceedings of the Conference, volume 2017, pp. 1347. NIH Public Access, 2017.
8. Ghassami, A., Kiyavash, N., Huang, B., and Zhang, K. Multi-domain causal structure learning in linear systems. Advances in neural information processing systems, 31, 2018.
Essential References Not Discussed: I did not identify any missing essential references, though further discussion on related work in causal identifiability for time series data could be beneficial.
Other Strengths And Weaknesses: Strengths:
1. This paper is original for its proposed setting (stationary time-series when conditioned on states). Based on the identifiability of Markov switching models, the identifiability of this model is also guaranteed.
2. This work is significant since it extends the stationary time-series to a broader setting.
Weaknesses:
1. There is room for the clarity/simplification of the notations. For example, it is hard to differentiate a real probability density from an estimated one. Especially in Section 3.3, $p$, $p_{\psi}$, and $q_{\phi}$ confused me.
2. The work seems like a compact version of the regime-dependent graph, which also considers the time-series data in a conditional stationary setting. The identifiability results are impressive, but I am concerned with the contributions.
Other Comments Or Suggestions: Some typos:
1. Line 146\~147, right side, “the parent of $x^{(2)}\_2$ are …” is missing a “{“.
2. Line 217, left side, “we can guide $\psi_\omega$ though” -> through.
3. Line 738 (a2.2), “, the following set has zero measure:”, where is the following set?
4. Line 1022~1023, what’s behind “=”?
5. The citation format is inconsistent in the Related Work part.
I found the notation not entirely clear; e.g., an explicit footnote about $j\in C_k^{(i)} \iff x_{t-1}^{(i)}\in PA_{s_{t-1}}^{(j)}(t-1), s_{t-1}=k$, and the reason why PA takes $t-1$ as input while $C$ does not, would be rather helpful.
Questions For Authors: 1. Is the proof of Thm. A.6 originally derived by the authors, or is it adapted from existing work?
2. Why do you propose labels of conditional effects? It seems just to work as a compact representation of the existence of edges. Could you please give a more intuitive explanation for $f_{e_i}$ in Eq. (8)?
3. What’s the difference between the two $f_{s_{t-1}}^{(j)}$s in Eq. (5) and Eq. (8)? The authors claim Eq. (8) reduces the exponential complexity of Eq. (5), but this is not clear to me.
4. How did you learn the $K$ in your implementation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s feedback. Below, we address key concerns.
> There is room for the clarity/simplification of the notations.
We appreciate this point and will revise the manuscript for clarity.
- Subscripts $\psi$ and $\phi$ denote parameters of the generative model and the variational approximation, respectively; densities of the generative model are always written as $p_{\psi}$.
- The outgoing structure $C_k^{i}$ contains variable indices, $1, \dots, N$, while $PA_{s_{t-1}}(t-1)$ refers to function arguments (e.g. $x_{t-1}^{(i)}$). For time series, we consider the time index when computing densities at each time step $t$, and this is why we use $t-1$ as an argument to $PA$.
We will make sure the above is revised and address all noted typos.
> but I am concerned with the contributions.
Our theoretical results encompass three levels of generality.
- Markov Switching Models (m1-m3): Thm. A.6
- Conditionally stationary time series (m1-m4): Thm A.7
- SDCI (m1-m5): Cor. A.8.
Our paper explicitly studies the applications of SDCI to address nonstationarity. However, our scope in terms of theoretical results is **broader**, and our results from Thms. A.6 and A.7 could be of independent interest. Identifiability in regime-dependent time series is a very challenging problem with only very recent contributions [1], and importantly, our work makes no assumptions on state dependence.
Regarding empirical contributions, Gene Regulatory Network (GRN) inference methods are typically validated with semi-synthetic data that mimics the underlying biological system, as validation through real-world cell systems is very expensive. SDCI design agrees with GRN literature and its ability to model gene activation/deactivation could be impactful in this domain.
[1] Balsells-Rodas et al., "On the identifiability of switching dynamical systems." ICML 2024.
**Theoretical Claims and Q1**
All proofs are original. Theorem A.6 leverages Yakowitz & Spragins (1968) [2], which is also used in [1]. We note **key differences**:
- We allow dependencies from $x_{1:t}$ to $s_t$, strictly prohibited in [1].
- Our weaker assumptions require a **novel proof strategy** resolving permutation equivalence issues from Yakowitz & Spragins (1968) result.
- Our strategy addresses identifiability from **conditional distributions** $p(x_t| x_{1:t-1})$, while [1] considers the **joint distribution** $p(x_{1:T})$.
Identifiability in finite mixtures relies on function linear independence [2]. For Gaussian families, this holds if means or covariances are distinct. We ensure this by assumption (a1). We will clarify the above in the appendix, and plan to include a discussion comparing other works in the main document (see Reviewer Fbj4).
[2] Yakowitz and Spragins. "On the identifiability of finite mixtures." AMS 1968.
**Reviewer’s questions:**
Q2: Conditional effect labels improve scalability in conditionally stationary time series. SDCI, inspired by interactive systems (e.g. NRI, ACD), learns diverse graphs from limited types of interaction types $n_{\epsilon}$. Example: in sports data, different samples may have distinct graphs, but all interactions could be categorized as "no interaction," "aggressive," or "defensive."
Q3:
- Eq. (5): $f_{s_{t-1}}^{(j)}$ is indexed by $s_{t-1}$, leading to $K^N$ function definitions, which is prohibitively expensive.
- Eq. (8): We use GNNs with pairwise interactions and an aggregator function, reducing complexity, and we prove identifiability of the labels under this setting.
- SDCI considers (m1-m5), but we also prove identifiability considering weaker cases (m1-m4), allowing alternative solutions.
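To make the contrast concrete, here is a minimal hypothetical sketch (not the authors' implementation) of the pairwise decomposition behind Eq. (8): each directed edge applies one of a small set of shared edge-type mechanisms and each node sums the incoming messages, instead of indexing a separate transition function for each of the $K^N$ joint state configurations.

```python
import numpy as np

# Hypothetical sketch (not the authors' implementation) of the pairwise
# decomposition: each directed edge (i -> j) carries a label selecting one
# of a few shared edge-type mechanisms, and node j sums the incoming
# messages. Only len(edge_fns) functions are defined, instead of one
# transition function per joint state configuration (K^N of them).

edge_fns = {
    0: lambda x: np.zeros_like(x),  # "no-edge" effect, f_0 := 0
    1: lambda x: 0.5 * x,           # e.g. an attractive interaction
    2: lambda x: -0.5 * x,          # e.g. a repulsive interaction
}

def step(x_prev, e):
    """One transition; x_prev: (N, d) values, e: (N, N) edge labels."""
    N = x_prev.shape[0]
    return np.stack([
        sum(edge_fns[e[i, j]](x_prev[i]) for i in range(N))  # aggregator
        for j in range(N)
    ])

x = np.ones((3, 2))
e = np.array([[0, 1, 1], [2, 0, 1], [0, 0, 0]])  # label for each edge (i, j)
x_next = step(x, e)
```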
Q4:
- We require no knowledge of $K$, unlike most mixture model results on time series.
- This allows finding $K$ via model selection (assuming convergence to MLE), which is theoretically impossible if $K$ requires to be known.
- In practice, standard techniques such as the “elbow method” (used in K-means) can be applied. See the figure below for model selection with true $K=2$ in the springs and magnet data, where reconstruction MSE plateaus at $K=2$.
https://postimg.cc/k2rFZkBp
We plan to expand on selecting $K$ in the Experiments section, showcasing also $K=4$ in the springs and magnets data.
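The “elbow method” mentioned above can be sketched generically (hypothetical code; the MSE numbers are illustrative, not the paper's values): pick the smallest $K$ after which reconstruction MSE stops improving appreciably.

```python
# Generic sketch of the "elbow" heuristic for choosing K (hypothetical
# code; the MSE numbers below are illustrative, not the paper's values):
# pick the smallest K after which reconstruction MSE stops improving by
# more than a relative tolerance.

def elbow_k(mse_by_k, rel_tol=0.05):
    ks = sorted(mse_by_k)
    for k_prev, k_next in zip(ks, ks[1:]):
        improvement = (mse_by_k[k_prev] - mse_by_k[k_next]) / mse_by_k[k_prev]
        if improvement < rel_tol:
            return k_prev  # MSE has plateaued
    return ks[-1]

# Illustrative curve that plateaus at K = 2:
mse = {1: 0.80, 2: 0.20, 3: 0.198, 4: 0.197}
best_k = elbow_k(mse)  # -> 2
```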
**Actions**
To improve our manuscript (see other Rebuttals for details), we aim to:
- Clarify notation and revise typos.
- Provide intuitions from [2] in our Proof for Thm. A.6.
- Contrast our method assumptions with prior works, emphasizing our broader theoretical scope.
- Expand the discussion on assumptions and implications to real-world data.
- Provide details for $K$ selection, and clarify for NBA data.
- Clarify scalability and include runtime analysis in Appendix.
- Include figures for increasing data sizes on fixed graph data in Appendix.
- Modify Figure 5b to include additional variables.
We sincerely appreciate the reviewer’s insights and believe these revisions will significantly improve the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response and clarification, especially regarding the distinction from previous works and how to select $K$. As I mentioned earlier, this is a complete and well-structured paper, though the extent of its contribution is not game-changing. For this reason, I am leaning toward acceptance, and I believe my score accurately reflects my evaluation. | Summary: This paper introduces State-Dependent Causal Inference (SDCI), a VAE-based approach for causal discovery in nonstationary time series. SDCI leverages a "conditional summary graph" to compactly represent state-dependent causal structures and establishes identifiability guarantees under specific state-dependency assumptions.
Claims And Evidence: Yes, the claims are generally supported by experiments, though there are some gaps. In particular, the paper claims to relax the assumptions of causal discovery for nonstationary time series, but lacks a clear discussion of this claim.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for the problem of causal discovery from nonstationary time series.
Theoretical Claims: The theoretical claims in this paper seem correct. However, the theoretical results seem only loosely related to the proposed algorithm.
Experimental Designs Or Analyses: Yes, the experimental designs and analyses are partially sound. However, the experiments are somewhat weak; e.g., the number of variables (N=3, 5) is too small in the recurrent-states and varying-structures experiments.
Supplementary Material: I have checked the supplementary material, in particular the proofs.
Relation To Broader Scientific Literature: This paper relates to the causal discovery literature.
Essential References Not Discussed: Most essential references are discussed.
Other Strengths And Weaknesses: Strengths:
- It developed a novel VAE and GNN-based framework to build the proposed method.
Weaknesses:
- The main issue is that the setting of introducing discrete latent variables to model nonstationary time series is not new [1]. This work claims to relax assumptions under this setting, but does not provide a clear discussion of how the assumptions are relaxed. It is unclear how the proposed method differs from existing methods under the same setting.
- Although some theoretical results are developed, it is still unclear how to ensure the assumptions hold.
- Moreover, does the proposed algorithm satisfy all the required assumptions, given that some assumptions are placed on the functions?
[1] Hälvä, Hermanni, and Aapo Hyvarinen. "Hidden markov nonlinear ica: Unsupervised learning from nonstationary time series." Conference on Uncertainty in Artificial Intelligence. PMLR, 2020.
Other Comments Or Suggestions: N/A
Questions For Authors: See the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thoughtful comments. Below, we address the key concerns raised in the review.
> It is unclear how the proposed method is different from existing methods...
While prior works [1-3] also consider discrete latent variables for modeling nonstationary time series, our method has **three key advantages:**
- **No assumption of known number of states:**
- Most previous works, including HMM-ICA [1] and CtrlNS [2], assume the **number of states is known**.
- Our work leverages Yakowitz & Spragins (1968) [4], which enables identifiability **without** knowing the number of states. This allows for model selection to determine $K$, an aspect theoretically impossible in prior approaches.
- **No assumptions on state dependency:**
- Prior works often assume that **state transitions are independent of observations** (e.g., HMM-ICA [1], [3]).
- Recently, CtrlNS [2] models **direct observation-to-state dependencies**, but requires **stronger assumptions** (e.g., mechanism sparsity, known $K$).
- Our proof technique requires no assumptions in terms of state dependencies, allowing feedback from observations.
- **Regime-dependent identifiability**
- While we provide identifiability for SDCI in the main text (m1-m5), our results provide a **broader scope** with general identifiability results for regime-dependent time series (m1-m3) in Theorem A.6.
- In Thm. A.6 we present a **novel proof technique** leveraging Yakowitz and Spragins (1968) [4]. This result can be of independent interest.
- Strategies based on [1] **cannot solve this problem**, as their proof strategy prohibits explicit autoregressive dependencies from $x_{t-1}$ to $x_t$.
We will explicitly incorporate this discussion in the revised manuscript.
[1] Hälvä, Hermanni, and Aapo Hyvarinen. "Hidden markov nonlinear ica: Unsupervised learning from nonstationary time series." UAI 2020.
[2] Song, Xiangchen, et al. "Causal temporal representation learning with nonstationary sparse transition." NeurIPS 2024.
[3] Balsells-Rodas, Carles, Yixin Wang, and Yingzhen Li. "On the identifiability of switching dynamical systems." ICML 2024.
[4] Yakowitz, Sidney J., and John D. Spragins. "On the identifiability of finite mixtures." AMS 1968.
> it is still unclear how to ensure the assumptions hold?
Aligning data with assumptions is a key challenge in causal discovery and ICA. Our assumptions are as follows:
- (m1-m5): Our model design is inspired by interactive systems, and physical models (e.g. sum of forces).
- (a1) Unique Causal Structures Across States: Each variable must have distinct causal interactions per state.
- (a2) Functional Faithfulness: Discussed in lines 248–253. It can be achieved by using smooth functions (e.g. analytic functions), which are commonly assumed in EEG or Gene expression data.
For semi-synthetic datasets, the assumptions are simple to verify thanks to access to the ground-truth dynamics. In real-world data, expert knowledge is required. Assumption violations can break identifiability, which would invalidate interpretability. We discuss these issues in Section 6.3 and will expand on them in the revision.
> does the proposed algorithm satisfy all the required assumptions
Yes, SDCI is designed to incorporate these assumptions:
- (m1-m5) & (a2.1): Enforced via a GNN architecture and the no-edge effect (i.e. $w_{ijk}=0 \to f_0:= 0$).
- (a1): Data-dependent, independent of algorithm design.
- (a2.2): Ensured via analytic activation functions (e.g., Softplus, Cosine) in the edge-type mechanisms.
We will enhance the discussion on how SDCI aligns with these assumptions in the final version.
> The number of variables (N=3, 5) is too small
See the figure below for an additional column with $N=10$ in Figure 5b. Increasing $N$ in time series causal discovery is challenging due to the temporal component. In our setting, the total data dimensionality is $N \cdot d$ ($N\cdot 4$ in springs and magnets), but complexity also increases with $K$ and $T$.
https://postimg.cc/mzznGYhz
**Actions**
To improve our manuscript (see other Rebuttals for details), we aim to:
- Clarify notation and revise typos.
- Provide intuitions from [2] in our Proof for Theorem A.6.
- Contrast our method assumptions with prior works, emphasizing our broader theoretical scope.
- Expand the discussion on assumptions and implications to real-world data.
- Provide details for $K$ selection, and clarify NBA data.
- Clarify scalability and include runtime analysis in Appendix.
- Include figures for increasing data sizes on fixed graph data in Appendix.
- Modify Figure 5b to include additional variables.
We believe these revisions will strengthen the paper and appreciate the reviewer’s constructive feedback. | Summary: This paper presents a new framework for solving the causal discovery problem in non-stationary sequences. This paper makes certain assumptions under the non-stationary condition, and then proposes the new method Conditional summary graph for representing causality, and the method for conducting causal discovery state-Dependent Causal Inference. The paper provides identifiability analysis for the key steps of its approach, validating SDCI methods on semisynthetic data based on physical and biological systems and real-world datasets.
Claims And Evidence: The paper mainly focuses on the causal discovery problems under new and more lenient conditions. The feasibility of the proposed method is tested through theoretical identifiability analysis and experimental testing.
Methods And Evaluation Criteria: The main condition assumed in this paper is that the state-dependent dynamics can be decomposed into a superposition of per-feature dependencies. This assumption can indeed be verified with prior knowledge in certain realistic scenarios. Therefore, the proposed method expands the application scope of causal discovery in time series, which is significant.
Theoretical Claims: No
Experimental Designs Or Analyses: The experimental design of this paper revolves around causal discovery from conditionally stationary time series, and mainly verifies the proposed SDCI framework on a simulated system (springs and magnets), gene regulatory networks (GRN), and real data (NBA). By covering simulated, synthetic, and real data scenarios, with the physical mechanism verified on the spring system and biological applicability on GRN, the experiments reflect the generality of the method. The comparison methods include traditional methods (R-PCMCI, Neural Granger Causality) and leading latent-variable causal models (ACD, Rhino), covering different methodological families. However, the NBA data are only briefly mentioned, lacking detailed experimental design and outcome analysis, which may weaken the credibility of the method in real scenarios. In addition, the paper mentions that the consistency of SDCI is still an open problem, but no targeted validation is designed in the experiments (such as a convergence test under different data sizes), so the theoretical advantages are not fully translated into empirical results.
Supplementary Material: Yes, but it's hard to understand.
Relation To Broader Scientific Literature: All causal discovery papers require assuming certain conditions, which are often too strict for realistic scenarios to make the method inapplicable, and so many studies are devoted to relaxing these conditions. Starting from this goal, this paper studies the causal discovery problem under new and more relaxed conditions.
Essential References Not Discussed: As far as I know, there is no problem.
Other Strengths And Weaknesses: The main advantage of this paper is its method for causal discovery from conditionally stationary time series, which provides new ideas for overcoming the limitations of traditional causal inference on nonstationary data, especially for causal relationships in complex systems or dynamic environments. The method is innovative in its theoretical framework and algorithm design, capturing more hidden causal patterns. However, a disadvantage is the strong dependence on the "conditional stationarity" assumption: if the actual data do not meet this premise, the model may fail. Moreover, the computational complexity and scalability are not fully demonstrated in the experimental validation.
Other Comments Or Suggestions: No
Questions For Authors: Why does the method not need to know the number of states $K$, as stated in Remark 4.2, while Experiment 3 runs the movement trajectories with two settings, K=2 and K=4, whose results differ?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s time and thoughtful feedback. We are glad they find our framework innovative and acknowledge its significance in handling nonstationary time series. Below, we address the key concerns raised in the review.
> ... if the actual data cannot meet the premise, may lead to model failure.
SDCI is explicitly designed for conditionally stationary time series, with applications in interactive systems (e.g., sports) and Gene Regulatory Networks (GRNs). However, our theoretical results are **broader**, covering:
- Markov Switching Models (m1-m3): Theorem A.6.
- Conditionally stationary time series (m1-m4): Theorem A.7.
- SDCI (m1-m5): Corollary A.8.
We develop theoretical results starting from a very general case to our specific implementation, SDCI. These results, A.6 and A.7, provide a **general foundation** for regime-dependent causal discovery.
We would like to note that identifiability in regime-dependent time series is a **very challenging** problem with only recent contributions [1]. Importantly, our results make no assumptions on state dependencies, and introduce a **novel proof technique** based on classic finite mixture model theory (Thm. A.6). We will incorporate a discussion emphasizing our theoretical scope in contrast with related work (see Reviewer Fbj4 for additional details).
[1] Balsells-Rodas, et al. "On the identifiability of switching dynamical systems." ICML 2024.
> … scalability of experimental validation is not fully demonstrated.
We acknowledge this concern and will include additional runtime analysis in the appendix to illustrate SDCI’s linear scaling with respect to $K$. We will also compare state inference times across “determined” and “recurrent” scenarios.
> NBA data are only mentioned in context, lacking detailed experimental design and outcome analysis
We provide details on training in Appendix B.4 (lines 1097-1099) and dataset preprocessing in Appendix C.3. These details were deferred to the appendix to focus on the interpretability of learned structures.
> … this paper does not need to know the number of K mentioned in Remark 4.2, but in experiment 3, the movement trajectory needs to do two groups …
Our identifiability results do not require knowledge of $K$, which implies we can determine $K$ via model selection (assuming convergence to the MLE). This is a significant contribution, as other related works require knowledge of $K$ (with the exception of [1]) and cannot rely on this idea.
However, selecting $K$ in practice is a well-known challenge even in non-temporal settings. We can use the “elbow method” (as in K-means), or similar heuristics. Below we show results for synthetic springs and magnets with $K=2$ where reconstruction MSE plateaus for $K=2$.
https://postimg.cc/k2rFZkBp
Our discussion regarding results on $K=2$ and $K=4$ aimed to show that both values provide meaningful interpretations. Figure 9 (forecasting results) shows similar performance for $K=2$ and $K=4$, suggesting $K=4$ overfits the data in this scenario.
We acknowledge selecting $K$ is an important challenge, and will
- include a discussion on how to select $K$ in practice, showing figures similar to the above, including a setting with $K=4$.
- discuss how this applies to NBA data.
> the targeted validation is not designed in the experiment (such as the convergence test under different data quantities) …
Figure 4 shows that SDCI achieves a 100% $F_1$ score when the graphs are fixed, demonstrating consistency empirically. For samples with different graphs, the challenge lies in the design of the approximate posterior $q(W|x_{1:T})$. The identifiability results support structure learning from purely unsupervised data, which is very challenging in this setup.
We provide dataset size variation results for observed states in Appendix E.1, Figure 14. Below, we show similar results for fixed graphs on "Recurrent states" with $N=5$ and $K=2$, which will be included in the appendix:
| $\|\mathcal{D}\|$ | $\mathcal{W}$ $F_1$ score | State $F_1$ score |
| ---------------- | :---------: | :-------: |
| 10 | 0.65 | 0.524 |
| 100 | 0.875 | 0.543 |
| 500 | 1.00 | 0.857 |
| 1000 | 1.00 | 0.884 |
| 5000 | 1.00 | 0.891 |
**Actions**
To improve our manuscript (see other Rebuttals for details), we aim to:
- Clarify notation and revise typos.
- Provide intuitions from [2] in our Proof for Theorem A.6.
- Contrast our method assumptions with prior works, emphasizing our broader theoretical scope.
- Expand the discussion on assumptions and implications to real-world data.
- Provide details for $K$ selection, and clarify NBA data.
- Clarify scalability and include runtime analysis in Appendix.
- Include figures for increasing data sizes on fixed graph data in Appendix.
- Modify Figure 5b to include additional variables.
We appreciate the reviewer’s constructive feedback and believe these revisions will further improve the paper. | null | null | null | null | null | null | null | null |
Clustering Properties of Self-Supervised Learning | Accept (poster) | Summary: This paper studies the clustering properties of self-supervised learning. This paper finds that the encoder's output exhibits superior and more stable clustering properties than other components. Based on insight, this paper proposes a novel positive feedback method to improve the representation ability further.
The idea of positive-feedback is interesting. Extensive experiments validate the feasibility of this idea.
The finding on the clustering properties of encoding is interesting. However, the underlying reason is not well explained.
Claims And Evidence: The clustering properties of encoding are interesting.
However, I cannot understand why encoding and embedding have different clustering properties. To my knowledge, both embedding and encoding can be regarded as features of neural networks at various depths. So why does adding more layers (from encoding to embedding) degrade the clustering properties?
In Figure 3, how to get the ARI values?
Methods And Evaluation Criteria: Yes.
Theoretical Claims: This paper lacks a theoretical analysis. I suggest the author analyze the theoretical foundations of the findings.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I have checked all the supplementary material.
Relation To Broader Scientific Literature: This paper is related to self-supervised learning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
The experiments are thorough.
Weakness:
This paper needs to calculate the $A_H$, which seems computationally heavy.
Other Comments Or Suggestions: No.
Questions For Authors: The idea of positive feedback is interesting. In line 117, we can see that this paper aims to enhance the clustering structure in the encoding. But my question is, what if an error exists (some samples belonging to different classes are in the same cluster)? Will the proposed method boost this error?
In line 139, $A_H$ is a clustering assignment matrix. But in line 146, it becomes a doubly stochastic matrix. The assignment matrix and the stochastic matrix have different sizes.
In Eq. (3), if the encoding replaces the embedding, will Eq. (3) produce better performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer kGg9,
We sincerely thank you for your time and for the valuable suggestions to improve this paper. Here we address each of the comments in detail.
**Why encoding and embedding have different clustering properties**: Thanks for this constructive question. Due to word count limitations, we kindly ask you to refer to our response to Reviewer 69BS, specifically under the section **Why encodings inherently possess better clustering capability**. We sincerely appreciate your understanding and attention to this matter.
**How to get the ARI values**: In Definition A.2 of Appendix A, we provide a detailed definition and formula for the Adjusted Rand Index (ARI). For practical computation, you may also refer to "sklearn.metrics.adjusted_rand_score".
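To make the metric concrete, here is a minimal from-scratch sketch of ARI based on its contingency-table formula (the function name is ours and only illustrative; in practice one would simply call `sklearn.metrics.adjusted_rand_score`):

```python
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index between two labelings (non-degenerate case).

    Matches sklearn.metrics.adjusted_rand_score; written out here to
    show the underlying contingency-table formula.
    """
    n = len(labels_a)
    # Contingency counts n_ij, row sums a_i, column sums b_j.
    pairs, rows, cols = {}, {}, {}
    for a, b in zip(labels_a, labels_b):
        pairs[(a, b)] = pairs.get((a, b), 0) + 1
        rows[a] = rows.get(a, 0) + 1
        cols[b] = cols.get(b, 0) + 1
    index = sum(comb(v, 2) for v in pairs.values())
    sum_a = sum(comb(v, 2) for v in rows.values())
    sum_b = sum(comb(v, 2) for v in cols.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)
```

For example, `adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])` evaluates to `1.0`, since ARI is invariant to a permutation of cluster labels.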
**Theoretical foundations of the findings**: Thanks sincerely for this insightful suggestion. The theoretical exploration of the distinction between encoding and embedding remains a challenge in SSL. We will follow your suggestion and continue to investigate how ReSA facilitates positive feedback learning, optimizes the model's noise robustness, and whether it offers improved bounds compared to other methods.
**$\mathbf{A} _\mathbf{H}$ seems computationally heavy.**: We note that the Sinkhorn-Knopp algorithm, which is used to compute $\mathbf{A} _\mathbf{H}$, does **not** involve gradient propagation and requires only three iterations, making it both efficient and practical, as reported in Caron et al. (2020). To quantitatively describe the computation time of the Sinkhorn-Knopp algorithm, we conducted experiments and found that, using an A100-PCIE-40GB GPU, the time for a single computation on a 1024 × 1024 matrix (as used in our experiments) is **0.001578** seconds. This means that when training ResNet-50 on ImageNet for 1 epoch (1,251 iterations), the total time for the Sinkhorn-Knopp algorithm is about 1.974 seconds. As shown in Table 4, the overall training time is 0.16 × 3600 = 576 seconds, so the proportion of time spent on the Sinkhorn-Knopp algorithm is approximately 1.974 / 576, which is about **0.3%**. Moreover, when using larger models, the time spent on the Sinkhorn-Knopp algorithm will represent an even smaller proportion of the total computation time.
[1] Caron et al. (2020). Unsupervised learning of visual features by contrasting cluster assignments.
**What if an error exists (some samples belonging to different classes are in the same cluster)? Will the proposed method boost this error?**: Thank you for this insightful question. In the context of self-supervised learning, the issue of some samples from different classes being assigned to the same cluster is quite likely, especially when there is no label information guiding the process. This is a challenge faced by most clustering-based SSL methods. However, numerous experiments have shown that ReSA exhibits relatively slow performance improvement in the early stages of training, but achieves significantly better results in the later stages compared to other methods, such as contrastive or decorrelation-based SSL, as shown in Figure 9 of our paper and https://anonymous.4open.science/r/long-tailed-evaluation-2843. This suggests that while ReSA may suffer from incorrect cluster assignments in the early training stages, which slows down learning, it is able to gradually address these issues and learn the correct clustering patterns as training progresses, rather than boosting the errors. Furthermore, clustering-based SSL methods like SwAV, DINO and ReSA typically employ clustering assignments with sharp distributions, which help mitigate the impact of incorrect assignments. Investigating the optimization dynamics of clustering-based SSL is an intriguing theoretical direction, and we hope to explore this area in the future.
**The assignment matrix and the stochastic matrix**: We note that $\mathbf{A} _\mathbf{H}$ is the clustering assignment matrix obtained by applying the Sinkhorn-Knopp algorithm to $\mathbf{S} _\mathbf{H}$. The purpose of the Sinkhorn-Knopp algorithm is to ensure that the output is a doubly stochastic matrix, where each row and column sums to 1. Therefore, $\mathbf{A} _\mathbf{H}$ is a doubly stochastic clustering assignment matrix, and the matrices $\mathbf{A} _\mathbf{H}$ on Line 139 and 146 are exactly identical.
**If the encoding replaces the embedding, will Eq. (3) produce better performance?**: Thank you for this thought-provoking question. In Eq. (3), SwAV requires learnable prototypes to perform clustering. If the encoding is used in place of the embedding—namely $\mathbf{Q}=sinkhorn(\mathbf{H}^\top \mathbf{C})$—we must ensure that the encoding and embedding output dimensions match. We tested this approach in early experiments on ImageNet-100 and found that, although training proceeded normally, the resulting performance was quite poor. We speculate that this may be due to the difficulty of effectively optimizing the prototypes under this condition.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. Some of my concerns have been addressed. The new results on ReSA's learning curves and their comparisons are very interesting. But I still have the following questions.
The authors did not explain why ReSA suffers from incorrect cluster assignments in the early training stages, and it can gradually address these issues and learn the correct clustering patterns as training progresses. Actually, the observation is interesting, but the reason behind it is not clear.
If the encoding replaces the embedding, can you perform K-means or K-means++ to get the prototype?
Besides, the understanding of why encodings inherently possess better clustering capability is still theoretically unclear.
Nevertheless, I will maintain my positive score, as the observations on the different behaviours of encoding and embedding are interesting.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kGg9,
We sincerely appreciate your thoughtful feedback and are truly honored by your kind words regarding the novelty of our findings. It is a privilege to receive your recognition. In response, we would like to offer further clarifications to address the remaining points you raised.
**Why ReSA can gradually address ...**
For clarity, we reformulate Equation 2 in the paper to express it in terms of a single sample $z _i$, omitting the symmetric terms.
$l _{\text{ReSA}}(z _i) = - \sum _{j=1}^m {\mathbf{A} _\mathbf{H}}^{(i,j)} \log\frac{e^{z _{i} ^\top z' _{j} /\tau}}{ \sum _{k=1}^m e^{z _{i} ^\top z' _{k} /\tau}}$
Here, ${\mathbf{A} _\mathbf{H}}^{(i,j)}$ denotes the $ (i,j) $-th element of $ \mathbf{A} _\mathbf{H} $, where $ \mathbf{A} _\mathbf{H} = \text{Sinkhorn}(\mathbf{H}^\top \mathbf{H}) $. Given that the Sinkhorn-Knopp algorithm exhibits strict monotonicity, we have the property that if $ {h _i}^\top {h _j} > {h _i}^\top {h _k}$, then $ {\mathbf{A} _\mathbf{H}}^{(i,j)} > {\mathbf{A} _\mathbf{H}}^{(i,k)}$. Since the vectors in $ \mathbf{H} $ are $\ell _2 $-normalized, the diagonal elements of $\mathbf{H}^\top \mathbf{H}$ are all maximized to 1. This implies that for any $ i $ and $j $, we have $ {\mathbf{A} _\mathbf{H}}^{(i,i)} \geq {\mathbf{A} _\mathbf{H}}^{(i,j)} $. Furthermore, due to the sharp distribution employed in Sinkhorn, the diagonal elements are significantly larger than the off-diagonal elements. This ensures that the optimization focus of ReSA remains on the alignment of positive samples, substantially reducing the impact of early assignment errors (off-diagonal elements) on the training process. Throughout training, the continual alignment of positive samples enables the model to learn meaningful representations, which in turn facilitates correct cluster assignments in the later stages, further promoting learning.
In contrast, other clustering-based methods such as SwAV and DINO require an initialized prototype for cluster assignment, making it more challenging to avoid early clustering errors. We speculate that this is one of the key reasons why ReSA outperforms these methods in terms of overall effectiveness.
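As an illustrative sketch of this one-sided loss (NumPy, with rows as samples; the Sinkhorn hyperparameters and temperature below are placeholder values, not our exact implementation):

```python
import numpy as np

def sinkhorn(S, eps=0.05, n_iter=3):
    # Approximately doubly stochastic normalization of exp(S / eps).
    P = np.exp(S / eps)
    for _ in range(n_iter):
        P = P / P.sum(axis=1, keepdims=True)
        P = P / P.sum(axis=0, keepdims=True)
    return P

def resa_loss(H, Z, Zp, tau=0.1):
    """Soft cross-entropy between the encoding self-assignments A_H
    and the embedding similarity distribution (one view direction)."""
    A = sinkhorn(H @ H.T)                         # targets from encodings
    logits = Z @ Zp.T / tau                       # embedding similarities
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-(A * logp).sum(axis=1).mean())

rng = np.random.default_rng(0)
H = rng.normal(size=(16, 8)); H /= np.linalg.norm(H, axis=1, keepdims=True)
Z = rng.normal(size=(16, 8)); Z /= np.linalg.norm(Z, axis=1, keepdims=True)
# Aligned positives (Z' = Z) give a much lower loss than anti-aligned ones,
# reflecting that the dominant diagonal of A_H drives positive alignment.
loss_aligned, loss_opposed = resa_loss(H, Z, Z), resa_loss(H, Z, -Z)
```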
**Can you perform K-means or K-means++ to get the prototype?**
We are both surprised and honored to find that your insights align so closely with the research trajectory we have followed in this work. At the onset of this project, our initial approach was to use k-means for calculating hard pseudo-labels in the encoding phase, and then apply these pseudo-labels to compute the supervised contrastive loss [1]. We also explored the use of k-means++ and soft k-means, and the results we obtained from training on CIFAR-10 are as follows:
| method | knn acc | linear acc |
| :-------- | --------:| :--: |
| k-means | 89.72 | 91.95 |
| k-means ++ | 90.09 | 92.35 |
| soft k-means | 90.17 | 92.41 |
| ReSA | 93.02 | 93.53 |
[1] Khosla P, et al. Supervised contrastive learning.
Similar to DeepCluster, using k-means to calculate pseudo-labels requires storing the encoding vectors for the entire dataset (as the performance of mini-batch k-means is poor), and performing k-means on the entire dataset is computationally intensive. Consequently, we abandoned this approach and, in subsequent work, introduced ReSA, which outperforms existing state-of-the-art methods in both performance and training efficiency.
**the understanding of why encodings ... is still theoretically unclear.**
Although the theoretical investigation of the projector remains an open research challenge within the community, we are actively working towards addressing this issue. Given Intra-Class Compactness $ \mathcal{D} _{\text{intra}}^{(c)}(\mathbf{H}) = \mathbb{E} _{x _i, x _j \sim c} \|h _i - h _j\|^2 $ and Inter-Class Separability $ \mathcal{D} _{\text{inter}}^{(c _1, c _2)}(\mathbf{H}) = \mathbb{E} _{x _i \sim c _1, x _j \sim c _2} \|h _i - h _j\|^2 $, we define the clustering ratio $ \mathcal{R}(\mathbf{H}) = \frac{\mathbb{E} _c [\mathcal{D} _{\text{intra}}^{(c)}(\mathbf{H})]}{\mathbb{E} _{c _1 \neq c _2} [\mathcal{D} _{\text{inter}}^{(c _1, c _2)}(\mathbf{H})]} $ and $ \mathcal{R}(\mathbf{Z}) = \frac{\mathbb{E} _c [\mathcal{D} _{\text{intra}}^{(c)}(\mathbf{Z})]}{\mathbb{E} _{c _1 \neq c _2} [\mathcal{D} _{\text{inter}}^{(c _1, c _2)}(\mathbf{Z})]} $, where a lower $\mathcal{R} $ indicates better clustering property. Under InfoNCE optimization, we are exploring the theoretical relationship between $\Delta \mathcal{R}(\mathbf{H}) $ and $ \Delta \mathcal{R}(\mathbf{Z}) $ during training, where $ \Delta \mathcal{R}$ denotes the improvements in the clustering ratio after one optimization step. This relationship may not be fixed, as our experiments have shown that the clustering performance of the embeddings improves more rapidly in the early stages of training, but tends to decline in the later stages.
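These quantities are straightforward to estimate empirically; as a small sketch (the function name and toy data are our own illustration):

```python
import numpy as np

def clustering_ratio(X, y):
    """R = mean intra-class squared distance / mean inter-class squared
    distance over all sample pairs; lower R means better clustering."""
    intra, inter = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = float(np.sum((X[i] - X[j]) ** 2))
            (intra if y[i] == y[j] else inter).append(d)
    return np.mean(intra) / np.mean(inter)

# Two tight, well-separated clusters give a ratio well below 1.
X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.9, -0.1]])
y = [0, 0, 1, 1]
R = clustering_ratio(X, y)
```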
The paper analyses clustering metrics on representations extracted at different layers of the NN architecture (encoder + projector), to identify where clustering might be better applied and thereby derive a training objective that combines concepts used previously in SSL in a new, arguably simpler way.
Claims And Evidence: The paper claims to achieve superior performance to existing methods, which seems well justified, subject to including all relevant comparators. (see below)
Methods And Evaluation Criteria: The method is based on appropriate analysis (of clustering in NN layers) and applies clustering, used extensively in prior SSL works, in a novel way that makes intuitive sense.
Theoretical Claims: The paper includes no theoretical claims.
The paper would be improved if it considered how the proposed training objective fits with current theoretical rationale for SSL (see below).
Experimental Designs Or Analyses: The protocol seems standard for SSL papers
Supplementary Material: Not extensively
Relation To Broader Scientific Literature: * (025R) Caron et al. (2018) should be cited here as one of the first to do this with modern NNs
* (029R) the introduction links clustering and representation learning empirically and theoretically (029R-038R), the latter should include Bizeul et al. (https://arxiv.org/pdf/2402.01399), which proposes a latent variable model that unifies various SSL methods, including clustering methods so seems directly relevant.
- The clustering nature of ReSA and close relationship shown to InfoNCE suggest ReSA appears consistent with their model. Discussion of this would improve the paper, perhaps adding theoretical justification for why ReSA works/what is implicitly assumed about the data.
* (037L) "Representation Learning with Contrastive Predictive Coding" van den Oord et al. (InfoNCE) would seem appropriate to cite here as it was a key initiator of recent interest in SSL models.
* Recent works: MuGS and MSN (https://arxiv.org/pdf/2204.07141, https://arxiv.org/pdf/2203.14415) are not referenced or compared to.
- Please discuss these and reconcile their results, e.g. for ImageNet, with those in the paper.
Essential References Not Discussed: See "Relation to Broader.." (above)
- MUGS and MSN (https://arxiv.org/pdf/2204.07141, https://arxiv.org/pdf/2203.14415)
- Bizeul et al. (https://arxiv.org/pdf/2402.01399)
Other Strengths And Weaknesses: * Well written and clear paper.
* Prepared to raise score if concerns are appropriately addressed.
Other Comments Or Suggestions: 087R: The citation is DeepCluster, text says DeepCluster2?
Questions For Authors: * 086L: Do you train 1 neural network function or 2 independent functions with distinct parameters? This gives the impression of 2, but later it seems only 1. If 1, then this presentation is misleading and should be made more clear and consistent with the paper and model that follows. This is important for a clear understanding of how representations are a function of the data.
* 083R: "it remains unclear.." - Why would this *not* continue through all layers of the network (a layer doesn't "know" which part its in)?
* 085R: "embedding tends to be less influenced by clustering irrelevant features" - what does this mean?
* 100R: what is clustering "capability" and "stability" in local clustering? (e.g. stability with respect to what?)
* 142L: "These include excellent local clustering ability .. and stability ..., global clustering capability and similarity measure effectiveness (ARI)." Can you put this in clearer every day language? What is the "ability" of a set of vectors, etc?
* 208L: $S_M$ for a matrix $M=H$ is already defined (137R) to be symmetric so taking transpose in Eq 2 appears to make no sense. But you then ***re**define* $S_M$ below (for $M=Z$).
- It would be clearer to explicitly put $Z^TZ'$ in Eq 2, which is then directly comparable to Eq 3.
* 208L: Since $S_H$ is symmetric, is $A_H$ not symmetric also? If so, why transpose? (confusing)
* 214L: no need to define the same function twice for different arguments (confusing)
* 214L: what is the normalisation constant? Is it not the sum over all of S?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer PEEd,
We would like to begin by sincerely thanking you for the insightful and valuable comments. Below, we address each of the comments in detail.
**Relation to Broader...**:
We have revisited these papers, and will cite and discuss them in the new manuscript.
- We will further investigate the ELBO proposed by Bizeul et al. which may be helpful to derive ReSA's bounds.
- MuGS and MSN are both excellent works that aim to enhance SSL performance in ViTs. In contrast, this work is grounded in traditional SSL benchmarks to demonstrate the effectiveness of encoding-based positive feedback SSL, which is why our experiments primarily use ResNet models. In the past two months, we also explored the effectiveness of ReSA pretraining in ViTs using framework of DINOv2, achieving **77.8%** k-NN accuracy with ViT-B after just 100 epochs, compared to DINOv2's **76.0%** after 100 epochs and MuGS’s **78.0%** after 400 epochs.
**087R**: Here, we refer to SSL methods that apply clustering constraints on the embedding (i.e., the output of the projector). Since DeepCluster (Caron et al., 2018) does not include a projector, this limitation was addressed in DeepCluster-v2, as noted in SwAV (Caron et al., 2020). We will clarify this point.
**086L**: In the definition of joint embedding architectures (JEA), it is feasible to use either two identical (i.e., one shared) or different neural networks. In practice, using the same architecture without sharing parameters (i.e. the teacher-student distillation with momentum updates as used in BYOL, DINO, and MoCoV3) often yields the best performance. For generality, we do not emphasize the detailed choices of neural networks here, as different methods may adopt different strategies.
**083R**: Thanks again for this constructive question. Due to word count limitations, we kindly ask you to refer to our response to Reviewer 69BS, specifically under the section **Why encodings inherently possess better clustering capability**. We sincerely appreciate your understanding and attention to this matter.
**085R**: As we clarify above, the invariance constraint imposed by the SSL loss can cause the embedding to lose diverse information, which may include clustering-irrelevant features such as background information, color redundancy, viewpoint variations and so on. In such cases, fewer clustering-irrelevant features contribute to better clustering performance. However, the invariance constraint may also lead to the loss of class-relevant information, making it difficult for us to determine which—encoding or embedding—exhibits better clustering properties in this context.
**100R**:
- Local clustering capability refers to how well the representation partitions the data into meaningful local clusters. A higher mean silhouette coefficient for the representation indicates better separation and tighter grouping of data points within their respective clusters, which demonstrates stronger clustering capability.
- Local clustering stability is quantified by the standard deviation of the silhouette coefficient. A lower standard deviation signifies that the clustering results are more consistent across the data points, meaning fewer outliers and more stable cluster assignments. This indicates that the clustering representation is robust and less sensitive to variations or noise in the data. We will clarify these points in Appendix A.
**142L**: When we talk about the 'ability' of a set of vectors, we are referring to how well they can represent the underlying structure of the data. In this case, 'global clustering capability' looks at how well the entire set of data is grouped into clusters as a whole, considering the overall structure of the data. Since we use k-means when calculating ARI, a higher ARI value also indicates that the relationships between the vectors are well captured by the similarity measure.
**208L**:
- We sincerely appreciate your valuable feedback. We will explicitly revise $\mathbf{S} _\mathbf{Z}$ to $\mathbf{Z}^\top \mathbf{Z}'$ for better clarity.
- Although $\mathbf{S} _\mathbf{H}$ is a symmetric matrix, the iterative row and column normalization inside the Sinkhorn-Knopp algorithm does not preserve symmetry. This is because each row and column can receive different normalization factors during each iteration. As a result, even a symmetric input matrix can become asymmetric after being processed by the Sinkhorn updates.
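This can be checked numerically with a toy example (a self-contained sketch; the helper below only illustrates the alternating updates, not our implementation):

```python
import numpy as np

def sinkhorn(P, n_iter=3):
    # Alternating row/column normalization of a positive matrix.
    for _ in range(n_iter):
        P = P / P.sum(axis=1, keepdims=True)  # each row gets its own factor
        P = P / P.sum(axis=0, keepdims=True)  # each column gets its own factor
    return P

S = np.array([[1.0, 2.0],
              [2.0, 5.0]])            # symmetric input with unequal row sums
A = sinkhorn(S, n_iter=1)
asym = np.abs(A - A.T).max()          # > 0: even one round breaks symmetry
```

Because the rows of `S` have different sums (3 vs. 7), they receive different normalization factors, so the output is no longer symmetric even though the input is.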
**214L**:
- Yes, they indeed represent the same softmax functions. Here we stated them twice to emphasize that these distributions are respectively computed row-wise and column-wise. We appreciate your feedback and will revise it to convey this more clearly.
- $\mathbf{1} _m \in \mathbb{R}^{m\times 1}$ stands for all-ones column vector as we note in (144R) of Algorithm 1. Here, we express the softmax functions in the form of matrix multiplication, where the denominator corresponds to the sum of each row.
---
Rebuttal Comment 1.1:
Comment: *Assuming* the paper is improved in line with the above responses I am more in favour of publication, so I **increase my score from 3 to 4**.
In particular though, for ML to "mature" there needs to be far less (if any) divide between "empirical" and "theoretical" works. Neural networks are, of course, popular due to their empirical results and much research remains largely validated empirically, but as theoretical understanding develops, it is important that it is not overlooked and new works should discuss how they fit with it (or not). This paper presents a relatively "simple" model (by which I mean neat, not trivial) and would be more impactful if there were discussion as to *why* it works and what theoretic models of SSL it supports, or otherwise. I note that lack of theoretic grounding is a concern for all reviewers. **My increased rating very much assumes that this point especially is incorporated**, which I would ideally review but since the process does not allow that, I take on faith.
Detailed points:
* models like MUGS should at least be mentioned as ML is more about the task than the approach, it is completely reasonable to say that the proposed method improves a particular family of models, but should not give the impression of absolute state-of-art if not the case
* 085R - this information loss would seem to relate to what is predicted theoretically and shown empirically in Bizeul et al.
* 086L - the paper should clearly state what is actually done. A generalisation that is not empirically validated may not be relevant here even if it is elsewhere (e.g. I *doubt* that training two networks would be worthwhile, but either way that would need to be empirically verified).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer PEEd,
We sincerely appreciate your response and are truly grateful for your recognition and guidance.
We wholeheartedly acknowledge the significance of bridging the gap between empirical research and theoretical foundations. We agree that a more profound theoretical understanding is essential for the advancement of the ML field.
In our revised manuscript, we will make sure to explicitly highlight how the proposed method aligns with established theoretical frameworks in SSL. Additionally, we will include a detailed discussion on the underlying reasons for the model's effectiveness and the theoretical models it supports, thereby enriching the overall contribution of the paper.
We value your increased rating and will incorporate the key points in the revised version, which we hope could meet your expectations. In response to your detailed points,
- we will provide a more thorough comparison of ReSA with other prominent SSL models, such as MuGS, and clearly highlight its advantages as well as potential limitations.
- Furthermore, building upon the ELBO proposed by Bizeul et al., we will delve deeper into the theoretical reasons behind the information loss induced by the invariance constraints in SSL, offering a more comprehensive analysis.
- We will provide a clearer and more detailed definition of joint embedding architectures (JEA). The experiment of training two separate networks was tested in Table 5 of the VICReg paper [1], but it proved not to be worthwhile, as it yields worse performance than training a single network.
On behalf of all the authors, we would like to express our sincerest gratitude once again and wish you all the best.
[1] Adrien Bardes, Jean Ponce and Yann LeCun. VICReg: Variance-Invariance-Covariance Regularization For Self-Supervised Learning. | Summary: This paper investigates the clustering properties inherent in self-supervised learning (SSL) through joint embedding architectures. The authors empirically demonstrate that encoder outputs (encodings) exhibit superior clustering quality compared to projector embeddings, as measured by silhouette coefficients and adjusted Rand indices. Building on this insight, They propose Representation Soft Assignment (ReSA), a novel SSL framework that leverages online self-clustering on encodings to guide representation learning through soft assignment targets. ReSA doesn't require learnable prototypes and executes Sinkhorn-Knopp clustering only once per iteration, achieving computational efficiency while preserving semantic structure. A weak augmentation strategy that balances clustering stability and invariance learning further enhances the method's effectiveness.
Claims And Evidence: Some important claims lack justification:
1. The core premise "encodings inherently have better clustering properties"—is presented as an empirical observation (Sec 3) but lacks theoretical grounding. The paper empirically demonstrates that encoder outputs (H) exhibit stronger clustering properties than projector embeddings (Z) via metrics like SC/ARI (Fig 3-4). However, no theoretical analysis explains why encodings inherently possess better clustering capability. Does the encoder’s position (pre-projector) naturally filter augmentation noise or preserve semantic information? It is suggested to quantify the semantic preservation difference of H/Z through methods such as information bottleneck theory.
2. There is an error propagation risk that exists: The positive feedback of ReSA depends on the quality of cluster assignment, If the initial clusters are noisy (such as in the early stages of training), the doubly stochastic matrix AH reinforces incorrect associations.
While Figure 11 suggests that Sinkhorn-Knopp mitigates early noise, there is no proof that the ReSA loss function suppresses error propagation. The paper lacks quantitative evidence that ReSA robustly handles noisy clusters. No analysis shows whether initial noisy assignments (e.g., low-ARI epochs) get amplified via the feedback loop.
Methods And Evaluation Criteria: In the section "3. Exploring Clustering Properties of SSL", the evaluation exclusively focuses on SSL methods with projector heads, omitting architectures without projectors. This introduces selection bias, as the observed encoding superiority (Sec 3) might be contingent on projector-augmented frameworks rather than intrinsic to encodings.
Suggested Improvement: Validate clustering metrics on projector-free SSL baselines to disentangle whether the encoding advantage stems from architectural constraints or fundamental representation properties.
Theoretical Claims: The paper does not present formal theoretical proofs so it is suggested that providing the theoretical proof of the core premise—"encodings inherently have better clustering properties"
Experimental Designs Or Analyses: Experiments only use balanced datasets (e.g., CIFAR, ImageNet). There is no validation on long-tailed or domain-shifted data. It is suggested to test ReSA under imbalanced data distributions to assess whether clustering-guided learning amplifies biases.
Supplementary Material: In the section“C.2. How ReSA avoids representation collapse?” in the supplementary material, the explanation provided by the authors does not provide good proof that ReSA has good noise robustness.
Relation To Broader Scientific Literature: The paper is highly related to these deep clustering methods based on SSL such as contrastive clustering.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: 1. The theoretical basis of this paper is insufficient, for the core premise-“encodings inherently have better clustering properties”, even though the authors try to prove it from an experimental point of view, I still think the experimental results are not significant enough.
2. The paper is well presented in terms of iconography and text, with a good summary of previous work and clear claims.
3. The innovation of this paper is okay, and the use of soft assignment does solve the problem that samples belonging to the same category may be unintentionally pushed farther away during the training process, thus destroying the underlying semantic clustering structure, and making full use of the hidden information contained in the encoding. It would be an excellent work if the authors could add the theoretical basis for the important points in the paper and optimize the noise robustness of the model.
Other Comments Or Suggestions: It is recommended that the section “5.3. Transfer to Downstream Tasks” be more detailed, as this is an advantage of SSL to better assess the validity of the ReSA methodology proposed in this paper.
Questions For Authors: See the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer 69BS,
We sincerely thank you for your time and for the valuable feedback on this paper.
**Why encodings inherently possess better clustering capability**:
We would like to thank the reviewers for their insightful discussion regarding the distinction between encoding and embedding. To the best of our knowledge, the projector has become an indispensable component of JEA-based SSL. However, the dynamics of its optimization and the reasons behind its success remain an open question within the community. Some works have attempted to explain these principles. For example, Jing et al. (2022) [1] empirically discovered that applying the SSL loss either to the encoding or to the embedding led to a significant decrease in the rank of the corresponding features. They argued that this rank reduction indicates a loss of diverse information, which, in turn, reduces generalization capability. This explanation aligns with the hypothesis in SimCLR [2], **where the additional projector acts as a buffer to prevent information degradation of the encoding caused by the invariance constraint**. Additionally, the null-space analysis of the projector by Gupta et al. (2022) [3] posited that the projector might implicitly learn to select a subspace of the encoding, which is then mapped into the embedding. In this way, only a subspace of the encoding is encouraged to be style-invariant, while the other subspace can retain more useful information.
Therefore, the SSL constraint can cause the embedding to lose information, which may include not only clustering-irrelevant features such as background information, but also class-relevant information, making it difficult for us to determine which—encoding or embedding—exhibits better clustering properties in this context. Based on this, our paper empirically demonstrates that the projector indeed filters out information that could support clustering, while encoding achieves better clustering performance.
**theoretical analysis on ReSA**: We are truly grateful for your expectations and support for this work. As clarified above, the theoretical investigation of the projector remains an open research challenge. The community currently lacks definitive theoretical evidence explaining why the encoding can achieve superior downstream performance compared to the embedding. Building on your insights, we can first conduct interpretability experiments to examine which class-relevant and class-irrelevant features are captured by both the encoding and the embedding, thereby demonstrating how the encoding may possess better clustering properties. We also appreciate your suggestion to quantify this semantic preservation difference through methods such as information bottleneck theory. This could be a valuable approach to theoretically investigate why encodings, as opposed to projector embeddings, may lead to better clustering. We will look into incorporating this analysis in the revised version of the paper to provide a more comprehensive theoretical foundation.
**error risk**: We agree that the paper could benefit from a more rigorous analysis of how the ReSA loss function handles error propagation. In the context of self-supervised learning, early clustering errors are inevitable, as there is no label information or other prior knowledge to guide the process. This is a challenge faced by most clustering-based SSL methods. However, numerous experiments have shown that ReSA exhibits relatively slow performance improvement in the early stages of training, but achieves significantly better results in the later stages compared to other methods, as shown in Figure 9 of our paper and https://anonymous.4open.science/r/long-tailed-evaluation-2843. This suggests that while ReSA may suffer from incorrect cluster assignments in the early training stages, which slows down learning, it is able to gradually suppress error propagation and learn the correct clustering patterns as training progresses, rather than amplifying the errors.
**projector-free SSL baselines**: To the best of our knowledge, the projector has become an indispensable component of JEA-based SSL. Traditional projector-free baselines, such as MoCo (He et al., 2020), no longer achieve competitive results, while newer versions like MoCo v2 and v3 have demonstrated the projector's crucial role in enhancing representational generalization.
**long-tailed data**: Dear Reviewer, please refer to https://anonymous.4open.science/r/long-tailed-evaluation-2843.
**Transfer to Downstream Tasks**: In addition to the experiments described in Section 5.3, we also perform transfer learning on fine-grained datasets (Table 6) and conduct a low-shot evaluation (Table 10). We will include these details in Section 5.3 as well.
[1] Understanding dimensional collapse in contrastive self-supervised learning.
[2] A Simple Framework for Contrastive Learning of Visual Representation.
[3] Understanding and Improving the Role of Projection Head in Self-Supervised Learning. | null | null | null | null | null | null | null | null |
SecEmb: Sparsity-Aware Secure Federated Learning of On-Device Recommender System with Large Embedding | Accept (poster) | Summary: This paper proposes a privacy-preserving retrieval and aggregation method, SecEmb, that can be applied to federated recommender systems that leverage sparse embeddings. Likewise, due to the latent representation, SecEmb achieves faster computational & communication times versus standard (uncompressed) FedRec protocols. This is important in the federated setting, where communication, memory, and computational constraints are heavily present.
Claims And Evidence: > Our SecEmb consists of two correlated modules: (1) a privacy-preserving embedding retrieval module that allows users to download relevant embeddings from the server, and (2) an update aggregation module that securely aggregates updates at the server.
Section 4 nicely details each piece and provides insight into SecEmb's communication efficiency.
Methods And Evaluation Criteria: The paper does have a nice experimental section comparing SecEmb to many other compressed messaging schemes. It is a robust section which does showcase SecEmb's (marginal) improvement in performance.
Theoretical Claims: I did not check the correctness. I will mention that the Theorems provided and proven in the supplementary material should be summarized in the main body.
Experimental Designs Or Analyses: I did not thoroughly check the validity of the experimental analysis.
Supplementary Material: I did not thoroughly review the supplementary material.
Relation To Broader Scientific Literature: It seems that this paper is slightly orthogonal to actual Federated RecSys training (i.e. actual compression) and instead focuses on how to transmit the compressed representations in a secure manner. I would ask for a more detailed description of the Federated RecSys setting (see my question below). However, I am not familiar enough in the secure messaging area to know how novel this new messaging scheme is.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: ## Strengths
1. Effective retrieval and aggregation method applied to Federated RecSys. This is useful for the burgeoning federated recsys area.
2. Well-written.
3. Good empirical performance.
## Weakness
1. Slightly unclear RecSys problem formulation (see comment below).
2. The novelty simply revolves around applying a new secure retrieval and aggregation method fit for existing latent RecSys methods. Much of the speedup arises from using a smaller, compressed space.
Other Comments Or Suggestions: I would add more information about the problem setting. I had to dig a bit further into your cited work (e.g., Xue 2017) to better understand the low-dimensional latent factor setting. Specifically, a better understanding of the mapping from X,Y to P,Q and then going from P,Q to the recommendation value. Furthermore, aren't R and Y related? I didn't see this mentioned.
Questions For Authors: 1. What are the compression rates used for the other message compression methods? Are the hyper-parameters consistent across the methods?
2. Is the novelty of this work simply constructing a secure message process on top of existing latent RecSys methodology?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your insightful comment and recognition of our work. Hope our response below could address your concerns.
*Q1*: I will mention that the Theorems provided and proven in the supplementary material should be summarized in the main body.
*Ans*: Thanks for your suggestion. We will move Theorem F.1 and F.2 to the main body to highlight our theoretical insights.
*Q2*: Questions on problem setting – mapping from X,Y to P,Q and then going from P,Q to the recommendation value & relationship between R and Y.
*Ans*: We will add more explanation in Section 3.1 (Problem Statement) to make the setting clearer. P and Q refer to the user and item embeddings respectively, while X and Y refer to the user and item attributes respectively. P and Q are matrices that **encode the user or item ids into embeddings**. X and Y represent **attributes of users or items**, such as demographic details, genre, or price.
For example, in DeepFM, we map the user id into a vector $p\in \mathbb{R}^d$ using matrix P, and map the item id into a vector $q\in \mathbb{R}^d$ using matrix Q. Similarly, X and Y are converted into embedding matrices $V_x\in \mathbb{R}^{l_x\times d}$ and $V_y\in \mathbb{R}^{l_y\times d}$, where $l_x$ and $l_y$ are the number of user and item attributes. Then the embeddings $p$, $q$, $V_x$, and $V_y$ are concatenated together and go through several hidden layers to obtain the final prediction.
Below we further provide explanations of R, Y, and X:
- R represents the ratings users assign to items (the item and user attributes are NOT included). $R_u$ is the collection of (item id, rating) for user u.
- Y denotes item attributes, such as price and genre. $Y\in \mathbb{R}^{m\times l_y}$ gives the attributes for each item, and $y_i\in \mathbb{R}^{l_y}$ is the attributes for item i.
- X denotes user attributes, such as age. $X\in \mathbb{R}^{n\times l_x}$ gives the attributes for each user, and $x_u\in \mathbb{R}^{l_x}$ is the attributes for user u.
Therefore, R, X, and Y provide distinct information, ensuring they complement rather than overlap with each other.
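To make the mapping concrete, here is a minimal sketch of the lookup-and-concatenate step described above. All sizes and the random tables are hypothetical placeholders; the actual models follow the cited DeepFM/Xue (2017) architectures:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, l_x, l_y, d = 100, 500, 4, 6, 8  # hypothetical user/item/attribute/embedding sizes

P = rng.normal(size=(n, d))      # user-id embedding table
Q = rng.normal(size=(m, d))      # item-id embedding table
V_x = rng.normal(size=(l_x, d))  # user-attribute embeddings
V_y = rng.normal(size=(l_y, d))  # item-attribute embeddings

u, i = 7, 42                     # a user id and an item id
features = np.concatenate([P[u], Q[i], V_x.ravel(), V_y.ravel()])
# `features` is then fed through several hidden layers to obtain the prediction.
assert features.shape == (d * (2 + l_x + l_y),)
```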
*Q3*: The novelty simply revolves around applying a new secure retrieval and aggregation method fit for existing latent RecSys methods.
*Ans*: Our work aims to construct a **lossless and efficient secure training protocol** for embedding-based RecSys. For security, we would like to ensure that the server learns nothing about individual gradients except the aggregated results during federated training. To achieve this goal, existing FL protocols typically adopt full model downloading and SecAgg, which incur significant communication and computation cost.
This paper designs an efficient protocol **based on the properties of embedding-based RecSys**: 1) users typically interact with only a small fraction of available items; 2) only the interacted item embeddings are relevant for prediction, and thus the gradient of the item embedding table is a sparse matrix. Accordingly, we design a secure and lossless protocol that is efficient both computationally and communicationally. **In Table 10 (Appendix J) we compare SecEmb with existing protocols, showing the distinct advantage of SecEmb in terms of efficiency, security, and utility for resource-constrained edge devices, highlighting our contribution of significant speedup while ensuring accuracy and security.**
Note that **it is a non-trivial task to simultaneously ensure security and succinct user computation & communication cost** (i.e., cost depends linearly on the number of non-zero elements in each user's update, and is independent of or logarithmic in the total parameter size), and to the best of our knowledge, **there is no work that simultaneously achieves the two goals for embedding-based models (including LMs and RecSys)**. We address the problem by a novel construction of an FL training algorithm for embedding-based RecSys, which can be extended to the training of other large embedding models, such as language models (see Appendix L.2. and our response to Reviewer EqFa's Q6).
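To illustrate the security goal only (the server learns nothing beyond the aggregate), here is a toy two-server additive secret sharing scheme. This is a didactic sketch, not SecEmb's actual FSS-based construction, and it omits all payload optimization:

```python
import random

MOD = 1 << 32

def share(update):
    """Split a user's update vector into two additive shares.
    Each share alone is uniformly random; only their sum reveals the update."""
    s0 = [random.randrange(MOD) for _ in update]
    s1 = [(u - r) % MOD for u, r in zip(update, s0)]
    return s0, s1

# Three users' (sparse) item-embedding gradient vectors.
users = [[1, 0, 3], [0, 2, 0], [4, 0, 0]]
shares = [share(u) for u in users]

# Each server sums the shares it received; combining the two per-server sums
# yields only the aggregate, never any individual user's update.
agg0 = [sum(s[0][j] for s in shares) % MOD for j in range(3)]
agg1 = [sum(s[1][j] for s in shares) % MOD for j in range(3)]
aggregate = [(a + b) % MOD for a, b in zip(agg0, agg1)]
assert aggregate == [5, 2, 3]
```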
*Q4*: What are the compression rates used for the other message compression methods? Are the hyper-parameters consistent across the methods?
*Ans*: The compression rates for other message compression methods, given by the Reduction Ratio in Table 2 Section 5, are computed as a result of:
- For SVD and CoLR, based on the **rank** of the reduced item embedding update matrix (Table 9, Appendix I). The ranks are chosen to ensure a comparable reduction ratio with SecEmb while preserving utility.
- For Bit8quant and Ternquant, based on the **bit size per value** of the quantized embedding updates. Note that the bit size to represent a single value is fixed for each quantization method.
The **hyper-parameters for SecEmb and other message compression methods remain consistent** across the models and datasets, using those given by Table 7 in Appendix I. | Summary: In the context of federated recommendation systems, this paper proposed the SecEmb protocol based on the Functional Secret Sharing algorithm and its coding property, which combines on-device request and update consistency on item indices. The protocol allows the client-side to only download and upload the embeddings of rated items, rather than the complete embedding table. It also ensures that the server is unaware of the rated items of each user and corresponding updated embeddings. Compared to the Secure FedRec, it significantly reduces the communication overhead between edge devices and the servers, as well as the computational overhead on the user side. Unlike existing dimensionality reduction and quantization methods, it retains complete information without sacrificing accuracy.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The evaluation was conducted on MovieLens and Yelp datasets using factorization-based models.
Theoretical Claims: Yes, I checked Section Complexity and Security Analysis.
Experimental Designs Or Analyses: Evaluation can be enhanced as follows:
- The criteria for dataset splitting should be added. Is it divided according to the chronological order of all samples, the rating time of each user, or some other method?
- More large-scale datasets are desired, such as the series of Amazon datasets.
- The current mainstream sequence recommendation models should be added for evaluation.
Supplementary Material: I have reviewed most of the appendices, which supplement the areas not explained in detail in the main text.
Relation To Broader Scientific Literature: Federated learning for recommendation models with item embeddings.
Essential References Not Discussed: The comparison with Kvsagg: Secure aggregation of distributed key-value sets published in ICDE 2023 should be added.
Other Strengths And Weaknesses: The key innovation of this work lies in effectively utilizing redundant data from the two transmissions to minimize various overheads in federated learning as much as possible. However, a limitation of this method is that it requires at least two server parties, and the application must ensure that the data from these two parties can be collated. Other suggestions on evaluation and comparison with related work can be found above.
Other Comments Or Suggestions: The font size in most figures is somewhat small and difficult to read (for instance, the text in the overall framework of Figure 1 is not clear enough). The distinction between X, Y, and P, Q is unclear in Section 3, and the symbols x and i are somewhat confusing in terms of their meanings in Section 4.
Questions For Authors: Q1: Is it possible to adapt this method to a single server?
Q2: Compared to the method of downloading the full embedding table, does this approach result in any loss in recommendation effectiveness?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable comment and suggestions. Hope our response below could address your concerns.
*Q1*: The criteria for dataset splitting should be added.
*Ans*: We randomly split the ratings **for each user** into training and testing sets.
*Q2*: More large-scale datasets are desired & Add evaluation on sequence recommendation models
*Ans*: We add experiments on the full Amazon datasets (https://cseweb.ucsd.edu/~jmcauley/datasets/amazon/links.html). After filtering out users with fewer than 3 ratings so that the sequence recommendation models can be trained, we have 9,267,503 items and 6,775,277 users.
We test two sequence models: Caser[1] and SASRec[2] (using hyperparameters in [1] and [2]), with results presented in Table R1. SecEmb reduces the upload communication cost by up to 2500x under huge item size (9 million+) and highly sparse data (density<0.002‰) while maintaining accuracy.
[**Table R1.** Accuracy and Reduction Ratio (R.R.) using sequence models on Amazon dataset.]
|||SecEmb|Bit8quant|Ternquant|SVD|CoLR|
|-|-|-|-|-|-|-|
|Caser|HR@10|0.629|0.628|0.623|0.627|0.625|
||NDCG@10|0.483|0.481|0.478|0.483|0.480|
||R.R.|2549|4|8|38|38|
|SASRec|HR@10|0.639|0.636|0.635|0.621|0.600|
||NDCG@10|0.485|0.485|0.483|0.472|0.463|
||R.R.|2342|4|8|12|12|
[1] Tang, J., & Wang, K. (2018). Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the eleventh ACM international conference on web search and data mining (pp. 565-573)
[2]Kang, W. C., & McAuley, J. (2018). Self-attentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM) (pp. 197-206).
*Q3*: The comparison with Kvsagg should be added.
*Ans*: We will add the comparison as follows:
- **Kvsagg leaks more information to the server than SecEmb**. Beyond aggregated results, KvsAgg exposes the exact number of clients who rated each item within a batch, whereas SecEmb limits the server's knowledge to the aggregate alone.
- Kvsagg has a **minor failure rate**, occasionally yielding inaccurate aggregation results, whereas SecEmb ensures error-free aggregation.
- **Kvsagg incurs higher communication costs than SecEmb.** Kvsagg's cost scales with the **union** of all participating users' rated items, whereas SecEmb's cost scales with each single user's rated items. Table R2 presents the upload communication cost of the two methods, showing SecEmb's communication benefits.
[**Table R2.** Upload communication cost (in MB) using MF.]
||ML100K|ML1M|ML10M|ML25M|Yelp|
|-|-|-|-|-|-|
|Kvsagg|1.21|2.49|3.41|4.37|7.31|
|SecEmb|0.17|0.27|0.28|0.51|0.52|
*Q4*: Issues of two server setting
*Ans*: **The two-server setting is common in MPC protocols and has been adopted in industry by companies such as Google and Apple** (see Response to Reviewer TCtN's Q2). In SecEmb, one of the servers could be the recommendation service provider, and the other could be a governmental organisation or a privacy service provider. Both parties have sufficient motivation to train a strong model and perform the computation correctly.
Compared with two-server protocols, single-server secure computation solutions generally incur significant overhead: they either require multiple rounds of communication and heavier user computation [3], or rely on computationally intensive homomorphic encryption [4]. While SecEmb becomes less efficient under a single-server setting, we can relax the two non-colluding server assumption using a multi-party distributed point function (see Response to Reviewer TCtN's Q2).
[3]Bonawitz, K., ... & Seth, K. (2017). Practical secure aggregation for privacy-preserving machine learning. In proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1175-1191).
[4]Chai, D., ... & Yang, Q. (2020). Secure federated matrix factorization. IEEE Intelligent Systems, 36(5), 11-20.
*Q5*: The font size in most figures is somewhat small.
*Ans*: Thanks for pointing out. We will refine the figures to enlarge the font size.
*Q6*: The distinction between X, Y, and P, Q is unclear in Section 3, and the symbols x and i are somewhat confusing in terms of their meanings in Section 4.
*Ans*: P and Q are embedding matrices that **encode the user or item ids into embeddings**. X and Y represents **attributes of users or items**, such as demographic details, genre, or price. (see Response to Reviewer sPkh's Q2)
i denotes the target item id, while x denotes any item id fed into the point function. The point function outputs a non-zero value only when x = i.
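For intuition, the point function can be sketched as below. Real FSS/DPF keys are succinct (logarithmic in the item size m), whereas this expanded additive sharing is linear in the domain and is shown purely for illustration:

```python
import random

MOD = 1 << 32

def share_point_function(i, value, domain_size):
    # Toy additive sharing of the point function f_i(x) = value if x == i else 0.
    r = [random.randrange(MOD) for _ in range(domain_size)]
    k0 = r  # server 0's key: uniformly random on its own
    k1 = [((value if x == i else 0) - r[x]) % MOD for x in range(domain_size)]
    return k0, k1

k0, k1 = share_point_function(i=3, value=7, domain_size=8)
# Each key alone looks random; summing the two evaluations reconstructs f_i(x).
assert all((k0[x] + k1[x]) % MOD == (7 if x == 3 else 0) for x in range(8))
```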
*Q7*: Compared to downloading the full embedding table, does this approach result in any loss?
*Ans*: SecEmb does NOT result in any loss compared with full embedding download, as long as users utilize only the embeddings of the target and previously interacted items to predict the rating or likelihood (the embeddings of other items are irrelevant). Most embedding-based RecSys models satisfy this criterion (see Response to Reviewer TCtN's Q1).
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response! After reading the response, the reviewer still has some key concerns as follows and thus will maintain the score.
1. What are the setups (e.g., how many clients each round, how many rounds to converge, the machine configurations) and the detailed time and communication overhead of SecEmb, Bit8quant, Ternquant, SVD, and CoLR for evaluating two sequence models on the full Amazon datasets? Implementation details should be included.
2. How does the parameter m'_u affect the security analysis of the designed protocol? A small m'_u is the key to reducing the cost, but will it impact the security guarantee of the proposed protocol? The formal analysis on Page 16 misses the analysis on m'_u or the empirically set m' across all the users.
3. The design relies on private embedding retrieval for download and function secret sharing for upload, where the latter is a new application in federated recommendation. What is the new technical contribution of this work in terms of secure protocol design and analysis?
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer WwRR,
Thank you for your thoughtful follow-up. We hope the responses below address your concerns clearly and thoroughly.
**Q1**: What are the setups (e.g., how many clients each round, how many rounds to converge, the machine configurations) and the detailed time and communication overhead of SecEmb, Bit8quant, Ternquant, SVD, and CoLR for evaluating two sequence models on the full Amazon datasets? Implementation details should be included.
**Ans**: There are 100 clients in each round, and it requires around 50,000 rounds to train the model on the huge Amazon dataset. We use a server with an L40 GPU and an Intel Xeon Platinum 8336C CPU (CUDA version 12.2). The four baselines are integrated with the most efficient SecAgg protocol (2-server additive secret sharing) to ensure security. SecEmb and the four baselines adopt the same hyperparameters (from the original works of Caser and SASRec) for each sequence model. The rank of the reduced item embedding update matrix for SVD and CoLR is set to 4.
The user computation time and communication overhead are listed in Table R1. It can be observed that **SecEmb results in communication cost reduction by at least 180x and computation cost reduction by at least 40x compared with the four baselines, since it effectively leverages the sparsity of the huge dataset for payload optimization**. We will detail the setting and overhead in our paper.
[**Table R1.** Computation cost per user (in seconds) and upload communication cost (in MB) in one iteration on Amazon dataset.]
|||SecEmb|Bit8quant|Ternquant|SVD|CoLR|
|-|-|-|-|-|-|-|
|Caser|Computation cost|0.011|0.527|0.518|0.870|0.451|
||Communication cost|1.36|866.65|433.33|276.65|276.65|
|SASRec|Computation cost|0.004|0.324|0.316|0.443|0.285|
||Communication cost|1.48|864.21|432.27|276.77|276.77|
**Q2**: How the parameter m'_u affects the security/level analysis of the designed protocol? Since a small m'_u is the key of reducing the cost, but will it impact the security guarantee of the proposed protocol? The formal analysis on Page 16 misses the analysis on m'_u or the empirically set m' across all the users.
**Ans**: In Appendix C, we detailed the method to select a universal $m'$ for all users. The universal $m'$ is determined by the average of all users' individual $m_u'$, using a SecAgg protocol with only $O(1)$ communication and computation overheads per user. Therefore, **the server learns only the overall average rated item size, without gaining any information about individual $m'_u$ values**.
**Since the $m'$ is pre-determined in advance, the value of $m'$ is unrelated to the security guarantee of the protocol.** Each user sends the same number of keys, concealing both their actual $m_u'$ and the corresponding key values and indices. Note that the formal analysis in Appendix F states that under **a pre-determined and universal $m'$**, the collection of $m'$ FSS keys hide **everything** about the user's gradient.
**Q3**: The design relies on private embedding retrieval for download and function secret sharing for upload, where the latter is a new application in federated recommendation. What is the new technical contribution of this work in terms of secure protocol design and analysis?
**Ans**: Thanks for your question. We would like to highlight our contributions as follows:
- Existing FL protocols consider payload optimization for the upload and download stages only **separately**, while our method **cleverly leverages the connection between upload and download transmission** to optimize the computation and communication overhead. Specifically, we identify a crucial property of the FSS key, and observe that the indices of relevant embeddings remain the same across both stages. Based on these observations, we design a significantly smaller and more efficient FSS key.
- Given the optimized FedRec protocol, **we provide a rigorous security proof that our optimized construction of secEmb maintains security**, i.e., the server learns nothing except the updated models. Note that the security proof for the cryptographic protocol is subtle and non-trivial, especially when we transmit **a condensed & optimized version of FSS key instead of the raw FSS key** in the upload stage, which necessitates a from-scratch security analysis.
----
Overall, **we sincerely appreciate your thoughtful and constructive feedback. Your insightful feedback helps make our work more thorough and robust.** We would be truly grateful if you could kindly take our further responses into consideration and reconsider your evaluation.
Best Regards,
Authors | Summary: This paper proposes a lossless and efficient federated recommendation training protocol to address the challenge of balancing efficiency and privacy in federated recommendation systems.
Additionally, it explores the use of row-wise sparsity in embedding matrices to optimize computational load.
Extensive experiments demonstrate the protocol's superior performance in terms of communication and computational efficiency.
Claims And Evidence: This paper makes a convincing case to a certain extent. Extensive experiments demonstrate the protocol's superior performance in terms of communication and computational efficiency.
Methods And Evaluation Criteria: The method proposed in this paper is relatively reasonable, using a lossless and efficient aggregation protocol to enhance efficiency and privacy. However, the evaluation criterion for utility in this paper is relatively outdated. Specifically, this paper uses RMSE to assess the performance of the algorithm, while the current mainstream methods for verifying the performance of recommendations mainly include HR, NDCG, MRR, etc.
Theoretical Claims: I have examined the theoretical contributions of the paper and found that its original theoretical innovations are relatively limited.
Experimental Designs Or Analyses: I examined the experimental setup and analysis and found that the dimensional settings of NCF vary across different datasets, whereas other backbone models, such as MF, FM, and DeepFM, maintain consistent dimensions across datasets. This inconsistency may lead to an unfair comparison.
Supplementary Material: Yes, I have read the supplementary material.
Relation To Broader Scientific Literature: This paper leverages the sparsity of embedding matrices to achieve a lossless and efficient federated recommendation training protocol, which is likely to attract widespread attention from researchers.
Essential References Not Discussed: This paper primarily focuses on addressing efficiency and privacy issues in federated recommendation systems. However, some essential related works, such as FedMF [1] and LightFR [2], are missing. Specifically, FedMF introduces a homomorphic encryption technique to ensure privacy in federated recommendation systems, while LightFR leverages learning to hash to balance efficiency and privacy. The authors should conduct a more thorough review of the field and incorporate relevant literature to enhance the completeness of their work.
[1] Chai D, Wang L, Chen K, et al. Secure federated matrix factorization[J]. IEEE Intelligent Systems, 2020, 36(5): 11-20.
[2] Zhang H, Luo F, Wu J, et al. LightFR: Lightweight federated recommendation with privacy-preserving matrix factorization[J]. ACM Transactions on Information Systems, 2023, 41(4): 1-28.
Other Strengths And Weaknesses: Strengths:
1. The overall logic of this article is relatively clear.
2.This paper employs a large number of experiments and analyses to verify the effectiveness of the proposed method.
Weaknesses:
1. In the experimental part, there are some inconsistent settings for the experiments, which may lead to unfair comparisons.
2. The relevant work needs to be further supplemented to improve the overall progress in the relevant field.
3. The paper lacks effective analysis on certain experimental setups, such as why additional relevant experiments on language models are conducted and the necessity of MF and FM experiments.
Other Comments Or Suggestions: As far as I know, this paper merely utilizes the behavioral interaction information within the recommendation system. On this basis, FM is equivalent to MF without the utilization of additional auxiliary information. Therefore, is there redundancy between the MF and FM experiments?
Questions For Authors: The core topic of this paper is to explore the privacy and efficiency of the embedding tables in recommendation systems. However, the appendix includes additional experiments related to the embedding tables of language models, which may lead to ambiguity in the topic. This is because the entities of Embedding in recommendation systems and those in language models are different. In recommendation systems, the objects are items, while in language models, the objects are tokens. Therefore, the author needs to provide an analysis of the necessity of conducting the related experiments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your insightful comment and valuable suggestions. Hope our response below could address your concerns.
*Q1*: The evaluation criterion for utility in this paper is relatively outdated.
*Ans*: Our paper focuses on the rating prediction task, and thus utilizes RMSE to measure the discrepancy between predicted and actual ratings. We have added experiments to evaluate the HR and NDCG, with a subset of results presented in Table R1, showing the superior performance of SecEmb over baselines. In response to *Reviewer WwRR's Q2*, we also present the experiment results in terms of HR and NDCG for sequence recommender system.
[**Table R1.** Accuracy on ML1M using MF.]
||SecEmb|Bit8quant|Ternquant|SVD|CoLR|
|-|-|-|-|-|-|
|HR@10|0.593|0.593|0.592|0.591|0.590|
|NDCG@10|0.337|0.335|0.333|0.335|0.331|
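For reference, a minimal sketch of how HR@10 and NDCG@10 are typically computed for a single user with one held-out relevant item (a hypothetical helper for illustration, not our actual evaluation code):

```python
import math

def hr_ndcg_at_k(ranked_items, relevant, k=10):
    # Hit Ratio: 1 if the held-out item appears in the top-k recommendation list.
    # NDCG: discounts the hit by the log of its rank position.
    topk = ranked_items[:k]
    if relevant not in topk:
        return 0.0, 0.0
    rank = topk.index(relevant)          # 0-based position in the list
    return 1.0, 1.0 / math.log2(rank + 2)

hr, ndcg = hr_ndcg_at_k([5, 9, 2, 7, 1], relevant=2)
assert hr == 1.0 and abs(ndcg - 0.5) < 1e-12  # hit at 0-based rank 2 -> 1/log2(4)
```

Per-user values are then averaged over all test users to produce the table entries above.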
*Q2*: The paper’s original theoretical innovations are relatively limited.
*Ans*: We would like to highlight the theoretical contributions of our work as follows:
- **Reduction of User Overhead:** Existing SecAgg protocols incur user communication and computation overhead linear in the item size $m$, while we theoretically show that our protocol achieves exponential reduction to $\log m$ (Section 4.4.1).
- **Security analysis:** We provide a rigorous theoretical proof that our optimized construction of SecEmb maintains security, i.e., the server learns nothing except the updated models (Appendix F). We will move Theorem F.1 and F.2 to the main body to highlight our theoretical insights.
To the best of our knowledge, we are the first to achieve both security and minimal user overhead for embedding-based models, demonstrating efficiency benefits both theoretically and practically, while remaining lossless in principle and practice.
*Q3*: The dimensional settings of NCF vary across different datasets
*Ans*: For NCF we adjusted the embedding sizes based on each dataset's characteristics, but **the comparison is fair as we use consistent hyperparameters (including dimension) for both SecEmb and the baselines**. To further validate our approach, we added experiments that set the embedding size for ML100K, ML1M, and Yelp to 24, matching that of ML10M and ML25M. The results, presented in Table R2, demonstrate the same observation: SecEmb maintains utility and reduces upload communication cost by up to 76x for NCF.
[**Table R2.** RMSE and Reduction Ratio (R.R.) for NCF under d=24.]
|||SecEmb|Bit8quant|Ternquant|SVD|CoLR|
|-|-|-|-|-|-|-|
|ML100K|RMSE|0.944|0.951|0.953|0.947|0.961|
||R.R.|4.27|3.90|7.56|4.62|4.75|
|ML1M|RMSE|0.902|0.904|0.913|0.908|0.919|
||R.R.|6.30|3.96|7.80|5.94|6.01|
|Yelp|RMSE|1.029|1.031|1.032|1.037|1.059|
||R.R.|76.54|4.00|7.99|12.21|12.22|
*Q4*: Some essential related works, such as FedMF and LightFR, are missing.
*Ans*: We will add more discussion of both works in our paper as follows:
- FedMF ensures privacy with HE techniques. Compared with SecEmb, FedMF incurs substantial computation overhead and cannot simultaneously reduce user computation & communication cost while ensuring security.
- LightFR improves the efficiency by binarizing continuous user/item embeddings through learning-to-hash, but it suffers utility loss in principle and incurs overhead linear in item size m (as opposed to SecEmb's exponential reduction).
*Q5*: Is there redundancy between the MF and FM experiments?
*Ans*: In our framework, MF updates involve only item embeddings, while FM updates encompass both item embeddings and item feature embeddings. This distinction results in differing parameter sparsities between the two models since item feature embeddings are not generally sparse (see Table 8, Appendix I). **By evaluating SecEmb's efficiency and utility across models with varying item embedding proportions and thus update sparsity ratios, we provide a thorough assessment of its performance.**
*Q6*: Necessity of conducting the related experiments on language models
*Ans*: We aim to show that **our framework is applicable to the federated training of language models by substituting rated item IDs with target token IDs in SecEmb's algorithm**. Roughly speaking, each user encodes the relevant token ids (i.e., the token ids appearing in their local dataset) with FSS keys, and the server computes secret shares of token embeddings for embedding retrieval or aggregates the secret shares of token embedding updates for SecAgg. This reduces the overhead since the tokens appearing in each user’s local dataset are typically a small subset of the entire vocabulary. Table 17 shows that SecEmb achieves an overhead reduction ratio between 1.06 and 1.55 for full-parameter fine-tuning, and it can achieve a higher reduction ratio when combined with parameter-efficient fine-tuning (e.g., up to 30x reduction when applying LoRA on Llama3-8b).
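As a toy illustration of the two-server retrieval principle behind this encoding (written with plain additive secret sharing of a one-hot selector instead of actual FSS keys, so the per-user message here is $O(m)$ rather than SecEmb's $O(\log m)$; all names are hypothetical):

```python
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing

def share_selector(token_id, vocab_size):
    """Split a one-hot selector into two shares; each share alone is
    uniformly random, so a single server learns nothing about token_id."""
    s0 = [secrets.randbelow(P) for _ in range(vocab_size)]
    s1 = [((1 if j == token_id else 0) - s0[j]) % P for j in range(vocab_size)]
    return s0, s1

def server_answer(share, table):
    """Each server returns the inner product of its share with each
    embedding dimension; neither server sees which row was selected."""
    dim = len(table[0])
    return [sum(sh * row[d] for sh, row in zip(share, table)) % P
            for d in range(dim)]

table = [[1, 2], [3, 4], [5, 6], [7, 8]]  # 4 tokens, 2-dim integer embeddings
s0, s1 = share_selector(2, vocab_size=4)
a0, a1 = server_answer(s0, table), server_answer(s1, table)
retrieved = [(x + y) % P for x, y in zip(a0, a1)]
assert retrieved == [5, 6]  # the user reconstructs exactly row 2
```

FSS replaces the length-$m$ share vectors with short keys from which each server expands its share locally, which is where the logarithmic communication comes from.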
Claims And Evidence: Line 215: "only the embeddings for interacted items are relevant for model updates." Could the authors elaborate more on this claim? For example, is it a claim made for a specific type of recommendation systems, or it stands for any recommendation systems?
Other main claims seem to be supported by both complexity analysis and empirical experiments.
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experiment design is sound.
Supplementary Material: Supplementary is skimmed through
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is overall well written, and the evaluation is comprehensive in the given scope. That being said, the scope is limited: the proposed method only applies to secure federated recommendation system, where user data is sparse, and with two non-colluding servers. The paper would be strengthened if it can be applied to a wider range of settings, such as only one trusted server, or where the sparsity is unknown.
Other Comments Or Suggestions: * Line 123. The introduction of l_x is abrupt, and its explanation only comes a few paragraphs later.
* The explanation for "remaining parameters theta" is vague
Questions For Authors: * Is it easy to relax the assumption of two non-colluding servers?
* How robust is the proposed method to local devices dropping out during training?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and positive rating of our paper. We hope our response below addresses your concerns.
*Q1*: Elaboration on "only the embeddings for interacted items are relevant for model updates"
*Ans*: It works for recommender systems (RS) based on item embeddings, as long as **users utilize only the embeddings of the target and previously interacted items to predict the rating or likelihood (the embeddings of other items are irrelevant)**. In other words, to predict $r_{ui}$, user $u$ only utilizes the embeddings of item $i$ or previously interacted items $I_u$. Most embedding-based RSs satisfy this criterion, including but not limited to:
- Latent factor-based collaborative filtering RSs and their extensions, such as MF, NCF, FM, and DeepFM.
- Sequential RSs based on item embeddings, such as CNN-based models (Caser), RNN-based models (GRU4Rec), and Attention-based models (SASRec).
*Q2*: Assumption on two non-colluding servers.
*Ans*: **Two non-colluding servers are a common assumption in MPC protocols [1][2][3], and the model has been adopted in industry.** For example, Prio[1] has been used by Google and Apple to measure the effectiveness of Covid-19 exposure-notification apps on iOS and Android[4] and to improve the iOS Photos app[5]. In SecEmb, one of the servers could be the recommendation service provider, and the other a governmental organisation or privacy service provider.
Compared with two-server protocols, single-server secure computation solutions generally incur significantly higher communication and computation costs. They either require multiple rounds of communication and heavier user computation[6], or rely on computationally intensive homomorphic encryption[7].
**The two non-colluding servers assumption can be relaxed using m-party distributed point functions (m>2), allowing collusion among up to m-1 parties**[8]. Each user generates m FSS keys for their updates, which are distributed to m servers for aggregation and reconstruction. However, this protocol achieves less overhead reduction than the two-server setting, since user overhead scales with item size as $O(\sqrt{m})$, rather than logarithmically as $O(\log m)$ in the two-server case. We leave further reduction to future research.
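A minimal sketch of the two-server aggregation principle under the non-colluding assumption (plain additive secret sharing over toy integer updates; the actual protocol uses FSS keys, which additionally compress each user's upload):

```python
import secrets

P = 2**61 - 1  # prime modulus; updates here are small non-negative integers

def share(vec):
    """Split an update vector into two additive shares; each share alone
    is uniformly random, so one server learns nothing about the update."""
    s0 = [secrets.randbelow(P) for _ in vec]
    s1 = [(v - r) % P for v, r in zip(vec, s0)]
    return s0, s1

# Three users with sparse embedding updates over m = 4 items.
updates = [[0, 3, 0, 1], [2, 0, 0, 5], [0, 0, 4, 0]]
shares = [share(u) for u in updates]

# Each server sums the shares it received; only the combination of both
# servers' sums reveals the aggregate, never an individual user's update.
agg0 = [sum(s0[j] for s0, _ in shares) % P for j in range(4)]
agg1 = [sum(s1[j] for _, s1 in shares) % P for j in range(4)]
aggregate = [(a + b) % P for a, b in zip(agg0, agg1)]
assert aggregate == [2, 3, 4, 6]  # coordinate-wise sum of all updates
```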
[1]Corrigan-Gibbs, H., & Boneh, D. (2017). Prio: Private, robust, and scalable computation of aggregate statistics. In 14th USENIX symposium on networked systems design and implementation (pp. 259-282).
[2]Mohassel, P., & Zhang, Y. (2017). Secureml: A system for scalable privacy-preserving machine learning. In 2017 IEEE symposium on security and privacy (SP) (pp. 19-38).
[3]Boneh, D., ... & Ishai, Y. (2021). Lightweight techniques for private heavy hitters. In 2021 IEEE Symposium on Security and Privacy (SP) (pp. 762-776).
[4]Apple and Google. (2021). Exposure Notification Privacy-preserving Analytics (ENPA) white paper.
[5]https://machinelearning.apple.com/research/scenes-differential-privacy
[6]Bonawitz, K., ... & Seth, K. (2017). Practical secure aggregation for privacy-preserving machine learning. In proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1175-1191).
[7]Chai, D., ... & Yang, Q. (2020). Secure federated matrix factorization. IEEE Intelligent Systems, 36(5), 11-20.
[8]Boyle, E., Gilboa, N., & Ishai, Y. (2015). Function secret sharing. In Annual international conference on the theory and applications of cryptographic techniques (pp. 337-367).
*Q3*: Application to unknown sparsity.
*Ans*: **SecEmb supports unknown sparsity, where the server lacks prior knowledge of the sparsity ratio**. Each user uploads a self-defined number of FSS keys based on their own data sparsity, which can exceed the number of interacted items to obscure it. The server aggregation still proceeds correctly with varying FSS key counts across clients. Appendix K.5 shows that SecEmb effectively reduces overhead when the density of item embedding updates is below 50\% in general cases. This approach extends to other sparse embedding update settings with unknown sparsity ratios, such as language models.
*Q4*: Abrupt introduction of l_x & Vague explanation for "remaining parameters theta"
*Ans*: Thanks for pointing these out; we will revise the paper accordingly. $l_x$ is the number of user features. Remaining parameters $\theta$ refer to the parameters other than user and item embeddings. For example, $\theta$ in DeepFM is the collection of: 1) parameters in hidden and output layers, and 2) embeddings for user and item attributes (rather than user or item ids).
*Q5*: Robustness to local devices drop out.
*Ans*: For n participants, SecEmb is robust to up to n-1 dropouts. It has one communication round for private embedding retrieval and another round for secure update aggregation. If users drop out in the second round, the servers can still aggregate the gradients for the remaining users correctly (see Appendix L.3).
Temporal Difference Flows | Accept (oral) | Summary: This paper studies the problem of learning generative horizon models (GHMs), which are generative models of the successor measure of a policy. That is, GHMs are models capable of generating samples from the discounted distribution of future states visited by a policy at any given time step $t$. In particular, this paper introduces Temporal Difference Flows (TD-Flow), a class of flow-matching techniques used to learn GHMs. The authors introduce three variants of TD-Flow (TD-CFM, Coupled TD-CFM, TD^2-CFM) and additional extensions to diffusion models (TD-DD, TD^2-DD) that improve the accuracy and stability of the learned GHMs. The methods are evaluated in robotics domains from the DeepMind Control Suite with respect to different error metrics, as well as in terms of performance when the methods are employed in a zero-shot transfer setting in combination with generalized policy improvement (GPI).
---
### Post Rebuttal
I thank the authors for their careful response. I am maintaining my acceptance score since I do not have additional questions.
However, I would like to emphasize the importance of including the confidence intervals for the figures in the revised manuscript.
Claims And Evidence: Overall, the paper provides strong empirical and theoretical support for its claims, with extensive benchmarking and mathematical justification.
Regarding its theoretical results, the paper offers convergence guarantees for the TD-based flow methods, demonstrating that TD2-CFM, TD-CFM, and TD-CFM(C) all converge to a fixed probability path (assuming that the flow-matching loss is minimized exactly at each iteration).
Regarding the empirical validation, the results indicate that TD^2-based methods outperform alternative models (GANs, VAEs, and diffusion models) in long-horizon predictions, with significantly lower error metrics such as Earth Mover’s Distance and value-function mean squared error.
The paper also includes ablation experiments to analyze the effect of path design, confirming the robustness of the introduced techniques.
Methods And Evaluation Criteria: The proposed methods are evaluated in 4 domains (Maze, Walker, Cheetah, Quadruped) from the DeepMind Control Suite. This benchmark is standard in the RL literature and provides diverse control challenges for evaluating the proposed method. One concern I have is that these domains are almost deterministic. It would be interesting to see how the proposed methods perform in domains with more stochastic transitions, e.g., with multi-modal distributions of next states.
Regarding the baselines employed in the experiments, they represent current state-of-the-art techniques for learning GHMs and make sense for the empirical evaluation.
Theoretical Claims: I checked the proof of Theorems 1, 2, and 3 and did not find any issues. I did not check the proofs of other theoretical results in the paper and in Appendix E.
Experimental Designs Or Analyses: The experimental design of the paper is well-conducted and includes an evaluation of the proposed techniques with different relevant metrics (log-likelihood, Earth Mover’s distance, and value function MSE). The authors also investigate how the performance degrades with larger horizons, and how they affect the performance of an agent that employs generalized policy improvement (GPI) for solving tasks.
My only concern with the empirical design is that the authors employ only three random seeds for each domain. The authors should justify the use of such a small value. Moreover, the results in Figure 2 and Figure 3 should also report the confidence level or statistical dispersion of the reported metrics.
Supplementary Material: The paper has no supplementary material. I checked the Appendix for proofs, pseudocode, and additional experimental details.
Relation To Broader Scientific Literature: This paper's main contributions extend previous works based on the successor representation/measure (SR), especially regarding methods for learning generative horizon models (GHMs) of the SR. Previous works have explored learning GHMs using techniques such as GANs, Normalizing Flows, and auto-encoders. This paper introduced new algorithms and theoretical results based on flow match and diffusion models for learning GHMs. Importantly, their methods are proven to converge to the successor measure and are shown to reduce the variance of sample-based gradient estimates.
This paper is also related to policy transfer methods based on generalized policy improvement (GPI). In particular, the authors employ their introduced techniques to perform GPI using GHMs conditioned on policy embeddings.
Essential References Not Discussed: To the best of my knowledge, the paper discusses the most relevant current state-of-the-art methods for learning GHMs.
Other Strengths And Weaknesses: Besides the points raised in the other sections, one of the paper's strengths is its clarity and presentation. In particular, the paper covers many different mathematical formulations (e.g., RL, flow matching, diffusion models) while maintaining mathematical rigor, clarity, and precision.
One point for improvement would be the addition of a dedicated related work section that pinpoints the most relevant contributions of the paper in contrast to the state-of-the-art. Currently, these points are scattered throughout the paper.
Other Comments Or Suggestions: - In line 29, Thakoor et al., 2022 refer to GHM as “geometric horizon models”, and not “generative horizon models”.
- In line 59, why restrict to deterministic policies?
- In Equations 2 and 3, the variable $\rho$ was not defined.
- In Section 5.2, I suggest discussing in more depth the relation with the GPI technique of (Thakoor et al., 2022), which also extends GPI via GHMs. Currently, in this section, the authors only mention the work that first introduced GPI (Barreto et al., 2017). Did the authors try to perform GPI similarly to Thakoor et al., 2022, that is, using geometric switching policies?
- The introduced method had many variants (TD-CFM, Coupled TD-CFM, TD^2-CFM) and additional extensions to diffusion models (TD-DD, TD^2-DD). This makes it difficult for the reader to track the disadvantages and advantages of each variant, as well as their unique properties. I suggest the authors include a table that summarizes all introduced variants with their properties and pros and cons for improved clarity of the paper.
Questions For Authors: See Other Comments Or Suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and thoughtful evaluation, and for recognizing the clarity, rigor, and contributions of the work—both theoretical and empirical. Your careful reading and constructive suggestions are greatly valued.
> **Number of seeds and statistical reporting**
Thank you for raising this important point. While we report results using 3 random seeds, this is done per policy and task, across multiple tasks within each domain—resulting in a substantial number of evaluations across the full benchmark suite. For generative modeling alone, the single- and multi-policy experiments involve over 800 individual runs, not including the computational cost of training the TD3 and FB policies. Additionally, in practice, we observed consistently low variance across runs, especially in value estimation and planning metrics, which informed our choice. We’ll be happy to include confidence intervals for figures in the revised manuscript.
> **Related work and comparison to GPI techniques**
Thank you for the keen suggestion. While we did not evaluate the geometric switching variant of GPI proposed in [1], we see it as a natural and promising direction for future work—particularly given that TD-Flow significantly improves the underlying GHM used in their approach. We believe combining TD-Flow with geometric switching can further improve performance, but given the technical depth already required to develop and analyze TD-based flow and diffusion models, we felt this extension was best left to future work that can give it proper attention. A more focused related work discussion is already included in Appendix C, and we’d be happy to bring specific references or points into the main text if there are any the reviewer feels would benefit from greater emphasis.
> **Clarifications and presentation improvements**
- **Line 29**: Thank you for catching this error; as defined in the appendix, this should read "geometric," not "generative."
- **Line 59**: We restrict to deterministic policies primarily for clarity and simplicity of exposition, especially in the theoretical sections. We’ll clarify this point in the revised manuscript.
- **Equations 2 and 3**: Thank you for pointing this out. We will add a definition of $\rho$ in the text. Specifically, $\rho$ denotes the empirical distribution over transitions sampled from a replay buffer or dataset.
- **Many variants and method comparison**: Thank you for the suggestion. To improve clarity, we’ll expand Table 3 in the Appendix to include the TD-diffusion variants and highlight the key differences between all methods. If space allows, we’ll consider moving this table to the main text; at the very least, we’ll ensure it’s clearly referenced from the main body to help guide readers.
---
We’re grateful for the reviewer’s thoughtful feedback and are excited to incorporate these refinements into the final version. We hope the improved clarity and added discussion further strengthen the case for the paper’s significance.
### References
[1] Shantanu Thakoor, Mark Rowland, Diana Borsa, Will Dabney, Rémi Munos, and André Barreto. Generalised Policy Improvement with Geometric Policy Composition. International Conference on Machine Learning (ICML), 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their careful response. I am maintaining my acceptance score since I do not have additional questions.
However, I would like to emphasize the importance of including the confidence intervals for the figures in the revised manuscript. | Summary: This paper introduces a flow-matching and diffusion-based generative modeling framework to learn accurate Geometric Horizon Models by integrating temporal difference structure in the learning objective and sampling process. The proposed TD$^2$-DD and TD$^2$-CFM achieve low modeling errors in both success measure and value function, and remain robust in long effective horizons. Furthermore, the TD-based GHM approaches demonstrate strong planning performance with Generalized Policy Improvement, greatly outperforming the baseline based on Forward-Backward representation.
## Update after rebuttal
I thank the authors for their detailed response. Most of my concerns have been addressed. I will keep my positive score.
Claims And Evidence: Please see the “Other Strengths and Weaknesses” and “Questions” sections below
Methods And Evaluation Criteria: Please see the “Other Strengths and Weaknesses” and “Questions” sections below
Theoretical Claims: No critical issues are observed in my review of theoretical claims.
Experimental Designs Or Analyses: Please see the “Other Strengths and Weaknesses” and “Questions” sections below
Supplementary Material: I’ve reviewed the hyperparameters, experimental details, and additional results in the supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The proposed method innovatively builds GHMs on flow and diffusion models, while injecting temporal difference structure into the sampling procedure and learning objective.
- Extensive evaluations demonstrate the efficacy of TD$^2$-CFM and TD$^2$-DD through non-trivial improvement over generative metrics and planning performance.
- TD$^2$-CFM remains effective and robust in curved path designs.
Weaknesses:
- It would be helpful to show how the quality and scale of offline datasets might affect the planning performance and modeling accuracy
- FB[1] evaluates tasks of different types, including complex maze navigation, robotic manipulation, and games, whereas the evaluations in Section 5.2 mainly focus on continuous locomotion tasks (3 out of 4 domains), including Walker, Cheetah, and Quadruped. Since TD$^2$-based GHM demonstrates the capabilities of modeling successor measures over a long effective horizon, tasks that require long-horizon reasoning, such as navigation tasks in a large Pointmaze or the four-room setting of MiniGrid, should also be tested.
- So far the evaluation seems mainly performed in low-dimensional but deterministic environments. Given the strong capabilities of flow and diffusion models, it would be a meaningful investigation to understand their performance in stochastic or partially observable environments.
[1] Touati, A. and Ollivier, Y. Learning one representation to optimize all rewards. In Neural Information Processing Systems (NeurIPS), 2021.
Other Comments Or Suggestions: 1. Please re-format the equation (the parentheses in the denominator) on the left column of Line 189.
Questions For Authors: 1. Bootstrapping is one cause of the training instability in RL when combined with off-policy learning and function approximation. How would the use of bootstrapped samples affect the training stability of the proposed GHMs overall?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their detailed and thoughtful evaluation. We appreciate the recognition of our theoretical contributions, the robustness of TD²-CFM and TD²-DD in long-horizon modeling, and the strong planning performance demonstrated through Generalized Policy Improvement (GPI).
> **Impact of dataset quality and scale on planning and modeling performance**
We agree that understanding how data quality and scale affects performance is an important question. While we did not vary dataset size in this work, we note that all prior GHM methods similarly rely on offline data and suffer from instability and degraded performance even in much simpler and lower-dimensional environments than those we consider. Our key contribution is to show that by integrating temporal difference structure into both the learning objective and sampling process, we can significantly improve both stability and modeling performance. This is supported by both our theoretical analysis and strong empirical results demonstrating robustness across complex control tasks.
For the planning experiments, we use the same standard offline dataset as the Forward-Backward (FB) representation from [2], which has become a common benchmark in this setting. This choice enables a direct comparison and reveals a compelling result: TD²-based GHMs are able to discover significantly better policies from the same data. This highlights both the strength of our method and the fact that FB fails to identify the best policies within its own class—despite having access to the same data and that its reward embedding is purported to be optimal. We will clarify this in the revised manuscript to better contextualize our design choices.
> **Task diversity and inclusion of discrete/stochastic domains**
Thank you for this thoughtful suggestion. In this paper, we focused on continuous control tasks as they are widely used benchmarks for evaluating generative models and the successor measure. Importantly, these settings already pose significant challenges for long-horizon modeling, and prior methods struggle to model even lower-dimensional versions of these tasks.
As a point of reference, the Ms. Pacman maze experiment in [1] only models the successor measure of 2D $(x,y)$ coordinates of the agent (the backward embedding takes as input $(x,y)$ and not full RGB frames), whereas our method learns to model much higher-dimensional state spaces—including full-body continuous dynamics. Additionally, our Pointmass maze tasks are sparse-reward and require long-horizon reasoning, making them meaningfully different in structure and challenge from the locomotion tasks.
We also agree that moving toward partially observable settings would be a valuable next step. At the same time, such environments introduce a distinct set of challenges, including belief-state estimation, memory, and uncertainty modeling, which we view as part of a larger and important research direction. We see our work as establishing a strong foundation for scalable, stable successor state modeling in fully observable settings, and we’re excited about the community building on this in future work.
> **Use of TD and potential training instability due to bootstrapping**
Thank you for raising this important point. You're absolutely right that bootstrapping—especially when combined with function approximation and off-policy data—can introduce instability in reinforcement learning. These challenges are not fully avoidable in our setting either, as TD²-based GHMs retain all the characteristics of the so-called "deadly triad."
That said, a central goal of our approach is to mitigate these issues as effectively as possible. By integrating temporal-difference structure into flow and diffusion-based models, we observe significantly improved training stability and sample efficiency compared to prior GHM methods, which often struggle even at relatively short horizons. Our theoretical results further support this by showing that tighter coupling between sampling and regression reduces gradient variance, contributing to more stable learning.
In practice, we also incorporate common stabilization techniques—such as employing an exponential moving average target network. While instability remains a core challenge in TD-based methods, we believe our work represents a meaningful step toward more stable and scalable generative modeling of the successor measure.
> **Formatting**
- **L189**: Thank you—we will correct the formatting issue in the final version.
---
We hope our responses help to contextualize the design choices and emphasize the broader relevance of our contributions.
### References
[1] Ahmed Touati and Yann Ollivier. Learning One Representation to Optimize All Rewards. Neural Information Processing Systems (NeurIPS), 2021.
[2] Ahmed Touati, Jérémy Rapin, and Yann Ollivier. Does Zero-Shot Reinforcement Learning Exist? International Conference on Learning Representations (ICLR), 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. Most of my concerns have been addressed. I will keep my positive score. | Summary: This paper introduces TD-flow, a novel approach to using the Bellman equation on probability path with flow(score)-matching techniques for generative models. The approach outperforms the existing ones in terms of more accurate generation over extended horizons. TD-flow is validated across diverse experiments and domains. The paper also contributes theoretical analysis.
Claims And Evidence: Yes, the claims are supported by theoretical analysis and experiment validation.
Methods And Evaluation Criteria: Yes, the evaluation is reasonable.
Theoretical Claims: I roughly went through Section 4 theoretical analysis.
Experimental Designs Or Analyses: The experimental design makes sense to me. And the results look promising
Supplementary Material: No, I didn't.
Relation To Broader Scientific Literature: This paper is very relevant to the generative modeling domain, which is also obvious by its name TD-flow, using TD learning (concept from RL) to aid the score/flowing matching in generative models. I feel this paper can/will receive a lot of attention from the domain.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. I would say this paper is well-written. For someone like me with limited knowledge of generative modeling, it is easy to follow and effectively presents the key ideas.
2. It studies a very important yet hard problem, accurate generative modeling for long horizons. And the experiment results look quite promising. The RL community needs an accurate world model for sure, which could facilitate the development of model-based RL by resolving the modeling errors and biases.
Weaknesses:
1. It is not clear to me what benefits TD could bring to the flow/score matching and how these benefits help with accurate generative modeling for extended horizons.
Other Comments Or Suggestions: 1. What do theorem 2 and 3 imply intuitively?
2. Line 230, is the Bellman operator missing?
3. Why does Eq(1) hold?
4. Line 55. The MDP definition is different from the RL community, where $\gamma$ acts on reward and value functions. It would be better to make the difference more clear.
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and encouraging feedback. We're especially glad to hear that the paper was clear and accessible, even for readers less familiar with generative modeling.
> **What benefits does TD bring to flow/score matching?**
This is a great question, and we appreciate you highlighting it. We’ll be sure to clarify it in the revised manuscript.
Temporal Difference (TD) learning, when applied to flow models, brings the classic benefits of TD methods into learning generative models of the successor measure of a given policy. By using current predictions to estimate future outcomes, TD allows us to train on shorter segments of data while still capturing long-horizon structure. More concretely, this provides several key advantages:
* **Reduced variance and greater sample efficiency**: Rather than relying on full long-horizon rollouts (which can be noisy, high variance, or unavailable), TD updates allow us to learn from short segments (e.g., 1-step transitions) and still reason about long-term behavior.
* **“Stitching” across time**: TD enables combining information from different parts of the environment/dataset to construct consistent long-horizon predictions, even when full trajectories are missing or limited.
* **Off-policy learning**: TD naturally supports learning from off-policy data—that is, trajectories collected from different policies or behaviours. This is critical in the offline setting, where the model must learn from a fixed dataset. Our planning results in particular are only possible because of this capability: TD-Flow can plan effectively using policies not seen during data collection.
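These points can be made concrete in the tabular case, where the TD fixed point is exactly the discounted occupancy that TD-Flow models generatively. A minimal numpy sketch (using the expected, i.e., deterministic, form of the one-step TD update on an illustrative random 5-state chain):

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 5, 0.9
Ppi = rng.random((n, n))
Ppi /= Ppi.sum(axis=1, keepdims=True)  # policy-induced transition matrix

# Closed form: M = sum_{k>=1} gamma^{k-1} P^k = P (I - gamma P)^{-1}
M_true = Ppi @ np.linalg.inv(np.eye(n) - gamma * Ppi)

# Expected form of the one-step TD update
#   M[s] += alpha * (e_{s'} + gamma * M[s'] - M[s])  on sampled (s, s'),
# i.e., the Bellman iteration M <- P + gamma * P @ M, a gamma-contraction.
M = np.zeros((n, n))
for _ in range(400):
    M = Ppi + gamma * (Ppi @ M)

assert np.allclose(M, M_true)  # converges to the discounted occupancy
assert np.allclose(M.sum(axis=1), 1 / (1 - gamma))  # rows sum to 1/(1-gamma)
```

Only one-step transitions enter the update, yet the fixed point encodes the full discounted long-horizon distribution, which is the "stitching" property described above.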
> **What do Theorems 2 and 3 imply intuitively?**
The theorems compare the variance of sample-based estimates of the gradient of the loss functions. The smaller the variance, the fewer data samples and generative samples are needed for an accurate estimate of the gradient.
Intuitively, we can think of the gap in variance as being caused by the misalignment of the conditional vector field $u_t$ with the true vector field $v_t$. For conditional paths, under certain conditions (i.e., coupling between $(X_0, X_1)$ and straight non-crossing paths) there is alignment and this gap is zero.
> **Why does Eq(1) hold?**
By definition we have,
$$
\begin{align*}
Q(s, a) &= \mathbb{E}_{\pi} \left[ \sum_{k=1}^\infty \gamma^{k-1} r(S_{t+k}) \mid S_t = s, A_t = a \right] \\
&= \sum_{k=1}^\infty \gamma^{k-1} \int_{\mathcal{S}} r(x)\, \mathrm{Pr}(S_{t+k} = x \mid S_t = s, A_t = a; \pi)\, \mathrm{d}x \\
&= \int_{\mathcal{S}} r(x) \underbrace{\sum_{k=1}^\infty \gamma^{k-1}\, \mathrm{Pr}(S_{t+k} = x \mid S_t = s, A_t = a; \pi)}_{(1-\gamma)^{-1} m^\pi(x \mid s, a)} \, \mathrm{d}x \\
&= (1-\gamma)^{-1}\, \mathbb{E}_{X \sim m^\pi(\cdot \mid s, a)} \left[ r(X) \right]
\end{align*}
$$
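As a sanity check, this identity can be verified numerically on a toy chain. The sketch below folds actions into the policy (so $P$ is the state-transition matrix under $\pi$) and compares the truncated discounted sum against the successor-measure expectation; all names are illustrative, not taken from the paper's code.

```python
import numpy as np

# Toy 4-state chain: compare Q as a discounted sum vs. via the successor measure.
rng = np.random.default_rng(0)
n, gamma = 4, 0.9
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)  # transitions under pi
r = rng.random(n)

# Left-hand side: Q as a (truncated) discounted sum of expected rewards.
q_sum = sum(gamma ** (k - 1) * np.linalg.matrix_power(P, k) @ r
            for k in range(1, 400))

# Right-hand side: successor measure m^pi = (1-gamma) * sum_k gamma^{k-1} P^k,
# then Q = (1-gamma)^{-1} E_{X ~ m^pi}[r(X)].
m = (1 - gamma) * P @ np.linalg.inv(np.eye(n) - gamma * P)
q_via_m = m @ r / (1 - gamma)

assert np.allclose(q_sum, q_via_m)
```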
> **The MDP definition is different from the RL community, where γ acts on reward and value functions.**
Thank you for pointing this out. You're absolutely right that in standard RL formulations, the discount factor $\gamma$ typically appears in the definition of return, i.e., as a weight on future rewards.
In our setting, since we model the successor measure directly, $\gamma$ instead appears in the distribution over time steps—specifically, it defines a geometric distribution over how far into the future we sample. This reparameterization is equivalent but shifts the emphasis from discounting rewards to discounting the probability of future state occurrences. Importantly, this does not imply that the MDP itself terminates with probability $1-\gamma$ at each step—rather, it’s a modeling choice that reflects the probabilistic weighting of future events, not actual environment termination.
Since this interpretation doesn’t appear frequently in standard RL formulations, we’ll expand on it in the revised version to help clarify the connection.
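A minimal sketch of this reparameterization, on an assumed toy chain (names illustrative): instead of discounting rewards, sample the lookahead $K$ with $\mathrm{Pr}(K=k) = (1-\gamma)\gamma^{k-1}$ and evaluate the reward at $S_{t+K}$, so that $\mathbb{E}[r(S_{t+K})] = (1-\gamma)\,Q(s)$.

```python
import numpy as np

# Toy chain: Monte Carlo estimate of Q via geometric lookahead sampling.
rng = np.random.default_rng(1)
n, gamma = 3, 0.8
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
r = rng.random(n)

q_exact = P @ np.linalg.solve(np.eye(n) - gamma * P, r)  # sum_k gamma^{k-1} P^k r

def q_geometric(s0, num_samples=50_000):
    total = 0.0
    for _ in range(num_samples):
        k = rng.geometric(1 - gamma)      # K in {1, 2, ...}, P(K=k)=(1-gamma)gamma^{k-1}
        s = s0
        for _ in range(k):                # roll the chain K steps forward
            s = rng.choice(n, p=P[s])
        total += r[s]
    return total / num_samples / (1 - gamma)   # rescale by (1-gamma)^{-1}

assert abs(q_geometric(0) - q_exact[0]) < 0.05
```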
---
We hope this response strengthens your overall impression of our work. | Summary: This paper introduces a new family of algorithms–TD-CFM, Coupled TD-CFM, and TD^2-CFM which propose a Flow-matching-based generative modeling framework to learn to sample from the discounted successor measure. Learning the successor measure rather than predicting long term rollouts prevents the usual accumulations of prediction errors. The paper proposes several extensions of TD-CFM to coupled flows, analogous denoising diffusion models along with an interesting variance reduction technique using boot-strapped updates (TD^2-CFM). Experiments on several simulated control tasks show that these methods yield more accurate predictions and better value estimates improving planning performance.
Claims And Evidence: This paper claims that
-The proposed TD-flow framework can learn a successor measure that more effectively models long-horizon state distributions.
-TD^2-CFM can reduce gradient variance compared to standard TD-CFM by using direct velocity matching under certain conditions.
-TD-flow translates into better downstream value estimation and planning performance in simulated tasks.
Theorem 1 shows that when TD-CFM is solved exactly, it forms a contraction (in the 1-Wasserstein distance), and Theorems 2-3 show that variance is reduced through a tighter coupling of bootstrapped targets. Experimental results support improved value function estimation and planning, and generally support the claim that TD² can improve performance via variance reduction.
Methods And Evaluation Criteria: I think the proposed method is a very interesting solution to efficiently learning successor measures. I'm not exactly sure if successor measures are a necessity for the task of planning compared to other offline methods using direct Q learning or simply finite-horizon MPC. However, their benchmarks directly quantify Q function estimation accuracy and improved planning with generalized policy improvement over finite families of pre-trained policy and show significant improvements.
Theoretical Claims: I briefly reviewed the proofs, but did not verify them carefully. Many results follow directly from CFM. The assumption of exactly solving CFM at each iteration for Theorem 1 is reasonable since we are training in the offline setting and the CFM objective can be efficiently optimized if the dataset is sufficiently large.
Experimental Designs Or Analyses: The experiments are quite thorough. I would like to see some run-time analysis for training the proposed algorithms and baselines. It may also be interesting to better understand how planning with the successor measure compares to other finite-horizon shooting methods / model-based planning methods, if applicable.
It could be useful to see an ablation as dimension increases or in the presence of noisy data.
Supplementary Material: I briefly reviewed proofs of the main results and examined some additional experiment figures.
Relation To Broader Scientific Literature: The work builds on several recent advances in generative modeling for reinforcement learning, including generative horizon models. I believe it is a particularly original and interesting application of flow-based generative modeling.
Essential References Not Discussed: I'm not familiar enough with offline modeling of successor measure literature to suggest additional references.
Other Strengths And Weaknesses: Strengths:
-The paper is well-written and I found the ideas generally interesting. It is my impression that the integration of TD learning with flow matching to directly model the successor measure is quite novel with several improvements and variations already proposed here.
-Theoretical analysis provides insights into contraction properties and variance reduction.
-Comprehensive experimental evaluation over several downstream application of successor measures.
Weaknesses:
-Even after the extensive evaluations, I don’t have a good feeling for how this framework compares to more direct methods for learning Q functions from offline data or model-based finite-horizon planners (e.g. shooting methods, cross entropy method). Even if there isn't a directly comparable setting, it's worth discussing their context more and why the added complexity of learning to sample successor measures is useful.
Other Comments Or Suggestions: A typo on line 190, left column in the equation denominator.
Questions For Authors: 1. How does this framework compare to more direct methods for learning Q function from offline data or model-based finite-horizon planners (e.g. shooting methods, cross entropy method) ? It could be worth discussing the context of these other methods more and why the added complexity of learning to sample successor measures is useful in practice (besides controlling long term errors better).
2. How does the run-time in training and inference compare between the proposed methods and baselines? It would also be nice to get an idea for what dimension the variance and iterative training scheme becomes an issue through a simple ablation study.
After understanding these points, I'm open to increasing my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful and constructive feedback, and we’re glad that the novelty and potential of TD-Flow and its variants were appreciated. We respond below to the key points raised and will revise the paper accordingly to better highlight these aspects.
> **Comparison to direct Q-function learning and finite-horizon planning methods**
We agree that placing TD-Flow in the context of existing value-based and model-based planning methods is important. Our approach offers several distinct advantages:
* **Generalization across reward functions**: Unlike traditional value-based methods, which are tied to a specific task, TD-Flow learns the successor measure which is reward agnostic. This enables zero-shot transfer to new reward functions, which we demonstrate in our planning experiments—an especially valuable property in offline or multi-task environments.
* **Long-horizon reasoning**: Because TD-Flow directly learns to sample from long-horizon state distributions, it avoids limitations of standard MPC methods (e.g., shooting or cross-entropy methods). This is particularly beneficial in sparse reward or goal-conditioned tasks, where short-horizon planners fail to make meaningful progress due to limited planning depth.
* **Planning-time efficiency**: Although training iterative generative models may be more expensive, inference can be significantly more efficient. For example, with a discount factor of 0.99, the expected time horizon is ~100 steps. Whereas a model-based planner would need to simulate 100 one-step transitions to reach that horizon, TD-Flow can sample directly from that distribution in far fewer steps using an ODE solver—often with just 10 integration steps—yielding substantial computational savings at inference time.
To directly assess this comparison, we include results for a Model Predictive Path Integral (MPPI) controller with a learned dynamics model. We train a similar capacity dynamics model to that of TD-Flow before evaluating MPPI with a finite horizon of 32 for locomotion tasks and 128 for maze, where at each step we sample 256 action candidates and perform 10 optimization rounds with 64 elites (top-k actions) per round. The results below show that TD-Flow significantly outperforms MPPI in 3/4 domains with comparable results in Walker. MPPI notably displayed instability related to compounding errors in environments with difficult to model dynamics. We hope this helps clarify the benefits of directly modeling the successor measure.
| | FB | TD²-CFM-GPI | MPPI |
|---|:---:|:---:|:---:|
| Cheetah | 479.35 (14.56) | **693.63 (5.50)** | 541.22 (5.28) |
| Pointmass | 472.45 (14.40) | **800.99 (8.56)** | 286.43 (54.95) |
| Quadruped | 627.28 (1.98) | **695.73 (2.07)** | 156.80 (122.89) |
| Walker | 526.66 (5.94) | 627.63 (7.97) | **658.15 (21.46)** |
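The elite-selection planning loop described above (sample candidate action sequences, score them with a model, refit the sampling distribution to the top-k elites over several rounds) can be sketched as follows. This is a hedged illustration, not the evaluated controller: the scoring model is a toy quadratic rather than a learned dynamics model, and all names and hyperparameters are illustrative.

```python
import numpy as np

# CEM-style elite-selection planner on a toy objective.
def plan(score, horizon=4, candidates=256, elites=64, rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(horizon), np.ones(horizon)
    for _ in range(rounds):
        actions = mu + sigma * rng.standard_normal((candidates, horizon))
        scores = np.array([score(a) for a in actions])
        top = actions[np.argsort(scores)[-elites:]]   # keep top-k candidates
        mu, sigma = top.mean(axis=0), top.std(axis=0) + 1e-6
    return mu

goal = 0.7
best = plan(lambda a: -np.sum((a - goal) ** 2))       # toy scoring objective
assert np.sum((best - goal) ** 2) < 0.05              # converged near the goal
```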
> **Run-time analysis and computational cost**
This is a valuable point. Both the runtime at training and inference is dominated by the number of iterative steps to solve the ODE or SDE during sampling. A key insight—shown theoretically in Appendix E.7—is that our method naturally minimizes the transport cost which enables the use of a relatively small number of solver steps without significantly compromising accuracy.
To support this claim, we ran an empirical analysis showing how prediction quality degrades as we reduce the number of integration steps on the Loop task in Pointmass Maze. The results below show that TD²-CFM remains robust even at coarse discretizations of the ODE with as few as 5 integration steps, while we observe a predictable degradation as the number of steps decreases. Additionally, orthogonal work on consistency models [1] and self-distillation [2] can be applied to TD-Flow to further increase accuracy for few-step generation.
| | NLL↓ | EMD↓ | VF Error↓ |
|---|:---:|:---:|:---:|
| 2 Steps | -0.48 (0.21) | 0.076 (0.003) | 379.52 (81.75) |
| 5 Steps | -2.75 (0.15) | 0.036 (0.000) | 23.82 (2.05) |
| 10 Steps | -2.85 (0.17) | 0.025 (0.001) | 7.71 (2.75) |
| 20 Steps | -2.99 (0.04) | 0.0218 (0.001) | 4.40 (0.82) |
---
We thank the reviewer again for their insightful suggestions and willingness to reconsider their evaluation. We hope our response clarifies the unique strengths of TD-Flow and its potential to the community.
### References
[1] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency Models. International Conference on Machine Learning (ICML), 2023.
[2] Kevin Frans, Danijar Hafner, Sergey Levine, and Pieter Abbeel. One Step Diffusion via Shortcut Models. International Conference on Learning Representations (ICLR), 2025.
[3] Manan Tomar, Philippe Hansen-Estruch, Philip Bachman, Alex Lamb, John Langford, Matthew E Taylor, and Sergey Levine. Video occupancy models. CoRR, abs/2407.09533, 2024.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their responses. All of my concerns have been addressed. I have raised my score (3->4). | null | null | null | null | null | null |
Online Sparsification of Bipartite-Like Clusters in Graphs | Accept (poster) | Summary: The paper studies graph sparsifiers that preserve bipartite-like communities in undirected and directed graphs. The notion of communities is formalized via the bipartiteness ratio and then generalized to $k$ communities in the standard way through $k$-way partitions.
For undirected graphs, the novel sparsifier is obtained by subsampling from the original graph using a technique similar to one proposed by Sun and Zanetti (2019).
For directed graphs, the construction is more involved and first constructs an undirected graph which can be sparsified using a result by Sun and Zanetti, and then the undirected sparsified graph can be made directed again.
Claims And Evidence: There is good evidence for the theoretical claims, including full proofs in the appendix of the submission.
Methods And Evaluation Criteria: Generally they make sense, but I think the experiments could be more comprehensive.
Theoretical Claims: I did not check the proofs in detail.
Experimental Designs Or Analyses: - The experiments should contain a detailed discussion how the sampling parameters were set. This seems to be missing at this point. This is particularly important since the theoretical sampling probabilities contain terms like $\lambda_{n-k}$ and it is not obvious how this will be computed (I guess using power iteration? But then what do you do for local algorithms – see also next point.).
- I was surprised that for both undirected and directed graphs only a single algorithm was used for the evaluation of the sparsifier. Additionally, I was surprised that only a local algorithm was used when evaluating the sparsifier for undirected graphs.
- The graphs used in the experiments are relatively small, never containing more than 5000 nodes for the synthetic datasets. For the real-world datasets I am not sure about their exact size, but the running times suggest that they are also rather small.
Supplementary Material: No.
Relation To Broader Scientific Literature: I think the key references are cited.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: The biggest strength of the paper are the theoretical results which are interesting and non-trivial to obtain. The paper could be improved with a better experimental evaluation, though. Overall, I think the positives outweigh the negatives, given that this is mostly a theory paper.
Other Comments Or Suggestions: - In Theorems 1 and 2, there are no bounds on the size of the sparsifier. In some sense, they are implicit in the running time, but I think they should be made explicit.
- I would appreciate a more comprehensive experimental evaluation on more and larger real-world datasets.
- Theorem 5 is missing a statement about the running time for constructing the sparsifier.
- Theorem 2 appears to be a bit informal and a more formal version of it (similar to Theorem 5 as formal version of Theorem 1) would be a good addition.
**Update after rebuttal:** The authors have given a thorough reply to my comments and I have thus increased my score. I will appreciate when the authors discuss the choice and impact of the sampling probability thoroughly in their experiments.
Questions For Authors: - How did you set the sampling probabilities in the experiments? How did you deal with the term $\lambda_{n-k}$, especially for the local algorithms.
- For the Lithuania dataset it seems like the gap in bipartiteness ratio between the two algorithms is relatively large. Do you have a justification for this?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation and detailed comments. Here is our response to the raised questions:
**Response to _Other Comments Or Suggestions_:**
> In Theorems 1 and 2, there are no bounds on the size of the sparsifier. In some sense, they are implicit in the running time, but I think they should be made explicit.
In the proof of Theorem 3, we proved on Line 623 of the appendix that the number of edges in our sparsifier is
$$
O\left(\frac{n\log^3n}{2-\lambda_{n-k}}\right),
$$
and the number of edges in our constructed directed sparsifier is dominated by the term above as well. We'll make it clear in the next version of the paper.
> I would appreciate a more comprehensive experimental evaluation on more and larger real-world datasets.
We agree that a more comprehensive experimental evaluation on our developed algorithms will make our paper considerably stronger. We'll add more experimental results in the next version of the paper.
> Theorem 5 is missing a statement about the running time for constructing the sparsifier.
The algorithm behind Theorem 5 runs in nearly-linear time in the number of edges of the input graph. We will make it clear in the next version of the paper.
> Theorem 2 appears to be a bit informal and a more formal version of it (similar to Theorem 5 as formal version of Theorem 1) would be a good addition.
In the next version, we will add a more formal version of Theorem 2, and add it to Section 4. The role of the more formal theorem and Theorem 2 will be similar with Theorem 5 and Theorem 1.
**Response to _Questions For Authors_:**
> How did you set the sampling probabilities in the experiments? How did you deal with the term $\lambda_{n-k}$, especially for the local algorithms.
Our sampling probability is defined in Equation (3.1) of the submission. By the assumption of the $k$-way expansion and the higher-order dual-Cheeger inequality, we treat $C\log^3(n)/(2-\lambda_{n-k})$ as $O(\log^c(n))$ for a constant $c$. This will only influence the total number of sampled edges and the algorithm's overall running time by a factor of $\log^c(n)$. We employed this trick in our experiment, and will state this in detail in the next version of the paper.
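A hedged sketch of degree-based edge subsampling with importance reweighting, in the spirit of the discussion above; the exact probability is Equation (3.1) of the paper, which is not reproduced here. `C_eff` stands in for $C\log^3(n)/(2-\lambda_{n-k})$, treated as a fixed constant as described in the rebuttal, and all names are illustrative.

```python
import random
from collections import defaultdict

# Sample each edge with probability driven by its endpoints' degrees,
# and reweight kept edges by 1/p to preserve expectations.
def sparsify(edges, degree, C_eff=5.0):
    sparsifier = {}
    for (u, v) in edges:
        # keep the edge from either endpoint's perspective (assumed form,
        # not the paper's Eq. (3.1))
        p = min(1.0, C_eff / degree[u] + C_eff / degree[v])
        if random.random() < p:
            sparsifier[(u, v)] = 1.0 / p   # importance weight for kept edge
    return sparsifier

random.seed(0)
edges = [(i, j) for i in range(50) for j in range(i + 1, 50)]  # complete graph K_50
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
H = sparsify(edges, deg)
assert len(H) < len(edges)   # strictly fewer edges are kept
```

On a dense graph this keeps only $O(n \cdot C_{\mathrm{eff}})$ edges in expectation, which is the mechanism behind the nearly-linear sparsifier size.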
> For the Lithuania dataset it seems like the gap in bipartiteness ratio between the two algorithms is relatively large. Do you have a justification for this?
This is a very interesting observation. We think this is due to the fact that, while we apply different seed vertices as input to the local algorithm, and the algorithm returns output clusters of potentially very different sizes, we employ the same sampling probability to construct the sparsifier. As such, the bipartiteness ratios returned by our algorithm and the MS algorithm could scale differently.
We would like to thank the reviewer once more for these insightful comments. We will implement these suggestions in the next version of the paper. | Summary: In this paper, the authors study bipartite-like clusters and present efficient and online algorithms that find such clusters in both undirected graphs and directed ones. Experiments on real and synthetic graphs demonstrate that the proposed algorithms can speed up existing algorithms.
Claims And Evidence: The title of the paper is "Finding Bipartite-like Clusters on the Fly", however, the proposed techniques seem like a sparsifier for bipartite-like clustering. A more appropriate title may be better.
Methods And Evaluation Criteria: 1. The fields of the datasets used in the experiments are limited. It would be more convincing if more datasets from various areas were used to evaluate the proposed methods.
Theoretical Claims: Yes
Experimental Designs Or Analyses: 1. For the synthetic datasets, it is more convincing if more generation models can be used to generate synthetic datasets.
2. The size of the datasets used in the paper is unclear.
Supplementary Material: A, B, C, D
Relation To Broader Scientific Literature: The proposed method can be used to speedup finding bipartite-like clusters.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Weakness:
W1. The significance to find the bipartite-like clusters in a graph seems weak. It is more convincing if more direct application scenarios can be provided.
Other Comments Or Suggestions: No
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer's work and the report. Here is our response to the raised questions:
**Response to _Claims And Evidence_**:
>The title of the paper is "Finding Bipartite-like Clusters on the Fly", however, the proposed techniques seems like sparsifier regarding bipartite-like clustering. A more appropriate title may be better.
We thank the reviewer for the comment. In the next version of the paper, we will update the title to reflect their suggestion.
**Response to _Experimental Designs Or Analyses_**:
>For the synthetic datasets, it is more convincing if more generation models can be used to generate synthetic datasets.
Since SBMs are commonly used models to generate synthetic graphs for clustering algorithms, it's not very clear to us which other generation models the reviewer refers to. If the reviewer has a specific generation model in mind that is suitable for our problem, we'll be happy to add additional experiments to make our work more convincing.
>The size of the datasets used in the paper is unclear.
The datasets used in our work are the common ones studying the problem, and are the ones used in the previous work, e.g., [3]. Following the reviewer's comments, we will include the size of the datasets in the next version of the paper.
**Response to _Other Strengths And Weaknesses_:**
>W1. The significance to find the bipartite-like clusters in a graph seems weak. It is more convincing if more direct application scenarios can be provided.
We respectfully disagree with the reviewer's comment on the _significance_ of finding bipartite-like clusters in a graph for the following reasons:
First of all, finding bipartite-like clusters in a graph is one of the most important problems in theoretical computer science, and is a natural generalisation of the max cut problem; recall that the max cut problem is closely linked to the unique games conjecture. In particular, Trevisan [1] proved that algorithms for finding bipartite-like clusters can be employed to design an approximation algorithm for the max cut problem, and this approach has remained the only combinatorial one for designing approximation algorithms for the max cut problem.
Secondly, the algorithms and their complexities for finding bipartite-like clusters in a graph relate to understanding the top eigenspace of a graph's normalised Laplacian matrix, and this problem is studied separately in spectral graph theory community [2]. This is another indication for its importance.
Thirdly, the problem of finding bipartite-like clusters in a graph has been also actively investigated in the machine learning community. For instance, Macgregor and Sun [3] presented local algorithms for finding bipartite-like clusters; the significance of their work is clearly recognised by an ICML'21 oral presentation. With respect to more applied domains, our problem is closely linked to finding clusters in disassortative networks and has been applied in training neural networks, e.g., learning over networks with heterophily [4].
We hope that the above-mentioned research fields, in which finding bipartite-like clusters has been studied, could convince the reviewer on the significance of the problem. Building on this sequence of research over the past 15 years, our work presents the first algorithms that sparsify the input instances of both the undirected and directed graphs. Hence, we feel that our work is important from this perspective. In the next version of the paper, we will expand the section of related works to better highlight the significance of the problem.
_Additional Comments_:
We hope that we have answered all of the reviewer's concerns on our work. Given that the reviewer's overall evaluation is mainly due to its significance (instead of flawed proofs or easily obtained theoretical results), we hope that the reviewer can take our response and other reviewers' reports into account when further evaluating our submission. Many thanks.
**Reference**
1. Trevisan, L. Max cut and the smallest eigenvalue. STOC'09.
2. Liu, S. Multi-way dual Cheeger constants and spectral bounds of graphs. Advances in Mathematics, 2015.
3. Macgregor, P. and Sun, H. Local algorithms for finding densely connected clusters. ICML'21.
4. Zhu, J., Yan, Y., Zhao, L., Heimann, M., Akoglu, L., and Koutra, D. Beyond homophily in graph neural networks: Current limitations and effective designs. NeurIPS'20 | Summary: The paper propose efficient and online algorithms to detect bipartite-like clusters for both directed and undirected graphs. They both are graph sparsifiers that can sparsify the graph into $\tilde{O}(n)$ edges while preserving the bipartite clusters with high probability.
Claims And Evidence: The claims are well supported
Methods And Evaluation Criteria: The evaluation methods make sense.
Theoretical Claims: The claims are well supported
Experimental Designs Or Analyses: The speedup is not obvious in figure 3, and the running time fluctuates around 1750 nodes. Some explanations here could be helpful.
Supplementary Material: N.A.
Relation To Broader Scientific Literature: The algorithms proposed is highly related to the cluster-preserved sparsifier work from Sun and Zanetti 2019.
Essential References Not Discussed: The paper "Algorithmic Tools for Understanding the Motif Structure of Networks" in ECML 2022 shows by finding square-dense and triangle-sparse subgraphs can lead to bipartite-like subgraphs, and leveraged this idea to detect anomalies in social networks.
Other Strengths And Weaknesses: The problem of detecting communities as bipartite-like structures in graphs is not well-motivated in the paper.
Authors should also highlight that the proposed conductance not only finds bipartite-like structure within the cluster, but also penalizes the connections from the cluster to the outside.
Other Comments Or Suggestions: What if $A_i$ and $B_i$ are not required to be disjoint? In some directed graph problems the "sender" and "receiver" can be considered as different roles.
Questions For Authors: N.A.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation and detailed comments. Here is our response to the raised questions:
**Response to _Experimental Designs Or Analyses_**:
>The speedup is not obvious in figure 3, and the running time fluctuates around 1750 nodes. Some explanations here could be helpful.
This is a very interesting and meaningful question. Notice that the sampling probability used in our algorithm is defined in Equation (3.1) of the submission. By the assumption of the $k$-way expansion and the higher-order dual-Cheeger inequality, we always set $C\log^3(n)/(2-\lambda_{n-k})$, the quantity involved in the sampling probability, to a fixed constant in the experiment. On the other hand, by (3.1) the sampling probability of an edge does not scale linearly as $n$ increases. We believe this is the reason behind the fluctuation shown in Figure 3. We will add the necessary explanation in the next version of the paper.
**Response to _Essential References Not Discussed_**:
>The paper "Algorithmic Tools for Understanding the Motif Structure of Networks" in ECML 2022 shows by finding square-dense and triangle-sparse subgraphs can lead to bipartite-like subgraphs, and leveraged this idea to detect anomalies in social networks.
Thank you for pointing out this reference. Indeed, this paper is closely related to our submission. In the next version of our paper, we will add necessary discussions of this ECML'22 paper and other related works.
**Response to _Other Strengths And Weaknesses_**:
>The problem of detecting communities as bipartite-like structures in graphs is not well-motivated in the paper.
In the next version, we will expand our Introduction section, and add the following discussion to better motivate our studied problem:
First of all, finding bipartite-like structures in a graph is one of the most important problems in theoretical computer science, and is a natural generalisation of the max cut problem; recall that the max cut problem is closely linked to the unique games conjecture. In particular, Trevisan [1] proved that algorithms for finding bipartite-like clusters can be employed to design an approximation algorithm for the max cut problem, and this approach has remained the only combinatorial one for designing approximation algorithms for the max cut problem.
Secondly, the algorithms and their complexities for finding bipartite-like clusters in a graph relate to understanding the top eigenspace of a graph's normalised Laplacian matrix, and this problem is studied separately in spectral graph theory community [2]. This is another indication for its importance.
Thirdly, the problem of finding bipartite-like structures in a graph has also been actively investigated in the machine learning community. For instance, Macgregor and Sun [3] presented local algorithms for finding bipartite-like clusters. With respect to more applied domains, our problem is closely linked to finding clusters in disassortative networks and has been applied in training neural networks, e.g., learning over networks with heterophily [4].
>Authors also should highlight that the proposed conductance is not only find bipartite-like structure within the cluster, but also penalizing the connections from the cluster to outside.
Thanks a lot for pointing out this. This is an excellent suggestion, and we will better highlight this point in the next version of our paper.
**Response to _Other Comments Or Suggestions_**:
> What if $A_i$ and $B_i$ are not required to be disjoint? In some directed graph problems the "sender" and "receiver" can be considered as different roles.
It is a really interesting question, and it seems that our current technique cannot be easily adjusted to handle this situation. We believe that this could be a meaningful question for future work.
We would like to thank the reviewer once more for these valuable comments. We will implement these suggestions in the next version of the paper.
**Reference**
1. Trevisan, L. Max cut and the smallest eigenvalue. STOC'09.
2. Liu, S. Multi-way dual Cheeger constants and spectral bounds of graphs. Advances in Mathematics, 2015.
3. Macgregor, P. and Sun, H. Local algorithms for finding densely connected clusters. ICML'21.
4. Zhu, J., Yan, Y., Zhao, L., Heimann, M., Akoglu, L., and Koutra, D. Beyond homophily in graph neural networks: Current limitations and effective designs. NeurIPS'20 | Summary: This paper studies the problem of finding bipartite-like clusters in both directed and undirected graphs. The authors propose a novel graph sparsification algorithm that can be implemented online and preserves the structure of bipartite-like clusters. The main findings are theoretical results proving that their algorithm, which runs in nearly-linear time, produces a sparse subgraph with $\tilde{O}(n)$ edges that maintains the key bipartite-like cluster properties of the original graph. The main algorithmic idea is to perform a specific type of edge sampling based on the dual Cheeger constant and degrees. For directed graphs, a reduction to an undirected "semi-double cover" graph is introduced before sparsification, and a "reverse semi-double cover" operation is used to transform the sparsified undirected graph back into a directed one. The authors also demonstrate empirically that their algorithm significantly speeds up existing local clustering algorithms while preserving the quality of the results.
Claims And Evidence: The main claims are well-supported by theoretical evidence (proofs of Theorems 1,2 and 5) and experimental evidence.
* Undirected Graphs: The sparsified graph G* preserves the k-way dual Cheeger constant (and hence the bipartite-like cluster structure) and has $\tilde{O}(n)$ edges (Theorem 1/5). The proof is sketched in Section 3, while the remaining details are shown in Appendix B.
* Directed Graphs: A similar result holds for directed graphs, using the semi-double cover construction (Theorem 2). The proof is sketched in Section 4, with the remaining details in the Appendix.
* Practical Speedup: The authors validate their theoretical findings with an empirical evaluation showcasing that the sparsification algorithm speeds up existing local clustering algorithms in practice. The evidence is experimental results on both synthetic and real-world datasets.
The claims seem strong and supported by sound evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate.
* Sparsification: Using graph sparsification to accelerate clustering is a sensible approach. The specific sampling scheme based on the dual Cheeger constant and node degrees is novel and theoretically sound.
* Semi-Double Cover: The reduction from directed to undirected graphs via the semi-double cover is a clever technique to leverage the undirected sparsification algorithm. The reverse operation is well-defined.
* Empirical Evaluation: The evaluation uses standard metrics derived from the definition of the problem. The results are reported over a number of runs. Both synthetic and real-world datasets are used, similarly to (Macgregor & Sun, 2021a).
Theoretical Claims: I checked the correctness of the proofs for the main theorems (Theorem 1/5 and Theorem 2) as best as possible. I followed the high-level arguments and checked the key steps in the proofs.
The proofs appear to be mathematically sound and well-structured, leveraging standard techniques appropriately.
Experimental Designs Or Analyses: I reviewed the soundness of the experimental designs and analyses. The datasets and evaluation criteria follow those of (Macgregor & Sun, 2021a) and are reasonable.
The experimental design appears sound and the analyses are clearly presented.
Supplementary Material: No, I only looked at the main body and appendix in the main file.
Relation To Broader Scientific Literature: The paper is well-situated within the broader scientific literature on graph clustering, graph sparsification, and spectral graph theory. It cites relevant prior work on: Graph Clustering, Graph Sparsification, Spectral Graph Theory.
The paper clearly distinguishes its contributions from prior work. It emphasizes the novelty of preserving the inter-connection between vertex sets (bipartite-like clusters) rather than just the cut values between a set and its complement. It further outlines the importance of practical sparsification algorithms that can easily be implemented.
Essential References Not Discussed: I do not believe there are essential references that are not discussed.
Other Strengths And Weaknesses: Strengths:
* Novelty: The proposed sparsification algorithm for bipartite-like clusters is novel and theoretically well-justified.
* Generality: The algorithm works for both undirected and directed graphs, which is a significant advantage. The reduction to the undirected case for directed graphs is clever.
* Efficiency: The algorithm is nearly-linear time, and can be implemented using a local algorithm, making it suitable for large graphs.
* Practical Impact: The experimental results demonstrate significant speedups for existing local clustering algorithms.
* Clarity: The paper is well-written and clearly explains the problem, the proposed algorithm, and the theoretical and experimental results.
Weaknesses:
* Some concepts and metrics can be explained better (see questions).
Other Comments Or Suggestions: Consider explicitly mentioning what you mean by online in the paper. Online as in a dynamic algorithm, or online as in online algorithms (where decisions are irreversible)? Is the oracle with the degrees of the nodes with respect to the current graph, or with respect to the graph after all node/edge insertions?
Page 2, line 60: "since most sparsification algorithms are only applicable for undirected graphs", did you mean directed graphs?
Explain better how you use the SBM model to generate bipartite graphs. The way it is described, it seems to be generating non-bipartite graphs.
Give some intuition on the bipartiteness ratio and the flow-ratio used in the experiments. If I understand this correctly, one wants to find clusters with lower bipartiteness (and flow-ratio), so does this mean that your sparsification does better compared to running an algorithm on the initial graph?
Questions For Authors: Could you elaborate more on the practical implications of the lower bound restriction on the dual Cheeger constant ($\overline{\rho}_G(k) \geq 1/ \log n$)? Are there specific types of graphs or applications where this condition is likely to be met or not met? How does the algorithm's performance degrade as $\overline{\rho}_G(k)$ approaches this lower bound?
Could you do the extra effort and add an experiment for graphs with a higher number of clusters in the SBM model? Given that the algorithm by (Macgregor & Sun, 2021a) stops once it finds a good cluster, doesn't this mean it could find subset of a cluster, that I assume you remove from the graph, and it can affect the subsequent clusters that it finds?
Could you explain why it doesn't make sense to compare to spectral sparsifiers in practice?
See also some questions in the comments/suggestions section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and such detailed report. Here is our response to the raised questions:
**Response to _Other Comments Or Suggestions:_**
>Consider explicitly mentioning what do you mean by online in the paper.
"Online" in our setting is closer to online algorithms: given a degree oracle for the whole graph, our algorithm decides whether or not to keep each arriving edge in the sparsifier without global information about the graph. We will make this clearer in the next version of the paper.
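To make this concrete, here is a toy sketch of such an online, degree-oracle-based decision rule. The function `online_sparsify`, the probability `p`, and the constant `c` are illustrative assumptions only; the paper's actual sampling probability also depends on the dual Cheeger constant and uses different scaling.

```python
import random

def online_sparsify(edge_stream, degree, c=4.0):
    """Toy online sparsifier: degree() is the oracle for the whole graph.

    Each arriving edge (u, v) is kept independently with a probability
    based on its endpoint degrees, and reweighted to remain unbiased.
    """
    kept = []
    for u, v in edge_stream:
        p = min(1.0, c * (1.0 / degree(u) + 1.0 / degree(v)))
        if random.random() < p:
            kept.append((u, v, 1.0 / p))  # weight 1/p keeps the estimator unbiased
    return kept
```

The key property mirrored here is that the decision for each edge uses only the degree oracle and the edge itself, so it can be made as edges arrive, without ever holding the full graph in memory.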
>Page 2, line 60: "since most sparsification algorithms are only applicable for undirected graphs", did you mean directed graphs?
No. We do mean "undirected graphs" in this specific place, and most sparsification algorithms are indeed only applicable for undirected graphs.
>Explain better how do you use the SBM model to generate bipartite graphs. The way it is described it seems to be generating non-bipartite graphs.
We explained the SBM model on Lines 354-388 (left) for undirected graphs, and on Lines 374-383 (right) for directed graphs. The reviewer is right that we indeed use the SBM to generate non-bipartite graphs. However, since vertices in the same cluster are connected with much lower probability, our generated graphs are _almost bipartite_. As shown in the experiments, our algorithms are able to find the two partitions of the clusters. Notice that our task becomes trivial if the underlying input graph is a bipartite graph. In the next version of the paper we'll explain this part better.
>Give some intuition on the bipartiteness ratio and the flow-ratio used in the experiments.
Two clusters $A$ and $B$ with a low bipartiteness/flow ratio correspond to the case where most edges leaving vertices in $A$ (resp. $B$) go to $B$ (resp. $A$), and in comparison there are fewer edges inside $A$ and $B$. The objective in finding bipartite clusters is to approximately find the sets $A$ and $B$. We prove that this task can be achieved with easily implementable sparsification algorithms, which can be used to speed up the running time of the overall algorithmic framework for finding the two clusters.
**Response to _Questions For Authors:_**
>Could you elaborate more on the practical implications of the lower bound restriction on the dual Cheeger constant ($\overline{\rho}_G(k)\geq 1/\log n)$)? Are there specific types of graphs or applications where this condition is likely to be met or not met? How does the algorithm's performance degrade as $\overline{\rho}_G(k)$ approaches this lower bound?
A graph with $\overline{\rho}_G(k)\geq 1/\log n$ has at least $k$ mutually disjoint sets $A_i$ and $B_i$ such that
$$
\frac{2w(A_i, B_i)}{\mathrm{vol}(A_i\cup B_i)} \geq \frac{1}{\log n};
$$
that is, most edges adjacent to vertices in $A_i\cup B_i$ are between $A_i$ and $B_i$, and $A_i$ and $B_i$ form an _almost_ bipartite graph.
In practice, this condition is usually met when we study $k$ pairs of vertex groups that are densely connected. Our experimental results on the Interstate Disputes Dataset and the Migration Dataset are two good examples.
The number of sampled edges is inversely proportional to $\overline{\rho}_G(k)$: the smaller the value of $\overline{\rho}_G(k)$, the more sampled edges are needed in order to achieve the proven guarantee. We'll make this clear in the next version of the paper.
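For concreteness, the ratio above is straightforward to compute for a given pair of sets. The following helper is our own illustrative sketch (the function name and the toy graph are not from the paper):

```python
def bipartiteness_ratio(edges, A, B):
    """Compute 2*w(A,B) / vol(A ∪ B) for an undirected, unweighted graph.

    edges: iterable of (u, v) pairs; A, B: disjoint vertex sets.
    """
    A, B = set(A), set(B)
    S = A | B
    w_AB = 0   # edges crossing between A and B
    vol = 0    # sum of degrees of vertices in A ∪ B
    for u, v in edges:
        vol += (u in S) + (v in S)
        if (u in A and v in B) or (u in B and v in A):
            w_AB += 1
    return 2 * w_AB / vol

# A 4-cycle split as A = {0, 2}, B = {1, 3} is exactly bipartite:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(bipartiteness_ratio(edges, {0, 2}, {1, 3}))  # 1.0
```

On a perfectly bipartite pair the ratio equals 1, and it decreases as more edges fall inside $A$ or $B$; the condition $\overline{\rho}_G(k)\geq 1/\log n$ asks that $k$ such pairs keep this ratio from becoming too small.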
>Could you do the extra effort and add an experiment for graphs with a higher number of clusters in the SBM model? Given that the algorithm by (Macgregor & Sun, 2021a) stops once it finds a good cluster, doesn't this mean it could find subset of a cluster, that I assume you remove from the graph, and it can affect the subsequent clusters that it finds?
In the next version of the paper, we will add an experiment for graphs with a larger number of clusters. The algorithm by Macgregor & Sun is a local one. If we want to find $k$ pairs of clusters, several other algorithms can be used to achieve this objective.
>Could you explain why it doesn't make sense to compare to spectral sparsifiers in practice?
From our point of view, it doesn't make sense to compare our result to spectral sparsifiers, for three reasons:
1. Our focus is to construct a sparsifier which preserves the cut values $w(A_i, B_i)$ for $k$ pairs of $A_i$ and $B_i$. A spectral sparsifier only preserves the cut values between any vertex set $A$ and _its complement $V\setminus A$_. Hence, a spectral sparsifier doesn't achieve our target;
2. Our algorithms work for both undirected and directed graphs, but a spectral sparsifier only works for undirected ones.
3. Our work is easy to implement, but most algorithms for constructing spectral sparsifiers are based on Laplacian solvers or complicated expander decomposition schemes, making them difficult to implement.
We have tried to address all of your questions within the word limit. We'll be happy to answer any other questions during the discussion phase. | null | null | null | null | null | null |
Gradient Inversion of Multimodal Models | Accept (poster) | Summary: This paper studies gradient inversion (GI) attacks specifically for multi-modal Document Visual Question Answering (DQA) models and proposes GI-DQA, a novel method for reconstructing private document content from gradients. The empirical experiments demonstrate that their approach exposes critical privacy vulnerabilities even in sota DQA models.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The proposed method is intriguing, but certain aspects of the methodology and evaluation setup require further clarification. Please refer to the questions section below.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiment design is reasonable, though additional experiments would be beneficial. In particular, the ablation study—specifically the Prior Effects paragraph in Section 4.2.2—does not fully convince me. Please refer to the questions section below
Supplementary Material: Yes
Relation To Broader Scientific Literature: This is the first attempt to conduct gradient inversion attacks on multi-modal models, particularly in the novel and impactful setting of DQA models. It opens new directions for multi-modal safety research.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. It is the first attempt to apply gradient inversion attacks to multi-modal models, particularly DQA models.
2. The empirical results are promising
Weaknesses:
1. The writing could be improved, particularly in clarifying the insights and analysis behind the design of the objective function in Eqn. 4.
2. While the method combines multiple techniques effectively, it lacks deep insights into why certain design choices work well beyond empirical success.
Other Comments Or Suggestions: 1. Section 3.3.1 could be written more clearly. A brief outline before delving into each paragraph would improve readability.
2. A more detailed explanation of the objective function should be included, particularly regarding:
- How the weights were chosen.
- Why Gaussian/TV priors were included (the current descriptions are not convincing enough to me).
3. Notations inconsistency: (e.g., both $R_{txt}$ and $R_{lap}$ are used inconsistently).
Questions For Authors: 1. Are the rows in Tab. 2 cumulative comparisons? For example, in the row labeled $+ R_{lap}$, do the numbers compare $L_{grad}$ with $L_{grad} + R_{lap}$, or $L_{grad} + R_{TV}$ with $L_{grad} + R_{TV} + R_{lap}$ ? If the latter, it would be helpful to include a direct comparison for each individual component.
2. Is the Gaussian filter essential? I understand that Gaussian filtering preserves smoothness and prevents overfitting to noise, but
- the provided examples do not show substantial improvement.
- the interpretation of Table 2 is unclear due to my first question.
- the weight $\alpha_{gau}$ is 0.005, while $\alpha_{txt}$ is 0.5 and $\alpha_{TV}$ is 0.05. Given these values, I wonder how important the Gaussian filter actually is.
3. In your overall auxiliary priors, you set weights for each prior. However, for TV, you did not differentiate between channel and pixel-level TV. Was there a reason for this decision? Also, it would be great to include the importance of channel/pixel TV respectively in your Tab. 2.
4. Can you provide more insights into the selection of priors and the scheduler? The current description in Sec 3.3 and Sec. 4.2.2 are not convincing enough to me. A discussion in the appendix illustrating the importance of each prior and your choice of scheduler would be valuable.
5. You mentioned that “auxiliary priors play a critical role in stabilizing the early stage.” Including a plot or table illustrating this observation would be beneficial. Additionally, why does this occur? Most prior work delays activation of auxiliary priors until later stages of optimization to avoid suboptimal convergence. Wouldn’t your scheduler lead to suboptimal convergence?
6. How were the weights $\alpha$'s chosen? Are they learnable parameters or manually set hyperparameters? Based on my current understanding of Table 2, the example you provided, and the coefficients, it seems that Gaussian/TV are the least important priors. If these weights are manually set, why were Gaussian/TV intentionally downweighted? Are they truly necessary?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for his time, effort, and valuable suggestions.
Q1. "Tab. 2 cumulative.." + Q3."..channel and pixel-level TV.."
A1. We followed the standard practice of prior works such as GradViT (Hatamizadeh et al., 2022) and GradInversion (Yin et al., 2021), which apply priors incrementally and in combination rather than in isolation. These methods recognize that priors are often **interdependent**—for example, Total Variation enforces smoothness, while Laplacian sharpens edges, and together they produce better reconstructions than either alone.
Our goal in Section 4.2.2 was to show how **combinations of priors** improve reconstruction quality, particularly in **multi-modal DQA** settings, where each prior targets different aspects (e.g., edges, smoothness, spatial layout).
That said, we agree that isolating each prior provides additional insight. We have therefore conducted an **ablation study** evaluating each prior individually alongside the core gradient loss ($L_{grad}$).
|Experiment|PSNR|FFT2D|MSE|Binary|Fuzz Ratio|
|-|-|-|-|-|-|
|$L_{grad}$|14.619|0.023|81.96|0.198|0.146|
|$R_{TV-C}$|20.083|0.007|72.446|0.510|0.745|
|$R_{TV-S}$|18.608|0.008|77.744|0.479|0.749|
|$R_{TV}$ (combined)|20.527|0.005|72.579|0.579|0.821|
|$R_{txt}$|17.738|0.011|80.052|0.425|0.680|
|$R_{gau}$|18.139|0.009|80.907|0.410|0.694|
The results show that while each prior contributes modestly on its own, the combination (presented in the paper) consistently outperforms any single prior, confirming the complementary nature of these losses.
For the TV variants, we can see they contribute meaningfully on their own, but their combination consistently yields the best performance across all metrics.
Note that these results differ slightly from those in the paper, as they were conducted on a data subset due to rebuttal time constraints. However, they clearly convey the main trend, and full results will be included in the final version.
Q2. "gaussian essential.."
A2. We would like to clarify that the absolute values of the loss weights do not reflect relative importance, as each term operates on a different scale. For instance, the Gaussian prior yields smaller values than the text loss, so its weight is set lower to ensure balanced gradient contributions during optimization. A smaller weight for the Gaussian prior therefore does not imply that it is negligible; this normalization prevents any term from dominating due to scale differences rather than actual relevance.
Q4. "selection of priors.."
A4. Our priors were selected for their ability to capture complementary and theoretically grounded aspects of the reconstruction process:
- Laplacian emphasizes high-frequency components, enhancing edge sharpness and fine details—critical for recovering small text.
- Gaussian applies a low-pass filter to promote global smoothness, stabilizing early optimization and reducing noisy minima.
- Total Variation (TV) encourages piecewise smoothness while preserving edges:
- Spatial TV regularizes differences between neighboring pixels, reducing noise while preserving layout structure.
- Channel-wise TV promotes consistency across color channels, helping suppress chromatic artifacts and preserve clean text appearance.
Together, these priors form a multi-scale regularization strategy, guiding optimization from low-level structure to high-level semantics and enabling more stable, accurate gradient inversion in DQA models.
We will include in-depth discussion in the appendix illustrating their importance.
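As a concrete illustration of the regularizers listed above, here is a minimal NumPy sketch of standard variants of these priors on a (C, H, W) image array. The function names and the 3x3 binomial blur standing in for the Gaussian are our own assumptions; the exact formulations and weights used in the paper may differ.

```python
import numpy as np

def tv_spatial(x):
    # Spatial TV: penalize differences between neighbouring pixels (per channel).
    return np.abs(np.diff(x, axis=1)).sum() + np.abs(np.diff(x, axis=2)).sum()

def tv_channel(x):
    # Channel-wise TV: penalize inconsistency across color channels per pixel.
    return np.abs(np.diff(x, axis=0)).sum()

def gaussian_prior(x):
    # Penalize deviation from a low-pass (smoothed) version of x,
    # using a separable 3x3 binomial ([1, 2, 1] / 4) blur per channel.
    b = (x[:, :-2, :] + 2 * x[:, 1:-1, :] + x[:, 2:, :]) / 4
    b = (b[:, :, :-2] + 2 * b[:, :, 1:-1] + b[:, :, 2:]) / 4
    return ((x[:, 1:-1, 1:-1] - b) ** 2).mean()

def laplacian_prior(x):
    # Measure high-frequency (edge) energy via the discrete 5-point Laplacian.
    lap = (x[:, 1:-1, :-2] + x[:, 1:-1, 2:]
           + x[:, :-2, 1:-1] + x[:, 2:, 1:-1]
           - 4 * x[:, 1:-1, 1:-1])
    return np.abs(lap).mean()
```

All four terms vanish on a constant image, which is consistent with their roles: the TV and Gaussian terms reward smoothness, while the Laplacian term isolates the edge content that carries small text.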
Q5. "auxiliary priors scheduler.."
A5. Unlike prior works (e.g., GradViT), we found that using only $L_{grad}$ in early iterations was insufficient for reconstructing meaningful text—likely due to the template reducing gradient signals in sensitive regions. To address this, we use a scheduler that starts with strong prior weights and gradually reduces them, stabilizing early optimization. The table below compares this to GradViT’s approach, which delays prior introduction until later iterations:
|Scheduler|PSNR|FFT2D|MSE|Binary|Fuzz Ratio|
|-|-|-|-|-|-|
|Ours|18.390| 0.008|80.383|0.410|0.711|
|GradViT|8.885| 0.079|83.407|0.118|0.017|
As shown, GradViT’s scheduler struggles to reconstruct meaningful content, especially in terms of semantic fidelity. We will include this comparison in the appendix and integrate the key findings into the final version of the paper.
Q6. "set hyperparameters.."
A6. The prior weights were manually set via grid search to optimize reconstruction performance. As noted in Answer 2, their absolute values don’t reflect relative importance due to differences in loss scales. Both Gaussian and TV priors are especially useful early in optimization, promoting smoothness and structure that complement sharper and semantic losses like Laplacian and text similarity.
Q7. "Notations inconsistency.."
A7. Thank you, this will be fixed in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. I'd like to ask whether the authors could perform additional experiments using DQA models trained with DP (differential privacy) (e.g., [1,2]) and compare their performance against publicly available models. It may strengthen the work.
[1] https://arxiv.org/pdf/2310.03104
[2] https://arxiv.org/pdf/2306.08173
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s suggestion to consider differentially private (DP) training techniques, including [1] and [2]. These are valuable contributions to the growing field of privacy-preserving learning, though they target **different threat models** than the one addressed in our work.
[1] focuses on contrastive, vision-only models and introduces a privacy mechanism tailored to non-decomposable losses. However, its assumptions and objectives do not translate to supervised, multi-modal architectures like those used in DQA.
[2] adapts CLIP training in a centralized setting and targets open-ended generation tasks such as VQA, but it does not address DQA or the federated learning (FL) setting considered in our work.
We also note that the implementation for [2] is not publicly available, making reimplementation infeasible within the rebuttal period. We plan to include an implementation of this experiment in the final version of the paper, if feasible.
Both works primarily aim to mitigate **centralized privacy risks**, such as **membership inference** and **memorization** attacks, where the adversary inspects the final model. In contrast, our work addresses **server-side adversaries in FL**, who observe **per-client gradient updates**, requiring a fundamentally different defense strategy.
While [1] and [2] focus on settings different from ours, we explored whether applying standard DP-SGD provides meaningful protection in the federated DQA context.
To that end, we conducted two complementary experiments: the first demonstrates the **limitations of DP-SGD when applied to DQA training**, and the second evaluates a **standard DP defense commonly used in FL**, where noise is added to the shared gradients.
1. **Training with DP-SGD**: Following the reviewer’s suggestion, we trained a DQA model using DP-SGD (with standard hyperparameters, **$\sigma = 1$**) and evaluated the attack’s success after each training epoch. The results are shown [here](https://drive.google.com/file/d/1WEWykBErwl6W4PJnkSZqCSP4lNZ8GnxN/view?usp=sharing). Compared to models trained without DP (orange), models trained with DP-SGD (red) remain **consistently vulnerable** to gradient inversion across all epochs. This indicates that DP-SGD interferes with convergence, preventing the model from forming stable and focused representations. As a result, the gradients remain rich in recoverable information, and the attack remains highly effective throughout training.
This behavior aligns with our earlier observation (see Reviewer fLco, Answer 5) that **poorly trained or randomly initialized models are more susceptible** to inversion due to diffuse attention and unstructured gradients. In this case, the DP noise hinders convergence, inadvertently maintaining the model in a more vulnerable state.
**These findings suggest that DP-SGD, in its current form, may not be well-suited for DQA models**, where convergence is critical to minimizing gradient leakage.
2. **Standard FL DP Defense**: In a second experiment, we simulated a **typical federated learning defense** by applying **Gaussian noise directly to the shared gradients**—a common form of **local differential privacy** used to protect per-client updates before aggregation. We used a regularly trained DQA model (without DP-SGD) and evaluated the attack’s effectiveness across training iterations. The results are shown [here](https://drive.google.com/file/d/1syEgKq-SSGTvb6EHQHPyvK7PrCv8NmZ_/view?usp=sharing). While this approach attenuates the gradient signal, the attack remains effective, especially in early iterations before convergence.
This experiment reflects a **realistic FL deployment scenario**, where noise is added to the shared gradients only at inference time. The results show that while such defenses reduce leakage, they do **not eliminate it**, and achieving a strong privacy–utility trade-off remains challenging.
These findings reinforce our motivation to explore alternative input-level defenses that degrade semantic recoverability without modifying the training process or compromising utility.
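For clarity, the gradient-noising defense from the second experiment can be sketched as follows. The function `privatize_gradients`, the clipping norm, and the noise multiplier are illustrative assumptions rather than the exact experimental configuration.

```python
import numpy as np

def privatize_gradients(grads, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip each per-client gradient and add calibrated Gaussian noise
    before it is shared with the server (local-DP-style defense)."""
    rng = np.random.default_rng(rng)
    noisy = []
    for g in grads:
        # Clip to bound the L2 sensitivity of each shared gradient...
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip_norm / max(norm, 1e-12))
        # ...then add Gaussian noise scaled to the clipping norm.
        noisy.append(g + rng.normal(0.0, sigma * clip_norm, size=g.shape))
    return noisy
```

As the experiment above indicates, attenuating the gradient signal this way reduces but does not eliminate leakage, and larger `sigma` values trade off model utility.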
We hope this follow-up, along with our initial rebuttal response, thoroughly addresses your suggestions and concerns. We sincerely thank you for your thoughtful feedback and for encouraging us to further strengthen this aspect of the paper. | Summary: The paper explores gradient inversion attacks targeting multi-modal Document Visual Question Answering (DQA) models in the context of federated learning and propose GI-DQA a novel method that reconstructs private document content from gradients. The approach seems to expose privacy vulnerabilities.
Claims And Evidence: - the results are pretty convincing in terms of numbers and vizualization of the documents
The threat model however is not very clear:
- Having access to a template (which in the set-up is the document minus the private information) seems toy.
- The authors do not detail how the question-answer pairs are reconstructed before performing the image reconstruction, even though it is apparently the most important step for getting good results.
- How can PIIs which are not part of the question/answer be reconstructed in the document? They should not be part of the gradient, should they?
Methods And Evaluation Criteria: Having access to the template and perfectly reconstructing the question/answer pairs is crucial to getting the results. The first would not be realistic in practice, and how the second is achieved is not explained.
Theoretical Claims: no theoretical claim
Experimental Designs Or Analyses: see above
Supplementary Material: I briefly checked the appendix.
Relation To Broader Scientific Literature: I dont know much about the gradient inversion litterature but it seemed well discussed.
Essential References Not Discussed: I dont know much about the gradient inversion litterature
Other Strengths And Weaknesses: The paper is well written and easy to follow.
For the weaknesses, see above. Until page 5, it would have been nice to have some numbers to justify some statements.
For instance: “Visual Text Prior. Accurately reconstructing small-sized words (∼1-2% of the full image) is a highly challenging task due to their minimal contribution to the overall gradient and their low visibility in the reconstructed image” is from the intuition of the authors. My intuition would have been that because the question is specifically about these numbers, it should be easier.
The threat model is that the adversary has everything except the sensitive data, not just a general template
How are the q-a pairs reconstructed before the reconstruction of the document? If q-a pairs are already reconstructed, then all the PIIs are already known? Can text be constructed that was not part of the answer? Can you show examples of these qa?
Other Comments Or Suggestions: - Until page 5, it would have been nice to have some numbers to justify some statements. For instance: “Visual Text Prior. Accurately reconstructing small-sized words (∼1-2% of the full image) is a highly challenging task due to their minimal contribution to the overall gradient and their low visibility in the reconstructed image” is from the intuition of the authors. My intuition would have been that because the question is specifically about these numbers, it should be easier.
- The threat model is that the adversary has everything except the sensitive data, not just a general template
Questions For Authors: - How are the q-a pairs reconstructed before the reconstruction of the document? If q-a pairs are already reconstructed, then aren't all the PIIs already known?
- Can text pixels be reconstructed even if the text was not part of the answer? How is that possible? Can you show examples of these QA pairs?
- What would be the results without the so-called templates?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for his time, effort, and valuable suggestions.
Q1. "access to a template.."
A1. While the use of a template may appear simplified at first glance, we argue that this setup is practically grounded and reflects real-world scenarios. In many document-based applications, such as invoices, medical forms, boarding passes, or receipts, the layout and structure are highly standardized across users, with only the sensitive fields (e.g., names, amounts, dates) varying.
For example, a receipt issued by a vendor often follows the exact same format for all customers. If an attacker has previously received such a receipt or obtained a sample from another user, it effectively serves as a valid template when attempting to reconstruct another client’s document. This assumption is especially realistic in settings where the attacker is an insider (e.g., the server operator) or has access to public document templates shared across users.
We will clarify this practical motivation in the final version of the paper.
Q2. "QA reconstruction.."
A2. We thank the reviewer for this important observation. The reconstruction of the question-answer (QA) pair is indeed a critical step in our pipeline, and we address this in Section 3.3.
In our current work, we assume access to the QA pair, based on strong evidence from prior analytic-based methods—particularly DAGER (Petrov et al., 2024)—which showed that QA tokens can be accurately recovered from gradients in small-batch federated learning, achieving 100% recovery.
Our focus is on the document reconstruction step conditioned on known QA pairs, reflecting a realistic threat model where such tokens are obtainable using existing techniques. To strengthen this connection, we will include the QA reconstruction in the final version of the paper and clarify the assumption more explicitly.
Q3. "How can PIIs.."
A3. Please see Reviewer fLco, Answer #5—specifically the [visualization](https://drive.google.com/drive/folders/1NxT7gBMSH5RQb9V1bEAuTT2AvXPqhex1?usp=drive_link) of the cross-attention maps, which illustrates how private data outside the question and answer can still influence the computed gradients, offering insight into how such regions are captured.
Q4. "Until page 5.."
A4. The sentence in question was intended to convey an intuition about the general difficulty of reconstructing small-sized text regions (such as names or dates), which typically occupy only 1–2% of the image area and thus contribute weakly to the overall gradient signal—especially when compared to larger visual elements like headers, tables, or logos.
Importantly, our attack is able to reconstruct personally identifiable information (PII) in the document regardless of the specific question or answer, as long as the attack is performed on a model initialized with random weights.
Q5. "..threat model.."
A5. Our threat model is consistent with most prior gradient inversion works, where the adversary is assumed to have access to the model architecture, parameters, optimizer, and input-output format—but not the sensitive input data itself. The only additional assumption in our setup is access to a template document, which reflects a realistic scenario in many applications (e.g., invoices, receipts, forms) where the document structure is shared across users and only a few fields contain private information. This assumption is grounded in practical document workflows.
Q6. "PIIs are already known.."
A6. While the answer in the QA pair may reveal a piece of PII, it typically represents only a small portion of the sensitive content (e.g., a single name, date, or value). Many other PII fields remain hidden and can still be reconstructed from gradients.
Our attack can also reconstruct text outside the question and answer, due to the model’s cross-attention attending to broader regions—especially in early training, when attention is more diffuse.
Additionally, see Reviewer fLco, Answer 6, for an experiment showing that PII can still be reconstructed even when the QA tokens are replaced with random tokens.
We provide sample data, including documents and their corresponding Q-A pairs, [here](https://drive.google.com/drive/folders/1Xo0u9kZhgOM6AYnoUlUqwdSmoN_psRDq?usp=drive_link).
Q7. "results without templates.."
A7. Thank you for the question. We conducted an experiment where the attack was performed without access to a template, and found that reconstruction quality dropped significantly. Without the template, the model must infer both the layout and the content, making optimization far more difficult—especially in structured documents like receipts or forms.
We see this work as a step toward full document reconstruction, and future work could explore removing the template assumption entirely. | Summary: The paper proposes a gradient inversion attack targeting multi-modal DQA models in Federated Learning (FL) setups: GI-DQA.
In DQA models, the input consists of both a document and its corresponding question, while the output is the target answer. GI-DQA first employs existing methods to reconstruct question-answer pairs and then aims to reconstruct the input document. The experiments are conducted using two DQA models, Donut and LayoutLMv3, with the PFL-DocVQA dataset. The results demonstrate that GI-DQA outperforms other methods across various evaluation metrics.
## update after rebuttal
I increased the grading based on the rebuttal
Claims And Evidence: The main claim is proposing the first gradient inversion attack on multi-modal models, with a novel method specifically tailored for multi-modal DQA models. However, there are concerns in the experiments, undermining the evidence.
The author claims that the proposed method is the first gradient inversion attack on multi-modal FL setups. However, I found [r1], which also explores gradient inversion in multi-modal FL setups, even though it does not specifically focus on DQA models.
[r1] Liu, Xuan, et al. "Mutual Gradient Inversion: Unveiling Privacy Risks of Federated Learning on Multi-Modal Signals." IEEE Signal Processing Letters (2024).
Methods And Evaluation Criteria: The proposed method is ok, but some concerns:
- The paper does not have any theoretical analysis.
- The authors aim to perform a gradient inversion attack in multi-modal federated learning (FL) setups. However, the FL setup used in the experiment is unclear. Neither Donut nor LayoutLMv3 is an FL model, and the number of local machines in the FL setup is also unspecified.
Theoretical Claims: no theoretical claims
Experimental Designs Or Analyses: Section 4.2.2 (Priors Effect) aims to analyze the contribution of each training loss term. However, the experiments are conducted by adding losses together rather than evaluating each loss term individually. As a result, the findings demonstrate the effectiveness of combining training losses rather than the specific contribution of each loss term.
Supplementary Material: yes, A.1 additional results
Relation To Broader Scientific Literature: The paper conducted a study of gradient inversion in FL, which is a type of privacy attacks, for multimodal setups, while previous work focuses mainly on unimodal setups.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: see above.
Other Comments Or Suggestions: See above.
Questions For Authors: 1. In an FL setup, a gradient inversion attack can be performed at any iteration t. How does t impact the results? For example, does executing the attack at the beginning of training yield lower success rates compared to performing it at the end of training? What value of t was used in the results presented in the paper? Was the attack performed at a specific iteration, or were multiple values of t tested?
2. I am a bit confused by Section 4.2.2 (Question-Answer Effect). The results indicate that document reconstruction is infeasible without question-answer knowledge. However, the authors also state that the specific content of the question and answer is irrelevant to the attack’s success (lines 366-370, right column). How do the authors arrive at this conclusion? Is there any evidence supporting this claim?
3. In Table 1, are the results for DLG and iDLG truly identical? Could you clarify whether they have the exact same values?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time, effort, and valuable suggestions.
Q1. "found [r1].."
A1. We thank the reviewer for pointing out this work. [r1] explores gradient inversion in multi-modal FL using synthetic image-text pairs processed by separate, modality-specific models trained independently, with no shared representations. The attack is coordinated via mutual knowledge distillation.
While valuable, we believe the use of the term “multi-modal” in [r1] is somewhat misleading, as it refers to parallel models that only interact during inversion—unlike standard multi-modal architectures that jointly fuse modalities (e.g., via cross-attention).
Our work focuses on such deeply fused, jointly trained models like those used in DQA. Nonetheless, we appreciate this connection and will include a discussion of [r1] in the related work section.
Q2. "theoretical analysis.."
A2. We would like to clarify that our work follows a well-established line of research on gradient inversion, which has primarily focused on empirical demonstration of vulnerabilities rather than formal theoretical analysis. Foundational works such as DLG (Zhu et al., 2019), iDLG (Zhao et al., 2020), and GradViT (Hatamizadeh et al., 2022) have aimed to develop practical attack strategies and assess reconstruction quality under various conditions.
Given the complexity and high dimensionality of modern neural networks—especially in multi-modal setups like DQA—formal guarantees remain an open challenge. As in prior work, we rely on rigorous experimental evaluation across architectures and settings.
Our work extends gradient inversion to multi-modal architectures, with ablations isolating each design component’s effect.
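To illustrate the gradient-matching principle underlying this line of work, here is a toy numpy sketch on a linear model (the model, dimensions, and step sizes are invented for illustration; real attacks operate on deep multi-modal networks with the priors described in the paper):

```python
import numpy as np

def toy_gradient_inversion(w, target_grad, steps=5000, lr=0.03, seed=0):
    """Toy DLG-style gradient matching: for a linear model y = w.x with
    loss y^2 / 2, the leaked parameter gradient is (w.x) * x. The attacker
    optimizes a dummy input x so that its gradient matches the leaked one.
    Note: x and -x produce identical gradients here, so recovery is only
    possible up to sign."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=w.shape)                # random dummy input
    for _ in range(steps):
        s = w @ x
        diff = s * x - target_grad              # gradient mismatch
        # d/dx ||s*x - g*||^2, using the Jacobian x w^T + s I
        x -= lr * 2.0 * (w * (x @ diff) + s * diff)
    return x

w = np.array([1.0, -2.0, 0.5])                  # "model weights" known to the server
x_private = np.array([0.3, 0.1, -0.4])          # private client input
leaked = (w @ x_private) * x_private            # gradient shared in FL
x_rec = toy_gradient_inversion(w, leaked)       # attacker's estimate of x_private
```

The loop drives the gradient mismatch toward zero; with access to priors (as in the paper), the same principle scales to much higher-dimensional inputs.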
Q3. "FL setup.."
A3. We appreciate the comment and clarify that Federated Learning is a training paradigm, not a model type—it can be applied to any architecture, including Donut and LayoutLMv3.
Our setup simulates gradients computed locally by a client on private data, which are then forwarded to a central server—where the adversary is assumed to reside. This is consistent with prior gradient inversion works and reflects a server-side threat model.
Since our focus is on leakage from a single client, the total number of clients does not affect the core attack. We will revise the paper to clarify this setup.
Q4. "Priors Effect.."
A4. Please see Reviewer xopj, Answer 1.
Q5. "iteration t.."
A5. We thank the reviewer for this interesting and important question. In our experiments, the attack was performed at iteration t=0, using the model’s initial random weights.
We also ran the attack at later training stages and found it to be significantly less effective. We hypothesize that in well-trained models, attention and token representations become sharply focused on regions relevant to the question and answer, which narrows the gradient signal. In contrast, early in training, these representations encode more diverse and spatially distributed information, making reconstruction easier.
To support this, we include a [visualization](https://drive.google.com/drive/folders/1NxT7gBMSH5RQb9V1bEAuTT2AvXPqhex1?usp=drive_link) of cross-attention maps for both randomly initialized and fine-tuned models, showing how attention becomes increasingly narrow and localized when the model is finetuned.
[Here](https://drive.google.com/drive/folders/16way4IQb3i2Thc_2K9fhAImCMhEyeFjk?usp=drive_link), we present a systematic analysis of attack success across training iterations, confirming that inversion becomes progressively more difficult as the model converges.
Q6. "QA Effect.."
A6. Thank you for raising this point. As shown in Section 4.2.2, gradient inversion fails without access to both the question and answer tokens, highlighting the need for semantic grounding.
However, when we say the specific content is irrelevant, we mean the attack depends on the token identities and positions, not their semantic meaning. To demonstrate this, we replaced the original QA with random tokens of the same length and structure. The reconstruction quality remained nearly identical:
|Experiment|PSNR|FFT2D|MSE|Binary|Fuzz Ratio|
|-|-|-|-|-|-|
|OriginalQA|18.519|0.008|79.853|0.419|0.688|
|RandomQA|18.359|0.008|80.089|0.421|0.683|
This suggests that having access to the correct tokens—regardless of their meaning—is sufficient for the attack. A reconstruction example is shown [here](https://drive.google.com/drive/folders/1LgTor6EGxV5vQWt2TIidvyu6rG5c3-sT?usp=drive_link).
We will clarify this distinction in the paper and provide additional examples.
Q7. "DLG and iDLG.."
A7. Yes, the results for DLG and iDLG in Table 1 are identical. The only difference is that iDLG has access to the answer (i.e., the ground-truth label). However, as shown in Section 4.2.2, the answer alone is insufficient for reconstruction in DQA, since it lacks the context provided by the question, which guides model attention and gradient flow. As a result, iDLG offers no advantage over DLG in our setting. | Summary: This paper presents a novel approach to gradient inversion attacks (GI-DQA) on multi-modal models specifically targeting extraction of textual information in Document Question Answering (DQA) tasks. The authors demonstrate why gradient inversion attacks designed for targeting unimodal models trained for image classification tasks would be a subpar choice to attack multi-modal models like those used for DQA. They clarify their rationale behind GI-DQA and how it is a more effective choice to attack pre-fusion stage gradients in DQA models. Further, they propose defences that could help mitigate GI-DQA using document-level perturbations.
Claims And Evidence: Yes, the authors thoroughly clarify their choice of the components of the loss term used for designing GI-DQA. I found none of the claims problematic. Furthermore, they clearly state the adversary's capabilities and the federated learning setting wherein the attack would be useful.
Methods And Evaluation Criteria: Given that there has been no prior work targeting multi-modal models using gradient inversion attacks, the authors chose to use prior works designed to reconstruct images from unimodal models as the baseline in Table 1. My only issue was a lack of comparison with the more recent GradViT attack by Hatamizadeh et al. which they refer to in subsection 2.1 but do not include in Table 1.
Theoretical Claims: The paper contains no theoretical claims/ proofs to review.
Experimental Designs Or Analyses: I want to highlight the following issues with the experimental design:
1. No confidence intervals were reported in the paper. This makes it hard to judge how stable the performance of the attack is with multiple repeats.
2. In line 419, Section 6, the authors present their proposed defences as less harmful to the utility of the multi-modal models. They do not follow this statement with numbers to prove their assertion.
Supplementary Material: I have reviewed the Appendix section of the paper including section A1, which discusses the impact of batch size on the effectiveness of GI-DQA and section A.2, which details the evaluation of proposed defences against GI-DQA.
Relation To Broader Scientific Literature: Prior works on gradient inversion attacks in federated learning frameworks, such as APRIL [1] seem to focus on recovering images from unimodal vision models meant for image classification tasks. Additionally, they are optimal for recovering images depicting a single object. With this work, the authors demonstrate the ineffectiveness of such approaches to recover private textual information in DQA tasks. They further develop an attack that could be much more useful for targeting DQA models
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The attack is agnostic of the nature of DQA models. Works well with OCR and non-OCR models.
2. The attack combines direction and magnitude matching for computing gradient matching loss.
3. Incorporates prior knowledge in the loss term to enable effective text data reconstruction.
4. Proves effectiveness of their approach against prior attacks which were designed for unimodal models highlighting the superiority of the proposed attack.
5. Highlights the implicit privacy safeguard offered by the fusion of text-based and visual features in the training of multi-modal models which makes post-fusion gradients less susceptible to GI-DQA.
Weaknesses:
1. The performance of GI-DQA appears to vary across OCR and non-OCR models. One possibility is that this could be simply because of the difference in size of these models. Larger models are often more susceptible to overfitting which increases their vulnerability to privacy attacks but this could also stem from the difference in the nature of the two types of DQA models. I would urge the authors to clarify the factors which affect the performance of GI-DQA against OCR and non-OCR models.
2. No confidence intervals reported in the paper. This makes it hard to judge how stable the performance of the attack is with multiple repeats.
3. Differential Privacy (DP) [3] is considered the gold standard for protecting clients' data via gradient perturbation. McMahan et al. [4] even introduced a variant of DP for federated learning. The authors do not compare their proposed defences against a baseline where the clients' gradients are protected by DP.
Other Comments Or Suggestions: None
Questions For Authors: Questions:
1. May I know why the authors chose not to use GradViT [2] as one of the baselines in Table 1?
2. From Table 2, it is unclear that using the Gaussian prior in addition to the Laplacian filter-based prior is contributing significantly to improving the reconstruction. What then justifies the inclusion of the former term in the loss computation?
3. A lack of comparison with DP makes it hard for me to understand whether these defences offer effective protection while preserving the utility of the models. Could the authors clarify their stance on this?
P.S. If the authors are able to address the questions raised above, I am amenable to raising my score.
[1] Jiahao Lu, Xi Sheryl Zhang, Tianli Zhao, Xiangyu He, Jian Cheng. APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers. CVPR 2022. https://arxiv.org/abs/2112.14087
[2] Ali Hatamizadeh, Hongxu Yin, Holger Roth, Wenqi Li, Jan Kautz, Daguang Xu, Pavlo Molchanov. GradViT: Gradient Inversion of Vision Transformers. CVPR 2022. https://arxiv.org/abs/2203.11894
[3] Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep Learning with Differential Privacy. ACM CCS '16. https://doi.org/10.1145/2976749.2978318
[4] H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. ICLR 2018. https://openreview.net/forum?id=BJ0hF1Z0b.
P.P.S. The authors have addressed concerns/ questions raised by me adequately (with proof). Accordingly, I am raising my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time, effort, and valuable suggestions.
Q1. "..confidence intervals.."
A1. Our original submission reported mean values, as we observed low variance across runs, confirming that our method is stable and consistently outperforms baseline methods.
We will include the standard deviation values in the final version of the paper and provide additional details in the appendix to enhance transparency and reproducibility.
Q2. "..defences..utility.."
A2. Our proposed defense strategy applies perturbations to the input templates outside the FL training procedure. These perturbations are added to publicly available or previously observed templates and are used solely by the attacker as priors during the inversion process—not by the participating clients during training (which use "clean" documents).
As a result, the actual training inputs, local computations, and shared gradients remain completely untouched, and the federated learning process proceeds as usual. This ensures that the utility of the multi-modal models is fully preserved, unlike defenses that modify gradients directly (e.g., via differential privacy), which often degrade model performance [1].
We will update the paper to clarify this distinction and emphasize the utility-preserving nature of our defense.
Q3. "..performance of GI-DQA.."
A3. We thank the reviewer for this thoughtful observation. To clarify, overfitting is not a factor in our experiments, as all models—both OCR-based (e.g., LayoutLMv3) and OCR-free (e.g., Donut)—are evaluated using randomly initialized weights (see Reviewer fLco, Answer 5). This ensures that performance differences arise from architectural properties, not training dynamics.
We hypothesize that the variation in GI-DQA performance stems from differences in the size and structure of the visual backbones. LayoutLMv3 uses a smaller image encoder, leading to lower representational complexity and making gradient inversion more tractable. Donut, in contrast, uses a larger and more expressive visual backbone, which produces more abstract and distributed features—making inversion harder without strong priors.
We will revise the manuscript to clarify this architectural difference and its impact on attack performance.
Q4. "Differential Privacy (DP).."
A4. We fully agree that Differential Privacy (DP) is a well-established and rigorous approach for protecting client data in federated learning.
While defense design is not the main focus of our work, we propose a lightweight input-level method that operates entirely outside the FL training loop, leaving shared gradients untouched. This makes it a practical alternative when DP integration may be too costly or degrade utility.
To provide a point of comparison, we evaluated a DP-style defense by adding Gaussian noise to the shared gradients. Results are shown below:
|Experiment|PSNR|FFT2D|MSE|Binary|Fuzz Ratio|
|-|-|-|-|-|-|
|DP|5.578|0.176|86.743|10.9%|0.000|
|Ours|9.004|0.075|86.961|0.4%|0.001|
While DP yields lower PSNR (better in terms of defense), our method more effectively disrupts text recovery, as shown by binary accuracy. This demonstrates the complementary nature of our defense, which focuses on degrading machine-readability, even if some visual structure remains.
We will include this discussion and table in the final version of the paper.
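A minimal numpy sketch of this DP-style baseline (the clip and noise parameters below are invented for illustration; the exact noise scale used for the table above is not reported here):

```python
import numpy as np

def dp_style_perturb(grads, clip_norm=1.0, noise_mult=0.1, seed=0):
    """Sketch of the DP-style defense evaluated above: clip each per-layer
    gradient's L2 norm, then add Gaussian noise before the gradient leaves
    the client. Parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    out = []
    for g in grads:
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # clip to clip_norm
        out.append(g * scale + rng.normal(0.0, noise_mult * clip_norm, g.shape))
    return out
```

Unlike this gradient-level perturbation, the template-level defense proposed in the paper leaves shared gradients untouched, which is why model utility is preserved.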
Q5. "..use GradViT.."
A5. We originally intended to include GradViT as a baseline in Table 1, given its relevance to gradient inversion in vision models. However, the authors of GradViT did not release their code, and despite reaching out to them multiple times, we unfortunately did not receive a response. Given the complexity of accurately reproducing their method—particularly in the context of multi-modal models—we decided against including potentially inaccurate or incomplete re-implementations. We would be happy to include GradViT as a baseline in future versions if their code becomes available.
Q6. "..Gaussian.. in addition..Laplacian.."
A6. While Table 2 may suggest the Gaussian prior is less impactful than Laplacian, we include both as they address complementary aspects of reconstruction. The Gaussian prior promotes smoothness and stabilizes early optimization by reducing noise, while the Laplacian prior sharpens edges and preserves fine details—crucial for small text recovery.
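A small numpy sketch of the two priors' complementary roles (the kernels and formulation here are illustrative stand-ins, including a box blur in place of a true Gaussian; this is not the paper's exact loss terms):

```python
import numpy as np

def laplacian_prior(img):
    """Edge prior sketch: convolve with a Laplacian kernel. Penalizing a
    weak response encourages the reconstruction to keep sharp text edges."""
    k = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    h, w = img.shape
    resp = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            resp[i, j] = (img[i:i + 3, j:j + 3] * k).sum()
    return resp

def smoothness_prior(img):
    """Smoothness prior sketch (box blur as a stand-in for a Gaussian):
    penalize deviation from a blurred copy, suppressing high-frequency
    noise early in the inversion."""
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return float(((img - blur) ** 2).mean())
```

A flat region scores zero under both priors, while a sharp step edge produces a strong Laplacian response, which is the behavior that makes the two terms complementary.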
See Reviewer xopj, Answer 1 for the ablation of individual priors and Answer 4 for insights into the selection of priors.
[1] Cummings, R., Desfontaines, D., Evans, D., Geambasu, R., Huang, Y., Jagielski, M., ... & Zhang, W. Advancing Differential Privacy: Where We Are Now and Future Directions for Real-World Deployment. Harvard Data Science Review, 6 (1), jan 16 2024. | null | null | null | null | null | null |
Lean and Mean Adaptive Optimization via Subset-Norm and Subspace-Momentum with Convergence Guarantees | Accept (poster) | Summary: This paper introduces two complementary adaptive optimization techniques that reduce the memory footprint of optimizer states while accelerating large-scale LLM training.
- Subset-Norm (SN): A generalization of AdaGrad-Norm and AdaGrad-Coordinate that shares step sizes across subsets of parameters. SN reduces AdaGrad’s memory footprint from $O(d)$ to $O(\sqrt{d})$ while maintaining strong theoretical convergence guarantees under sub-Gaussian noise assumptions.
- Subspace-Momentum (SM): A technique that applies momentum in a low-dimensional subspace while using SGD in the orthogonal complement. The authors prove high-probability convergence guarantees for SM under standard assumptions.
Empirical results demonstrate that combining SN and SM achieves Adam’s validation perplexity on LLaMA 1B with half the training tokens (6.8B vs. 13.1B) while reducing Adam’s optimizer state memory footprint by more than 80%, with minimal additional hyperparameter tuning.
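In pseudocode, the two updates can be sketched as follows (a minimal numpy illustration under simplifying assumptions: a flat parameter vector and a projection `P` with orthonormal rows; this is not the authors' implementation):

```python
import numpy as np

def subset_norm_step(param, grad, accum, lr=0.01, subset_size=4, eps=1e-8):
    """Subset-Norm (SN) sketch: coordinates are grouped into subsets that
    share one AdaGrad-style step size, so `accum` stores d/subset_size
    entries instead of d (about O(sqrt(d)) when subset_size ~ sqrt(d))."""
    g = grad.reshape(-1, subset_size)
    accum = accum + (g ** 2).sum(axis=1)            # per-subset squared-norm accumulator
    step = g * (lr / (np.sqrt(accum) + eps))[:, None]
    return param - step.reshape(param.shape), accum

def subspace_momentum_step(param, grad, m, P, lr=0.01, beta=0.9):
    """Subspace-Momentum (SM) sketch: momentum is tracked only for the
    rank-r projection P @ grad (r entries instead of d); the orthogonal
    residual is updated with plain SGD."""
    g_low = P @ grad
    m = beta * m + (1 - beta) * g_low               # momentum lives in the subspace
    update = P.T @ m + (grad - P.T @ g_low)         # momentum part + SGD residual
    return param - lr * update, m
```

With an 8-dimensional parameter and subsets of size 4, the SN accumulator holds only 2 entries; with a rank-2 projection, the SM momentum buffer holds 2 entries instead of 8.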
Claims And Evidence: The claims in the paper are generally well-supported by:
- Theoretical analysis: The authors provide strong mathematical guarantees for both SN and SM. The convergence proofs are well-presented and adhere to standard assumptions in stochastic optimization.
- Empirical validation: The experiments on pre-training and fine-tuning large-scale LLMs demonstrate clear improvements in memory efficiency while maintaining or improving convergence speed.
- Comparison to baselines: SN and SM are compared against AdaGrad, Adam, and GaLore, showing consistent improvements in both perplexity and memory efficiency.
However, some additional validations would strengthen the claims:
- CV benchmarks : The proposed methods should be tested on ImageNet/CIFAR-10 with ViTs or ResNets to evaluate performance in a broader range of architectures and compare with well-established optimizers in vision tasks.
- Robustness: The paper lacks hyperparameter robustness analysis, which would help practitioners understand how sensitive SN and SM are to tuning.
- Comparison with recent memory-efficient optimizers: The paper does not compare against newer low-memory optimizers such as Adam-mini or MicroAdam, which have been proposed to reduce the memory footprint of optimizers.
Methods And Evaluation Criteria: The proposed SN and SM methods are well-formulated and grounded in existing literature on AdaGrad-style optimizers and gradient compression techniques.
The authors evaluate SN and SM across multiple model sizes (from 60M to 7B parameters) and compare optimizer memory footprints, training perplexity, and convergence rates. The evaluation is relevant, but additional benchmarks (e.g., ImageNet, CIFAR-10) would improve generalization.
Theoretical Claims: The convergence proofs for SN and SM seem correct.
Experimental Designs Or Analyses: - The experimental design is strong, with multiple scales of LLaMA models tested.
- The results clearly demonstrate the memory efficiency of SN and SM, particularly on LLM pre-training tasks.
- However, there are no details on the numerical precision used during training (e.g., FP32, BF16, FP16). Clarifying this would be important since precision affects memory consumption.
- Hyperparameter tuning details are unclear:
- How were the “red LR” values tuned? Do you adjust them based on instability?
- How should practitioners tune SN and SM hyperparameters? A guideline for selecting subset sizes and subspaces would be helpful.
Supplementary Material: The supplementary material contains detailed proofs, algorithm descriptions, and additional ablation studies. However, additional visualizations of hyperparameter robustness would be useful to help interpret the effects of tuning.
Relation To Broader Scientific Literature: The paper is well-positioned within the literature on adaptive optimizers (Adam, AdaGrad) and memory-efficient optimizers (GaLore, FLORA, GRASS). However, no comparisons are made to newer memory-efficient optimizers like MicroAdam or Adam-mini, which should be included.
Essential References Not Discussed: Missing citations to recent memory-efficient optimizers such as Adam-mini.
No discussion of quantization-based approaches like 8-bit Adam, which also aim to reduce optimizer memory.
Other Strengths And Weaknesses: **Strengths:**
- Well-motivated problem with significant practical relevance for training LLMs.
- Strong theoretical backing with convergence guarantees.
- Empirical results demonstrate state-of-the-art memory savings.
**Weaknesses**:
- No evaluation on computer vision benchmarks (ViT, ResNet).
- No comparison with MicroAdam or Adam-mini.
- Lack of clear recommendations for tuning SN and SM hyperparameters.
- Inconsistent notation between tables (Adam vs. AdamW).
- No discussion of numerical precision used in training.
Other Comments Or Suggestions: The notations in Table 3 and Table 4 are inconsistent (Adam vs. AdamW). This should be clarified.
Questions For Authors: (1) How does SN and SM compare to recent memory-efficient optimizers like MicroAdam and Adam-mini?
(2) Can SN and SM be effectively applied to ViTs and ResNets for computer vision tasks?
(3) What numerical precision was used during training (BF16, FP16, FP32)?
(4) How do you tune hyperparameters for SNSM?
(5) How was the “red LR instability” tuned?
(6) What heuristics can be used to select the optimal subset size in SN?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful consideration and constructive feedback. Below, we clarify points raised and address specific concerns.
___
> No discussion of quantization-based approaches like 8-bit Adam
Due to space limit, we cite and discuss quantization approaches (and more) in the related works section on page 13 of the Appendix. This is an important orthogonal direction that can be combined with our method for further future improvement.
> Question 1: MicroAdam and Adam-mini comparisons
We provide brief comparisons to MicroAdam and Adam-mini in the related works section on page 13, but will provide more detailed comparison for Adam-mini due to its relevance as also pointed out by other reviewers in our revision.
- For MicroAdam, while it delivers good performance for fine-tuning, it has a limitation for the pre-training LLM setting (see page 10 of their paper) which is a setting we want to focus on due to the relative lack of effective memory efficient optimizers.
- For Adam-mini, we want to note that it is concurrent work to ours, to be published in ICLR this year, but we agree with the reviewer that Adam-mini is highly relevant and including a detailed comparison in the main body improves the quality of our paper. We will include a more detailed comparison in the main body of our revision with a draft as in our response to reviewer 6d8c.
> Question 2: vision tasks
- While our main focus is on large models that are more typical to language models where memory is often a bottleneck, to address the reviewer’s request for additional experiments, we conducted further evaluations using the DiT-L/2 model (458M) from [facebookresearch/DiT](https://github.com/facebookresearch/DiT) on a setup with batch size 2048, image size 64, and 8×A6000 GPUs.
We compared our method (SNSM) with Adam using the same training configuration as in the paper. As shown in the table below, SNSM outperforms Adam in FID from 400k iterations onward, mirroring its behavior on LLM tasks:
| FID/Iter | 200k | 300k | 400k | 500k | 700k |
|-----------|-------|-------|-------|-------|-------|
| Adam | 56.69 | 56.63 | 40.69 | 41.15 | 39.61 |
| AdamSNSM | 66.76 | 66.31 | 34.05 | 32.31 | **32.26** |
- We further evaluate Adam, AdamSN, and AdamSNSM (rank 64 and no update gap) by training “vit_base_patch16_224” (~85M params) from the `timm` library on the `CIFAR100` dataset for 10 epochs with batch size 64 and weight decay 0 on a 2x4090 machine. We tune the lr across the grid {1e-3, 5e-3, 1e-4, 5e-4, 5e-5, 1e-5} for all methods.
| | Adam | AdamSN | AdamSNSM |
|-----------|-------|-------|-------|
|Best Val Acc| 43.3%| 45.2% | **45.6%**|
- Please see the loss curve here: https://imgur.com/a/4rtgxOx. These preliminary experiments show promising results for the application of our methods to vision tasks.
> Question 3: numerical precision
- We use BF16 as mentioned in Appendix B.2 and ensure that all our experiments across different optimizers use consistent numerical precision for fair comparisons. Due to space constraint, a lot of the experimental setup and implementation details are moved to the Appendix. We will make this clearer in the main body of our revision.
> Question 4: tune hyperparameters
- Tuning effort is provided in Appendix B.2. We use a similar setup (rank, projection update gap, etc.) to Zhao et al 2024 and try to minimize tuning (in the spirit of this resource constraint setting) and only tune for the learning rate. However, we perform extensive ablation studies (Section 5.4 and Appendix C) to show that performance can further be improved with additional tuning. We are grateful that reviewers t4t5 and 6d8c looked at these ablations and considered them "very solid" and "quite extensive and contains important details."
> Question 5: red LR
- The red LR simply highlights results where the best LR found for the larger model differs from that of the smaller model; it was tuned over the same grid as the other LRs. It is desirable for hyperparameters to transfer from smaller models to larger models. However, this is a complex topic in general (e.g. https://arxiv.org/abs/2203.03466).
> Question 6: SN heuristics
- On line 213 of the paper, we provide a simple heuristic of grouping the latent dimension together. In Figure 5, we examine different subset sizes where one can tune to get a better result. However, our heuristic seems to be able to capture much of the performance. In general, we find that choosing a subset size around sqrt(mn) for each block seems to be a good heuristic and balance that is supported by our analysis.
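A minimal sketch of this heuristic (illustrative Python only; the exact grouping used in our code release may differ):

```python
import math

def sn_subset_size(m, n):
    """Sketch of the sqrt(mn) heuristic: for an m x n parameter block,
    pick a subset size near sqrt(m * n)."""
    return max(1, round(math.sqrt(m * n)))

def sn_state_entries(m, n):
    """Resulting number of SN accumulator entries for the block,
    roughly sqrt(m * n) instead of the m * n of a coordinate-wise method."""
    return math.ceil(m * n / sn_subset_size(m, n))
```

For example, a 4096 x 4096 weight matrix needs about 4096 accumulator entries under this heuristic, versus roughly 16.8M entries for a fully coordinate-wise adaptive step size.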
___
We sincerely appreciate your detailed reviews and constructive suggestions. We believe the discussion, analysis, explanations and additional experiments clarify your concerns and improve the quality of our submission. We hope this provides sufficient reasons to raise the score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Q2. Thank you for the additional experiments. However, I was referring more to benchmarks like CIFAR-10—a small but widely adopted dataset that enables easier and more consistent comparisons—or ImageNet, using architectures such as ResNets or ViTs, which are considered standard in the field.
Q4. I still find the description of the tuning process insufficiently detailed to ensure reproducibility.
Overall, I still struggle to position this new method relative to Adam-mini, particularly in terms of computational complexity and memory footprint.
---
Reply to Comment 1.1.1:
Comment: > Q2: CIFAR-10
The model that we used, [vit_base_patch16_224](https://huggingface.co/timm/vit_base_patch16_224.augreg2_in21k_ft_in1k), is the vision transformer model introduced in the paper “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” (https://arxiv.org/abs/2010.11929v2). We use the randomly initialized version of the ViT above to train on [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html), the 100-class counterpart of CIFAR-10, which provides a good balance in scale between CIFAR-10 and ImageNet. However, we further provide our results on CIFAR-10 on the ViT below:
| Best val accuracy 10 epochs | Adam | AdamSN | AdamSNSM (r=64, g=1000) |
|-----------|--------|--------|--------------------------|
| CIFAR100 | 43.30% | 45.20% | **45.60%** |
| CIFAR10 | 69.02% | 69.18% | **71.21%** |
| Peak Mem (bs 64) | 9.288GB | 8.886GB | **8.878GB** |
The loss curves for CIFAR-10 are shown at: (1) train loss https://imgur.com/a/7ATZ4o7; and (2) val loss https://imgur.com/a/zRuBZgu. The behavior is similar to CIFAR-100.
We want to emphasize again that our paper focuses primarily on language models and transformers due to the use of much larger models in language tasks (billions of parameters) than vision tasks (millions of parameters), where memory is a major bottleneck for large models.
> Q4. I still find the description of the tuning process insufficiently detailed to ensure reproducibility.
We have already provided all the code in the supplementary material and all the parameter choices for tuning in our paper, along with extensive ablation studies. We would appreciate it if the reviewer could describe more concretely the missing information. We will be happy to include that in our revision to make reproducing our results as easy as possible.
To reiterate Appendix B here, $\beta_1$ and $\beta_2$ are not tuned and set according to the default of Adam in PyTorch ($\beta_1=0.9, \beta_2=0.999$). Subspace rank and update gap are not tuned and set according to the defaults of GaLore (rank as in Table 3, update gap = 200). We run all main experiments in BF16 format, with gradient clipping of 1.0 (standard in training LLMs like LLaMA/DeepSeek and not tuned) and a batch size of 512. We use the same cosine learning rate decay and warm-up steps schedule as GaLore (Zhao et al., 2024) and Llama (Touvron et al., 2023; Dubey et al., 2024). We **only tune** the learning rate within the set {0.1, 0.05, 0.01, 0.005, 0.001} (except for AdaGrad algorithms that need larger LRs). There are ablations on the effect of changing some of the parameters like rank and update gap, but they are not tuned for the experiments reported in Table 3. This information can be found in Appendix B1 and B2 of our paper. We will rewrite those sections to make the information more streamlined and clearer.
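To make these settings concrete, they can be collected into a small config sketch (the dict keys are hypothetical names for illustration, not identifiers from the paper's code):

```python
# Settings restated from Appendix B as summarized above; key names are
# illustrative, not the actual code's.
TRAINING_CONFIG = {
    "beta1": 0.9,                # PyTorch Adam default, not tuned
    "beta2": 0.999,              # PyTorch Adam default, not tuned
    "subspace_update_gap": 200,  # GaLore default, not tuned
    "precision": "bf16",
    "grad_clip": 1.0,            # standard for LLaMA-style training, not tuned
    "batch_size": 512,
    "lr_grid": [0.1, 0.05, 0.01, 0.005, 0.001],  # the only tuned parameter
}
```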
> Comparison with Adam-mini
As in our response to reviewer 6d8c, and again emphasizing that **Adam-mini is concurrent work to ours**, we will include a more detailed comparison to Adam-mini in our revision.
In particular, Adam-mini is most comparable to our AdamSN in both computational complexity and memory footprint. The state of the Adam algorithm consists of two vectors of the same length: m and v. Both AdamSN and Adam-mini keep m the same and compress v to a negligible size (<1% of the original). Thus, in terms of memory for the optimizer state, both AdamSN and Adam-mini use about 50% that of Adam. In terms of computational complexity, they both compute m in the same way as Adam and spend negligible extra time per iteration to maintain v.
Our AdamSNSM goes beyond this and also compresses the m vector. Our size for m is less than 50% of the original for rank = 1/4 (the choice we make in our experiments; see also Tables 3 and 4). Combining both SN and SM, the memory for the optimizer state of AdamSNSM is less than half that of Adam-mini. In computational complexity per iteration, the SM technique introduces a small overhead of less than 2.5%, but due to better preconditioning, as suggested by reviewer t4t5, it converges faster than Adam on pre-training tasks, reaching the same quality in up to 50% fewer iterations.
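A back-of-envelope sketch of this accounting (illustrative float counts under simplifying assumptions, e.g. ignoring projection matrices and per-layer details; the function is hypothetical):

```python
def optimizer_state_floats(m, n, method, rank=None):
    """Illustrative optimizer-state size (in floats) for one (m, n) matrix.

    Simplified accounting: Adam keeps two full tensors; AdamSN / Adam-mini
    keep full momentum plus roughly one second-moment scalar per row or
    column; AdamSNSM additionally keeps momentum at low rank.
    """
    d = m * n
    if method == "adam":
        return 2 * d                      # full first and second moments
    if method in ("adamsn", "adam-mini"):
        return d + max(m, n)              # full momentum + grouped v
    if method == "adamsnsm":
        r = rank if rank is not None else min(m, n) // 4
        return r * max(m, n) + max(m, n)  # low-rank momentum + grouped v
    raise ValueError(f"unknown method: {method}")
```

With rank at a quarter of the matrix dimension, the AdamSNSM state comes out below half of the AdamSN/Adam-mini state, matching the comparison above.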
___
We thank the reviewer again for your time and effort. We believe that the discussion and additional experiments improve the quality of our submission and hope this provides sufficient reasons to raise the score.
---
Summary: This paper proposes two modifications to AdaGrad: 1) instead of coordinate-wise adaptive learning rates, subset-norm adaptive learning rates can provide adaptive step-size scaling for different subsets of parameters; 2) similar to GaLore, it keeps momentum in low rank and recovers momentum by up-projection, where the projection matrix is periodically updated via SVD.
Claims And Evidence: The experiments and ablations are very solid. I have nothing to complain about.
Methods And Evaluation Criteria: The method makes sense and evaluation all follows current common practices. I don't see anything suspicious in particular.
Theoretical Claims: I checked Theorems 3.1 and 4.1; they seem correct. Though I have not carefully checked the proofs in the appendix, the conclusion fits my expectation for the convergence rate, which is similar to SGD's convergence under smoothness conditions.
One complaint here is that Theorem 3.1 is highly dependent on the variance of the stochastic noise, which might render the bound vacuous.
Theorem 4.1 has no dependency on $k$, the rank of the subspace; I would imagine it should, because when $k$ is full rank it should exactly recover the SGD baseline case.
There is also no analysis on how the projection matrix would impact the convergence.
Experimental Designs Or Analyses: The experiment designs follow common practice and the ablation studies are abundant; there is not much to complain about.
Supplementary Material: I have read the supplementary to check proofs for theorems.
Relation To Broader Scientific Literature: This work is related to a series of optimizers proposed to pretrain large language models faster and more efficiently. Subspace momentum can be seen as a form of preconditioning, which fits in the broader optimizer literature, e.g., Muon, MARS, and Lion.
Essential References Not Discussed: Besides Galore, a few of its follow-ups are highly relevant to the discussion too. I believe it will benefit the readers if the authors can discuss the difference and commonality compared to those.
Other Strengths And Weaknesses: The proposed subset norm and subspace momentum are simple and straightforward, though they are not really new or surprising; subspace momentum in particular is an incremental improvement on top of GaLore.
For this method to be truly convincing, I would like to see wall-clock speedup on the pretraining task as well as peak memory for different implementations. Despite momentum being saved in low rank, it might still have to materialize a larger tensor due to the intermediate computation of the residual r, which could cancel out the benefit of lower memory consumption.
Another setting that is important but not really discussed in the paper is how friendly the proposed method is to distributed training.
This method also introduces extra hyperparameter tuning: default hyperparameters of AdaGrad can't be reused, and non-trivial effort is needed to find the optimal block size, etc. It also inherits all the problems of GaLore in practice: SVD can become a real bottleneck when weight matrices become large, and randomized/approximate SVD has no guarantee that it won't degrade the perplexity, as shown in the appendix.
Nonetheless, I am still leaning towards acceptance due to the comprehensive ablations and detailed guidelines in hyperparameter choices and etc. Theoretical propositions also seem solid and interesting, though I am not completely familiar with the mathematical tools/framework used in this paper.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful consideration and constructive feedback. Below, we clarify points raised and address specific concerns.
____
> Theorem 3.1 is highly dependent on the variance of the stochastic noise
- The reviewer is right that the bounds depend on the noise; this is standard for any bounds on stochastic gradient descent. The novelty is that we show grouping the coordinates based on noise magnitudes can lead to better performance. Our experiments show that our heuristic successfully isolates the noise into a small number of groups and, perhaps as a result, delivers better training results.
> Theorem 4.1 has no dependency on k the rank of the subspace
- This is a limitation that we admit in the discussion section of the paper: the worst-case convergence of SM is simply the SGD with momentum bound. We believe that a better analysis (that probably depends on a better subspace choice rather than the worst-case one) could potentially show improved rates over naive SGDm. Our experiments show promising results for the top-k singular vector subspace, so we believe that this is an interesting point for future work to show theoretical improvement.
- And similarly to our response to Reviewer 6d8c, this is a limitation that is general to the class of algorithms that utilize momentum.
This is an important open challenge for the community at large because the theoretical bounds with and without momentum are the same. We believe that any advance for SGD with momentum will lead to a better understanding of SM in our setting.
> Besides Galore, a few of its follow-ups are highly relevant to the discussion too.
- Please refer to our response to Reviewer 6d8c. We will add additional comparisons to Table 3 of our revision and a more detailed comparison with our Adam-mini in our main body.
> I would like to see wall clock speed up on pretraining task as well as peak memory for different implementations.
We provide the per iteration time, peak memory (via nvidia-smi), and time to Adam’s val perplexity after 100K steps for the 1B model for each method on a 2x4090 machine with the same setup as the paper (seq length 256, total batch size 512, micro batchsize 16) below:
| | Adam | AdamSNSM (Gap=5000) | AdamSNSM (Gap=200)| AdamSN | GaLore (Gap=200)|
|-------------------------|-------------|----------------------------|----------------------------|----------------------------|----------------------------|
| Time for 1K iters | 7426s | 7465 s | 7624 s | **7399s** | 7827s |
| Time per iteration | 7.43 s/it | 7.47 s/it | 7.62 s/it | **7.39 s/it** | 7.83 s/it |
| Time to perplexity < 16 | ~206.4 hrs (100K iters)| ~136.9 hrs (<66K iters) | **~101.6 hrs (<48K iters)** | ≈118.9 hrs (<58K iters) | >217 hrs (>100K iters) |
|Peak mem | 21.554 GB/GPU | **16.642 GB/GPU** | **16.642 GB/GPU** | 19.193 GB/GPU | 18.187 GB/GPU |
- The results above show that the dimensionality reduction can achieve some speedup and the improved performance can offset some overhead of SVD. Furthermore, as demonstrated in Table 6 on page 8, our methods do not require the SVD update gap to be as frequent as GaLore.
- Peak memory for batch size 1 is also shown in Figure 6 on page 16.
> Another setting that is important but not really discussed in the paper is how friendly the proposed method is to distributed training.
- Thank you for bringing up this great point. Our non-heuristic version of the SN algorithm (Algorithm 5) is FSDP-friendly due to being shape agnostic and can group coordinates locally. Our supplementary material contains the code for that version (`adamw_sng.py`). Figure 5 shows that local grouping with a group size of around $\sqrt{d}$ gives good results.
- A brief discussion regarding SM’s distributed training is on line 932 where it is more subtle and depends on the subspace choice where subspace that aligns with the standard bases would work on distributed setup but there are other tradeoffs.
- We will include additional discussion regarding this point in our revision.
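A minimal sketch of the shape-agnostic local grouping mentioned above (the helper name is hypothetical; blocks of roughly sqrt(d) contiguous coordinates of the flattened parameter, so the grouping composes with sharded training):

```python
import math

def flat_groups(param_numel):
    """Split a flattened parameter of size d into contiguous blocks of
    ~sqrt(d) coordinates each; no knowledge of the tensor's global shape
    is needed, which is what makes the grouping FSDP-friendly (sketch)."""
    block = max(1, math.isqrt(param_numel))
    bounds = list(range(0, param_numel, block)) + [param_numel]
    return list(zip(bounds[:-1], bounds[1:]))
```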
> This method also introduce extra hyperparameter tuning...
- We do not tune the block size, as it is simply the row/column of each matrix. While we show that tuning can bring additional benefits, as shown in Figure 5, we can already attain a large part of that gain with no tuning via our heuristics on line 213. Finally, due to the modularity of our approaches, we can even abandon the use of momentum and still achieve strong performance relative to vanilla Adam with RMSPropSN, as shown in Tables 3 and 8 (versus vanilla RMSProp).
___
We sincerely appreciate your detailed reviews and constructive suggestions. We believe that the discussion and additional experiments improve the quality of our submission and hope this provides sufficient reasons to raise the score.
---
Summary: This work studies two modifications to widely used adaptive optimization procedures, with the goal of reducing the memory footprint of adaptive optimization while simultaneously improving performance.
The first modification is referred to as adaptive subset-norm (SN) stepsizes, which is similar to existing methods such as Adam-mini. Rather than maintaining an adaptive learning rate for each parameter (as in traditional Adagrad) or maintaining only a single adaptive learning rate for all parameters (Adagrad-Norm), they propose partitioning parameters into groups and maintaining one adaptive step size per group. They state that their analytical results (e.g. Thm. 3.1) show that by choosing an intermediate number of groups, they can obtain better convergence guarantees. This reduces the memory footprint of maintaining adaptive stepsizes from $O(d)$ for $d$ the number of parameters to $O(c)$ for $c$ the number of groups.
Their second proposed method, subspace momentum, computes the momentum update in some $k$-dimensional subspace instead of the full-dimensional space, similar to existing methods like GaLore. However the authors propose simultaneously performing a step of (stochastic) gradient descent in the orthogonal complement of the selected subspace on each iteration. This allows them to obtain convergence guarantees (Thm. 4.1) in contrast with other methods which restrict momentum to some low-dimensional subspace, which cannot guarantee convergence without stronger assumptions about the span of this low-dimensional subspace relative to the objective function. This reduces the memory cost of maintaining historical momentum terms from $O(d)$ to $O(k)$. They suggest using the leading-$k$ singular subspace of a stochastic gradient sample to set the subspace, which is an intuitive rule and closely related to GaLore.
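The SM update described here can be sketched as follows (an illustrative reconstruction from this description, not the authors' code; the function name and signature are assumptions):

```python
import numpy as np

def subspace_momentum_step(grad, P, m_low, beta=0.9, lr=1e-3):
    """One subspace-momentum (SM) step (sketch).

    P: (d, k) orthonormal basis of the chosen subspace (e.g. the leading-k
    singular subspace of a stochastic gradient sample). Momentum is tracked
    only inside the k-dim subspace; the component of the gradient in the
    orthogonal complement gets a plain SGD step.
    """
    g_low = P.T @ grad                    # (k,) projection onto the subspace
    m_low = beta * m_low + (1 - beta) * g_low
    residual = grad - P @ g_low           # orthogonal-complement component
    update = P @ m_low + residual         # momentum inside, SGD outside
    return -lr * update, m_low
```

With beta = 0 the update reduces to plain SGD on the full gradient, since P P^T g + (g - P P^T g) = g; the memory saving comes from storing only the k-dimensional `m_low`.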
Claims And Evidence: Claim 1: Subset-norm (SN) can improve performance over either Adagrad or Adagrad-Norm.
Evidence:
- Thm. 3.1, which the authors describe as capturing a tradeoff between the number of subsets into which parameters are partitioned and the coordinate-wise stochastic gradient noise.
- Fig. 1, which shows that Adam+SN (e.g. Adagrad+SN with full-dim. momentum) minimizes validation perplexity during model pre-training compared with Adam and also has a lower memory footprint (2.6 Gb vs 5 Gb).
Claim 2: Subspace-momentum (SM) can reduce the memory footprint of momentum while still maintaining convergence guarantees.
Evidence:
- Thm. 4.1, which shows that SM is still guaranteed to converge in appropriate settings.
- Fig. 1, which shows that Adam+SNSM minimizes validation perplexity in fewest iterations and has the lowest memory footprint out of Adam, GaLore, and Adam+SN.
Methods And Evaluation Criteria: They consider pre-training of LLaMa models on the C4 dataset. This seems reasonable given their motivating interest in reducing memory footprints for large-scale models. It would be interesting to see if their modifications also improve performance on smaller models/other architectures, as their claims suggest that such improvements should hold in generality, but this may be out-of-scope.
Theoretical Claims: I looked at the proof of Thm. 3.1. The lines I checked seemed correct, and the techniques seem to be reasonable extensions of traditional convergence bounds for Adagrad under L-smoothness assumptions.
Experimental Designs Or Analyses: I did not check the soundness beyond what is presented in the main body.
Supplementary Material: I reviewed the proof of Thm. 3.1, the implementation details for SN (Algorithm 4), and the discussion of related works.
Relation To Broader Scientific Literature: Both of the proposed modifications, SN and SM, are closely related to existing methods. SN is closely related to Adam-mini, which also partitions parameters into blocks and maintains adaptive learning rates per-block. However the partitioning strategy in Adam-mini is related to the block-diagonal structure of the Hessian, while the theoretically-optimal partitioning strategy for SN is related to coordinate-wise stochastic gradient noise levels. However, the practical procedures for parameter partitioning proposed in the appendix are much more simplistic and do not analyze coordinate-wise noise; instead they merely group matrix-valued parameters into blocks of rows and/or columns depending on which dimension is larger.
SM is closely related to low-rank learning strategies, particularly GaLore, which also restricts updates to a low-dimensional subspace chosen based on estimating leading singular subspaces. Unlike GaLore, SM involves also performing an update in the orthogonal complement of the identified subspace, allowing the authors to still derive convergence guarantees.
The authors claim that related methods (e.g. Adam-mini, factorization like Shampoo/SOAP, low-rank parameterization like LoRA and GaLore) either lack theoretical guarantees, sacrifice too much in performance, or greatly increase the cost of parameter tuning.
Zhang, Yushun, et al. "Adam-mini: Use fewer learning rates to gain more." arXiv preprint arXiv:2406.16793 (2024).
Zhao, Jiawei, et al. "Galore: Memory-efficient llm training by gradient low-rank projection." arXiv preprint arXiv:2403.03507 (2024).
Vyas, Nikhil, et al. "Soap: Improving and stabilizing shampoo using adam." arXiv preprint arXiv:2409.11321 (2024).
Gupta, Vineet, Tomer Koren, and Yoram Singer. "Shampoo: Preconditioned stochastic tensor optimization." International Conference on Machine Learning. PMLR, 2018.
Essential References Not Discussed: Adam-mini only mentioned briefly in the appendix, but I think the main idea behind parameter partitioning is as closely-related to SN as GaLore is to SM. I think given the strength of the connection to Adam-mini and the level of discussion that the authors devote to GaLore in the main body, it would be appropriate to discuss Adam-mini in the main body so that readers are aware of the relevant work.
Other Strengths And Weaknesses: Strengths:
- The problem studied (“How can adaptive optimization be improved and made more memory-efficient for large models?”) is very well-motivated.
- The authors focus on adding theoretical analysis to heuristics that have begun to be explored in practice.
- The numerical results show that Adam+SMSN yields good training performance while also having a drastically smaller memory footprint.
Weaknesses:
My overall impression of the paper is that its main contribution is to combine two pre-existing heuristics that already existed in literature. The authors argue that their analysis makes their approach more theoretically-grounded, but the most novel analysis about stochastic noise (Thm. 3.1) is not utilized at all in their implementation, which instead chooses a simple heuristic partitioning of parameters based on matrix rows/columns. If I am mis-understanding the implications of their theory, or not seeing how it is informing their choice of parameter grouping, then I would be happy to increase my score (see Questions).
- Both SN and SM are closely related to existing methods, and the authors argue that one of the central contributions of this work is new theory that better explains these methods. Given the fact that the authors hold up the analysis as a significant portion of the contribution, I think the intended take-aways from the analysis need to be better-explained in text. For example, I found the explanation of the tradeoff between parameter partitioning after Thm. 3.1 to be confusing.
- Moreover, the practical implementations used in the paper seem very divorced from the theory; it is my understanding that when choosing parameter partitions, the authors merely group matrix-valued parameters into either rows or columns (depending on which is larger), whereas the guarantees in Thm. 3.1 depend on coordinate-wise noise levels. The fact that this is the proposed partition also weakens the argument made in the appendix that the SN partitioning scheme is supposedly more theoretically-motivated than the scheme in Adam-mini.
Other Comments Or Suggestions: Parameter $k$ is referenced in “Our Contributions” before it is defined, even informally (line 90, left column). A reasonable reader can infer that $k$ must be related to the dimension of the momentum subspace, but a few words to clarify what is meant by $k$ would be an appropriate addition.
The note about norm notation (norm without subscript defaults to ell-2) is currently located at 126 left-hand column. I would suggest moving it up to the first paragraph on notation: when I was reading the statement of Thm. 3.1, I was very confused about whether the fact that some norms had subscripts and others did not meant that I should assume these norms are different. I could not find the note about norm notation because I looked in the notation paragraph, rather in the sub-Gaussian noise assumption paragraph where it is currently located. I might suggest switching to consistent norm notation within the same equation; in the statement of Thm. 3.1, all global-vector quantities have subscript-norms, while all subset quantities do not have subscripts, potentially adding to readers’ confusion.
Questions For Authors: As discussed in Strengths/Weaknesses, if the authors can clarify the insights provided by their theory and whether these insights are leveraged in their implementation, then I am open to increasing my score.
1. Can you please explain how the result in Thm. 3.1 expresses a trade-off between partition strategies and stochastic gradient noise? It almost seems to me as if, apart from the 4-th order terms in $G(\delta)$, every other part of the guarantee gets worse as $c$ increases: as $c\rightarrow d$, $\sum_i ||\sigma_{\Psi_i}||_2 \rightarrow ||\sigma||_1$, whereas for $c\rightarrow 1$, $\sum_i ||\sigma_{\Psi_i}||_2 \rightarrow ||\sigma||_2$, where $||\sigma||_2 \leq ||\sigma||_1$. On top of that, for $\delta$ fixed, the probability of success $1-O(c\delta)$ gets smaller as $c$ grows, and of course all terms with explicit dependence on $c$ grow. Is my understanding correct?
2. If this understanding is correct, then it seems like in the small-noise regime (e.g. $\sigma_i \leq 1$), the 4-th order terms are not the dominant terms in the expression, and thus all dominant terms get worse as $c$ increases, suggesting that Adagrad-Norm has the best guarantee. Is this correct?
3. As I've written elsewhere, it seems to me that the parameter-partitioning strategy used in practice (group by rows/columns) is not particularly informed by the theoretical results, other than the fact that the theoretical results suggest that in some regimes an intermediate number of groups may outperform both Adagrad and Adagrad-Norm. Is this correct?
4. In Table 3, it is my understanding that the column HP counts the number of hyperparameters per method. Based on this, it seems to me that the authors treat SN as not introducing any additional hyperparameters; this seems slightly misleading; the number of groups in the parameter partition/choice of parameter partitioning creates new hyperparameters. It is true that using a particular fixed strategy (i.e. the row/column grouping, applied only to 2D linear modules) fixes the hyperparameters associated with this procedure, but this is true of any algorithm. For example, the number of hyperparameters associated with GaLore can be reduced if one commits ahead of time to using a particular procedure for picking subspace dimension.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful consideration and constructive feedback. Below, we clarify points raised and address specific concerns.
___
> Question 1 and 2
We answer question 1 and 2 of the reviewer together as they are related:
- The reviewer is correct that arbitrarily increasing the number of subsets $c$ towards $d$ (moving from AdaGrad-Norm towards AdaGrad-Coordinate) would yield a suboptimal dependency, and each n-th order noise term requires more careful balancing of the subset size. Our derivation in Appendix B.2 shows the effect of the number of subsets $c$ on all n-th order noise terms that appear in the bound.
In particular, Table 2 shows that the grouping into $c=d^{2(1+\beta)/5}$ groups of size $d^{(3-2\beta)/5}$ each (which depends on the fraction of noisy coordinates) gives a better dependency than $c=1$ (AdaGrad-Norm) or $c=d$ (AdaGrad-Coordinate) in most noise-density $\beta$ settings.
- And as the reviewer points out in question 2, in the low-noise regime AdaGrad-Norm would indeed attain the best guarantee (second row of Table 2), and our grouping strategy of $c=d^{2(1+\beta)/5}$ groups would be suboptimal. There, a more careful case analysis to handle edge cases could help attain a more optimal strategy.
- Regarding the reviewer’s point on $\delta$, the failure probability grows with $c$, but since our guarantee is high-probability, the dependence is at worst $\log(d/\delta)$, so it is not the dominant term in the final bound.
- Finally, we thank the reviewer for the attention to details and constructive feedback. We will incorporate the discussion above and revise the writing in the section after Theorem 3.1 to improve clarity and intuition.
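As a quick numeric sanity check of the grouping arithmetic above (an illustrative sketch, not from the paper's code), the group count $c = d^{2(1+\beta)/5}$ and group size $d^{(3-2\beta)/5}$ multiply back to exactly $d$:

```python
def grouping(d, beta):
    """Group count and group size from the strategy discussed above."""
    c = d ** (2 * (1 + beta) / 5)       # number of groups
    size = d ** ((3 - 2 * beta) / 5)    # coordinates per group
    return c, size
```

For any noise density beta, the exponents sum to 1 ((2 + 2*beta + 3 - 2*beta)/5 = 1), so c * size = d up to floating-point error.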
> Question 3
- Our theory provides an optimal grouping strategy that depends on the noise density. However, in practice, we must trade off the cost to figure out a good grouping and the performance gain from it.
- The key to the theoretical improvement is grouping coordinates with similar noise magnitudes together. However, any expensive method to figure out these groups (e.g. the Hessian in Adam-mini) would have detrimental effects on the wall-clock time and memory (which are pointed out as crucial concerns by reviewer t4t5). Instead, our heuristic is meant to be a simple method to capture most of these groups.
- Intuitively, coordinates in the same row/column either act on the same input or are used to compute the same output. The noise and normalization on each input and output would affect coordinates in the same row/columns in a correlated way.
To provide some evidence, we perform the experiments in Section D.1 again but with the noise grouped by the corresponding dimension according to the heuristics. We show it in the plot at https://imgur.com/a/uKfHJ1B.
- There we see that most groups have very low noise (very close to 0, namely less than $10^{-12}$) while a small number of groups (top 1 percentile in the annotation) have much larger noise.
- Overall, our heuristics aim to capture the similar noise coming from the same inputs and outputs. Our experiments suggest that this is a major part of the gain. There might be other simple sources of correlation in the noise magnitudes, which we leave for future work.
> Question 4
- We agree with the reviewer. If there is an effective adaptive strategy to picking the subspace dimension, then one should not count that as a hyperparameter. Unfortunately, at this time, we are not aware of an effective strategy.
- We do not tune for the number of groups. With tuning, as in Figure 5, the performance can be improved. We will rename that column to more precisely reflect the number of tunable parameters as opposed to hyperparameters. The number of tuning runs is a more accurate measure of tuning cost but is not easily obtainable.
___
Other points
> modifications also improve performance on smaller models/other architectures
- Please see our response to Reviewer oW9K for additional experiments on vision tasks.
> discuss Adam-mini in the main body so that readers are aware of the relevant work
- We want to note that Adam-mini is concurrent work to ours, to be published in ICLR this year, but we agree with the reviewer that Adam-mini is highly relevant and including a detailed comparison in the main body improves the quality of our paper. We will include a more detailed comparison in the main body of our revision with a draft as in our response to reviewer 6d8c.
> clarify what is meant by k would be an appropriate addition.
> move norm notation to first paragraph.
We thank the reviewer for the suggestions and attention to detail. We will clarify the meaning of $k$ and include all the norm subscripts in theorems for clarity in our revision.
___
We sincerely appreciate your detailed reviews and constructive suggestions. We believe that the discussion and additional experiments improve the quality of our submission and hope this provides sufficient reasons to raise the score.
---
Rebuttal Comment 1.1:
Comment: >in the low noise regime, AdaGrad-Norm would attain the best guarantee (second row of Table 2)
I am still a little confused about this point. It seems like by re-scaling the loss-function and the gradient estimates by the same scalar constant, one could arbitrarily change the values of $\sigma_i$. Note that in practice, gradient estimates are derived from mini-batch gradients, so re-scaling the loss function by a scalar would re-scale these estimates by the same scalar. This seems to suggest that a re-scaling of the loss function can move one into and out-of the regime where Thm. 3.1 implies a benefit from parameter grouping. This is very counter-intuitive to me; re-scaling the loss function preserves the geometry of the optimization problem and does not usually drastically change guarantees. Am I over-looking something?
---
Reply to Comment 1.1.1:
Comment: In the context of Table 2, the gain of grouping comes not from the magnitude of the noise but rather the *density* of the noise. In Table 2, we mean low-noise as in low noise density, i.e. $\beta=0$ means that only 1 coordinate contains noise. While scaling the loss would indeed scale the noise magnitude, it would not impact the noise density, which is the primary contribution to the dimensional dependency that our analysis aims to target. Our Theorem 3.1 is also largely scale invariant: if we, say, double all the loss functions, the LHS of Theorem 3.1 goes up by a factor of 4, but $G(\delta)$ (containing $f(x_1)-f(x^*)$ as in Appendix F) and the sum of $||\sigma_{\Psi_i}||$ also go up by a factor of 2 each, so it is still the same statement.
---
Summary: This paper proposes two techniques, Subset-Norm (SN) and Subspace-Momentum (SM), to reduce the memory footprint of adaptive optimization methods (like Adam and AdaGrad) when training large deep learning models, particularly large language models (LLMs). Subset-Norm reduces the memory of the adaptive step-size term by sharing it across subsets of parameters. Subspace-Momentum reduces momentum memory by restricting momentum updates to a low-dimensional subspace and using SGD in the orthogonal complement. The paper provides theoretical convergence guarantees for both methods and demonstrates empirically that combining them (SNSM) improves both memory efficiency and performance (lower perplexity) compared to standard adaptive optimizers on LLM pre-training and fine-tuning tasks.
Claims And Evidence: Most claims are supported by evidence, but some require further strengthening. For example, the main claim is that SN and SM improve performance and memory efficiency. While the empirical evidence is generally strong, the gains are sometimes modest, and the "why" behind the performance improvement of SM is not fully explained.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem.
Theoretical Claims: I didn't thoroughly check the correctness of the proofs.
Experimental Designs Or Analyses: The experimental setup is generally well-described, following standard practices for LLM pre-training and fine-tuning. The ablation studies on subset size and subspace selection are useful. However, in addition to Adam and GaLore, a wider range of baselines would strengthen the empirical evaluation.
Supplementary Material: The supplementary material is quite extensive and contains important details.
Relation To Broader Scientific Literature: The paper's contributions are related to memory-efficient adaptive optimization and LLM training.
Essential References Not Discussed: The paper cites many relevant works, but some important ones are not included in the comparison such as Adam-mini and FLORA.
Other Strengths And Weaknesses: Strengths: The problem of optimizer memory is clearly motivated, and the proposed solutions are intuitive.
Weaknesses: Compared to standard adaptive optimizers, the proposed methods are fairly complex, and the reliance on SVD for subspace selection in SM is a computational bottleneck.
Other Comments Or Suggestions: None.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful consideration and constructive feedback. Below, we clarify points raised and address specific concerns.
___
> the "why" behind the performance improvement of SM is not fully explained.
This is a limitation general to the class of algorithms that utilize momentum, and it is an important open challenge for the community at large because the theoretical bounds with and without momentum are the same. We believe that any advance for SGD with momentum will lead to a better understanding of SM in our setting.
> in addition to Adam and GaLore, a wider range of baselines would strengthen the empirical evaluation.
We will add comparison with the following methods to Table 3 of our revision for the pretraining task. Some comparisons are shown below:
| Method | LLaMA 60M (1.3B tokens) | LLaMA 130M (2.62B tokens) | LLaMA 350M (7.86B tokens) | LLaMA 1B (13.1B tokens) |
|-----------------|-------------------------|----------------------------|----------------------------|--------------------------|
| AdamSNSM (ours) | 29.74 | 22.43 | 16.91 | 13.96 |
| AdamW | 30.46 | 24.60 | 18.67 | 16.00 |
| GaLore | 34.73 | 25.31 | 20.51 | 16.76 |
| FLORA | 32.52 | N/A | 23.69 | N/A |
| LoRA | 34.99 | 33.92 | 25.58 | 19.21 |
| ReLoRA | 37.04 | 29.37 | 29.08 | 18.33 |
> some important ones are not included in the comparison such as Adam-mini and FLORA.
We currently cite and compare with these works in the related works, Section A of the Appendix. However, we will include a more detailed comparison to Adam-mini in our revision; a draft is included below:
"While Adam-mini also employs a grouping strategy for the adaptive step size, it is primarily motivated empirically and lacks a general grouping strategy for arbitrary parameters. Adam-mini gives a heuristic approach to form groups for transformers’ parameters without showing theoretically what this grouping is trying to achieve, whether that goal is achieved, or why convergence would improve. In contrast, our theoretical results show that grouping by noise magnitude leads to improvement. Our heuristic is an efficient method toward this goal, and the experiments validate this by showing that our groups successfully isolate the noise into a small number of groups and, perhaps as a consequence, achieve improved performance (also see our additional experiments below). We further demonstrate in Table 2 that there are scenarios where subset-norm attains better convergence than existing methods.
In experiments, our AdamSNSM uses less memory than Adam-mini, due to the fact that Adam-mini uses full momentum while we use momentum only in a subspace (which outperforms full momentum in many cases given a good choice of subspace). Furthermore, in terms of perplexity, Adam-mini performs very closely to AdamW while our methods outperform Adam (which performs similarly to AdamW) on a range of language tasks and model sizes."
> Compared to standard adaptive optimizers, the proposed methods are fairly complex
The theories we developed for SN and SM are general and allow for many options, including future works. However, as reviewers t4t5 and KhrK pointed out, we also developed specific implementations that are simple and effective.
> The reliance on SVD for subspace selection in SM is a computational bottleneck.
As demonstrated in Table 6 on page 8, our methods do not depend on the SVD update gap to be as frequent as GaLore. In contrast to GaLore, our algorithms (e.g. AdaGradSNSM) could even benefit from a larger update gap as in Table 6, further reducing the cost of SVD, while achieving the best performance out of all baselines. Section C.4 of the Appendix contains additional ablation studies on the projection gap and rank which control the additional overhead cost and benefit from the projection computation.
Finally, please refer to our response to reviewer t4t5 which contains additional results showing the wallclock time and peak memory for different optimizers.
____
We sincerely appreciate your detailed reviews and constructive suggestions. We believe that the discussion and additional experiments improve the quality of our submission and hope this provides sufficient reasons to raise the score. | null | null | null | null | null | null |
Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search | Accept (poster) | Summary: This paper proposes a post-training framework comprising two stages: (1) Chain-of-Action-Thought fine-tuning, i.e., format fine-tuning; (2) self-improvement RL, i.e., iterative distillation and RL. For the first stage, a multi-agent data synthesis framework is constructed using Qwen-2.5-Math-Instruct and Llama-3.1-70B-Instruct. For the second stage, a rule-based reward, reflection bonuses, and preference bonuses are incorporated into the final reward. The authors evaluate the resulting model on GSM8K, MATH500, AMC2023, AIME2024, and OlympiadBench, achieving performance comparable to other LLMs.
## Update
Thanks for the detailed response. I summarize my concerns as follows:
1. Overclaim of performance. As shown in Table 1, the proposed method did not outperform the baselines. Since this paper has no theoretical analysis, the authors should highlight the ups/downs and the numbers compared to the baselines.
2. Need more statistics on the augmented data. The new format of RL data is the core contribution. For a better understanding, the authors should provide an ablation on the data.
3. This paper reads more like a technical report than an academic paper, covering data format, integrated reward, and so on. The authors should provide more insights for the academic community.
Finally, I will keep my score.
Claims And Evidence: There are some over-claims:
1. "without external guidance" -> this paper adopts a multi-agent system in data construction, including Qwen-2.5-Math-Instruct and Llama-3.1-70B-Instruct. If the authors want to illustrate efficiency, I suggest they provide more evidence such as cost, FLOPS, or other efficiency metrics.
2. "superior performance" -> GSM8K: 93.9 (theirs) vs. 95.2/91.6 (Qwen w/ math). The same situation applies to MATH500. They do, however, achieve a 4-point average improvement on OlympiadBench.
Methods And Evaluation Criteria: Yes, this paper uses reasonable metrics for evaluation.
Theoretical Claims: This paper does not provide any theoretical results or insights.
Experimental Designs Or Analyses: Yes, this paper's experimental designs are reasonable.
Supplementary Material: No; if needed, I will review the supplementary material during the rebuttal.
Relation To Broader Scientific Literature: The key difference between this paper and current literature is the fine-tuning data format and reward design in RL. They incorporate all suitable formats for self-improving fine-tuning. And for reward designs, they propose reflection bonuses and preference bonuses.
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: The main concerns of this paper are:
1. there are no theoretical results.
2. the complex and diverse training system design ultimately achieves only comparable performance, and the results presented in the paper do not support the claims made in the introduction.
Other Comments Or Suggestions: If the paper includes more analysis on reinforcement learning theory, it would provide deeper insights.
Questions For Authors: - If you perform SFT using only your synthesized data in stage 1, would the performance differ from the current results?
- And is there an R1-style comparison?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **1. Over-claim about "without external guidance".**
We would like to clarify that **“external guidance” specifically refers to guidance provided by another LLM verifier at inference time (see Abstract, lines 18-19)**. Many existing LLM reasoning methods rely on extensive sampling and guidance from a verifier model (e.g., PRM-guided tree search). Regarding the cost of the multi-agent system, we believe this is analogous to the inevitable cost of curating most LLM training datasets. We will clarify these points in the revision.
**2. Over-claim about "superior performance".**
We would like to highlight that the **performance gains on challenging MATH benchmarks are substantial**. For example, we achieved +5% on OlympiadBench, +10% on AMC2023, and +3.3% on AIME2024 compared to the best available 7B models. Moreover, **on our out-of-domain generalization experiments**, our method consistently outperforms baseline models **with an average improvement of 8.3% (see Table 2)**. While we acknowledge that the paper review may carry some degree of subjectivity, we would also like to note that **all three other reviewers appreciate the strong empirical performance**. However, we would like to tone down the performance claims in the revision.
**3. No theoretical results.**
We acknowledge the reviewer’s concern regarding the lack of theoretical analysis, as our proposed training framework currently does not come with a rigorous theoretical justification. However, we would like to note that conducting a comprehensive theoretical analysis in the context of LLMs, especially for complex RL training pipelines, is highly non-trivial and beyond the scope of this work. We also agree that some components, particularly the proposed Restart and Explore (RAE) technique, merit further investigation. We view this as an important direction for future work and include some preliminary theoretical insights about RAE as a starting point for such analysis.
RAE modifies the initial state distribution by starting new rollouts not only from dataset-sampled prompts but also from random partial trajectories, i.e., prompts concatenated with partial responses generated by intermediate policies. This design can be justified by Theorem 6.2 of the Conservative Policy Iteration (CPI) paper [1]. It states that if $\pi$ is an approximately optimal policy learned from one initial distribution $\mu$, and $\pi^*$ is the true optimal policy for a different initial distribution $\tilde{\mu}$, then
$$
\eta_{\tilde{\mu}}(\pi^*) - \eta_{\tilde{\mu}}(\pi)
\ \le \
\frac{\epsilon}{(1-\gamma)^2}
\left\|\frac{d_{\pi^*, \tilde{\mu}}}{\mu}\right\|_\infty,
$$
where $d_{\pi^*, \tilde{\mu}}$ is the stationary state distribution of $\pi^*$ under $\tilde{\mu}$, $\gamma$ is the discount factor, and $\epsilon$ is small approximation error. The concentrability coefficient $\bigl\|\frac{d_{\pi^*, \tilde{\mu}}}{\mu}\bigr\|_\infty$ measures how badly $\mu$ misses states that $\pi^*$ visits. If $\mu$ puts little or no mass on those important states, the policy learned from $\mu$ can perform poorly under $\tilde{\mu}$.
In the context of LLMs, the "initial state" corresponds to the prompt prefix. Training with a narrow or fixed set of prompts results in a limited $\mu$, which may place little to no mass on the key states an optimal policy should visit, thereby inflating the concentrability coefficient. RAE mitigates this issue by broadening $\mu$ through randomized partial rollouts, effectively augmenting the initial state distribution with more diverse and relevant prefixes encountered during training. While we do not have access to the true stationary distribution $d_{\pi^*, \tilde{\mu}}$, this augmentation strategy reduces distribution mismatch and narrows the theoretical performance gap between $\pi$ and $\pi^*$. This aligns with the conclusion in [1], which argues that more uniform state coverage leads to tighter bounds.
We plan to explore a more formal and comprehensive analysis of RAE in future work.
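To illustrate, here is a minimal sketch of the restart-distribution idea described above (the names and sampling scheme are hypothetical, not Satori's actual implementation): a rollout either resumes from a stored (prompt, partial response) prefix or starts from a fresh dataset prompt, which broadens the initial-state distribution $\mu$.

```python
import random

def sample_restart_state(prompts, partial_buffer, restart_prob=0.5, rng=random):
    """Return an initial state for a rollout: with probability `restart_prob`,
    restart from a stored (prompt, partial response) prefix; otherwise start
    from a fresh dataset prompt. This broadens the initial-state coverage."""
    if partial_buffer and rng.random() < restart_prob:
        prompt, partial = rng.choice(partial_buffer)
        return prompt + partial  # resume mid-trajectory
    return rng.choice(prompts)
```

With `restart_prob=0` this reduces to ordinary prompt sampling; increasing it places more mass on intermediate states visited by earlier policies.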
**4. Question-1: If you perform SFT using only your synthesized data in stage 1, would the performance differ from the current results?**
**We kindly refer the reviewer to the “Large-scale FT vs. Large-scale RL” ablation study in Section 6**, where we have demonstrated that performing SFT using only the large-scale synthesized data in Stage 1 is sub-optimal compared to large-scale RL training.
**5. Question-2: Is there an R1-style comparison?**
Please refer to [**our response to reviewer wWt8**](https://openreview.net/forum?id=j4FXxMiDjL&noteId=lC8KFYxRMy) for details.
**Reference**
- [1] Approximately optimal approximate reinforcement learning. ICML, 2002.
***
**If these clarifications satisfactorily address the reviewer's concerns, we kindly ask if the reviewer would consider updating the score to reflect what we believe is a paper with noteworthy contributions to the community.** | Summary: The paper presents a method called Satori, which could enhance the reasoning abilities of LLMs. It does this through Chain-of-Action-Thought (COAT), a system that adds special “meta-action” tokens (like ``<|reflect|>`` and ``<|explore|>``) to regular chain-of-thought prompts. These tokens let the model pause to check its work or try a different approach on its own. The authors train the model in two main stages: first, they fine-tune it on a small set of example trajectories to teach it how to use the new tokens. Second, they apply RL at a much larger scale, which allows the model to generate new solutions, find and fix its mistakes, and gradually improve how well it reasons.
Claims And Evidence: Yes. It is clear and convincing
Methods And Evaluation Criteria: Yes. It makes sense.
Theoretical Claims: They make no theoretical claims
Experimental Designs Or Analyses: They evaluate performance on datasets including GSM8K, MATH, and several out-of-domain tasks. Their experiments validate their claim that Satori can both solve challenging math problems and generalize to new domains.
Supplementary Material: I only read through the section A (Satori’s Demo Examples) and C (Details about Data Synthesis Framework)
Relation To Broader Scientific Literature: I understand that the authors mentioned the r1 model in their paper. However, I still want to mention the following fact:
Because o1 and o3 models are not open-sourced, it is unclear how OpenAI trained them. However, DeepSeek’s r1 shows performance comparable to o1 and reportedly uses a cold-start reinforcement learning approach, which seems to let the language model develop reflection skills on its own (the “Aha moment”). In contrast, the paper in question relies on a specialized dataset to teach reflection-like abilities. Compared with r1’s algorithm, this approach has several constraints: (1) it requires format tuning prior to self-improvement, limiting its flexibility to learn other skills, and (2) this format tuning could negatively affect the model’s existing capabilities. Moreover, their final results still lag behind o1, suggesting that while their method works, it may be somewhat behind the latest approaches.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Their method is undeniably powerful, and they provide strong evidence for its effectiveness. However, it remains unclear whether this approach can surpass the latest open-sourced r1 model.
Other Comments Or Suggestions: No other comments
Questions For Authors: In many RL settings, methods that rely heavily on pre-trained or trainable critic models can be easily hacked by LMs. How does your approach mitigate these risks? Could you explain any strategies you employ to ensure the policy cannot simply exploit the critic’s learned features?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. R1 uses a cold-start RL to develop reflection skills on its own (the “Aha moment”), but the paper relies on a specialized dataset to teach reflection-like abilities.**
We would like to respectfully clarify several important points:
- **R1 is a concurrent work and comparison should not be required:** r1 was officially released around the same time as the ICML 2025 abstract deadline (end of January). **According to the reviewer guidelines, we are not required to compare with or even mention r1.** However, we chose to acknowledge r1 as a concurrent work because we believe it offers valuable context for the research community amidst the recent interest it has generated.
- **Clarifying potential misunderstanding:** We respectfully suggest that the reviewer may have some confusion regarding r1. (1) Deepseek introduces two models: r1-zero, which is trained using pure RL from a base model, and r1, the actual released model. R1 includes a SFT stage before RL, where the SFT data is explicitly designed with a reflection pattern (see Section 2.3.1 in [1]). Thus, **r1 employs a similar strategy to our format-tuning stage, although its implementation details are not disclosed**. (2) **Rather than limiting capabilities, this SFT stage is helpful to achieve more effective RL**. Indeed, the r1 authors acknowledge the limitations of r1-zero without SFT (see Section 2.2.4 in [1]), including poor readability and language mixing issues. Furthermore, recent work [2] supports the effectiveness of incorporating SFT before RL. Several studies have also questioned the validity of the “Aha moment” claimed by r1-Zero and suggest “Aha moment” could be a mirage (see [3]), which challenges the reviewer's claim that "r1 can develop reflection skills on its own".
- **Comparison results provided:** Please refer to [**our response to reviewer wWt8**](https://openreview.net/forum?id=j4FXxMiDjL&noteId=lC8KFYxRMy).
**2. The performance still lags behind o1 and open-sourced r1 model.**
- **Unfair comparison:** We respectfully argue that it is not reasonable to compare the performance of a research prototype with that of large-scale industry systems. R1 contains over 600B parameters, whereas our model is based on a 7B LLM. Furthermore, the scale of training data and infra used by o1 and r1 is not publicly disclosed but is far beyond what is accessible in an academic setting.
- **R1 is not truly open-sourced:** While o1 is certainly closed-source, we would also like to clarify that r1 is not fully open-sourced. Only the model weights have been released. The training data, codebase, and detailed pipeline remain proprietary. Over the past two months, several efforts have attempted to reproduce r1’s performance, but few—if any—have succeeded in matching it, which further suggests that important implementation details are missing.
- **Research Objective:** o1 and r1 are undoubtedly successful, but their performance is largely attributed to extensive resources. In contrast, the goal of academic research is not to achieve SOTA benchmark results, but rather to explore novel methodologies orthogonal to scaling up resources. For instance, our proposed RAE technique effectively mitigates the sparse reward issue in RL, which remains a major challenge for complex reasoning tasks and is under-explored in prior works. Moreover, RAE could potentially be integrated into r1’s training framework as well. We respectfully note that if benchmark performance were the sole criterion for evaluation, then almost no academic work (including LLM reasoning papers submitted to ICML 2025) could surpass current industry-developed models such as r1 or o1.
**3. How does your approach mitigate the risk of critic model hacking?**
- **Request for clarification**: In RL, the term “hacking” typically refers to reward hacking, where the model exploits flaws in a reward model to receive undeservedly high scores. By contrast, the critic model in PPO is used for estimating advantages via GAE and does not directly influence the reward signal. We kindly ask the reviewer to clarify whether the concern regards the reward model or PPO’s critic model.
- **Our approach**: Satori utilizes a hybrid reward mechanism with both a reward model and rule-based reward. (see Section 4.2). While it is possible that the reward model may become less reliable during later stages of training, our design ensures that the rule-based reward remains dominant, mitigating the risk of reward hacking.
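As a sketch of such a hybrid design (the weights and function below are illustrative assumptions, not Satori's actual reward), the rule-based correctness term is weighted so it dominates the learned reward-model score:

```python
def hybrid_reward(is_correct, rm_score, used_reflection,
                  w_rule=1.0, w_rm=0.1, w_reflect=0.05):
    """Combine a dominant rule-based correctness reward with a smaller
    reward-model score and a reflection bonus (illustrative weights).
    Keeping w_rm small limits the impact of a drifting reward model."""
    rule = w_rule if is_correct else -w_rule
    bonus = w_reflect if used_reflection else 0.0
    return rule + w_rm * rm_score + bonus
```

Because the rule-based term has the largest weight, an inflated reward-model score cannot flip an incorrect answer into a net-positive reward.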
**Reference**
- [1] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, Jan, 2025.
- [2] Demystifying Long Chain-of-Thought Reasoning in LLMs, Feb 2025.
- [3] There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study, Feb, 2025.
***
**If these clarifications satisfactorily address the reviewer's concerns, we kindly ask if the reviewer would consider updating the score to reflect what we believe is a paper with noteworthy contributions to the community.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful and detailed response. I now understand that r1 is a concurrent effort and therefore need not be included in the comparisons for this conference.
I’d like to clarify one small point. You noted that “this SFT stage is helpful to achieve more effective RL,” but [1] reports that r1‑zero outperforms r1 on reasoning tasks, implying that format tuning can sometimes reduce model performance even after RL. I thought this nuance might be helpful to flag.
After reviewing your explanation, I realize my earlier evaluation was too harsh, so I have raised my score accordingly. Your framework is indeed a fair and promising direction for future language‑model training.
Thank you again for the engaging discussion. | Summary: This work studies the problem of post-training LLMs for self-reflection and self-exploration capabilities. A training scheme termed COAT is proposed, which consists of two stages: (1) a small-scale SFT stage to initiate the COAT reasoning format; (2) a large-scale RL fine-tuning stage to further enhance the self-reflection/exploration capabilities. For the SFT phase, a generator-critic-reward multi-agent data synthesis framework is used to construct high-quality demonstration trajectories. For the RL phase, a restart-and-explore strategy is adopted, letting LLMs reflect starting from intermediate steps, and the reward is carefully designed to combine a rule-based correctness reward, reflection bonuses, and an additional preference bonus. Extensive results demonstrate the effectiveness of COAT on reasoning tasks, in particular the improvement over base models.
Claims And Evidence: Yes, the claims in this work are made cautiously with supporting evidences.
Methods And Evaluation Criteria: Yes, the proposed method is intuitively reasonable, a SFT phase to get initialized and a RL phase to further boost the performance. The evaluation is also comprehensive with in-domain and out-of-domain tasks with many other metrics presented.
Theoretical Claims: NA
Experimental Designs Or Analyses: I have checked the results provided in the main paper and the results are convincing.
Supplementary Material: I have carefully reviewed appendix C and skimmed through other parts.
Relation To Broader Scientific Literature: The study of reasoning capability of LLMs are important to promote their usage in broader scientific domain, and I believe this work contribute to this field.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The paper is written very clearly, and it has been a pleasure to read it. The idea is nicely presented and the effectiveness is demonstrated with comprehensive experiments.
Other Comments Or Suggestions: NA.
Questions For Authors: One question that I would love to hear the author's opinion on: while this work considers a "more developed" framework to enhance models' self-improvement capability (e.g., use a SFT phase first to get COAT format and a large-scale RL phase), is it possible to compare with more naive approaches (e.g., the ones adopted in DeepSeek-R1 with only RL finetuning)? In particular, I would love to see both the performance comparison and the self-improvement pattern comparison.
I might have missed similar results in the paper. If that is the case, please provide a pointer.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Compare with more naive approaches (e.g., the ones adopted in DeepSeek-R1 with only RL finetuning).**
We appreciate the reviewers' interest in the comparison between our method and the RL-only approach (r1-zero). As noted in our response to reviewer WTyn, r1 is a concurrent work, and such comparisons are not expected according to ICML reviewer guidelines. Nonetheless, we are happy to provide this comparison and share the observations and insights we have gained, in the spirit of contributing to the broader research community.
The results and discussions can be found at https://docs.google.com/document/d/e/2PACX-1vRk1wLF9fsEVPFP2ijxNUd82LPnWZ2w-zTxVNGkkrof9yy36BNrM47JiCr6r8bga9c2Sr3q0a-S0oFF/pub
* **Apple to apple performance comparison of r1-zero and Satori:**
To fairly compare our method with r1-zero, we keep all experimental settings consistent, including the base model, training data, and RL training framework. The only differences lie in the algorithms used, our method employs PPO with RAE, while r1-zero uses GRPO, and whether an additional SFT stage is incorporated (ours includes Format Tuning and r1-zero starts RL training from the base model). **We evaluated r1-zero on the same in-domain and out-of-domain benchmarks, and the results show no advantage over our method**. This suggests that **the perceived strong performance of the released DeepSeek-r1 model is likely due to significant engineering investment**, such as larger-scale in-house data, a much larger base model, and more powerful infrastructure. Moreover, **we observed several challenges and unexpected behavior when training r1-zero**, which we detail below.
* **Training of r1-zero could be unstable:**
We found that training r1-zero can be quite unstable. For instance, **rewards may drop sharply in later training stages, and the model sometimes begins generating repetitive random sequences**. This instability is especially common when the KL penalty is set to zero (i.e., no regularization). We hypothesize that without an SFT stage to anchor the model's behavior, the base model may produce harmful outputs early on, which can lead to a vicious cycle during optimization.
* **R1-zero's behavior is hard to control:**
Even when training converges, R1-Zero exhibits undesirable behaviors at inference time. Specifically, we observed two representative issues: (1) **Repetitive responses**, where the model generates repeated tokens or sentences mid-generation. (2) **Language mixing**, where responses may unexpectedly contain characters from other languages (e.g., Chinese), a phenomenon also noted by the r1 authors. These findings suggest that pure RL fine-tuning lacks model behavior constraints. While more carefully designed reward functions might help, we argue that a simpler and more effective solution is to incorporate a SFT stage, as we did with Format Tuning, to stabilize model’s initial behavior.
* **R1-zero shows some "reflection pattern", especially using Python code verification:**
We observe that r1-zero indeed demonstrates some reflection behavior after RL training, but we found **it tends to use Python code to verify answers, even when this is unnecessary**. For example, in a multiple-choice question from the MMLU-Pro dataset (which may not require logical verification), the model attempts to verify the answer using Python, which is a redundant step in that context. This behavior suggests that without appropriate constraints, **cold-start RL can introduce unexpected patterns, beyond the desired problem-solving skills**. Moreover, a recent study has also questioned the validity of the "Aha moment” claimed by r1-zero and suggests “Aha moment” could be a mirage [1].
* **Our final takeaway: SFT warm ups, RL improves.**
Our view is that **neither pure SFT nor pure RL (as in r1-Zero) is sufficient for effective LLM post-training**. Pure SFT can overly constrain the model and limit its ability to generalize beyond demonstrations. Pure RL can lead to unexpected and unreasonable behaviors, as observed with r1-zero. Instead, **we advocate for an appropriate combination of both**. This is the key insight we propose: a small-scale warm-up SFT helps the base model to get familiar with certain reasoning patterns, and a large-scale RL helps the model to utilize such reasoning patterns to incentivize its actual reasoning capability. This idea is also consistent with practices in the classical RL literature. For example, [2] demonstrates that a small amount of demonstration data can accelerate RL training by improving policy initialization. Moreover, the released DeepSeek-r1 model (as opposed to r1-Zero) also incorporates a SFT stage before RL (though the details of such SFT stage are not disclosed), which aligns with our insight.
**Reference**
- [1] There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study, Feb, 2025.
- [2] Deep Q-learning from Demonstrations, AAAI, 2018. | Summary: In this paper, the authors proposes Satori as a framework for LLM reasoning. It is a two-stage framework, including Format Tuning and Self-improvement, to enhance LLM reasoning capabilities. The core contribution of this paper is the Chain-of-Action-Thought (COAT) framework, which structures LLM reasoning with meta-action tokens (<|reflect|> and <|explore|>) to enable self-correction and exploration. The Self-Improvement stage utilizes RL, the Restart and Explore (RAE) algorithm, to efficiently train the model to use the COAT format effectively. Experiments on math and out-of-domain benchmarks demonstrate that Satori outperforms existing models, exhibiting strong generalization and test-time scaling behavior.
Claims And Evidence: Claims of superior performance and generalization are *partially* supported by empirical evidence.
* Strengths: Strong performance on math and out-of-domain benchmarks compared to baselines. Ablation studies show the contribution of reflection bonus and RL. Qualitative examples illustrate COAT reasoning.
* Weaknesses: Statistical significance is not explicitly discussed. Ablations are limited. Qualitative analysis is anecdotal. Benchmark scope is somewhat limited, and some benchmarks may be saturating. Lack of human-level performance comparison. Overall, while performance gains are shown, the strength and generality of the claims could be better supported with more rigorous and systematic evidence.
Methods And Evaluation Criteria: Proposed methods (COAT + RAE) and evaluation criteria (math and reasoning benchmarks) are generally sensible for the problem.
* Strengths: COAT provides a structured approach to reasoning. RAE addresses RL challenges in reasoning. Benchmarks are relevant and challenging. Out-of-domain evaluation is a strong point.
* Weaknesses: Evaluation primarily focuses on final answer accuracy (pass@1). Metrics that evaluate reasoning process quality would be valuable. Benchmark scope is somewhat limited.
Theoretical Claims: This paper doesn't contain theoretical claims. No theoretical claims or proofs were checked.
Experimental Designs Or Analyses: Experimental designs and analyses are generally sound, but could be strengthened.
* Strengths: Ablation studies provide some insights. Multi-agent data synthesis framework is a creative approach.
* Weaknesses: Ablations are limited in scope. Lack of statistical significance testing. Qualitative analysis is anecdotal. No error analysis of failure modes.
Supplementary Material: Supplementary material was reviewed, specifically Appendix A (Demo Examples), Appendix C (Data Synthesis Details), and Appendix D (Experimental Setup Details). The supplementary material provides helpful details and examples to understand the method and experiments.
Relation To Broader Scientific Literature: * Relation is clear: The paper clearly relates to the broader literature on LLM reasoning, CoT prompting, test-time search, self-improvement, and RL for LLMs.
* Specific relations: Builds upon CoT by adding meta-actions. Extends RL for reasoning by proposing RAE. Presents COAT as an alternative to data-intensive CoT fine-tuning and computationally expensive test-time search methods. Relates to SoS as concurrent work on training single LLMs for search, but argues for broader applicability of COAT.
Essential References Not Discussed: No essential references appear to be missing based on my current understanding of the literature.
Other Strengths And Weaknesses: Strengths:
* Originality: Novel combination of COAT framework, RAE algorithm, and two-stage training for reasoning.
* Significance: Addresses an important problem (enhancing LLM reasoning) with a practical and efficient approach. Demonstrates promising empirical results and generalization.
* Clarity: Paper is generally well-written and easy to understand, especially with figures and examples.
Weaknesses:
* Incremental Novelty: Novelty is more in the combination than in fundamentally new concepts.
* Limited Rigor in Evaluation: Evidence could be strengthened with more systematic evaluations, statistical significance testing, and more detailed ablations.
* Justification of Design Choices: Some design choices (meta-actions, RAE parameters, reward function) could be more thoroughly justified and explored.
Other Comments Or Suggestions: * Consider adding error analysis to understand common failure modes and how COAT helps address them.
* Minor comment: In Figure 1, "Trajectories Generation" could be rephrased to "Initial Trajectories Generation" for clarity.
Questions For Authors: 1. Have you tried to ablate <|reflect|> and <|explore|> meta-actions separately? What if <|explore|> is not used at all?
2. Have you observed how COAT helps to mitigate the common failure modes? Which failure modes are best mitigated by COAT?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. Ablations and analysis are limited: Some design choices (meta-actions, RAE parameters, reward function) could be more thoroughly justified and explored.**
We thank the reviewer for the suggestion. **We have conducted additional ablation studies to further analyze our design choices at** https://docs.google.com/document/d/e/2PACX-1vQ5lTnuQ5x6bx5Qh87cVdEq5iyrRIMLi5DYVHQUbrqB3f2Gye6mn0bHLwMcVosqddg_wp6P2JdEMfJM/pub.
- Meta-actions: Please refer to rebuttal response 6.
- RAE parameters: We clarify that RAE does not introduce any new tunable hyperparameters. However, to evaluate its impact, we perform an ablation study by training a model without applying RAE. The results show significant performance degradation, confirming that RAE is crucial for RL optimization.
- Reward function: We have already ablated the reflection bonus in Appendix E. In addition, we include another ablation study that removes the preference bonus provided by the outcome reward model. The performance degrades in this setting, indicating that the preference bonus plays an important role in mitigating the sparse reward issues.
**2. Analyze common failure modes and how COAT helps address them.**
We thank the reviewer for the suggestion. **We have conducted a failure mode analysis (see provided link above)**. Specifically, we find that COAT effectively identifies and mitigates five common failure modes in reasoning tasks: (1) overly complicated solutions that mislead the problem-solving process; (2) numeric simplification and calculation errors; (3) replacing variables with incorrect numerical values; (4) lack of comprehensive consideration; (5) incorrect interpretation of the problem.
**3. Benchmark scope is limited.**
Since our training data consists of open-sourced math problems, it is reasonable to evaluate the model primarily on math-related benchmarks. However, we would like to emphasize that **our evaluation goes beyond the math domain with six OOD datasets covering logical reasoning, code reasoning, commonsense reasoning, tabular reasoning, and domain-specific reasoning**. This type of OOD evaluation is rarely explored in prior works. We would appreciate it if the reviewer could clarify in what sense the benchmark scope is considered limited, and what types of additional evaluation would be most helpful.
**4. Evaluation only focuses on final answer accuracy (pass@1).**
We would like to note that zero-shot pass@1 is a widely adopted evaluation metric in the LLM reasoning literature, since the ground-truth final answer is usually available. We agree with the reviewer that analyzing the intermediate reasoning process could provide valuable insights. However, this remains challenging due to the lack of reliable automatic verifiers capable of evaluating intermediate steps at scale. Toward a deeper understanding of our model’s behavior, **we have included a diverse set of demo examples in Appendix A that showcase different reasoning patterns of our model.**
**5. Novelty is more in the combination than in fundamentally new concepts.**
While novelty is in the eye of the beholder, we respectfully argue that the contributions of this work go beyond a simple combination of existing ideas. The reviewer describes our approach as a “novel combination of the COAT framework, RAE algorithm, and two-stage training for reasoning.” To the best of our knowledge, **none of these components have been introduced in prior LLM reasoning research**:
* COAT differs substantially from classical CoT prompting in both design and motivations.
* RAE offers a new perspective (by changing the initial state distribution) to tackle the sparse reward problem in RL, a challenge that has not been resolved by prior works.
* Our two-stage training pipeline (small-scale Format Tuning + Online RL) also offers new insights for LLM post-training: SFT warms up, RL improves.
**6. Question-1: Have you tried to ablate <|reflect|> and <|explore|> meta-actions separately? What if <|explore|> is not used at all?**
We believe that <|reflect|> and <|explore|> are fundamental meta-actions for reasoning tasks, and both are essential for effective problem-solving. Specifically, <|reflect|> prompts the model to evaluate its current reasoning, enabling it to identify potential errors or suboptimal steps, while <|explore|> allows the model to propose alternative solutions, which is a natural follow-up when the model realizes (via <|reflect|>) that its current approach may be flawed.
***
**If these clarifications satisfactorily address the reviewer's concerns, we kindly ask if the reviewer would consider updating the score to reflect what we believe is a paper with noteworthy contributions to the community.** | null | null | null | null | null | null |
DeepCrossAttention: Supercharging Transformer Residual Connections | Accept (poster) | Summary: The authors propose DeepCrossAttention (DCA), a novel method that enhances residual connections in transformer architectures. In standard transformers, if we denote the input and output of the i'th block as $x_i$ and $y_i$ respectively, vanilla residual connections simply use $x_i = \sum_{j < i} y_j$. DCA instead employs learnable, input-dependent weights to dynamically combine layer outputs: $x_i = \sum_{j < i} y_j \cdot (\text{ReLU}(\langle y_j, w_{ij} \rangle) + b_{ij})$ where $w_{ij}$ and $b_{ij}$ are parameter vectors.
For attention layers specifically, DCA creates queries, keys, and values by independently combining previous layer outputs using separate parameter vectors, allowing for richer interactions between layers at different depths.
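The gated residual combination summarized above can be sketched in a few lines. The following is a minimal numpy illustration of the reviewer's formula (the function names and tensor shapes are our own assumptions, not the paper's code):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dca_combine(ys, ws, bs):
    # x_i = sum_j y_j * (ReLU(<y_j, w_ij>) + b_ij): each previous block
    # output y_j is scaled by a learnable, input-dependent scalar gate.
    return sum(y * (relu(np.dot(y, w)) + b) for y, w, b in zip(ys, ws, bs))

d = 4
ys = [np.ones(d), 2.0 * np.ones(d)]
# With w = 0 and b = 1 every gate equals 1, recovering the vanilla
# residual connection x_i = sum_j y_j:
x = dca_combine(ys, [np.zeros(d), np.zeros(d)], [1.0, 1.0])
```

This also makes explicit why DCA can represent the standard residual stream as a special case while adding only a vector and a scalar of parameters per (i, j) pair.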
## Theoretical Contributions
The authors analyze stacked low-rank linear projections, demonstrating that DCA achieves a better accuracy-model size trade-off when the ratio of collective layer ranks to ambient dimension falls below a critical threshold. They extend this analysis to nonlinear models using the concept of bottleneck rank.
## Empirical Contributions
Extensive language modeling experiments on LM1B and C4 datasets demonstrate that DCA:
- Achieves better perplexity for a given parameter budget
- Reaches equivalent model quality up to 3x faster
- Exhibits improved training stability with fewer loss spikes
- Adds only a negligible number of parameters (approximately 0.2%)
## Strengths
1. The method is elegant in its simplicity while delivering substantial convergence improvements, particularly for models with smaller hidden dimensions
2. Comprehensive empirical validation across multiple scales (75M to 449M parameters) using various metrics
3. Thorough ablation studies isolate the contribution of each proposed component
4. Clear improvements over related methods (LAuReL, DenseFormer, Hyper-Connections)
## Weaknesses
1. Experimental validation is limited to language modeling tasks; testing on other modalities (vision, audio) would strengthen the paper's claims about the method's generalizability
2. The theoretical analysis, while sound, is less compelling than the experimental results; the authors could have expanded the empirical evaluations section instead
Claims And Evidence: Yes, the claims are supported theoretically and empirically.
Methods And Evaluation Criteria: Yes LM on C4 and LM1B makes sense but the evaluation could be further expanded.
Theoretical Claims: I checked the correctness of proofs and did not find any issues.
Experimental Designs Or Analyses: Yes, the experiments are sound.
Supplementary Material: no
Relation To Broader Scientific Literature: The proposed method builds on related work and the novelty over past work is not significant; however, the authors do show improved results over the methods that they build on.
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: See weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our manuscript. We address the two concerns raised by the reviewer below.
> Experimental validation is limited to language modeling tasks; testing on other modalities (vision, audio) would strengthen the paper's claims about the method's generalizability.
Based on the reviewer’s suggestion we have performed additional experiments on ImageNet classification using vision transformers. Since the ViT model is also transformer-based, we were able to incorporate DCA the same way as for the language models presented in the manuscript. We present our results on the ViT-S/16 model from https://github.com/google-research/big_vision (22M params):
| Method | Training loss | Accuracy |
| --- | --- | --- |
| ViT | 0.5698 | 76.4 |
| ViT+DCA | **0.5284** | **77.1** |
This indicates that our results generalize to the vision domain.
> The theoretical analysis, while sound, is less compelling than the experimental results; the authors could have expanded the empirical evaluations section instead.
In response to the reviewer’s feedback, we have significantly expanded the empirical results section of our manuscript. The updated version will include the ImageNet results presented above, as well as additional comparisons with prior work using larger-scale models (see also the response to reviewer nMe5) to provide a more comprehensive empirical evaluation of our method.
We are confident that these additional experiments and clarifications address the reviewer's concerns and further strengthen the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks - I am maintaining my score. | Summary: The authors introduce learnable residual connections to improve over standard residual connections used in ResNets and transformers. They highlight that simple residual connections struggle to recover the input (learn the identity function) on toy examples and their proposed learnable residual connections can overcome this problem. Theoretical analysis on a low rank linear model shows that their proposed method obtains lower risk given that the rank of the task is small enough. Further experiments on transformers are provided to highlight to empirically evaluate the method.
######## update after rebuttal #########
The authors have clarified my questions; I will keep my score.
Claims And Evidence: Yes, the proposed model architecture has been validated empirically.
Methods And Evaluation Criteria: The method is evaluated across multiple datasets.
Theoretical Claims: The authors show that the proposed GRN model can achieve lower risk.
Experimental Designs Or Analyses: The experiments are designed to verify the performance of the newly proposed GRN architecture.
Supplementary Material: I went over the empirical results and briefly looked at the proofs.
Relation To Broader Scientific Literature: The paper discusses residual connections and attention, two crucial mechanisms in modern deep learning.
Essential References Not Discussed: Most of the relevant literature is discussed.
Other Strengths And Weaknesses: Strengths
1. The proposed method is simple and can provide significant improvements as well as faster training as shown in the experiments.
2. The authors also provide some theoretical insights to validate their method, and show that as long as the rank of the target task is small enough, the GRN model can achieve lower risk.
Weaknesses
1. Most efficiency gains seem to occur by using the first and last-k layer outputs in the GRN, for k=2. Moreover, the perplexity gains from increasing k further are limited. This implies that not all layer representations contribute to the residual connection in GRN. However, the authors do not compare with only a learnt residual in each layer while ignoring all previous residuals ($w_1 x_t + w_2 f(x_t)$). Would this already be sufficient?
2. Table 2 presents results where improvements with DCA diminish with increasing width. Can the authors estimate the rank in each layer to verify if similar trends like Fig 5 extend to an experimental setting.
3. An analysis of the importance of layers given the learnt weights of DCA is missing in the experiments. I believe this would be crucial to highlight the differences between standard residual connections and DCA. Are there specific layers that obtain a larger weight in residual connections of DCA and if so which are they?
While the overall mechanism proposed with GRN is simple and can help achieve improved performance, it is unclear which of the layers enabled with residual connections are most important and how the optimal $k$ can be estimated for a task.
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and insightful feedback on our manuscript. We address the three questions raised below.
> Most efficiency gains seem to occur by using the first and last-k layer outputs in the GRN, for k=2. Moreover the perplexity gains from increasing k further are limited. This implies that not all layer representations contribute to the residual connection in GRN. However, the authors do not compare with only a learnt residual in each layer while ignoring all previous residuals (w1xt+w2f(xt)). Would this already be sufficient?
The method suggested by the reviewer has previously been proposed as LAuReL-LR.
The LAuReL-PA method that we used in our experiments is a stronger generalization of LAuReL-LR, as observed in the LAuReL paper. Thus, we opted to report LAuReL-PA in our manuscript instead. Based on our results, including previous outputs in addition to the last output leads to significant perplexity improvements.
We also conducted experiments without the model inputs as an explicit input to each GRN but we found that this did not perform as well as including the model inputs (num_layers=6, emb_dim=512 on lm1b):
| Method | Perplexity |
| -------- | ------- |
| Transformer | 20.878 |
| GRN (last 4) | 20.301 |
| GRN (model inputs + last 3) | 20.227 |
We would like to emphasize that even with k=2 all the layer representations do contribute to the residual connections in GRN. This is because all the intermediate layers are summed, as in a vanilla residual network, and passed as an additional input to the GRN. With k=2 each GRN thus has the following 4 inputs: the input to the model, the sum of all intermediate layer outputs, the second last layer output, and the last layer output.
> Table 2 presents results where improvements with DCA diminish with increasing width. Can the authors estimate the rank in each layer to verify if similar trends like Fig 5 extend to an experimental setting.
Since the models used in Table 2 incorporate nonlinear activations, the appropriate notion of rank is the bottleneck rank as described in Section 4.4. In our case the bottleneck rank is the same as the width of the model because the model strictly improves as the width increases, which indicates that the model is not able to represent the same function with smaller width. Our empirical results thus align with our theoretical results because in both cases the benefit of our method decreases as the rank (or bottleneck rank) of the model increases.
> An analysis of the importance of layers given the learnt weights of DCA is missing in the experiments. I believe this would be crucial to highlight the differences between standard residual connections and DCA. Are there specific layers that obtain a larger weight in residual connections of DCA and if so which are they?
In Appendix H we provide insights into the importance of each layer by plotting the learnt weights for each GRN in Figure 10. The results show that the first and last few layers are most important as they obtain the largest weights. This insight led us to the design of the more efficient k-DCA which uses the model inputs and the last-k layer outputs together with the sum of all intermediate layer outputs as the input to the GRN.
Moreover, we observed in the training dynamics that the model input is important for all the hidden layers, especially in the beginning of model training. We further verified this by removing the explicit connection to the model input which performed notably worse. We will enrich the discussion in Appendix H with these insights.
We hope that these responses adequately address the reviewer's concerns. We believe these clarifications and additional results strengthen the manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for clarifying the use of model inputs and the rank of each layer. I will keep my score. | Summary: The paper introduces DeepCrossAttention (DCA), a new mechanism that stores and uses intermediate features of transformers. DCA enables learnable, input-dependent weights to mix preceding intermediate features, enhancing the model's representation power. The authors also provides theoretical justifications regarding why DCA blocks have higher representation powers compared to standard ResNet architectures. Experiments on LM1B and C4 show that DCA achieves competitive performance compared to vanilla Transformer architectures.
Claims And Evidence: Most of the claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criterion make sense for the problem.
Theoretical Claims: I did not check the proofs of theorems in the paper.
Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs in the paper.
Supplementary Material: I have checked Sections F., G., and H in the supplementary material.
Relation To Broader Scientific Literature: The main contribution of the paper is that it proposes a way to use a small number of additional parameters to improve the performance of the LLM.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength:
1. The paper is well-written and easy to read and follow.
2. The paper provides theoretical justifications alongside relatively large scale training experiments to validate the proposed method.
Weaknesses:
1. The majority of the paper is comparing DCA with vanilla transformers. And the comparison with baseline methods in Table 5 is limited to very small number of parameters, i.e., ~50 M. The authors should extend their analysis to larger-scale models.
2. There is no discussion regarding memory requirements during the forward and backward passes. It would be beneficial for the authors to report the maximum memory usage in both phases to better check the method's scalability.
Other Comments Or Suggestions: 1. Could the authors provide analysis regarding the memory usage during forward and backward pass?
2. Can the authors extend their comparison to include baselines with larger-scale models to assess the method's effectiveness at higher parameter counts?
Questions For Authors: No other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our manuscript. We hope that our responses adequately address the reviewer's concerns. We believe these clarifications and additional results strengthen the manuscript.
> Could the authors provide analysis regarding the memory usage during forward and backward pass?
Since the difference in the number of model parameters is negligible, the main difference in memory usage comes from the number of activations that need to be stored.
Let us first analyse the memory usage during inference. The vanilla transformer only stores one activation tensor which each transformer block adds to in order to compute the residual connection. DCA takes all the previous layer outputs as its input, thus the number of activations it needs to store scales linearly with the depth of the model, which significantly increases the memory usage for deep models. To mitigate the memory overhead, we propose k-DCA, which only uses the model’s input, last-k layer outputs, as well as the sum of the remaining intermediate layer outputs. This reduces the number of stored activations to k+2 times that of the vanilla transformer, which is independent of the model depth.
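The k-DCA bookkeeping described above can be illustrated with a short sketch. This is our own minimal numpy illustration of the stored-activation set the rebuttal describes (function and variable names are hypothetical), assuming k >= 1:

```python
import numpy as np

def kdca_inputs(model_input, layer_outputs, k):
    # k-DCA input set per the rebuttal: the model input, the sum of all
    # but the last-k intermediate outputs, and the last-k outputs --
    # k+2 tensors in total, independent of model depth.
    last_k = layer_outputs[-k:]
    rest = layer_outputs[:-k]
    summed = sum(rest) if rest else np.zeros_like(model_input)
    return [model_input, summed] + list(last_k)

outs = [np.full(3, float(i)) for i in range(1, 6)]  # outputs of 5 blocks
ins = kdca_inputs(np.zeros(3), outs, k=2)           # k+2 = 4 tensors
```

During inference only `model_input`, the running sum, and a window of the last k outputs need to be kept, which is where the depth-independent memory footprint comes from.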
During model training the memory usage of DCA is on the same order as the vanilla transformer. This is because the vanilla transformer also needs to keep all activations in memory to compute the gradients. As the DCA layer acts in a modular way on its input activations, the forward and backward passes can be performed by storing only one additional activation tensor. Based on profiling a 179M parameter model, we see that the peak memory footprint increases from 5.3GB to 6.6GB (this can likely be lowered further since our implementation is not tuned for efficiency). We also note that DCA is compatible with model sharding, which can greatly reduce the memory consumption per chip. Finally, DCA does increase the memory bandwidth because for each layer we need to read the activations of all other layers instead of just one. This additional cost is already factored into the runtime analysis presented in the manuscript which shows that DCA obtains lower perplexity for a given training time.
> Can the authors extend their comparison to include baselines with larger-scale models to assess the method's effectiveness at higher parameter counts?
Based on the reviewer’s suggestion, we conducted additional experiments with larger-scale models to compare our method against related work. We have started additional runs comparing DCA against all baselines at 179M parameters (n_layers=13, emb_dim=1111, hid_dim=4444) on the C4 dataset. While our experiments are still running, we can provide the following perplexity results for the models at 400K (out of 500K) steps.
| Method | Perplexity, 400K steps |
| --- | --- |
| Transformer | 21.772 |
| 1x1-DenseFormer | 21.313 |
| Hyper-Connections | 21.261 |
| LAuReL-PA | 21.117 |
| 8-DCA (ours) | **20.625** |
The preliminary results show that even at larger scale our method outperforms prior work. We will add the final results to the paper once all runs are complete.
---
Rebuttal Comment 1.1:
Comment: Thank you for your feedback. I am maintaining my rating. | null | null | null | null | null | null | null | null |
One-Pass Feature Evolvable Learning with Theoretical Guarantees | Accept (poster) | Summary: This work focuses on the online learning scenario with a special assumption on the environment: the feature space evolves, where old features vanish and new features emerge during the process of online learning. This work characterizes the feature relationship via a kernel function, and proposes the KOM discrepancy (Kernel Ortho-Mapping Discrepancy) as a difference measure between two feature spaces, which equals the minimum difference between the empirical mappings of two kernel functions. Based on the proposed KOM discrepancy, the authors develop the OPFES approach, which consists of the online learning process covering all three stages in a feature evolvable data stream. The OPFES approach includes traditional online kernel learning as well as the online optimization of the proposed KOM discrepancy. The authors verify the superiority of the proposed OPFES approach both theoretically and empirically. They provide a regret bound for the proposed OPFES approach, and the experimental results show the promising performance of the OPFES approach.
Claims And Evidence: Yes. The claims made in this submission are well-supported. The authors claim that the proposed OPFES approach is superior to other methods, and present both theoretical and empirical evidence. They give a regret bound to show that the OPFES approach benefits from the minimization of the KOM discrepancy and the previously obtained kernel model. Besides, they conduct extensive experiments to show that the performance is overall better than other online learning methods with or without modifications for feature evolvable data streams.
Methods And Evaluation Criteria: Yes. The proposed methods and criteria make sense for the problem. In online learning, it is quite common to feed the online learning algorithm with data instances from traditional benchmark datasets in random order to simulate the online learning process, which is also adopted in this paper. Besides, the cumulative error rate is a widely used criterion in the online learning scenario.
Theoretical Claims: Yes. I checked the detailed proofs of two main theorems in the appendix, and I believe that the proofs are correct. For Theorem 3.3, the authors incorporate the empirical kernel mapping to derive the finite-dimensional kernel mapping, and consider to bound the difference of linear SVMs under the finite-dimensional feature mapping. The proof of Theorem 3.3 is straightforward and easy to follow.
For another important theorem, Theorem 4.3, the proof basically follows traditional regret analysis of online SVM method, but uses Theorem 3.3 to generalize traditional regret bound to feature evolvable scenario. The proof technique is interesting and correct.
For other theorems in this paper, I roughly checked the proof. The ingredients are similar, and the proofs seem to be correct.
Experimental Designs Or Analyses: Yes. The experimental designs are sound. The authors first introduce the other compared methods, and give a detailed discussion of the parameter settings of each method. They then compare the cumulative error rates of these methods, which is a very important criterion for online learning methods, and they give some analysis of the experimental results. They also conduct experiments with different dimensionalities of random Fourier features to show that the algorithm benefits from a moderately large number of features. The experimental results are overall convincing.
Supplementary Material: Yes. I reviewed the supplementary material. I checked the correctness of the proofs of the two main theorems, and I went through the additional experimental results. Overall, I believe that the proofs of the main theorems are correct, with similar ingredients as in other related works. I also found an interesting phenomenon in the additional experimental results: the cumulative error rate first decreases, then increases, and finally decreases as the dimensionality of the random features increases.
I also notice that the authors have uploaded the source code for their experiments. Although I did not run the source code, I believe the results can be reproduced if the parameters are set as introduced in Section 5.
Relation To Broader Scientific Literature: This work would have a great impact on the area of online learning with feature evolvable streams, or even on multi-modal learning, if it were to be accepted. Previous works in this area mostly consider linear feature relationships, while this paper proposes to incorporate kernel mappings and describes the feature relationship via the proposed KOM discrepancy. The proposed discrepancy is not limited to linear relationships, which pushes the performance boundary forward for algorithms handling feature evolvable data streams. Benefiting from the KOM discrepancy, the authors propose the OPFES method and achieve state-of-the-art performance among other representative algorithms, which mostly consider linear relationships between different feature spaces.
Essential References Not Discussed: The authors do not miss any important related works. I am familiar with online learning, and I think that the authors have already cited all important works in this area.
The paper mentions many famous online learning algorithms, e.g., online SVM, online kernel SVM, etc. The authors also compare the proposed method with previous state-of-the-art algorithms like the variations of FESL and OCDS.
Since this paper also gives some theoretical analysis of the proposed OPFES approach, I also checked the related works in this area. I found that the regret bound in this paper achieves O(1/\sqrt{T}) rate, which is comparable with the state-of-the-art result in online learning scenario with strongly convex loss function. The related theoretical works are also cited in this paper.
Other Strengths And Weaknesses: The detailed strengths of this work are as follows:
1. This work is well-written and it is easy to follow. Besides, the authors also uploaded the source code in the supplementary material, which further enhances the reproducibility.
2. This paper shows a relatively good originality. The scenario of online learning with changing feature spaces was introduced a couple of years ago and many algorithms were proposed. However, existing methods only focus on linear transform between feature spaces, yet without theoretical guarantee on the regret of the online learning algorithm. The authors introduce the KOM discrepancy to characterize the relationships between old and new feature spaces, which considers non-linear and more complicated relations compared with most existing algorithms. In addition, they theoretically demonstrate that the difference between the performances of two optimal kernel classifiers can be upper-bounded by KOM discrepancy, and reaches better regret bounds based on this theorem.
3. This paper has solid theoretical and empirical analysis and is significant to some extent. It theoretically shows that the model learned in the old stage can help achieve fast convergence and a better regret in the new stage, which is inspiring and interesting for further developing more practical learning algorithms for this scenario. The empirical results also verify the superiority of the OPFES method, and are convincing to me.
Despite the strengths above, I have some concerns on this work:
The notations used in this paper are a bit confusing, with a large number of subscripts and superscripts that significantly affect the readability of the text. This complexity makes it challenging for readers to follow the main arguments and mathematical formulations. I recommend simplifying the notation wherever possible to enhance clarity.
Other Comments Or Suggestions: The clarity of this work could be further improved if the authors were to summarize all the notation used in this paper in a table. Current notations are a little bit confusing. Despite the excellent work, it did take me a long time to understand each notation.
Questions For Authors: 1. It seems that the OPFES approach is highly dependent on the random Fourier feature representation of kernel functions. I would like to know whether it is necessary to adopt random Fourier features, or there are other kernel approximation methods that are compatible with the OPFES approach?
2. What is the relationship between feature evolvable learning and multi-modal learning? Can it be viewed as an online version of multi-modal learning? For example, in the previous stage only images can be observed, and in the current stage only text descriptions are available.
3. Is it possible to apply other non-linear models, e.g., neural networks, to the feature evolvable data stream learning scenario with the proposed KOM discrepancy?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: [Q1] …whether it is necessary to adopt random Fourier features, or are there other kernel approximation methods that are compatible with the OPFES approach?
[A1] We will clarify that we adopt random Fourier features to avoid storing any training data, since the kernel function $k(x_1,x_2)$ is defined on a pair of instances, and previous online kernel learning requires storing part or all of the training data [Kivinen et al. 2001; Orabona et al. 2012; Ghari & Shen 2024]. We will also clarify that the random Fourier features technique has been a popular approximation with theoretical guarantees [Rahimi & Recht, 2007; Avron et al., 2017; Likhosherstov et al., 2022], and it would be interesting to exploit other approximations.
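As a minimal independent sketch of the point in [A1] (not the paper's code; the kernel width and dimensions below are arbitrary), random Fourier features replace kernel evaluations $k(x_1, x_2)$ with an inner product of finite-dimensional feature maps, so no training instances need to be retained:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 5, 4000, 1.0  # input dim, number of random features, kernel width

# For the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), Bochner's
# theorem gives frequencies W ~ N(0, I / sigma^2) (Rahimi & Recht, 2007).
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    # finite-dimensional feature map with z(x) . z(y) ~= k(x, y)
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = float(z(x) @ z(y))
```

The approximation error decays as $O(1/\sqrt{D})$; whether other approximations (e.g., Nyström, which samples landmark training instances and so needs storage) fit the one-pass constraint equally well is exactly the open point of [Q1].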
[Q2] What is the relationship between feature evolvable learning and multi-modal learning? Can it be viewed as an online version of multi-modal learning...
[A2] We will clarify that our work focuses on general feature spaces without constraints on the types of feature spaces as in [Hou et al., 2017; Zhang et al., 2020], and it can be directly used for two-modality learning. For multi-modal (more than two modalities) learning, the problem becomes rather difficult, since we would need to consider multiple and combinatorial correlations among the modalities.
[Q3] Is it possible to apply other non-linear models, e.g., neural networks, to the feature evolvable data stream learning scenario with the proposed KOM discrepancy?
[A3] We will clarify that this work considers different non-linear models by selecting different kernel functions, such as Gaussian kernels. It is interesting to consider deep neural networks, and we could take the NTK technique [Jacot et al., 2018; Novak et al., 2022] to analyze neural networks with the KOM discrepancy.
---
Rebuttal Comment 1.1:
Comment: The rebuttal has resolved my questions, and I will keep the score. | Summary: This paper tackles online learning in feature-evolving streams where old features vanish and new ones emerge. The authors propose OPFES, a one-pass algorithm that processes data without storage. It combines online kernel learning with random Fourier features to adaptively capture evolving data patterns and introduces a kernel ortho-mapping discrepancy framework through kernel functions to quantify cross-feature space relationships. Theoretical analysis establishes a regret bound for the algorithm, while empirical evaluations compare cumulative error rates across methods. Experiments confirm OPFES' superior performance and efficiency in managing real-time feature evolution, offering both practical streaming solutions and theoretical guarantees for feature-evolvable learning tasks.
Claims And Evidence: The claims made in the submission are well-supported by both theoretical analysis and empirical results. The paper introduces the Kernel Ortho-Mapping (KOM) discrepancy to characterize relationships between different feature spaces and establishes theoretical guarantees linking KOM discrepancy to classifier performance. The proposed one-pass feature evolvable learning algorithm is backed by regret analysis and convergence proofs, ensuring a solid theoretical foundation. Additionally, the empirical validation includes extensive experiments on multiple datasets, demonstrating the effectiveness and efficiency of the proposed method compared to existing approaches.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for feature evolvable learning. The Kernel Ortho-Mapping (KOM) discrepancy effectively characterizes relationships between evolving feature spaces, and the one-pass learning algorithm is efficient for streaming data. The evaluation uses diverse benchmark datasets and cumulative error rate (CER), ensuring relevance and comparability to prior work. Comparisons with state-of-the-art methods further validate the approach. Overall, the methodology and evaluation align well with the problem setting.
Theoretical Claims: In the review process, I examined several key proofs in the paper to ensure their correctness. The proofs of Lemmas 3.2 and 3.5 are mathematically rigorous, leveraging properties of Frobenius norms and matrix decompositions effectively. The proof of Theorem 3.3 is sound, demonstrating the relationship between the KOM discrepancy and the gap between optimal classifiers, using empirical kernel mappings and linearization techniques.
However, the proof of Theorem 3.4, which involves probabilistic bounds, requires careful consideration of concentration inequalities and random matrix theory. While the steps appear logical, the complexity of the proof necessitates a thorough verification of the application of McDiarmid's inequality and operator Khintchine's inequality to ensure the bounds are correctly derived. Overall, the proofs are generally well-structured, but the probabilistic aspects could benefit from additional clarity to fully confirm their correctness.
Experimental Designs Or Analyses: The experimental design in the study is commendable for its thoroughness and rigor. The use of multiple datasets and comparison with leading methods effectively showcases the versatility and robustness of the OPFES approach. The random splitting of feature spaces and the integration of random Fourier features highlight a sophisticated strategy for managing dynamic feature sets.
The parameter choices, including the buffer size and step sizes, are judiciously selected to optimize performance and computational efficiency. The study's comprehensive evaluation and statistical analyses provide strong support for the validity and effectiveness of the OPFES method. The positive outcomes and clear methodologies contribute significantly to advancing the field, demonstrating a well-executed experimental design.
Supplementary Material: In reviewing the supplementary material, I focused on several key components to gain a deeper understanding of the research. Specifically, I examined the detailed proofs provided in the appendices, which are crucial for validating the theoretical claims of the paper. I reviewed the proofs related to the KOM discrepancy, including Lemmas 3.2, 3.5, and Theorem 3.3, to ensure the mathematical rigor and logical coherence. Additionally, I looked at the convergence analysis and optimization details, such as Algorithm 1, to assess the practical implementation of the proposed method.
Furthermore, I considered any additional experimental results or analyses provided in the supplementary material, such as runtime environments and detailed dimensionality analyses. These sections offer valuable insights into the practical aspects and scalability of the OPFES method.
Relation To Broader Scientific Literature: The paper's key contributions are deeply intertwined with the broader scientific literature on feature evolvable learning, focusing on efficient adaptation to dynamic feature spaces in streaming data. By introducing the Kernel Ortho-Mapping (KOM) discrepancy, the study builds on prior work in kernel alignment and extends the understanding of feature relationships. The development of a one-pass algorithm aligns with the need for efficient online learning methods, leveraging random Fourier features for scalable kernel approximation. The approach integrates existing frameworks for adapting to evolving features, offering a theoretically grounded and practically applicable method that addresses the computational and adaptive challenges in dynamic feature spaces. The paper's contributions advance the field by providing a novel and comprehensive solution to the challenges of feature evolvable learning.
Essential References Not Discussed: No. Given that the paper appears to cite a comprehensive range of related works in the field, it is likely that the authors have covered the essential literature in feature evolvable learning. While the paper covers foundational work, discussing recent innovations in dynamically adjusting kernel functions could help readers understand how the proposed method fits into the current trends and innovations in adapting kernels to changing data scenarios.
Other Strengths And Weaknesses: Strengths:
1. The concept of kernel ortho-mapping discrepancy introduced in this paper is novel. Unlike most existing methods that rely on static feature spaces, which struggle to handle the evolving nature of feature-adaptive data streams, the proposed approach effectively captures information across different stages. By leveraging ortho-mapping discrepancy with kernel functions, the authors establish relationships between distinct feature spaces, enabling more efficient and effective learning.
2. The theoretical framework of this paper is well-developed and rigorous. They present the convergence result of the kernel ortho-mapping discrepancy and utilize it in the regret analysis of the feature evolvable stream setting. The final theorem reveals the key factors influencing model convergence in this setting, which are consistent with empirical observations.
3. The proposed one-pass learning framework is well-designed, and the integration of online kernel learning with random Fourier features ensures both memory and computational efficiency. This approach enables the model to process large-scale datasets in a streaming manner without the need to store the full or partial training data. Such a design is particularly valuable for real-world applications that involve continuous data streams, making it highly practical and scalable.
Weaknesses:
1. The scenarios to which the OPFES algorithm can be applied are somewhat limited: the feature space can only change once during the whole learning process.
2. This article only considers hinge loss, without analyzing other popular loss functions. The theoretical analysis is based on hinge loss, and the online learning process focuses on hinge loss as well. This article could be improved if more types of loss functions could be handled by the algorithm.
Other Comments Or Suggestions: The notation in the “Previous Model Reuse” part could be simplified for better readability. Besides, it would be better if the implementation details of the compared methods were presented in the Appendix.
Questions For Authors: 1. Are the methods in this paper applicable to general kernel functions? The methods in this paper are founded on shift-invariant kernels. I'm especially interested in whether they can be applied to general kernels.
2. Why does the article restrict the loss function to hinge loss only? What effects will it have on the theorems in this article if we substitute the loss function with MSE, logistic loss, or exponential loss? Will the learning procedure in the OPFES algorithm still be effective for other loss functions?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: [Q1] The scenarios to which the OPFES algorithm can be applied are a little bit limited. The feature space can only change once during the whole learning process.
[A1] We will clarify that this work focuses on a single feature evolution as in [Hou et al., 2021], and we can handle multiple feature evolutions similarly: we learn the feature relationships one by one as the feature space evolves, and maintain a model for every feature space via online ensemble.
[Q2] This article only considers hinge loss without analyzing other loss functions...What effect will it have on the theorems in this article if we substitute the loss function with MSE, logistic loss, or exponential loss …
[A2] We will clarify that this work focuses on one-pass learning with hinge loss, motivated by [Shalev-Shwartz et al., 2011; Lu et al., 2016], and it is feasible to generalize to other convex and Lipschitz-continuous loss functions, such as MSE, logistic loss, and exponential loss, in which case we need to take a different Lipschitz constant and run Fourier online gradient descent with respect to the other loss function.
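To illustrate why swapping the loss is plausible, here is a toy one-pass sketch (my own construction, not the authors' OPFES; the synthetic concept, step sizes, and dimensions are invented) in which only the (sub)gradient changes between hinge and logistic loss, on top of random Fourier features:

```python
import numpy as np

rng = np.random.default_rng(1)

def rff(X, W, b):
    # random Fourier features for a Gaussian kernel (Rahimi & Recht, 2007)
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

def hinge_subgrad(w, z, y):
    # a subgradient of max(0, 1 - y * <w, z>)
    return -y * z if y * np.dot(w, z) < 1.0 else np.zeros_like(w)

def logistic_grad(w, z, y):
    # gradient of log(1 + exp(-y * <w, z>)); margin clipped for stability
    m = np.clip(y * np.dot(w, z), -30.0, 30.0)
    return -y * z / (1.0 + np.exp(m))

def one_pass(X, Y, W, b, grad_fn, eta=0.5):
    # single pass: each instance is used for one update and then discarded
    w = np.zeros(W.shape[0])
    for t, (x, y) in enumerate(zip(X, Y), start=1):
        z = rff(x[None, :], W, b)[0]
        w -= (eta / np.sqrt(t)) * grad_fn(w, z, y)
    return w

d, D = 2, 200
W = rng.normal(size=(D, d))              # unit-bandwidth Gaussian kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
X = rng.normal(size=(3000, d))
Y = np.where(X[:, 0] > 0, 1.0, -1.0)     # toy concept: sign of the first feature
Xtr, Ytr, Xte, Yte = X[:2000], Y[:2000], X[2000:], Y[2000:]

Zte = rff(Xte, W, b)
acc = {}
for name, g in [("hinge", hinge_subgrad), ("logistic", logistic_grad)]:
    w = one_pass(Xtr, Ytr, W, b, g)
    acc[name] = float(np.mean(np.sign(Zte @ w) == Yte))
```

As the rebuttal notes, extending the regret analysis would mainly change the Lipschitz constant plugged into the bound, not the one-pass update scheme itself.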
[Q3] Are the methods in this paper applicable to general kernel functions? The methods in this paper are founded on shift-invariant kernels. I'm especially interested in whether they can be applied to general kernels.
[A3] We will clarify that this work considers shift-invariant kernels such as the Gaussian kernel, which have been commonly used in online kernel learning [Kivinen et al., 2001; Orabona et al., 2008; Takizawa et al., 2019; Ghari et al., 2022], and it would be interesting to generalize to other kernels by considering a similar approximation with random feature techniques. | Summary: This work proposes "One-Pass Feature Evolvable Learning" (OPFES), a method for handling streaming data where old features vanish and new features emerge. The core contribution is the kernel ortho-mapping discrepancy $E(S_n, K^{(1)}, K^{(2)}) = \min_{U \in \mathrm{U}_n} \frac{1}{\sqrt{n}} \|U \sqrt{K^{(1)}} - \sqrt{K^{(2)}}\|_F$, which quantifies the transformation between two feature spaces via kernel embeddings. Using this discrepancy, OPFES adaptively learns a new feature space in a single pass without retaining past data, leveraging random Fourier feature approximations and mirror descent optimization. The authors establish regret bounds, proving that the classifier discrepancy $\rho(h_*^{(1)}, h_*^{(2)})$ is upper-bounded by $O(E(S_n, K^{(1)}, K^{(2)}) / \lambda)$. Empirically, OPFES outperforms kernel-alignment and $\ell_2$-based baselines in binary classification tasks.
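Since the KOM definition above is an orthogonal Procrustes problem in disguise, it admits a closed form; the following hedged sketch (my own illustration, not the paper's implementation) computes $E(S_n, K^{(1)}, K^{(2)})$ from the kernel square roots and cross-checks it against an explicitly constructed optimal rotation:

```python
import numpy as np

def psd_sqrt(K):
    # symmetric PSD square root via eigendecomposition
    vals, vecs = np.linalg.eigh(K)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def kom_discrepancy(K1, K2):
    # E = min_U (1/sqrt(n)) ||U A - B||_F with A = sqrt(K1), B = sqrt(K2);
    # the orthogonal Procrustes solution gives the minimum as
    # sqrt(tr(K1) + tr(K2) - 2 * nuclear_norm(B A^T)).
    n = K1.shape[0]
    A, B = psd_sqrt(K1), psd_sqrt(K2)
    nuc = np.linalg.svd(B @ A.T, compute_uv=False).sum()
    return np.sqrt(max(np.trace(K1) + np.trace(K2) - 2.0 * nuc, 0.0) / n)

rng = np.random.default_rng(0)
n = 6
X1, X2 = rng.normal(size=(n, 3)), rng.normal(size=(n, 4))
K1, K2 = X1 @ X1.T, X2 @ X2.T  # two Gram matrices over the same n instances

# explicit optimal rotation U = V W^T from the SVD of B A^T
A, B = psd_sqrt(K1), psd_sqrt(K2)
V, _, Wt = np.linalg.svd(B @ A.T)
U = V @ Wt
direct = np.linalg.norm(U @ A - B) / np.sqrt(n)
l2 = np.linalg.norm(A - B) / np.sqrt(n)  # U = I, the plain l2-type distance
```

The last quantity illustrates one direction of the Lemma 3.6 discussion below: since $U = I$ is feasible in the minimization, the KOM discrepancy can never exceed the plain $\ell_2$-type distance between the kernel embeddings.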
Claims And Evidence: see Strengths And Weaknesses
Methods And Evaluation Criteria: see Strengths And Weaknesses
Theoretical Claims: The proofs in this manuscript are somewhat sloppy. Despite spending considerable time analyzing them, I find it difficult to fully verify their correctness. The authors frequently provide detailed proofs for simple results while glossing over more complex derivations.
Experimental Designs Or Analyses: The experiments (benchmarks, analyses, ...) make sense and are supportive to me.
Supplementary Material: I have reviewed most of the appendix but did not examine their code files.
Relation To Broader Scientific Literature: This work is a follow-up in the field spanned by e.g. [1-3]
[1] Zhang et al., ICML 2020: “Learning with Feature and Distribution Evolvable Streams”
[2] Hou et al., AAAI 2021: “Storage Fit Learning with Feature Evolvable Streams”
[3] Schreckenberger et al., AAAI 2023: “Online Random Feature Forests for Learning in Varying Feature Spaces”
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: - The KOM discrepancy can be understood as a measure of the difference between two sets of kernel mappings (or Gram matrices). Conceptually, it is closely related to existing methods, such as the well-known Orthogonal Procrustes Problem. Therefore, it does not constitute a novel contribution. The paper claims that KOM is superior to kernel alignment and $\ell_2$ distance, but the comparison is only empirical; there is no real theoretical argument showing why this discrepancy should be preferred.
- The min-max formulation over unitary transformations $U$ is mathematically elegant, but practically, why enforce rotational constraints in the first place? Many feature transformations in real-world applications are not merely orthogonal rotations: what about affine transformations, or nonlinear shifts? Meanwhile, the authors justify their empirical kernel mapping with methods from (Schölkopf & Smola 2002), but the assumption that two kernel feature spaces should be mapped in an orthogonality-preserving way is tenuous at best. Kernel alignment methods are widely used in the community because they are more flexible, while KOM enforces an artificial constraint that has no strong empirical motivation.
- I also think the connection between KOM discrepancy and classifier performance is weak; Theorem 3.3 states that the difference between optimal classifiers in different feature spaces is upper-bounded by the KOM discrepancy. However, this is trivial, as any well-defined measure of feature space similarity will yield some bound on classifier discrepancy. The actual bound itself seems loose, i.e., there is no guarantee that minimizing KOM results in an 'actually good' model.
- Lastly, the theoretical results in the paper are only valid for a very narrow setting. A vast number of feature evolvable problems (like those in Section 1 of the paper) are regression tasks, yet the proposed algorithm is useless in these scenarios. And the vast majority of real-world scenarios where feature spaces change involve more than two classes. It is also unclear whether the results hold for other functions like cross-entropy or other margin-based losses, e.g., logistic regression.
Other Comments Or Suggestions: - line 667: shouldn't the $\partial(\cdot)$ represent subdifferentials, since the $\boldsymbol{g}$ in your KKT condition belongs to a set?
- do your $c_1$, $c_2$ in Theorem A.4 share the same meaning as in Theorem 3.4, 4.3? similarly, the $c$ in Definition C.5, Lemma C.6, and proof of Lemma D.6
- Theorem A.15, $\lambda$ is the parameter from Eqn. (20)
- proof of Lemma A.17, why we need the convexity of $H$? we could simply verify that $H(x)/x$ is a nondecreasing function when $x\geq 0$
- Lemma C.4 should be $\hat{R}_1(\boldsymbol{w}_{1*}) - \hat{R}_2(\boldsymbol{w}_{2*}) \leq \dots$
- line 1458, what is the definition of $m_1$?
- line 1545: two empirical estimates
Questions For Authors: - Is it possible to extend OPFES to multiclass classification? Is your KOM discrepancy even meaningful for non-SVM loss functions, like kernelized ridge?
- You claim that storing past data is infeasible, but many real-world applications do store partial data in buffers. Could you comment on this? Why not allow a sliding window approach that keeps recent data, instead of forcing a single-pass update?
- How does OPFES perform with different kernel choices? Does it collapse if the kernel width is set incorrectly? In Figure 6, why is the CER not monotonically decreasing as the random Fourier feature dimensionality increases?
- All results rely on shift-invariant kernels (like gaussian, laplacian), which is somehow a strong restriction as many real-world kernels (e.g., polynomial, string kernels, graph kernels) do not share that property. Could the key claims potentially extend to these settings, and if not, what is the difficulty?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: [Q1] The KOM discrepancy can be understood as a measure of the difference between two sets of kernel mappings…Orthogonal Procrustes Problem ... KOM is superior to kernel alignment and $\ell_2$ distance…no real theoretical argument …
[A1] We will clarify that the KOM discrepancy presents the first general framework to characterize correlations between two feature spaces, and previous feature evolvable methods can be viewed as special selections of different kernels [Hou et al. 2017, 2022; Chen&Liu 2024]. This discrepancy is motivated by the orthogonal Procrustes problem [Gower&Dijksterhuis 2004], which originated as a matrix approximation problem in linear algebra, independent of kernel functions.
We will also clarify that Lemma 3.6 verifies the superiority of our KOM discrepancy in contrast to $\ell_2$ distance, and Lemma 3.5 presents the relationship between the KOM discrepancy and kernel alignment. Those theoretical results are verified empirically in Figure 3 and Table 2.
[Q2] The min-max formulation over unitary transformations U… why enforce rotational constraints in the first place…affine transformations? nonlinear shifts...
[A2] We will clarify that enforcing orthogonal rotations aims to guarantee a closed-form solution for the KOM discrepancy (Lemma 3.2) on the Gram matrices w.r.t. kernel functions in feature evolvable learning. Such a formulation gives better characterizations between feature spaces than prior feature evolvable methods, and it would be interesting to exploit other transformations.
[Q3] …theorem 3.3 states that the difference between optimal classifiers in different feature spaces is upper-bounded by the KOM discrepancy…some bound on classifier discrepancy…seems loose...
[A3] We will clarify that Theorem 3.3 presents the first theoretical result correlating the performance of classifiers with the relationships (KOM discrepancy) between feature spaces, and we are not aware of tighter bounds for feature evolvable learning. The proof includes linearization of the kernel classifier via empirical kernel mapping, perturbation analysis of the strongly convex loss, and construction of the KOM discrepancy.
[Q4] …theoretical results in paper are only valid for a very narrow setting. A vast number of feature evolvable problems (…) are regression tasks…for other functions like cross-entropy or…
[A4] We will clarify that our theoretical results focus on binary classification, since most feature evolvable learning work considers classification tasks [Hou et al., 2017; Zhang et al., 2020; Hou et al., 2021; Lian et al., 2022; Schreckenberger et al., 2023]. Our results can be generalized to regression and multi-class tasks with other loss functions, by considering online gradient descent with random features for those functions and generalizing the ideal kernel to regression and multiclass tasks as in [Wang et al., 2015].
[Q5] Is that possible to extend OPFES to multiclass classification…non-SVM loss functions…
[A5] We will clarify that our OPFES can be extended to multiclass classification by considering multiclass loss functions and the ideal kernel as in [Wang et al., 2015]. We will also clarify that our KOM discrepancy is defined on the Gram matrices of kernel functions, so it can be used with non-SVM loss functions, since it is independent of loss functions and learning tasks.
[Q6] …storing past data is infeasible, but many real-world applications do store partial data in buffers…why not allow a sliding window approach that keeps recent data…
[A6] We will clarify that this work focuses on one-pass learning for large-scale data, which goes through all instances only once without storing training data, as in online learning [Crammer et al., 2005; Carvalho & Cohen, 2006; Cesa-Bianchi & Lugosi, 2006; Gao et al., 2013]. It is feasible to store partial data in buffers with a sliding window, which is called mini-batch learning. Generally, mini-batch learning requires more storage and computational cost than one-pass learning.
[Q7] How does OPFES perform with different kernel choices…kernel width…In Figure 6 why is the CER not monotonically decreasing as…
[A7] We will add a figure to show that OPFES achieves stable performance over a range of kernel widths in $[2^{-8}, 2^8]$. We will also clarify that, in Figure 6, most CERs are monotonically decreasing as the random Fourier feature dimensionality increases, except on four datasets, where the algorithm slightly overfits due to relatively fewer features and algorithmic randomness.
[Q8] All results rely on shift-invariant kernels (like Gaussian, Laplacian)…a strong restriction as many real-world kernels (e.g., polynomial, string kernels, graph kernels)…
[A8] We will clarify that this work focuses on shift-invariant kernels such as the Gaussian kernel, which have been commonly used in online kernel learning [Kivinen et al., 2001; Orabona et al., 2008; Hong et al., 2023], and similar analyses can be made for other kernel functions (polynomial and graph kernels) via random feature approximation.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their detailed response. I agree with the replies to Q6 and Q8; those points do fall outside the scope of the paper. Unfortunately, after reading all rebuttal blocks, my main concerns about this work remain unresolved. For example:
1. I did notice Lemmas 3.5 and 3.6, but they appear to provide only a theoretical connection rather than a rigorous mathematical argument demonstrating that KOM is superior to existing metrics. The numerical experiments are similarly unconvincing; they obviously cannot cover most scenarios and are susceptible to cherry-picking. The responses to Q2/Q3 also felt rather tenuous to me.
2. The scope of the theoretical analysis is extremely narrow, limited to binary classification using hinge loss and shift-invariant kernels. While the authors claim generality, they also concede that extending to multiclass, regression, or other learning tasks (such as clustering or semi-supervised learning) would require substantial modification. This restriction to binary SVMs renders the proposed method largely irrelevant for the vast majority of modern applications. The rebuttal's assertion that "our method can be extended" is unsupported by any concrete extensions or results in multiclass or regression settings.
3. I have to say that the theoretical portion of the paper is quite sloppy (see examples in review). Additionally, for line 1553,
> We completes the proof by applying Lemma C.3 and some calculations
the authors write in an incomplete manner. It is hard to believe that these derivations can be easily completed in just a few steps. This makes it difficult for the reader to verify the correctness of the work.
---
Reply to Comment 1.1.1:
Comment: [Q1] … Lemmas 3.5 and 3.6, but they appear to provide only a theoretical connection rather than a rigorous mathematical argument demonstrating that KOM is superior to existing metrics. The numerical experiments…cherry-picking.
[A1] We will clarify that this work proposes the new KOM discrepancy to characterize correlations between two feature spaces, and previous methods can be viewed as special selections of different kernels [Hou et al., 2021; Chen et al., 2024]. Lemma 3.6 shows that the KOM discrepancy is a lower bound of the $\ell_2$ distance, so minimizing the KOM discrepancy is theoretically better than minimizing the $\ell_2$ distance when combined with Theorem 3.3. Lemma 3.5 presents the relationship between the KOM discrepancy and kernel alignment; it is not easy to compare them directly. Therefore, we present an empirical study to show the effectiveness of the KOM discrepancy in Figure 3 and Table 2.
We will also clarify that most datasets in this work have been well-studied in previous feature evolvable learning [Hou et al., 2017; Zhang et al., 2020] and we take the same setting for fair comparisons.
[Q2] The scope of the theoretical analysis is extremely narrow, which is limited to binary classification using hinge loss and shift-invariant kernels...
[A2] We will clarify that, for feature evolvable learning, this is the first work to characterize relationships between two feature spaces with theoretical guarantees, and it is natural to focus on binary classification, as in most theoretical studies such as PAC learning [Valiant, 1984] and online learning [Rosenblatt, 1958; Aizerman et al., 1964; Cesa-Bianchi & Lugosi, 2006].
We will clarify that this work focuses on hinge loss since it is the most popular loss function for batch and online SVMs [Cortes & Vapnik, 1995; Shalev-Shwartz 2008; Duchi & Hazan, 2011; Hajewski et al., 2018; Gentinetta et al., 2023], and our work is essentially an online SVM but with random Fourier features.
[Q3] …the theoretical portion of the paper is quite sloppy…for line 1553, We completes the proof by applying Lemma C.3 and some calculations…write in an incomplete manner…
[A3] We will clarify that, for line 1553, we have Eqn. (24) from Lemma C.3 and strong convexity, and derive the upper bound for $\hat{R}_{T_2} (\pmb{w}\_{T_1+T_e}^{[2]}) - \hat R\_{T_2}(\pmb w\_{T_2*}^{[2]})$ as in lines 1514-1552. This completes the proofs by substituting the upper-bound into the right-hand side of Eqn. (24). We can definitely guarantee the correctness of our theoretical results and improve the presentations in proofs with more details. | Summary: The paper focuses on "feature evolvable learning" -- a setting in which features are being learned during a data stream, with new features learnt over time. The goal is not to retrain the model from scratch but to transfer from an older feature space to a new feature space. Similar problems have been studied in the past in the context of online continual learning.
The paper proposes a new measure (KOM) to quantify/characterize differences between feature spaces -- using kernel functions.
It also proposes an algorithm called OPFES to learn features in this evolvable manner without storing all training data.
Claims And Evidence: The main claim of the paper can be stated as: it is possible to do feature evolvable learning in a streaming manner (one-pass over the data) using a new metric called Kernel OrthoMapping (KOM) discrepancy, which captures the relationship between old and new feature spaces.
The paper provides sufficient evidence for this claim, through theoretical results, algorithm development (OPFES) and several experimental results.
Methods And Evaluation Criteria: The paper is strong methodologically, using mathematical analysis when it is appropriate to do so, designing systematic experiments, and being thorough in how it states its claims.
Theoretical Claims: In my opinion, the main theoretical results of the paper are:
1) Theorem 3.3 giving an upper bound (in terms of KOM) for the differences between optimal classifiers trained on old vs new feature spaces
2) Theorem 3.4, which gives a convergence result, showing how the empirical KOM discrepancy approximates its distributional counterpart.
I admit that I have not checked the proofs of the theoretical claims -- as this was one of the urgent reviews I had to provide.
Experimental Designs Or Analyses: The paper is quite thorough and systematic in its experimental study. The main result in that portion of the paper is to show that OPFES outperforms other feature evolvable learning methods (such as random Fourier-based or kernel-based ones) on many different datasets.
Supplementary Material: No, I did not review the supp material of the paper (as this was one of the urgent reviews I had to provide).
Relation To Broader Scientific Literature: The paper also relates to the broader literature of learning theory as well as to continual learning.
Essential References Not Discussed: I think that the paper is missing some references from the continual learning literature -- in the context of deep learning. Some of those papers also consider this stream-based setup -- even though they do not use the term "feature evolvable learning". Instead they use terms such as "online continual learning" or "unsupervised online continual learning".
Other Strengths And Weaknesses: One main weakness of the paper is that it does not leverage the power of deep learning. As the authors also mention at the very end, a follow-up work could consider the Neural Tangent Kernel framework that has been developed for the approximate analysis of neural networks -- to examine how the KOM approach can be applied there.
Other Comments Or Suggestions: One possible improvement, mostly in terms of presentation, is to somehow explain the insights and steps of the OPFES algorithm a bit more -- maybe through a visual illustration.
Questions For Authors: How does OPFES deal with partial feature overlap between the old and new feature spaces? For example, some features may persist while others disappear.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: [Q1] One main weakness of the paper is that it does not leverage the power of deep learning … a follow-up work could consider the Neural Tangent Kernel framework that has been developed for the approximate analysis of neural networks…
[A1] We will clarify that this work tries to answer two fundamental problems on feature evolvable learning following previous settings [Hou et al., 2021; Chen et al., 2024], and it is interesting to leverage the power of deep learning in future work, where we could incorporate deep learning techniques via random features of the Neural Tangent Kernel [Zandieh et al., 2021], and distinguish representations from two network structures [Kornblith et al., 2019].
[Q2] One possible improvement, mostly in terms of presentation, is to somehow explain the insights and steps of the OPFES algorithm a bit more— maybe through a visual illustration.
[A2] We will consider a visual flowchart to outline the main (previous, evolving, and current) stages of the OPFES algorithm and explain how feature information and label information are incorporated via our KOM discrepancy and how the model is updated in a one-pass manner using random Fourier features.
[Q3] How does OPFES deal with partial feature overlap between the old and new feature spaces? For example, some features may persist while others disappear.
[A3] We will clarify that this work focuses on the non-overlap between old and new feature spaces, following previous feature evolvable learning [Hou et al., 2017; Zhou et al., 2022], and it is interesting to consider some partial feature overlaps, for which we could learn two relevant models by exploiting correlation from two feature spaces as in [Hou et al., 2016; Ye et al., 2018; Gu et al., 2024].
We will add relevant references on online and unsupervised continual learning [Lange et al., 2021; Wickramasinghe et al., 2023; Wang et al., 2024]. | Summary: This paper tackles the problem of feature evolvable learning in streaming data settings, where features may vanish and emerge over time—a scenario that arises in applications like sensor networks or dynamic monitoring systems. The authors propose a new metric, the Kernel Ortho-Mapping (KOM) discrepancy, which quantitatively characterizes the relationship between two evolving feature spaces via kernel functions.
They then develop a one-pass online learning algorithm, called OPFES, that:
Leverages random Fourier features for efficient kernel approximation,
Integrates prior model knowledge via KOM-based mappings,
Reuses both feature and label relationships from the previous feature space without retaining training data.
The theoretical contributions include:
A bound between KOM discrepancy and the difference in predictions from classifiers trained in two distinct feature spaces (Theorem 3.3),
Generalization error bounds (Theorem 4.3),
Regret guarantees for their mirror descent-based optimization procedure (Theorem 3.7).
Empirically, OPFES consistently outperforms a suite of state-of-the-art methods across 20 real-world datasets, demonstrating both lower cumulative error rates and faster convergence.
Claims And Evidence: The primary claims are:
KOM discrepancy better captures relationships between evolving feature spaces than kernel alignment or ℓ₂-distance.
OPFES achieves state-of-the-art performance in both accuracy and convergence speed.
Theoretical convergence and regret bounds are valid under the assumed setting.
Evidence:
The KOM discrepancy is supported by both a theoretical upper bound (Theorem 3.3) and empirical correlation analyses (Figures 2 and 3).
Experimental results across 20 datasets (Table 2, Figures 4–5) substantiate the superior performance of OPFES.
The convergence of their optimization strategy is justified through mirror descent analysis (Theorem 3.7), and prediction deviation is bounded through KOM discrepancy (Lemma 4.2).
Evaluation: Claims are well-supported with theoretical derivations and extensive experiments. There are no overtly problematic claims, though practical scenarios with high noise or extremely non-stationary distributions could be addressed more thoroughly.
Methods And Evaluation Criteria: The authors use appropriate methods:
Random Fourier Features for scalable kernel learning,
KOM discrepancy for measuring evolution of feature spaces,
Mirror descent on the simplex for optimizing spectral density.
Evaluation criteria—Cumulative Error Rate (CER) and convergence speed—are suitable for online learning benchmarks. The use of diverse datasets and baselines strengthens the empirical evaluation. The splitting strategy and buffering policies are transparent and consistent with prior work.
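For concreteness, here is a minimal sketch of the random Fourier feature construction that scalable kernel methods of this kind rely on (the reviewer's own illustration of the standard Rahimi–Recht recipe for an RBF kernel, not the paper's code):

```python
import numpy as np

def rff_features(X, D, gamma, rng):
    """Map inputs to D random Fourier features whose inner products
    approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Bochner's theorem: the spectral density of this kernel is N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = rff_features(X, D=2000, gamma=0.5, rng=rng)
K_approx = Z @ Z.T  # approximates the 5x5 Gram matrix
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
```

The approximation error decays as O(1/√D), which is one reason the sensitivity analysis over the Fourier dimension matters.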
Theoretical Claims: Several key theoretical results are presented:
Theorem 3.3: Bounded classifier divergence via KOM discrepancy.
Theorem 3.4: Generalization guarantee for empirical KOM discrepancy.
Lemma 3.5, 3.6: Relationship of KOM discrepancy with kernel alignment and ℓ₂-distance.
Theorem 3.7: Mirror descent convergence rate for KOM optimization.
Theorem 4.3: Generalization bound for OPFES.
These are grounded in solid theoretical tools—e.g., McDiarmid’s inequality, matrix norm inequalities, polar decomposition, and Rademacher complexity. The proofs (referenced in appendices) appear consistent and technically correct from inspection of their assumptions and derivations.
Theorem 3.3: Bounded Classifier Divergence via KOM Discrepancy - The derivation is consistent with prior work on RKHS norm bounds and empirical classifier discrepancy measures.
Theorem 3.4: Generalization of Empirical KOM Discrepancy - The proof also incorporates non-commutative Khintchine-type inequalities, which are non-trivial but correctly referenced.
Lemma 3.5: KOM discrepancy is upper bounded by a function of 1 minus the kernel alignment. - Lemma 3.5 leverages normalized Gram matrices and a cosine similarity-based interpretation.
Lemma 3.6: KOM discrepancy is also upper bounded by the average ℓ₂ deviation between kernel mappings. - Lemma 3.6 mimics regularization path analyses and is essentially a generalization of kernel regression upper bounds.
Theorem 4.3: Generalization Bound for OPFES
Claim: Provides an excess risk bound for the OPFES predictor w.r.t. the optimal classifier in the current stage, incorporating the KOM discrepancy.
Validation:
Combines regret analysis (Hazan et al., 2016) with generalization error bounds using Rademacher complexity.
Proper use of online-to-batch conversion techniques is evident.
The bound reflects the interplay between past information (via T1, Te) and the KOM discrepancy, offering useful theoretical insights.
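For intuition on the setting of Theorem 3.7, mirror descent on the simplex with the negative-entropy mirror map reduces to exponentiated-gradient updates; a toy sketch (my own illustration, not the paper's implementation):

```python
import numpy as np

def exp_grad_step(p, grad, eta):
    """One mirror-descent step on the probability simplex
    (negative-entropy mirror map = exponentiated gradient)."""
    q = p * np.exp(-eta * grad)
    return q / q.sum()

# Toy problem: minimize the linear objective f(p) = <p, c> over the simplex.
# The iterates stay on the simplex and concentrate on argmin(c).
c = np.array([0.9, 0.1, 0.5])
p = np.ones(3) / 3
for _ in range(200):
    p = exp_grad_step(p, c, eta=0.5)
```

The multiplicative update keeps iterates strictly inside the simplex without any explicit projection, which is what makes this mirror map natural for optimizing a spectral density.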
Experimental Designs Or Analyses: The experiments are well-designed:
20 datasets from OpenML/UCI.
Comparisons with 8 strong baselines (e.g., rff-FESL, align-FESL).
Multiple performance metrics: CER, convergence speed, sensitivity to Fourier dimension.
Statistical significance tests (t-tests) confirm robustness.
The setup (T1, Te, T2 split; kernel and hyperparameter tuning) is explicitly stated. However, more insight into variance across random splits or ablation of individual components (e.g., only using KOM without label alignment) could enhance the experimental depth.
Supplementary Material: I saw the code attached in the supplementary material.
The class OnePassProjector_v2 (in Projector.py) implements a form of one-pass least-squares projection to align the old and new feature vectors, but the actual mathematical equivalence between this and the KOM-based orthogonal initialization is not clearly explained or validated. There is no theoretical or empirical validation that the OnePassProjector_v2 logic matches the U matrix derived in Proposition 4.1 of the paper.
The code in exper_kom.py runs dataset-specific hyperparameter searches for learning rate, gamma1, and gamma2 using Bayesian optimization. This may violate the claim of full online adaptability—the algorithm seems to require up-front hyperparameter tuning via cross-validation, which is inconsistent with the one-pass learning paradigm.
The method second_period() (from KOM_FES_v2) evaluates only the performance on the current stage (after the feature evolution).
The initial learning on T1 (old features) and its effect on transfer or initialization is not separately measured. This makes it hard to assess how much performance comes from the old model vs. new learning.
Experiments are run with 20 random seeds using joblib.Parallel. While averaging over seeds is good practice, the standard deviations shown in the main paper are quite low, possibly due to seed correlations or improper shuffling. No fixed seed is used within the bayes_optmization() loop, which could cause optimization noise and reproducibility issues.
In EvolvingDataset.get_data(), the evolving stage samples are taken from overlapping time windows using parameter B, but the process lacks commentary or parameter sensitivity analysis. Performance could be highly sensitive to the overlap size B, especially for smaller datasets. No ablation is provided for B.
Relation To Broader Scientific Literature: The paper builds on a rich literature in:
Online kernel learning (Lu et al., 2016; Shen et al., 2019),
Feature evolvable streams (Hou et al., 2017; 2021; 2022),
Kernel alignment (Cristianini et al., 2001),
ℓ₂-mapping in transfer learning (Romero et al., 2015).
It adds a new formal metric (KOM) that generalizes and theoretically bounds earlier methods (alignment, ℓ₂-distance). The approach synthesizes kernel theory with online convex optimization, contributing novel algorithmic and theoretical insights.
Essential References Not Discussed: The paper is generally well-cited. However:
It omits Rahimi and Recht (2007), who formally introduced Random Fourier Features, although Bochner’s theorem is referenced.
More recent works on deep kernel transfer learning or meta-learning over kernel spaces could be relevant for contextual breadth, especially if extensions to deep networks are envisioned.
In the context of evolving data streams, work on concept drift detection (e.g., Gama et al., 2014) might be conceptually adjacent and worth mentioning.
Other Strengths And Weaknesses: Strong Theoretical Proofs - The integration of kernel theory, generalization analysis, and convex optimization underpins the theoretical soundness. Convergence, generalization, and transferability are all rigorously treated.
Empirical Breadth and Depth - Evaluation across 20 diverse datasets, paired with statistical testing and convergence analysis, validates the model’s robustness and practical effectiveness.
Interpretability via Kernel Metrics- KOM provides not just an operational tool but a theoretical lens for understanding how and why models transfer across evolving feature spaces.
Weakness :
No Analysis for Adversarial or Disjoint Feature Evolutions - While the KOM discrepancy is effective under smooth transitions, its applicability under non-overlapping feature spaces or adversarial drift remains unexplored. For example, when new features are independent of the old, KOM may offer no meaningful signal, yet the algorithm is still applied.
Unclear Role of Label-Based Ideal Kernel Under Imbalance - The paper proposes using an ideal kernel 𝐾∗(𝑥𝑖,𝑥𝑗)=𝑦𝑖𝑦𝑗 to align labels in the evolving stage. However, this approach assumes balanced label distributions, which do not hold for several datasets (e.g., acoustic, runwalk). The approach is also sensitive to noisy or ambiguous labels, potentially destabilizing kernel alignment.
Scalability Beyond Fourier Features- While random Fourier features are scalable, they are limited for high-dimensional structured domains (e.g., graphs, sequences). Extension to non-stationary kernels or deep kernel surrogates is a natural direction not yet addressed. (Perhaps opened up for future direction)
Other Comments Or Suggestions: The pseudocode could benefit from a clearer distinction between training phase and transfer phase.
Appendix A, Lemma Proofs:
The paper shall consider including (in main or appendix) sensitivity plots for learning rate, gamma1, and gamma2. The Bayesian optimization results could be volatile depending on the dataset.
Questions For Authors: How does the OPFES framework behave in scenarios where the new feature space is statistically independent from the old one (i.e., KOM discrepancy is large or near-maximal)?
Given that the ideal label kernel 𝐾∗(𝑥𝑖,𝑥𝑗)=𝑦𝑖𝑦𝑗 assumes balanced and clean labels, how does your method adapt when labels are noisy or highly imbalanced (e.g., acoustic, runwalk datasets)?
Can the authors provide ablation studies separating the contributions of (i) KOM discrepancy, (ii) label kernel alignment, and (iii) model reuse from previous stages?
What is the runtime overhead (in wall-clock time and computational complexity) of the mirror descent procedure used to optimize KOM in comparison to other ensemble or retraining approaches?
Does your method generalize to non-shift-invariant kernels or structured inputs (e.g., graph, sequence kernels)?
Ethical Review Concerns: Not applicable.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | null | null | null | null | |
BARNN: A Bayesian Autoregressive and Recurrent Neural Network | Accept (poster) | Summary: This paper introduces a Bayesian version of an RNN by modeling the time-dependent weights as a reparameterized random variable, where the posterior is learned in an amortized way using a time-dependent ELBO. A specific variational posterior parameterization is provided, where each layer’s weights are parameterized via multiplicative Gaussian dropout with a time-dependent scaling factor as parameters. These dropout scaling factors evolve overtime and are generated by an encoder network that conditions on past observations. Instead of a fixed isotropic Gaussian prior, the model employs a Temporal Variational Mixture of Posteriors (tVAMP) prior, which aggregates past variational posteriors to improve uncertainty modeling and calibration. The framework is applied to Neural PDE solvers and molecular language models (unconditional SMILES generation), demonstrating better uncertainty quantification (lower NLL) in PDE modeling and higher validity in unconditional molecular generation.
Claims And Evidence: Overall, all the claim are supported by clear and convincing evidence.
Strong claim:
- BARNN offers greater accuracy -> Table 1
- BARNN calibrated and sharp uncertainty estimates -> Table 1, Fig. 2
- BARNN excels in modelling long-range molecular dependencies compared to related methods. -> Fig. 5
- BARNN is the first approach that transforms any autoregressive or recurrent model into its Bayesian version with minimal modifications. -> see related work below
Weak claim (promise):
- autoregressive models are also prone to overfitting to the specific tasks they are trained on, challenging their application outside their training domain. -> Not explicitly demonstrated
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense to me, for detailed elaboration please see below "Experimental Designs Or Analyses".
Theoretical Claims: This paper does not have any theoretical claim.
The ELBO derivation and the marginalized prior are plausible to me.
Experimental Designs Or Analyses: - Neural PDE solver benchmark:
- The use of autoregressive models for solving time-dependent PDEs is well-motivated, as these models naturally capture temporal dependencies, and evaluating uncertainty estimation is important.
- The uncertainty calibration metrics (NLL/ECE) are appropriate and commonly used in probabilistic modeling.
- SMILES molecular generation benchmark:
- Autoregressive models are widely used in molecular generation, making SMILES-based generation a standard benchmark.
Bayesian uncertainty is particularly valuable for molecular design and exploration, where uncertainty-aware sampling improves model reliability.
- The Wasserstein distance, t-SNE visualization (Fig. 10), and property histograms (Fig. 11) faithfully capture different aspects of distribution similarity—global shape (Wasserstein), structural overlap (t-SNE), and individual property distributions (histograms).
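Regarding the calibration metrics referenced above, a compact sketch of how ECE is typically computed with equal-width confidence bins (my own illustration; the paper's exact binning protocol may differ):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Expected calibration error: bin predictions by confidence and
    average |accuracy - mean confidence| per bin, weighted by bin mass."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            total += (mask.sum() / n) * abs(correct[mask].mean()
                                            - confidences[mask].mean())
    return total

# A well-calibrated predictor: confidence equals the hit probability.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=20000)
corr = (rng.uniform(size=20000) < conf).astype(float)
```

A well-calibrated model yields ECE near zero, while a confidently wrong one (e.g., confidence 0.99 with accuracy 0) yields ECE near one.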
Supplementary Material: No supplementary material is provided
Relation To Broader Scientific Literature: This work is related to Bayesian deep learning and Bayesian autoregressive models. The proposed Bayesian RNN structure, which uses a multiplicative scaling factor scheme, provides a generalizable framework for transforming autoregressive models and RNNs into their Bayesian counterparts. This builds on prior work in variational Bayesian methods [1] and VAMP priors [2], extending them to time-dependent settings. The method is of pragmatic interest due to its broad applicability in uncertainty-aware sequence modeling.
[1] https://proceedings.neurips.cc/paper/2015/hash/bc7316929fe1545bf0b98d114ee3ecb8-Abstract.html
[2] https://arxiv.org/abs/1705.07120
Essential References Not Discussed: I think literature [1] , which also introduces a variational Bayesian scheme for RNNs, is highly relevant and should be discussed.
[1] https://arxiv.org/pdf/1704.02798
Other Strengths And Weaknesses: Strength
- **A unified framework for Bayesian RNNs and autoregressive models**
The paper provides a complete pipeline, making it potentially applicable to various autoregressive tasks.
Weaknesses
- **Hard to position in context**
- The incorporation of Bayesian methods into RNNs has a long history, with key prior work (see "Essential References Not Discussed") that is not acknowledged or discussed.
- Combined with the presentation style, this gives the impression of overclaiming novelty, as if applying Bayesian inference to RNNs is itself a novel contribution.
- **Ablation study of the prior**: The tVAMP prior is a key contribution, yet the paper does not include an ablation study comparing it to a simpler isotropic Gaussian prior.
- **Motivation for using RNNs over stronger sequence models**: While this may not be a direct weakness, it is unclear why the paper focuses on Bayesian RNNs when stronger sequence models (e.g., S4) exist.
Other Comments Or Suggestions: - Eq. (8): should it be $E_{\phi}(\boldsymbol y_{0:t-1}^k)$ (i.e., an $E_{\phi}(\cdot)$ is missing)?
Questions For Authors: - **Intuition Behind Long-Term Dependency Claims**
- BARNN is claimed to help capture long-term dependencies. In the SMILES generation task, do the authors have any intuition on why Bayesian modeling specifically mitigates the ring-closing issue compared to deterministic baselines?
- **Practical Setting of $N$ in Eq. (8)**:
- is the $N$ in Eq. (8) pragmatically set by the batch size of the training data or the whole data size?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for their time and valuable feedback. In particular, we are delighted that the reviewer believes our work *is of pragmatic interest due to its broad applicability in uncertainty-aware sequence modeling*, and that they appreciate the experiment section. We answer the questions raised below point by point:
**Weak claim(promise):
autoregressive models are also prone to overfitting to the specific tasks they are trained on, challenging their application outside their training domain. -> Not explicitly demonstrated**
- We thank the reviewer for analysing our results section in depth; we appreciate the time and effort. We agree that we did not stress autoregressive model overfitting in our experiment analysis. However, we want to highlight that our Table 2 results show the overfitting behaviour of autoregressive models. In particular, SMILES LSTM (which has dropout on the recurrent layers) and BARNN can be thought of as regularized versions of the vanilla LSTM. The latter, compared to the regularised models, is more prone to reproducing molecules already seen during training, as shown by lower novelty and uniqueness, which is an indication of overfitting to the training data distribution.
**Essential References Not Discussed:
I think literature [1] , which also introduces a variational Bayesian scheme for RNNs, is highly relevant and should be discussed.**
- We thank the reviewer for pointing out the methodology to us, which similarly models uncertainties in RNNs, and we will add it in Section 2. However, differently from our methodology, that approach is only suited to RNNs, while ours can be applied to various autoregressive or recurrent models. Furthermore, and most importantly, the model in [1] can hardly scale to large networks, mainly for two reasons: (i) weights are sampled directly, whereas we use local reparametrization, allowing scaling to large networks; (ii) the number of parameters doubles because they model the mean and standard deviation directly, while we use a variance proportional to the mean.
**Weaknesses:
Hard to position in context: The incorporation of Bayesian methods into RNNs has a long history, with key prior work (see "Essential References Not Discussed") that is not acknowledged or discussed.**
- See above in “Essential References Not Discussed” answer.
**Weaknesses: Combined with the presentation style, this gives the impression of overclaiming novelty, as if applying Bayesian inference to RNNs is itself a novel contribution.**
- We thank you for the comment, and we do not want to claim that applying Bayesian inference to RNNs is our novel contribution. However, to the best of our knowledge, we are the first approach that can transform any autoregressive or recurrent model into a Bayesian one with minimal modification, obtaining similar or superior performances to non-Bayesian counterparts, scale to large networks and at the same time provide UQ.
**Weaknesses: Ablation study of the prior: The tVAMP prior is a key contribution, yet the paper does not include an ablation study comparing it to a simpler isotropic Gaussian prior.**
- We thank the reviewer for the suggestion, which is shared with reviewer 4bFX. We conducted a test on a synthetic time-series dataset to show how tVAMP excels over a standard log-uniform prior and fixed dropout rates. A simple isotropic Gaussian would not lead to a KL that is independent of the likelihood model parameters, therefore losing the regularization property of VD. Please see the result table and explanation in the answer to reviewer 4bFX (not inserted here due to the word limit).
**Weaknesses: Motivation for using RNNs over stronger sequence models: While this may not be a direct weakness, it is unclear why the paper focuses on Bayesian RNNs when stronger sequence models (e.g., S4) exist.**
- We thank the reviewer for the comment, which also suggests a possible application of BARNN to SSMs. We focused on RNNs because the model in eqs. (2)-(3) is heavily recurrent. Of course, extending to SSMs and Transformer models would be interesting, but a specific encoder $E$ is needed, and further research needs to be conducted. In summary, with this paper we wanted to show a general methodology for Bayesian sequence modelling, hoping to inspire potential applications in future work.
**Eq. (8): should it be $E_{\phi}(\mathbf{y}_{0:t-1}^k)$?**
- Yes, we write it more compactly as $\alpha^l_t(\mathbf{y}_{0:t-1}^k)$; we will add a note in the paper. Thank you for spotting it!
**Intuition Behind Long-Term Dependency Claims**
- We thank you for the interesting question; we believe this is due to the time-adaptivity of the model weights. Indeed, both SMILES LSTM (with dropout) and the standard LSTM obtain similar performances, worse than BARNN, which, unlike them, has temporal adaptivity.
**Practical Setting of $N$ in Eq.~(8)**
- Correct! $N$ is set to the batch size. We will update the paper to be clearer. | Summary: This paper addresses the problem of uncertainty quantification with autoregressive and recurrent neural neworks. Variational Bayes method is applied to infer the posterior distribution of network parameters, and further techniques are developed using variational dropout methods. Applications are made to PDE solving and molecular generation.
Claims And Evidence: The motivation is clear and the use of variational inference for inferring autoregressive and recurrent neural network parameters is a reasonable choice.
Eq. (3) on the factorization form of the variational posterior appears to be a good choice, which also connects with the idea of modular Bayes. More to be discussed later.
For Eq.(6), why is the normal distribution assumed to have variance being the square of its mean?
Methods And Evaluation Criteria: Evaluations are reasonable, ranging over both metrics on predicitve accuracy and metrics on uncertainty quantification.
Theoretical Claims: The derivations of ELBO and related formulas look correct.
Experimental Designs Or Analyses: The experimental designs look nice and the choices of applications are real-world driven.
I do have a question regarding the confidence intervals presented in Figure 2. It is said in the paper that the presented CIs are 99.7%, which are still very narrowly concentrated near the mean prediction/posterior predictive mean. To me, it appears that the variational inference approach might have significantly underestimated the uncertainty.
Supplementary Material: I have reviewed the theoretical derivations.
Relation To Broader Scientific Literature: The design of the factorization format of variational posterior in Eq.(3) reminds of the line of work on modular Bayes inference, where we deliberately cut off certain dependencies between parameter blocks and parts of observations, in order to improve computational efficiency / reduce potential influence of model misspecification due to incorrect inclusion of certain dependencies.
Essential References Not Discussed: Here is an example reference on modular Bayes:
Bayarri, M. J., J. O. Berger, and F. Liu. "Modularization in Bayesian analysis, with emphasis on analysis of computer models." Bayesian Anal. 4(1): 119-150
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: N/A.
Questions For Authors: See Claims And Evidence & Experimental Designs Or Analyses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in making constructive comments and providing us with additional references. We are glad that the reviewer finds our experiments *nice* and acknowledges that *the choices of applications are real-world driven*. We answer the reviewer's questions below, and we also ask whether there are other concerns about the paper that did not lead to a higher score in the initial review round; in that case, we will be glad to discuss.
**For Eq.(6), why is the normal distribution assumed to have variance being the square of its mean?**
- Thank you for pointing out this crucial step in our methodology. We choose to model the variance as the square of the mean, as also done in vanilla Variational Dropout, for efficiency purposes. This parametrization avoids extra model parameters, permitting scaling to very large neural networks. It also allows us not to change the original (non-Bayesian) model and to add the BARNN methodology on top of it, which, as 4bFX highlighted, is "easy to implement".
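To make this concrete, a minimal sketch (our own illustration for this response, not the paper's code) of a linear layer with multiplicative Gaussian dropout $w \sim \mathcal{N}(\theta, \alpha\theta^2)$, sampled in activation space via the local-reparametrization trick:

```python
import numpy as np

def vd_linear(x, theta, log_alpha, rng):
    """Linear layer with weights w_ij ~ N(theta_ij, alpha * theta_ij^2).
    Local reparametrization: sample the pre-activations directly, so the
    only extra variational parameter is the (scalar) dropout rate alpha."""
    alpha = np.exp(log_alpha)
    mu = x @ theta                          # pre-activation mean
    var = (x ** 2) @ (alpha * theta ** 2)   # pre-activation variance
    return mu + np.sqrt(var + 1e-12) * rng.normal(size=mu.shape)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
theta = rng.normal(size=(8, 3))
# As alpha -> 0 the layer recovers the deterministic model x @ theta.
y_det = vd_linear(x, theta, log_alpha=-20.0, rng=rng)
```

Because the variance is tied to the squared mean, the noisy layer needs no parameters beyond those of the original deterministic network, which is what enables scaling to large models.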
**I do have a question regarding the confidence intervals presented in Figure 2. It is said in the paper that the presented CIs are 99.7%, which are still very narrowly concentrating near the mean prediction/posterior predictive mean. To me, it appears that the variational inference approach might have significantly underestimated the uncertainty.**
- We thank the reviewer for the comment on the CIs. We agree with the reviewer that in the picture, the confidence interval seems concentrated near the mean prediction/posterior predictive mean. However, this is mainly a plotting issue because we used a very thick line width to have three nice displayed figures side by side. To convince you, please look at the ECE and NLL in Table 1. Underestimating the variance (as in the case of Refiner, for example) would lead to a very high ECE and exploding NLL (which is not our case).
**The design of the factorization format of variational posterior in Eq.(3) reminds of the line of work on modular Bayes inference, where we deliberately cut off certain dependencies between parameter blocks and parts of observations, in order to improve computational efficiency / reduce potential influence of model misspecification due to incorrect inclusion of certain dependencies.**
- We thank the reviewer for pointing out this reference, which we believe is relevant. Indeed, we cut off non-causal dependencies in eq (3) (i.e. the current weights are only dependent on the past). We will look more in depth into modular bayes inference and add a connection to eq. (3) in the final manuscript. If there are other concerns or suggestion, we are happy to discuss. | Summary: This article proposes a new Bayesian recurrent neural network, mainly by introducing variational dropout into recurrent neural networks. Experiments can prove the model's ability to quantify uncertainty.
Claims And Evidence: This paper builds a variational Bayesian autoregressive and recurrent neural network. BARNNs aim to provide a principled way to turn any autoregressive or recurrent model into its Bayesian version. BARNN is based on the variational dropout method, allowing us to apply it to large recurrent neural networks as well. I think this is correct.
Methods And Evaluation Criteria: Quantifying uncertainty in autoregressive models is a crucial and key issue.
Theoretical Claims: The posterior inference technique, based on variational autoencoders, is, I think, correct.
Experimental Designs Or Analyses: The experiments can verify the method is effective and are taken on different tasks.
Supplementary Material: The derivation of the model is basically correct
Relation To Broader Scientific Literature: Some work attempts to probabilistically model attention in order to obtain good uncertainty estimation ability.
[1] Bayesian Attention Modules; Xinjie Fan, Shujian Zhang, Bo Chen, and Mingyuan Zhou; NeurIPS 2020: Advances in Neural Information Processing Systems, Dec. 2020.
[2] Deng, Yuntian, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. "Latent alignment and variational attention." Advances in neural information processing systems 31 (2018).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
1. Uncertainty modeling is a key problem in neural networks, and I think the paper as a whole is workable, and the results look good.
2. The experiments are sufficient to demonstrate the validity of the model.
3. The paper is well written.
Weaknesses
1. The proposed method can be seen as an application of variational dropout to an autoregressive model but does not provide any new insight.
2. Autoregressive models based on attention mechanisms are not discussed.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Is there any difference between probabilistic modeling of autoregressive models and normal neural network models, like convolutional neural networks?
2. Why not use Transformer, a more general and powerful autoagressive neural network?
3. Why not experiment on text to better illustrate the effectiveness of the model on large data sets?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and comments, and we appreciate that the reviewer finds the paper *well written* and the experiments *sufficient to demonstrate the validity of the model*. We address the reviewer's comments point by point below in the hope of clarifying some misunderstandings, improving the paper, and eventually raising their score:
**Some work attempts to probabilistic model attention in order to obtain good uncertainty estimation ability**
- We thank the reviewer for the references; we find Bayesian Attention Modules very relevant and will cite them in the related works. A key difference between Bayesian Attention Modules and our methodology is that our method explicitly models uncertainties in time using a joint state-weight model and applies to any autoregressive or recurrent model, while Bayesian Attention Modules are a technique specifically tailored to Transformer models, and their Bayesian weights are static.
**Weakness**
**The proposed method can be seen as an application of variational dropout to an autoregressive model but does not provide any new insight.**
- We thank the reviewer for their time in reading the paper and finding parallelisms with Variational Dropout (VD). We understand that our method shares similarities with VD, from which we inherit the scalability to large networks. However, in variational dropout methods (i) the weights do not evolve in time, and (ii) the prior is empirical, while we provide a time-dependent new prior, which is the best for the objective in eq. (4). As reviewer 4bFX highlighted, *the derived prior (tVAMP) and time-dependent dropout mechanics feel both novel and easy to implement*, and the reviewer WqtE believes that our methodology *is of pragmatic interest due to its broad applicability in uncertainty-aware sequence modeling*. Finally, we also found that static weights lead to less accurate uncertainty metrics and adaptivity. For example, see Table 1, where ARD Dropout or Dropout shows lower accuracy in terms of NLL and ECE compared to our methodology.
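To make the contrast with static-rate variational dropout concrete, here is a minimal sketch of multiplicative Gaussian dropout noise whose rate can vary with the timestep. The function name and NumPy form are our own illustration of the mechanic being discussed, not the paper's implementation:

```python
import numpy as np

def gaussian_dropout(h, alpha_t, rng):
    """Multiplicative Gaussian dropout noise on activations h.

    alpha_t is the (here, time-dependent) noise variance. With alpha_t = 0
    the layer reverts to a deterministic (MAP-like) forward pass; holding
    alpha_t fixed over time recovers standard variational dropout.
    """
    eps = rng.normal(loc=1.0, scale=np.sqrt(alpha_t), size=h.shape)
    return h * eps

rng = np.random.default_rng(0)
h = np.ones((4, 8))
out_static = gaussian_dropout(h, 0.0, rng)  # deterministic: equals h
out_noisy = gaussian_dropout(h, 0.5, rng)   # stochastic forward pass
```

In the time-dependent variant discussed above, `alpha_t` would be produced per step by a (shared-weight) posterior model rather than fixed by hand.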
**Autoregressive models based on attention mechanisms are not discussed.**
- We thank the reviewer for pointing out attention-based models, which we agree we did not discuss exhaustively. We tried to cover a wide variety of references on autoregressive models, not focusing only on a specific model. Nevertheless, we did mention Vaswani, A. et al., Attention Is All You Need, as the Transformer reference, and Radford, A., et al., Language Models are Unsupervised Multitask Learners, for modern decoder-only language models. If the reviewer thinks we missed other relevant papers whose inclusion would improve the manuscript, we will be happy to add them if pointed to.
**Questions For Authors:**
**Is there any difference between probabilistic modeling of autoregressive models and normal neural network models, like convolutional neural networks?**
- We thank the reviewer for the very interesting question. We did find that standard techniques used to model uncertainties in non-autoregressive models tend to obtain less accurate uncertainty metrics when applied to autoregressive solvers (see, for instance, Table 1, where Ensemble Dropout does not obtain performance as good as our methodology). This is mainly due to error accumulation and long-term dependencies, which non-autoregressive models do not need to deal with.
**Why not use a Transformer, a more general and powerful autoregressive neural network?**
- We thank the reviewer for the question, which we are glad to discuss. In our experiments, we focused on the AI4Science field, since we believe uncertainty quantification will be key there. Within AI4Science, we chose PDEs and molecule generation as tasks since they are very different and use different models. For PDEs, autoregressive models are mostly based on multiple-step predictions using a Neural Operator. For molecule generation, RNNs were applied first, and only more recently have Transformers been used; it is not clear that Transformers are more powerful in this specific task. In summary, with the paper we wanted to show a general methodology for Bayesian sequence modelling, hoping to inspire potential applications in future work, given the simplicity of implementation and the generality of the methodology.
**Why not experiment on text to better illustrate the effectiveness of the model on large data sets?**
- We agree with the reviewer that the NLP task is an interesting avenue of research. Even though not an NLP task, we did employ text for the molecule generation task, where the SMILES syntax is learned by the model to generate new molecules. For a more specific NLP task (e.g. language translation), a complication for us was how to define uncertainty properly, which seems to be ongoing open research. Indeed, we chose our experiments tailored to support each claim in the paper, as reviewer WqtE also highlighted (*all the claims are supported by clear and convincing evidence*).
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply.
What is unique about your autoregressive models for time-series uncertainty modeling, compared to works such as:
[1] Desai, A., Freeman, C., Wang, Z., and Beaver, I. TimeVAE: A variational auto-encoder for multivariate time series generation. arXiv preprint arXiv:2111.08095, 2021.
[2] Kollovieh, M., Ansari, A. F., Bohlke-Schneider, M., Zschiegner, J., Wang, H., and Wang, Y. Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting. In NeurIPS, 2023.
[3] Probabilistic Transformer for Time Series Analysis. NeurIPS 2021.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading our rebuttal and providing the interesting question. Regarding the question posed, ours is, to the best of our knowledge, the first approach that can transform any autoregressive or recurrent model into a Bayesian one with minimal modification, obtaining similar or superior performance to non-Bayesian counterparts, scaling to large networks, and at the same time providing UQ.
For time series (and sequential data in general), this means that we can take any (non-Bayesian) model (such as the suggested references) and seamlessly integrate our Bayesian approach on top of it, without performance loss (actually improving performance in our experiments), while providing calibrated uncertainties, which is key for real-life applications.
We also observed that the joint Bayesian model is capable of capturing longer-time dependencies compared to the non-Bayesian counterpart, due to the time-adaptive Bayesian weights, as evidenced in the molecular experiment section.
Claims And Evidence: 1. BARNN can be applied to “any” autoregressive or recurrent model with minimal architectural changes. The paper shows that only a small set of modifications is needed: introducing time-dependent “dropout coefficients” and a new prior derived from the aggregated posterior. They illustrate this with PDE models and an LSTM-based molecule generator, both using standard training objectives (MSE for PDE forecasting, cross-entropy for next-token prediction).
2. BARNN provides better-calibrated and sharper uncertainties than existing methods. In experiments with PDE solvers, BARNN outperforms Monte Carlo Dropout, Input Perturbation, and other baselines on negative log-likelihood and expected calibration error.
3. BARNN enhances long-range dependency modeling. In the molecule-generation experiments, the authors highlight ring-closure errors as a challenging long-range dependency in SMILES. BARNN yields fewer ring-closure mistakes compared to baseline LSTMs, suggesting improved capacity to track tokens that appear far apart in the sequence.
4. BARNN-derived ensembles require fewer samples for stable statistical estimates and can revert to a single forward pass via MAP if uncertainty is not needed. The paper demonstrates that with about 30 ensemble members, the PDE solver’s predicted mean and variance converge. If uncertainty is not required, using the MAP estimate makes it as fast as a single forward pass, matching deterministic baselines on RMSE.
Overall, the claims are generally supported by well-chosen experiments that compare BARNN with multiple baselines. However, some details (e.g., how BARNN might scale to extremely large language models) are not deeply addressed, and real-world performance in large-scale domains is left for future work.
Methods And Evaluation Criteria: The evaluation criteria appear sound for the chosen tasks. For additional confidence, the paper checks multiple seeds, compares ensemble sizes, and tests different PDEs. Overall, the methodology and metrics are consistent with accepted best practices in both scientific PDE modeling and generative modeling in chemistry.
Theoretical Claims: 1. The key theoretical contribution is a novel variational lower bound that explicitly models time-varying network weights and states, together with a new prior inspired by VAMP.
2. The authors provide a derivation that connects BARNN’s objective to the standard VAE framework.
3. The correctness of the proofs hinges on: factorization of the generative model over states and weights, the application of the local reparameterization trick for weight sampling, the derivation of the “best prior” in the sense of aggregated variational posteriors.
From a high-level view, the presented proofs appear logically consistent. However, a more detailed or step-by-step check of the proofs would help confirm there are no hidden assumptions (e.g., independence assumptions) that might be restrictive.
Experimental Designs Or Analyses: - The PDE examples use well-known benchmark equations (Burgers, Kuramoto–Sivashinsky, Korteweg–de Vries) and evaluate multi-step rollouts (320 steps), which is a reasonable stress test.
- For molecule generation, a large ChEMBL-based dataset is used, covering real-world chemical diversity. Standard metrics in de novo drug design—validity, novelty, uniqueness, distribution matching—are computed.
- The comparison includes widely used baseline methods for uncertainty quantification (Dropout, Input Perturbation, etc.) and for molecule generation (standard RNNs).
- One limitation is that the PDE results rely on single-dimensional or low-dimensional PDE domains (1D PDE experiments). Similarly, the molecule generation uses a single tokenization approach (SMILES). Nonetheless, these design choices are typical entry points in the literature.
Supplementary Material: I have skimmed it, no issues stand out. The experimental setups in the Appendix (architectures, hyperparameters) are standard.
Relation To Broader Scientific Literature: This paper broadly builds on two themes: (1) Bayesian methods for deep networks (Variational Dropout, VAMP priors, etc.) and (2) autoregressive structures in PDEs, language modeling, or molecule generation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The approach is elegant and general, allowing flexible application across domains.
- Empirical performance is strong, particularly regarding calibration and the ability to generate valid molecules.
- The derived prior (tVAMP) and time-dependent dropout mechanics feel both novel and easy to implement.
Weaknesses:
- The method’s computational overhead for extremely large models (like multi-billion-parameter LLMs) is not fully explored.
- Real-world PDEs often involve high-dimensional grids (2D, 3D); the current experiments may not fully reflect potential complexities in multi-dimensional domains.
- More thorough ablations could be done on alternative priors besides the new “tVAMP prior.”
Other Comments Or Suggestions: It could be helpful to include an additional ablation that compares time-dependent dropout with a simpler “static” dropout factor, clarifying precisely how much performance gain is due to the time dimension vs. simply having an advanced prior.
Questions For Authors: Why did you pick a tVAMP prior specifically, instead of simpler approximations (e.g., a fixed log-uniform prior for dropout)? Did you try them, and how did they perform?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are delighted that the reviewer finds our approach *elegant and general, allowing flexible application across domains* and the tVAMP prior and time-dependent dropout mechanics *novel and easy to implement*. We appreciate that the reviewer considers that our *claims are generally supported by well-chosen experiments*. Finally, we thank the reviewer for the detailed comments and suggestions, which we address point by point below:
**From a high-level view, the presented proofs appear logically consistent. However, a more detailed or step-by-step check of the proofs would help confirm there are no hidden assumptions that might be restrictive.**
- We think the reviewer understood correctly the core of the proofs. For the temporal variational lower bound proof (Appendix A.1), given the specifics of the generative model distribution and the posterior distribution, no assumptions (e.g. independence or Markovian-memoryless properties) are made for deriving the lower bound. For the temporal VAMP prior proof (Appendix A.2), the approximation comes in eq. (19) while the general formulation of eq. (18) does not make any assumption.
**Weaknesses:**
**The method’s computational overhead for extremely large models (like multi-billion-parameter LLMs) is not fully explored**
- We acknowledge that, due to computational resources, we did not test the methodology on large-scale models. However, we want to highlight that our methodology can scale to large models due to the decoupling between posterior and main-model weights. Indeed, the temporal dropout coefficients are obtained from a posterior model with shared weights, which contains orders of magnitude fewer parameters than the main network, whose parameter count is left untouched.
**Weaknesses:**
**Real-world PDEs often involve high-dimensional grids (2D, 3D); the current experiments may not fully reflect potential complexities in multi-dimensional domains.**
- We thank the reviewer for the suggestion, which we also find interesting to investigate. We believe our results will also scale to 2D and 3D experiments, as we decouple the dropout rates from the main model weights. In our paper, we did not run 2D and 3D experiments due to computational resource limitations (GPU memory was limited).
**Weaknesses:**
**More thorough ablations could be done on alternative priors besides the new “tVAMP prior**
- We thank the reviewer for this suggestion, which is shared with reviewer WqtE. To show how tVAMP excels over standard priors and fixed dropout rates, we conducted a test on a synthetic time-series dataset. This test was handcrafted to show (i) how the prior affects the network's modelling of varying amplitudes and frequencies in the states, and (ii) the effect of the time adaptivity of the prior. We also add a simpler static-dropout model for reference, in the hope of answering the question raised in Other Comments or Suggestions. The results report MSE, NLL, and ECE statistics. The *Static* method reports the MSE obtained if the initial state is not propagated, while *MLP* is the base (non-Bayesian) architecture (2 layers of width 64 with ReLU activations). BARNN uses the same MLP architecture and is ablated over different priors.
Dataset:
\begin{align}
\begin{cases}
x_t &= x_{t-1} + \frac{3\pi}{100}, \quad x_0=0\\\\
y_t &=\frac{1}{5}\sum_{j=1}^5\sin(\alpha_j x_t +\beta_j), \quad \alpha_j\sim U[0.5, 1.5],\, \beta_j\sim U[0, 3\pi],\, t\in\{1, 2, \dots, 100\}
\end{cases}
\end{align}
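A minimal NumPy sketch of how this synthetic dataset could be generated (our own reading of the equations above, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_series(n_components=5, n_steps=100):
    # alpha_j ~ U[0.5, 1.5], beta_j ~ U[0, 3*pi]
    alpha = rng.uniform(0.5, 1.5, size=n_components)
    beta = rng.uniform(0.0, 3 * np.pi, size=n_components)
    # x_t = x_{t-1} + 3*pi/100 with x_0 = 0, hence x_t = t * 3*pi/100
    x = (3 * np.pi / 100) * np.arange(1, n_steps + 1)
    # y_t is the mean of n_components randomly shifted/scaled sinusoids
    y = np.sin(np.outer(x, alpha) + beta).mean(axis=1)
    return x, y

x, y = make_series()
```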
Table:
| **Method** | **Prior** | **MSE** (↓) | **NLL** (↓) | **ECE** (↓) |
|-------------------|----------------|---------------------|-----------------------|-----------------------|
| Static | - | 0.490 ± 0.000 | - | - |
| MLP | - | 0.081 ± 0.011 | - | - |
| Dropout (p=0.5) | - | 0.072 ± 0.004 | 0.593 ± 0.461 | 0.084 ± 0.010 |
| Dropout (p=0.2) | - | 0.048 ± 0.004 | -0.075 ± 0.004 | 0.068 ± 0.009 |
| BARNN | log-uniform | 0.045 ± 0.003 | -0.092 ± 0.064 | 0.050 ± 0.016 |
| BARNN | tVAMP | **0.043 ± 0.001** | **-0.166 ± 0.019** | **0.049 ± 0.008** |
**Why did you pick a tVAMP prior specifically, instead of simpler approximations?....**
- We thank the reviewer for the interesting question. We picked the tVAMP prior because, during preliminary experiments, we found that using a standard log-uniform prior (commonly used for VD) did not give us good uncertainties; we attribute this to the prior pushing toward heavily sparsified network weights. This is also confirmed by the time-series experiment results and the PDE results (Table 1), where the log-uniform and ARD priors obtain suboptimal results.
Unpaired Point Cloud Completion via Unbalanced Optimal Transport | Accept (poster) | Summary: This paper proposes UOT-UPC, a novel approach to unpaired point cloud completion using the Unbalanced Optimal Transport framework. The model formulates the completion task as an optimal transport problem and trains a neural network-based Neural OT map to learn the transport mapping from incomplete to complete point clouds. UOT-UPC is designed to handle class imbalance, a common issue in real-world unpaired completion tasks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The proposed methods are well-aligned with the problem of unpaired point cloud completion. The paper formulates unpaired point cloud completion as an Optimal Transport (OT) problem, which is theoretically sound for aligning distributions between incomplete and complete point clouds. The introduction of Unbalanced Optimal Transport (UOT) is particularly meaningful, as real-world point cloud datasets often exhibit class imbalance, making traditional OT less effective.
Theoretical Claims: The paper presents theoretical claims related to the OT and UOT formulation for unpaired point cloud completion. The proofs supporting these theoretical claims are mathematically sound. The paper conducts a comprehensive analysis of various cost functions and concludes that InfoCD is the most suitable for completion tasks. This conclusion is primarily supported by experimental validation rather than additional mathematical proofs. The paper acknowledges that the UOT-UPC training process may exhibit instability, resembling the mode collapse phenomenon observed in GAN training. However, no theoretical analysis is provided to explain why UOT-UPC training might be unstable.
Experimental Designs Or Analyses: The proposed method was evaluated on the USSPA and PCN datasets, where it demonstrated superior performance compared to previous state-of-the-art methods. Extensive ablation studies were conducted to thoroughly analyze the impact of different cost functions, further validating the effectiveness of the proposed approach.
Supplementary Material: Implementation details (architecture, training settings) and additional experiments on cost function ablation, class imbalance, and dataset variations.
Relation To Broader Scientific Literature: Prior unpaired completion methods (Cycle4 (Wen et al., 2021), USSPA (Ma et al., 2023)) use heuristic-driven adversarial training. The paper reformulates unpaired point cloud completion as an UOT problem which bridges OT theory and unpaired point cloud completion, addressing limitations in existing heuristic-driven methods.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths:**
- First work using UOT for unpaired point cloud completion.
- Strong experimental validation across multiple datasets.
- Addresses class imbalance problem, a major issue in real-world data.
**Weaknesses:**
- Training instability due to adversarial training aspects of OT.
Other Comments Or Suggestions: The results of the proposed approach should be added to Figure 11.
Questions For Authors: Why is InfoCD the most suitable for completion tasks? Please provide a theoretical explanation. MSNet and SpareNet previously used EMD to calculate the distance between point sets. Would EMD based on optimal transport be better than InfoCD?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for carefully reading our manuscript and providing valuable feedback. Moreover, we appreciate the reviewer for considering our work addresses "limitations in existing heuristic-driven methods" by "bridging OT theory and unpaired point cloud completion". We hope our responses to be helpful in addressing the reviewer's concerns.
$ $
---
> **Q.[Questions For Authors]** Why is InfoCD the most suitable for completion tasks? Please provide a theoretical explanation. MSNet and SpareNet previously used EMD to calculate the distance between point sets. Would EMD based on optimal transport be better than InfoCD?
**A.** Thank you for providing the insightful comments. Following the reviewer's suggestion, **we conducted an additional ablation study using the EMD as the cost function for our UOT-UPC model**. The results are as follows (Table 7):
- Comparison of EMD cost function with other cost functions on the USSPA benchmark, assessed by L1 Chamfer Distance $cd^{l 1} \times 10^2$ ($\downarrow$).
|Cost function| Multi-category | trash bin | TV |
|:---|:---|:---|:---|
|$l_{2}$| 24.16 | 45.57 | 23.71|
|$cd^{l2}$| 10.12 | 10.40 | 6.47|
|$cd^{l2}_{fwd}$| 13.58 | 10.16 | 7.39|
|EMD| 9.66 | 10.46 | 6.41|
|Ours(InfoCD)| **8.96** | **8.83** | **6.07** |
Our results show that **InfoCD consistently outperforms EMD** across all evaluated categories. Additionally, **training with EMD incurs significantly higher computational costs** due to its iterative computation procedure, as shown below:
- Train time comparison on 'lamp' category from the USSPA [1].
|Train (480 Epoch) | Time (sec) |
|:---|:---|
|EMD | 3781.91|
|Ours(InfoCD) | 1320.80 |
InfoCD is designed to prevent multiple points from being matched to a single point, **encouraging a more evenly distributed alignment** between point sets, compared to EMD (Fig 4 in [1]). This advantage arises from the **contrastive nature of the InfoCD** cost function [1], which introduces an additional repulsion effect between negative pairs.
This property aligns well with the goals of point cloud completion, where generating globally coherent and evenly spaced completions is critical. Empirically, this is supported by the higher-fidelity completions of the UOT-UPC model using InfoCD in Fig 2 and 10, where the **completions exhibit a more globally even point distribution**.
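For reference, the $cd^{l1}$ metric reported in the tables above is the symmetric L1 Chamfer distance between point sets. A minimal sketch under one common convention (InfoCD itself adds a contrastive term on top of Chamfer-style matching, which we do not reproduce here):

```python
import numpy as np

def chamfer_l1(P, Q):
    """Symmetric L1 Chamfer distance between point sets P (n, 3) and Q (m, 3).

    Averages the nearest-neighbour Euclidean distance in both directions.
    """
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (n, m) pairwise
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
P = rng.normal(size=(64, 3))
```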
$ $
Reference
- [1] Lin, Fangzhou, et al. "InfoCD: a contrastive chamfer distance loss for point cloud completion." NeurIPS 2023.
$ $
---
> **Q. [Theoretical Claims]** ... The paper acknowledges that the UOT-UPC training process may exhibit instability, resembling the mode collapse phenomenon observed in GAN training. However, no theoretical analysis is provided to explain why UOT-UPC training might be unstable.
**A.** We appreciate the reviewer for providing constructive comments. As the reviewer said, we believe the observed training instability is due to the inherent difficulty of finding a Nash equilibrium in the min-max optimization, similar to challenges encountered in GAN training. In this regard, **several works investigated GAN instability both theoretically [1] and empirically [2]**. We agree that extending this analysis to the UOT-UPC setting would be a valuable future research direction.
Reference
- [1] Mescheder, Lars, Andreas Geiger, and Sebastian Nowozin. "Which training methods for GANs do actually converge?." ICML 2018
- [2] Salimans, Tim, et al. "Improved techniques for training gans." NeurIPS 2016.
$ $
---
> **Q.[Other Comments Or Suggestions]** The results of the proposed approach should be added to Figure 11.
**A.** We agree with the reviewer that including UOT-UPC results in Fig 11, i.e., providing a completion result on the exact same incomplete point cloud, would allow a clearer qualitative comparison. However, we were unable to find the random seed necessary for reproducibility in the ACL-SPC model. As an alternative, **we chose to present multiple completion results on the KITTI dataset in Fig 10**. Across multiple incomplete point clouds, our model consistently produces higher-fidelity completions, exhibiting better global structure and more evenly distributed points.
---
Rebuttal Comment 1.1:
Comment: I appreciate the thorough reply from the authors. The majority of my questions have been clarified. In particular, the comparison between the InfoCD and EMD losses was especially helpful. | Summary: This paper studies the problem of reconstructing 3d objects from partial observations, an important problem in real-world graphics applications. While some approaches to this problem consider settings where a large dataset of partial and full observations of the same objects are available, this work focuses on the likely more practical settings where one merely has a dataset of partial observations and another dataset of full observations but the observations don't correspond to the same objects. Their method is based on solving an optimal transport problem with neural networks, with the important modification of allowing for un-balancedness, which improves the robustness of their method to imbalances in the class distribution of the data.
Claims And Evidence: Their main claims are that their method is roughly state-of-the-art for unpaired data, and at least competitive for paired data, and also that it is robust to imbalances in the class distribution of the data. These claims are well-supported in my view.
I also appreciated their careful exploration of the relative merits of cost functions as well as the performance of other methods under major class imbalances. Overall, I feel that the paper is very careful and empirical solid.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: It is a practical paper so there aren't significant theoretical claims.
Experimental Designs Or Analyses: The experimental designs seem legitimate but I have not made a careful study of them.
Supplementary Material: Not in detail.
Relation To Broader Scientific Literature: This paper pushes forward the state of the art of point cloud completion by considering the un-paired problem and designing a strong method based on optimal transport which is also robust to class imbalances.
Essential References Not Discussed: I am not aware of any un-discusses essential references.
Other Strengths And Weaknesses: None additional.
Other Comments Or Suggestions: - You define the "unbalanced optimal transport map" on line 129, but does this necessarily exist?
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for carefully reading our manuscript and providing valuable feedback. We are especially grateful for the reviewer’s recognition of our contribution, noting that “this paper pushes forward the state of the art of point cloud completion by considering the unpaired problem and designing a strong method based on optimal transport, which is also robust to class imbalances.” Moreover, we appreciate the reviewer's positive assessment of our cost function analysis and empirical evaluations under class imbalance.
$ $
---
> **Q. [Other Comments Or Suggestions]** You define the "unbalanced optimal transport map" on line 129, but does this necessarily exist?
**A.** Thank you for the great question. In this work, we assume that the incomplete and complete point cloud distributions are absolutely continuous with respect to the Lebesgue measure (Lines 76-77). This is a mild assumption, as it is satisfied whenever the distributions admit probability density functions (pdfs). Under this assumption, **the existence of the unbalanced optimal transport map is guaranteed**, as stated in Thm 3.3 in [1].
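For context, the entropy-transport formulation of [1] relaxes the hard marginal constraints of balanced OT with divergence penalties. Schematically, in a standard form (our notation; $\pi_x, \pi_y$ denote the marginals of the coupling $\pi$):

```latex
\min_{\pi \geq 0} \int c(x, y)\, d\pi(x, y)
  + D_{\Psi_1}\big(\pi_x \,\|\, \pi_0\big)
  + D_{\Psi_2}\big(\pi_y \,\|\, \pi_1\big)
```

Existence of an optimal transport map for this relaxed problem is what Thm 3.3 of [1] guarantees under the absolute-continuity assumption mentioned above.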
Reference
- [1] Liero, Matthias, Alexander Mielke, and Giuseppe Savaré. "Optimal entropy-transport problems and a new Hellinger–Kantorovich distance between positive measures." Inventiones mathematicae 211.3 (2018): 969-1117.
---
Rebuttal Comment 1.1:
Comment: Ok, could you add this reference to the main text? Thanks!
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our work! We will revise **Line 129** to clarify the existence of the unbalanced optimal transport map and incorporate the suggested reference as follows:
> We refer to the optimal transport map $T^{\star}$ from $\pi_{0}$ to $\pi_{1}$ as the unbalanced optimal transport map (UOT Map). Note that, under our assumption that the source and target distributions are absolutely continuous, the existence of this UOT Map is guaranteed ([1], Thm 3.3). | Summary: This paper introduces Unbalanced Optimal Transport Map for Unpaired Point Cloud Completion (UOT-UPC), which is a novel point cloud completion approach that uses unpaired point clouds during training. Unlike previous approaches that have formulated the point cloud completion task as an optimal transport problem, the authors propose to instead formulate it as an unbalanced optimal transport problem to loosen the exact matching constraint in OT, which helps with training on unbalanced datasets. They train a Neural OT to learn a UOT map that transports incomplete point cloud to complete point cloud using InfoCD as the cost function. The authors evaluate their approach on different, real, synthetic, and hybrid datasets and show impressive performance compared to other models.
Claims And Evidence: The claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense of this problem.
Theoretical Claims: I have not checked the correctness of any proofs for theoretical claims.
Experimental Designs Or Analyses: The experimental designs are sound. The datasets used consist of real and synthetic scans of objects, as well as real world scans which cover a variety of different types of data.
Supplementary Material: I have looked at the cost function evaluation and additional results sections
Relation To Broader Scientific Literature: There has been a growing research focus on training models with unlabeled or minimally labeled data, particularly in 2D feature representation and unsupervised learning. This paper relates to this as it uses unpaired point cloud data to train a model for point cloud completion, which is unsupervised in nature. Another problem that is investigated is unbalanced datasets, which the paper shows UOT-UPC performs well with. I think that the most important contribution is that this paper is the first to bring UOT to unsupervised point cloud completion, where the problem has been typically formulated as a regular OT problem.
Essential References Not Discussed: There are not necessarily related works that are essential to understanding the key contributions which are missing.
Other Strengths And Weaknesses: This paper introduces an original and well-motivated approach to unpaired point cloud completion by using the unbalanced optimal transport formulation. The authors provide a comprehensive evaluation of their approach through their experiments and ablations, which show impressive performance and robustness on unbalanced datasets. The paper itself is also well written overall, with an algorithm figure that effectively describes their approach. However, the paper's clarity could be improved by explicitly defining important benchmarks before using them in the text. For example, Section 3.1 first compares UOT-UPC with USSPA without first introducing USSPA.
Other Comments Or Suggestions: The name of the approach is misspelled in the bolded section of the third paragraph in the introduction. Also, the beginning of Section 3.1 defined the set $X$ as $X = \{x_i | x_i \in X, i = 1, \cdots , N\}$ for incomplete point clouds and $Y = \{y_j | y_j \in Y, j = 1, \cdots , M\}$ for complete point clouds. However, the Cost Function Comparison uses $n$ for complete and $m$ for incomplete. I believe it would be a bit clearer to change the variables used for the Task formulation section.
Questions For Authors: No questions so far
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for carefully reading our manuscript and providing valuable feedback. Moreover, we appreciate the reviewer for considering "the most important contribution is that this paper is the first to bring UOT to unsupervised point cloud completion". We hope our responses to be helpful in addressing the reviewer's concerns.
---
> **Q. [Other Strengths And Weaknesses]** The paper's clarity could be improved by explicitly defining important benchmarks before using them in the text. For example, Section 3.1 first compares UOT-UPC with USSPA without first introducing USSPA.
**A.** We appreciate the reviewer for providing valuable suggestions. Following the reviewer's advice, **we moved the Related Works section (Sec 4) before the Method (Sec 3)**. This reordering allows us to introduce USSPA before presenting experimental results in Section 3.1.
---
> **Q. [Other Comments Or Suggestions]** The name of the approach is misspelled in the bolded section of the third paragraph in the introduction. Also, the beginning of Section 3.1 defined the set $X$ as: $$X = \{x_i \mid x_i \in X, i = 1, \dots, N\}$$ for incomplete point clouds and the set $Y$ as: $$Y = \{y_j \mid y_j \in Y, j = 1, \dots, M\}$$ for complete point clouds. However, the cost function comparison uses $n$ for complete and $m$ for incomplete point clouds. I believe it would be a bit clearer to change the variables used for the Task formulation section.
**A.** We appreciate the reviewer's careful comment. (1) We corrected the typo in the bolded section of the third paragraph in the introduction. (2) Following the reviewer's advice, we revised the variable notation in the cost function definitions to ensure consistency. Specifically, we now consistently use $n$ for the incomplete point clouds $x$ and $m$ for the complete point clouds $y$ throughout the paper.
Complex Wavelet Mutual Information Loss: A Multi-Scale Loss Function for Semantic Segmentation | Accept (poster) | Summary: This work presents a loss function for semantic segmentation based on the steerable pyramid decomposition of images, *i.e.*, wavelet transform. The insight is that the steerable pyramid preserves structural similarity while capturing multi-scale features across multiple orientations. The proposed CWMI consists of several mutual information terms that are well-suited for directional features. CWMI is validated on U-Net and AttenUNet using four benchmarks. Qualitative visualizations suggest that CWMI helps reduce false positive and false negative predictions.
**Update after rebuttal**: I am going to maintain my current rating at 2-WEAK REJECT due to insufficient experimental studies.
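For intuition, the pipeline summarized above (multi-scale, multi-orientation complex decomposition followed by per-subband mutual information) can be sketched roughly as follows. This is a toy illustration under our own assumptions, not the authors' implementation: a small Gabor filter bank stands in for the complex steerable pyramid, and a Gaussian correlation surrogate stands in for the paper's MI estimator; all function names are ours.

```python
import numpy as np

def gabor_bank(size=32, scales=(2, 4), n_orients=4):
    """A small bank of complex Gabor filters -- a simplified stand-in for
    the complex steerable pyramid used by CWMI."""
    ys, xs = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    bank = []
    for s in scales:
        for k in range(n_orients):
            theta = np.pi * k / n_orients
            xr = xs * np.cos(theta) + ys * np.sin(theta)
            envelope = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * s ** 2))
            bank.append(envelope * np.exp(1j * 2 * np.pi * xr / (4.0 * s)))
    return bank

def _fft_filter(img, filt):
    """Circular convolution of a real image with a complex filter via FFT."""
    padded = np.zeros(img.shape, dtype=complex)
    padded[:filt.shape[0], :filt.shape[1]] = filt
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded))

def subband_mi(a, b, eps=1e-6):
    """Gaussian-surrogate mutual information between two complex subbands:
    MI = -0.5 * log(1 - |rho|^2), with rho the complex correlation."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    rho = np.vdot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return -0.5 * np.log(max(1.0 - abs(rho) ** 2, eps))

def cwmi_like_loss(pred, target):
    """Negative sum of per-subband MI: identical images give a strongly
    negative loss, unrelated images a loss near zero."""
    total = 0.0
    for filt in gabor_bank():
        total += subband_mi(_fft_filter(pred, filt), _fft_filter(target, filt))
    return -total
```

A matched prediction should thus score lower (better) than an unrelated one, mirroring how the true CWMI loss rewards structural agreement across scales and orientations.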
Claims And Evidence: Reasonable claims and sufficient analysis.
Methods And Evaluation Criteria: The proposed CWMI and evaluation make sense to the studied problem.
Theoretical Claims: The equations are either based on the strong foundation of steerable pyramid decomposition or are straightforward.
Experimental Designs Or Analyses: The proposed CWMI is simple and effective on U-Net and AttenUNet. However, U-Net and AttenUNet are classic methods that were published a long time ago. It is necessary to include more recent SOTA frameworks to validate the effectiveness of the proposed loss function, such as [r1]–[r5]. There are also other works (e.g., [r6][r7]) that focus on developing novel loss functions for semantic segmentation. If the proposed loss function could outperform them, the value of your submission would be significantly enhanced.
[r1] Chen et al., Masked-attention mask transformer for universal image segmentation. In CVPR 2022.
[r2] Wang et al., InternImage: Exploring Large-Scale Vision Foundation Models With Deformable Convolutions. In ICCV 2023.
[r3] Kirillov et al., Segment Anything, In ICCV 2023.
[r4] Tang et al., Category feature transformer for semantic segmentation. In arXiv 2023.
[r5] Yue et al., MedMamba: Vision Mamba for Medical Image Classification. In arXiv 2024.
[r6] Wu et al., Conditional Boundary Loss for Semantic Segmentation. In TIP 2023.
[r7] Tang et al., Increase the sensitivity of moderate examples for semantic image segmentation. In IMAVIS 2025.
Supplementary Material: N/A.
Relation To Broader Scientific Literature: The proposed CWMI is potentially a network-agnostic loss function and could be applied to any existing semantic segmentation framework.
Essential References Not Discussed: Refer to **Experimental Designs Or Analyses**.
Other Strengths And Weaknesses: N/A.
Other Comments Or Suggestions: N/A.
Questions For Authors: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback and the helpful list of recent models and loss functions for comparison. In response, we have conducted additional experiments using Vision Mamba UNet (VM-UNet; Ruan et al., 2024), a segmentation architecture based on the recently proposed MedMamba ([r5]), a direction gaining rapid attention in the vision community due to its sequence-modeling efficiency and competitive performance.
The table below summarizes our CWMI results on VM-UNet and the SNEMI3D dataset. CWMI demonstrates consistent improvements over all tested baseline loss functions, including RMI, Dice, and sensitive loss [r7]:
|Loss|mIoU↑|mDice↑|VI↓|ARI↑|HD↓|
|:-------:|:------:|:-------:|:-------:|:-------:|:-------:|
|CE|0.723±0.021|0.829±0.016|2.58±0.99|0.414±0.141|1.71±0.19|
|BCE|0.739±0.003|0.843±0.002|1.50±0.05|0.594±0.008|1.27±0.19|
|Dice|0.725±0.024|0.831±0.017|2.74±0.99|0.402±0.135|1.77±0.59|
|Focal|0.725±0.008|0.833±0.006|1.68±0.12|0.563±0.020|1.70±0.30|
|Jaccard|0.725±0.016|0.831±0.011|1.95±0.44|0.499±0.061|1.72±0.46|
|Tversky|0.737±0.013|0.840±0.009|1.62±0.26|0.555±0.037|1.56±0.33|
|WCE|0.714±0.004|0.826±0.004|1.74±0.02|0.552±0.005|1.43±0.14|
|ABW|0.678±0.003|0.800±0.002|2.08±0.02|0.490±0.004|1.57±0.02|
|RMI|0.762±0.008|0.857±0.006|1.42±0.15|0.588±0.025|1.07±0.25|
|clDice|0.714±0.001|0.824±0.001|1.75±0.05|0.542±0.003|1.99±0.21|
|Sensitive|0.539±0.115|0.625±0.143|6.12±0.73|0.032±0.055|2.27±0.76|
|**CWMI**|**0.783±0.004**|**0.872±0.002**|**0.98±0.09**|**0.660±0.018**|**0.63±0.06**|
We also conducted a comprehensive comparison between CWMI and the sensitive loss from [r7] across all four datasets (SNEMI3D, GlaS, DRIVE, MASS ROAD) using both U-Net and Attention U-Net. In every case, CWMI outperformed the sensitive loss across pixel-wise, region-based, and topology-aware metrics. The results on all four datasets are shown below:
**SNEMI3D**

|Model|Loss|mIoU↑|mDice↑|VI↓|ARI↑|HD↓|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|**U-Net**|Sensitive|0.762±0.002|0.857±0.002|1.60±0.03|0.563±0.010|0.99±0.14|
|**U-Net**|CWMI|0.778±0.003|0.869±0.002|1.15±0.04|0.642±0.006|0.75±0.09|
|**AttenUNet**|Sensitive|0.752±0.001|0.850±0.001|2.04±0.23|0.495±0.041|1.10±0.10|
|**AttenUNet**|CWMI|0.777±0.003|0.868±0.002|1.14±0.01|0.642±0.002|0.79±0.02|

**GlaS**

|Model|Loss|mIoU↑|mDice↑|VI↓|ARI↑|HD↓|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|**U-Net**|Sensitive|0.816±0.012|0.893±0.008|0.97±0.11|0.675±0.027|3.12±0.17|
|**U-Net**|CWMI|0.845±0.018|0.911±0.012|0.71±0.10|0.752±0.025|2.64±0.64|
|**AttenUNet**|Sensitive|0.820±0.015|0.897±0.009|0.88±0.12|0.694±0.040|2.70±0.23|
|**AttenUNet**|CWMI|0.841±0.018|0.909±0.011|0.76±0.09|0.733±0.039|2.76±0.71|

**DRIVE**

|Model|Loss|mIoU↑|mDice↑|VI↓|ARI↑|HD↓|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|**U-Net**|Sensitive|0.757±0.006|0.846±0.006|1.49±0.03|0.356±0.032|2.96±0.45|
|**U-Net**|CWMI|0.800±0.003|0.880±0.003|1.04±0.04|0.605±0.026|1.03±0.30|
|**AttenUNet**|Sensitive|0.754±0.015|0.843±0.013|1.45±0.04|0.384±0.027|3.16±0.56|
|**AttenUNet**|CWMI|0.799±0.013|0.879±0.010|1.01±0.10|0.618±0.032|1.15±0.30|

**MASS ROAD**

|Model|Loss|mIoU↑|mDice↑|VI↓|ARI↑|HD↓|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|**U-Net**|Sensitive|0.738±0.016|0.830±0.013|2.98±0.69|0.282±0.123|11.52±4.74|
|**U-Net**|CWMI|0.768±0.010|0.855±0.008|1.11±0.24|0.674±0.066|10.07±2.18|
|**AttenUNet**|Sensitive|0.742±0.008|0.834±0.006|2.66±0.46|0.335±0.096|10.97±2.40|
|**AttenUNet**|CWMI|0.763±0.002|0.850±0.002|1.13±0.08|0.672±0.025|7.95±2.52|
We recognize their relevance and will include a discussion of their strengths and the potential role CWMI could play as a structural loss component in these systems. However, we were unable to evaluate CWMI within the full pipelines of recent universal segmentation models ([r1]–[r3]) due to their architectural complexity and multi-objective training design. Similarly, although we could not reproduce [r6] due to unavailable code, we will cite and acknowledge it in our revision.
In summary, we prioritized experiments on [r5] due to its relevance and recency and performed a thorough evaluation against [r7], showing CWMI’s consistent improvements in both classic and modern architectures. We thank the reviewer again for these valuable suggestions and for encouraging us to broaden our empirical evaluation.
---
Rebuttal Comment 1.1:
Comment: The rebuttal addresses some concerns. However, I am maintaining my current rating due to the somewhat insufficient experimental studies. The absence of many recent SOTA methods and loss functions weakens the validity of the experimental results, making them less convincing. | Summary: This paper proposed a complex wavelet-based loss function and proved its effectiveness in several segmentation tasks.
## Update after rebuttal
After reviewing the authors' response and the other reviewers' comments, I raise my score to 2 in recognition of the authors' efforts. I do not give a higher score because the contribution does not reach the bar of ICML.
Claims And Evidence: The reviewer is confused about the motivation and thinks the contribution of this paper is incremental. Please see details in the section on weaknesses.
Methods And Evaluation Criteria: Several concerns are raised in the section on weaknesses.
Theoretical Claims: Yes; the theoretical claims in the paper are weak.
Experimental Designs Or Analyses: see weaknesses
Supplementary Material: n/a
Relation To Broader Scientific Literature: BiconNet: An Edge-preserved Connectivity-based Approach for Salient Object Detection
Directional Connectivity-based Segmentation of Medical Images
Spatial coherence loss for salient and camouflaged object detection and beyond
Essential References Not Discussed: BiconNet: An Edge-preserved Connectivity-based Approach for Salient Object Detection
Directional Connectivity-based Segmentation of Medical Images
Spatial coherence loss for salient and camouflaged object detection and beyond
Other Strengths And Weaknesses: Strength: Introducing complex wavelet-based mutual information loss
Weaknesses:
1. The contribution of this paper is minimal, and the experiments (ablation) also indicate that the modification is incremental (this loss surpasses traditional L1 or L2 loss by only a marginal increase).
2. The motivation of this paper is also confusing, and the experiments are not convincing.
Given the above, I recommend rejecting this paper.
Other Comments Or Suggestions: n/a
Questions For Authors: see weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and the opportunity to clarify our motivation and contributions. We respectfully disagree with the assessment that the proposed CWMI loss function is incremental and provides only marginal improvements over traditional losses like L1 or L2. Below, we address each concern in detail.
### **1. Empirical Gains Are Statistically Significant, Not Marginal**
To support the significance of our results, we conducted additional statistical analyses following Bouckaert and Frank (2004), repeating three-fold cross-validation multiple times and applying adjusted t-tests. These analyses demonstrate that CWMI provides statistically significant improvements over both L1/L2 and several state-of-the-art loss functions across multiple datasets and evaluation metrics (including mIoU, mDice, VI, ARI, and HD). While some improvements may appear modest in absolute terms, they are consistent, reproducible, and statistically robust—highlighting the practical effectiveness of CWMI, particularly in tasks requiring fine-grained boundary and structural precision.
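For concreteness, the adjusted t-test referenced above can be sketched as follows. This is a hedged illustration of the corrected resampled t statistic (the Nadeau-Bengio variance correction, as adopted in Bouckaert & Frank, 2004); the function name and the example numbers are ours, not taken from the paper.

```python
import numpy as np

def corrected_resampled_t(diffs, n_train, n_test):
    """Corrected resampled t statistic for per-fold score differences from
    repeated k-fold cross-validation. The naive 1/n variance factor is
    inflated by n_test/n_train to account for overlapping training sets."""
    diffs = np.asarray(diffs, dtype=float)
    n = diffs.size
    mean, var = diffs.mean(), diffs.var(ddof=1)
    return mean / np.sqrt((1.0 / n + n_test / n_train) * var)
```

With r repeats of k-fold cross-validation there are n = r * k differences; the resulting statistic is compared against a Student-t distribution with n - 1 degrees of freedom to obtain a p-value.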
### **2. Novelty of Wavelet Integration in Loss Design**
We believe that incorporating wavelet-based multiscale decomposition into the loss function is a novel and underexplored direction in semantic segmentation. While wavelet transforms have been increasingly adopted in network architectures—e.g., WaveletNet (Jing et al., 2018), WavResNet (Kang et al., 2017), and SwinWave-SR (Dharejo et al., 2024)—these methods use wavelets primarily for efficient encoding or downsampling operations in the feature extractor. By contrast, CWMI is, to our knowledge, the first loss function to directly supervise predictions in the complex wavelet domain, thereby leveraging rich frequency and orientation-aware representations at multiple scales. This framing allows us to encode structured priors about image content directly into the loss signal. Moreover, our ablation studies show that even conventional L1/L2 losses applied in the wavelet domain outperform sophisticated spatial-domain losses like RMI, Tversky, or clDice. This highlights that multiscale representations—when integrated into the objective function—offer a powerful and underutilized mechanism for improving segmentation quality. We believe CWMI opens a new line of inquiry into wavelet-domain loss functions.
### **3. Combining Wavelet Transform with Mutual Information Is New**
Mutual information–based losses (e.g., RMI) have emerged as strong alternatives to standard pixel-wise losses by capturing statistical dependencies between prediction and ground truth. Our work extends this promising direction by combining MI estimation with wavelet-domain features—specifically, complex steerable pyramid coefficients. This combination enables CWMI to align multiscale, orientation-sensitive representations in a statistically principled way. To our knowledge, no prior work has explored this integration. Unlike RMI, which operates in the spatial domain, CWMI captures fine-grained structural patterns across scales and directions, aligning them through mutual information. This results in a loss function that is both information-theoretically grounded and structurally aware. Our empirical results—especially the consistent superiority of CWMI over RMI across datasets and model backbones—further validate the effectiveness and novelty of this design.
### **4. Relation to BiconNet and Related Work**
We thank the reviewer for pointing out relevant works such as BiconNet and other connectivity-aware segmentation frameworks. These contributions are highly relevant to structure-preserving segmentation, and we will include a detailed discussion in the revised manuscript. Importantly, CWMI is complementary to such methods and could potentially be used as a regularization term to further enhance structural consistency in connectivity-based models. We are particularly interested in exploring this synergy in future work, as integrating CWMI into BiconNet may combine topological priors with frequency-domain mutual information.
---
Rebuttal Comment 1.1:
Comment: I maintain my score because my major concerns haven't been addressed. I don't think that simply combining wavelets and mutual information, a well-established direction in low-level vision, is a significant enough contribution to qualify for acceptance at ICML.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the follow-up comment. While we respect their opinion, we would like to clarify for the Area Chair that our work is not a simple combination of existing concepts. To our knowledge, this is the first loss function to integrate mutual information estimation with complex steerable wavelet decomposition, specifically designed for semantic segmentation.
Unlike prior uses of regional information in segmentation loss functions, our method formulates an information-theoretic objective in the wavelet domain, capturing multiscale and directional structure. Our empirical results show statistically significant improvements over strong baselines, including RMI, across classic and modern segmentation models.
We trust the Area Chair will consider both the technical formulation and the strength of the experimental evidence in evaluating the contribution. | Summary: This paper introduces Complex Wavelet Mutual Information (CWMI) loss for semantic segmentation tasks. The proposed method employs a complex steerable pyramid to perform multi-scale and multi-orientation wavelet decomposition on both prediction and label images, computing mutual information in each subband to effectively capture local phase and structural features. This design not only alleviates issues of class and instance imbalance but also maintains computational efficiency. Experiments conducted on SNEMI3D, GlaS, DRIVE, and MASS ROAD datasets demonstrate that CWMI loss significantly outperforms traditional pixel-wise and region-based loss functions in terms of pixel accuracy and topological consistency metrics, while incurring minimal computational overhead. This work provides a novel perspective and solution for structure-sensitive segmentation tasks, showcasing strong generalization and promising practical applicability.
Claims And Evidence: The main claim presented in the paper—*“CWMI loss improves semantic segmentation performance, particularly for small instances and fine boundaries”*—is clearly stated and supported by evidence that is convincing to a certain extent. The manuscript provides a thorough discussion of various perspectives on the problem and articulates the technical background comprehensively. The reasoning behind choosing a frequency-domain approach, particularly using wavelet transforms for designing the loss function, is clearly explained. Moreover, the overview of semantic segmentation loss functions is appropriately detailed, explicitly linking the challenges of instance imbalance and the need for more efficient and balanced approaches. Quantitative results, qualitative visualizations, and ablation studies are comprehensive, collectively demonstrating the superiority of CWMI across multiple datasets. Tables 1 and 4 effectively illustrate CWMI’s strong performance in modeling large-scale structural dependencies while also demonstrating advantages in parameter efficiency.
GPT previously mentioned that the description of computational overhead was insufficiently specific, weakening the claim regarding efficiency; however, upon review, this seems adequately addressed.
Nevertheless, the provided evidence has certain shortcomings:
1. **Limited depth in ablation studies**, which fails to fully elucidate the mechanisms behind CWMI’s advantages.
The current ablation experiments only compare the complete complex representation (CWMI) against the real-only representation (CWMI-Real), showing that utilizing the complex representation (which simultaneously incorporates both magnitude and phase information) achieves better segmentation performance. However, the paper does not further dissect this complex representation to separately evaluate the contributions of magnitude (reflecting feature intensity) and phase (reflecting local structural information). Consequently, existing experiments cannot definitively determine whether the improvements are due to the entire complex representation or specifically driven by either magnitude or phase components individually.
2. **Lack of statistical significance testing** (e.g., t-tests or confidence interval analysis), making it difficult to confirm whether the reported performance improvements exceed the range of random fluctuations, thereby reducing the reliability and persuasive power of the results.
Methods And Evaluation Criteria: (1) The proposed CWMI loss and the associated evaluation metrics are reasonable and appropriate for addressing the semantic segmentation problem tackled in this paper. The method's design leverages frequency-domain analysis and statistical dependencies to handle multiscale challenges, aligning logically with the intended application. The choice of datasets and evaluation metrics effectively captures the complexity of the issues—namely class/instance imbalance and structural preservation—thus ensuring a robust evaluation of the proposed method. Furthermore, the utilization of standard segmentation models grounds the evaluation within realistic application scenarios. Overall, I see no significant mismatch between the proposed method, the evaluation framework, and the goal of improving segmentation performance, particularly for small-scale instances and fine boundaries. In summary, both the method and evaluation criteria are well-suited to the task at hand and provide a solid foundation for validating the paper's contributions.
(2) The CWMI loss and its evaluation framework are generally suitable for semantic segmentation tasks. By employing the complex steerable pyramid and multi-scale, structure-aware mutual information design, the method effectively addresses class and instance imbalance. Combining CWMI with cross-entropy loss further enhances its broad applicability. The evaluation extensively tests the approach across multiple datasets and metrics. However, regarding hyperparameter configuration, the paper predominantly uses a fixed value of λ = 0.1 and performs grid search hyperparameter tuning only for certain baseline loss functions. The limited discussion on hyperparameter sensitivity and broader tuning strategies may impact the method's generalizability and extensibility.
Theoretical Claims: As a reviewer, I have verified the correctness of the CWMI loss formulation (Equations 6-10)—the central theoretical claim of the paper—and found it mathematically sound, consistent with standard definitions of mutual information and the design of loss functions. Since the paper prioritizes empirical validation and does not provide formal proofs, it is acceptable that I did not need to verify theoretical proofs. However, the theoretical claims could be enhanced by providing theoretical justifications for the effectiveness of mutual information, clearly articulating the MI estimation process and its theoretical implications, and clarifying the rationale behind hyperparameter choices. These points are not correctness issues but rather theoretical elaboration shortcomings, which, if addressed, would make the proposed framework more convincing and robust beyond its demonstrated empirical success.
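As context for the mutual-information component discussed above, the standard max-entropy lower bound used by RMI-style losses (stated here from the general literature as an assumption, not copied from the paper's Equations 6-10) is

$$
I(Y;P) \;\ge\; H(Y) \;-\; \frac{1}{2}\log\!\Big((2\pi e)^d \det\big(\Sigma_Y - \Sigma_{YP}\,\Sigma_P^{-1}\,\Sigma_{PY}\big)\Big),
$$

since the Gaussian maximizes entropy for a fixed posterior covariance $\Sigma_{Y|P} = \Sigma_Y - \Sigma_{YP}\Sigma_P^{-1}\Sigma_{PY}$. Dropping the terms independent of the prediction $P$, maximizing this bound amounts to minimizing $\log\det(\Sigma_{Y|P})$.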
Experimental Designs Or Analyses: I have reviewed the experimental design (datasets, baselines, models, metrics, ablations) and analyses (quantitative, qualitative, computational) for reasonableness and validity and found them to be generally robust. The design choices—including diverse datasets, comprehensive metrics, and fair baselines—adequately test the effectiveness of CWMI. The use of cross-validation and multiple models further strengthens validity. The analysis supports the claimed performance improvements, particularly regarding small instances and boundary details.
However, there are some issues:
(1) **Lack of statistical significance tests**, weakening the reliability of the results.
(2) **Shallow ablation studies**, limiting insights into underlying mechanisms.
(3) **Incomplete computational analysis**, reducing clarity on practical usability.
These concerns do not invalidate the experiments but suggest potential improvements—such as including statistical tests, deepening ablation studies (e.g., detailed feature analyses, hyperparameter sensitivity), and providing detailed breakdowns of running time—to further strengthen reasonableness and validity.
Additionally, it might be beneficial to add citations directly into the comparison tables of baseline experiments to improve clarity and traceability.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The Complex Wavelet Mutual Information (CWMI) loss proposed in this paper makes a significant contribution to the field of semantic segmentation and closely aligns with broader scientific literature. Firstly, the CWMI loss addresses long-standing challenges in semantic segmentation, such as class imbalance and instance imbalance, particularly difficulties related to segmenting small objects and fine boundaries. Traditional loss functions like Cross-Entropy (CE) and Dice have limited effectiveness in tackling these issues. By drawing inspiration from recent region-based losses such as Regional Mutual Information (RMI) loss proposed by Zhao et al. and topology-aware losses like clDice loss introduced by Shit et al., CWMI loss incorporates a complex steerable pyramid to substantially enhance the model’s ability to capture multi-scale and multi-orientation features, significantly improving segmentation accuracy and efficiency.
Secondly, CWMI loss integrates classic wavelet transform theories (such as Mallat’s multi-scale wavelet decomposition and Simoncelli’s steerable pyramid) with the concept of mutual information. This innovative fusion of information-theoretic approaches with deep learning strengthens the model’s capability in modeling high-dimensional dependencies and structural consistency.
Furthermore, the core principles of CWMI—multi-scale feature extraction and structural consistency—possess broad applicability across diverse fields such as medical imaging and remote sensing, extending to tasks including image super-resolution and dehazing. Overall, the CWMI loss not only advances the field of semantic segmentation but also introduces novel perspectives and methodologies applicable to a broader range of computer vision tasks.
Essential References Not Discussed: Upon review, the current manuscript has already referenced many relevant studies related to regional mutual information and wavelet decomposition. However, it still omits several important recent developments in the fields of high-dimensional mutual information estimation and deep representation learning. For example, Mutual Information Neural Estimation (MINE) proposed by Belghazi et al. (2018) and Deep InfoMax by Hjelm et al. (2019) have demonstrated excellent performance in robust estimation of high-dimensional mutual information. These works could significantly complement the theoretical justification behind the mutual information lower-bound estimation approach adopted by CWMI loss. Furthermore, recent advances in wavelet information theory, such as those described by de Oliveira & de Souza (2015), could provide additional theoretical context to better understand the role of complex wavelet decomposition in capturing multi-scale structural features.
Below are the links to the aforementioned relevant papers:
1. **Mutual Information Neural Estimation (MINE)** – Belghazi et al. (2018)
Link: [https://arxiv.org/abs/1801.04062](https://arxiv.org/abs/1801.04062)
2. **Deep InfoMax** – Hjelm et al. (2019)
Link: [https://arxiv.org/abs/1808.06670](https://arxiv.org/abs/1808.06670)
3. **Wavelet Analysis as an Information Processing Technique** – de Oliveira & de Souza (2015)
Link: [https://arxiv.org/abs/1502.05879](https://arxiv.org/abs/1502.05879)
Other Strengths And Weaknesses: The proposed CWMI loss combines a complex steerable pyramid with mutual information, presenting a novel multi-scale loss function that effectively addresses class imbalance and instance imbalance in semantic segmentation. This innovative combination not only enhances segmentation accuracy but also demonstrates excellent computational efficiency, offering broad potential for applications, particularly in medical imaging and remote sensing. The experimental section is rigorously designed, clearly validating the superiority of the CWMI loss through comparative experiments across multiple publicly available datasets.
However, the paper also has certain limitations. First, although CWMI exhibits strong performance on 2D images, the applicability of CWMI to multi-class and 3D segmentation tasks has not been thoroughly investigated, limiting its generalizability. Second, despite supplementary materials containing some ablation experiments, the theoretical analysis of CWMI loss remains relatively shallow. Specifically, the mathematical foundations underlying its capability to capture multi-scale dependencies in the wavelet domain could be explored in greater depth.
Overall, the paper excels in methodological innovation and experimental validation but leaves room for improvement in terms of theoretical depth and applicability extensions.
Other Comments Or Suggestions: The overall structure of the paper is clear, but several points could be improved. Firstly, some equations (such as the complex covariance calculation in Equation 10) could benefit from additional textual explanations to help readers understand the underlying derivation logic. Additionally, a minor spelling error appears in Figure 3, where "Organge arrows" should be corrected to "Orange arrows."
Questions For Authors: 1. **Regarding the estimation method of the lower bound of Mutual Information (MI):**
Could you please elaborate on how you ensure a tight approximation between the MI lower bound used in your method and the true MI value? A significant gap between this lower bound and the actual MI could impact the loss function's ability to effectively capture structural information, thus affecting the robustness and validity of the proposed approach.
2. **Relationship between Complex Wavelet Decomposition and recent MI estimation methods:**
Have you considered integrating recent advanced deep mutual information estimation techniques, such as MINE or Deep InfoMax, into your approach? If these methods could further enhance MI estimation accuracy, it might positively influence the performance of the CWMI loss, consequently strengthening the overall competitiveness of your method.
3. **Hyperparameter λ selection and sensitivity:**
Could you provide more detailed insights into how the regularization parameter λ was chosen in your experiments, and whether you performed a systematic sensitivity analysis of this parameter? If the final results are highly sensitive to λ, detailed guidance on parameter optimization would be necessary, influencing the perceived stability and general applicability of your method in real-world scenarios.
4. **Applicability to multi-class and 3D segmentation tasks:**
Although your paper primarily addresses two-dimensional semantic segmentation, do you have any preliminary ideas or experimental results regarding extending your method to multi-class or three-dimensional segmentation tasks? Demonstrating adaptability to these more complex scenarios would significantly enhance the practical value of your approach, potentially broadening the evaluation of your paper's contributions.
5. **Computational efficiency and real-time applications:**
The CWMI loss combines complex wavelet transforms with mutual information calculations, potentially resulting in substantial computational overhead. Have you considered or planned any optimizations to improve computational efficiency, particularly for high-resolution or real-time applications? If computational complexity could be significantly reduced without compromising performance, it would substantially enhance the practicality and usability of your method in real-world contexts.
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We especially appreciate the recognition of the novelty and practical value of our proposed CWMI loss. Your comments helped us refine the manuscript and gain deeper insights into both theoretical and empirical aspects of our method.
Below we address your main concerns. Due to space constraints, full updated tables are omitted here but will be included in the revised manuscript.
### 1. Statistical Significance
We appreciate the reviewer’s observation regarding the importance of statistical significance in validating the effectiveness of CWMI. To address this, we followed Bouckaert & Frank (2004) and repeated three-fold cross-validation three times with adjusted t-tests across all metrics.
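For reference, the corrected test described above can be sketched as follows (an illustrative implementation of ours, not the authors' code; the fold counts and sizes in the usage are hypothetical):

```python
import numpy as np

def corrected_cv_tstat(diffs, n_train, n_test):
    """Corrected resampled t-statistic for r-times repeated k-fold CV.

    `diffs` holds the k*r per-fold metric differences (method A minus
    method B). The naive variance is inflated by n_test/n_train to
    account for overlapping training folds (Nadeau & Bengio correction,
    as recommended by Bouckaert & Frank, 2004). The p-value is then read
    off a Student-t distribution with k*r - 1 degrees of freedom.
    """
    diffs = np.asarray(diffs, dtype=float)
    kr = diffs.size
    corrected_var = (1.0 / kr + n_test / n_train) * diffs.var(ddof=1)
    return diffs.mean() / np.sqrt(corrected_var)
```

With 3x3-fold CV there are 9 fold differences; the correction shrinks the statistic relative to a naive paired t-test, making significance claims more conservative.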
We summarize the key findings as follows:
- On SNEMI3D and DRIVE, CWMI significantly outperforms all SOTA loss functions.
- On GlaS and MASS ROAD, where multiscale structural features are less dominant, CWMI significantly outperforms most baselines and is competitive with RMI. Specifically, CWMI shows significantly better performance in VI and ARI metrics compared with RMI, while showing marginal improvement compared with RMI on mDice and mIoU.
### 2. Ablation Study
We thank the reviewer for pointing out the need to deepen our ablation study and further dissect the contribution of the complex representation in CWMI. We extended our ablation study to include magnitude-only and phase-only variants:
- CWMI-Phase: consistently underperformed across all metrics, indicating that phase alone lacks sufficient discriminative power for accurate segmentation.
- CWMI-Mag: achieved the best performance on VI, ARI, and Hausdorff Distance (HD), suggesting that the magnitude component is particularly effective at capturing clustering and topological structures.
- CWMI (full representation) achieved the best performance on pixel-wise accuracy metrics (mIoU and mDice) and was the second-best on VI, ARI, and HD, offering a well-balanced advantage across regional, clustering, and structural metrics.
- CWMI-Real performed worse than both CWMI and CWMI-Mag on most metrics, though better than CWMI-Phase, further validating the importance of combining both magnitude and phase information.
These results indicate that while magnitude contributes strongly to clustering and topological consistency, the full CWMI formulation provides a more comprehensive optimization across all segmentation dimensions. Notably, our findings also suggest that a magnitude-only version of CWMI could be explored as a potential alternative for faster training or lightweight deployment scenarios.
### 3. Hyperparameter $\lambda$ Sensitivity
We thank the reviewer for raising this important point. To assess the sensitivity of the regularization parameter $\lambda$, we tested $\lambda = 0.1, 0.5, 0.9$ and observed no significant variation in metrics, indicating that CWMI contains sufficient complementary information to cross-entropy, and is robust to $\lambda$. Additionally, we found the cross-entropy term essential for directional learning, as mutual information is symmetric and cannot differentiate between correct and inverted predictions.
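The symmetry point (MI cannot distinguish a correct prediction from a perfectly inverted one, hence the need for a cross-entropy term to supply direction) can be seen in a toy computation; this sketch is ours, not the paper's code:

```python
import numpy as np

def discrete_mi(a, b):
    # Plug-in mutual information estimate for two binary label sequences.
    joint = np.zeros((2, 2))
    for i, j in zip(a, b):
        joint[i, j] += 1.0
    joint /= joint.sum()
    marg = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / marg[mask])).sum())

y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
mi_correct = discrete_mi(y, y)        # perfect prediction
mi_inverted = discrete_mi(y, 1 - y)   # perfectly inverted prediction
# Both values are identical, so MI alone cannot steer training
# toward the correct rather than the inverted solution.
```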
### 4. MI Estimation and Deep Estimators
We thank the reviewer for the insightful questions on MI estimation and the potential use of deep MI estimators. Our current method follows the RMI loss (Zhao et al., 2019), which estimates a lower bound of mutual information under the assumption of approximately Gaussian-distributed feature vectors. While this approach is efficient and non-parametric, its tightness depends on distributional assumptions. Our preliminary analysis shows that the complex wavelet subbands are near-Gaussian but with sharper central peaks, indicating the bound may not be perfectly tight. We will acknowledge this limitation in the revised manuscript.
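To make the Gaussian-assumption estimator family concrete, here is a toy sketch of ours (not the RMI implementation of Zhao et al., 2019): if (Y, P) were exactly jointly Gaussian, I(Y; P) would equal 0.5 * log(det S_y / det S_{y|p}), computable from sample covariances; for non-Gaussian data this is only an approximation, which is the tightness caveat discussed above.

```python
import numpy as np

def gaussian_mi_estimate(y, p, eps=1e-6):
    """MI estimate assuming (Y, P) are jointly Gaussian.

    Uses I(Y;P) = 0.5 * log(det S_y / det S_{y|p}), where the
    conditional covariance is S_{y|p} = S_y - S_yp S_p^{-1} S_py.
    """
    d, n = y.shape[1], y.shape[0]
    yc, pc = y - y.mean(0), p - p.mean(0)
    s_y = yc.T @ yc / n + eps * np.eye(d)   # eps regularises the determinants
    s_p = pc.T @ pc / n + eps * np.eye(d)
    s_yp = yc.T @ pc / n
    s_cond = s_y - s_yp @ np.linalg.inv(s_p) @ s_yp.T
    _, logdet_y = np.linalg.slogdet(s_y)
    _, logdet_c = np.linalg.slogdet(s_cond)
    return 0.5 * (logdet_y - logdet_c)
```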
We also reviewed learned MI estimators such as MINE and Deep InfoMax, which offer tighter bounds via parameterized networks. While promising, these methods introduce additional training and complexity, potentially undermining CWMI’s computational efficiency. As future work, we plan to integrate MINE into our framework to study the trade-offs between accuracy and overhead.
### 5. Multi-Class and 3D Segmentation
CWMI is extendable to multi-class and 3D segmentation. Both Fourier transform and MI estimation naturally generalize to 3D. Although steerable pyramids are 2D by design, prior work (e.g., Delle Luche et al., 2004) introduces 3D steerable filters using polyhedral decompositions. We plan to explore these extensions in future work.
### 6. Minor Comments
We will (1) clarify grid search was applied to all baselines, (2) add inline citations to comparison tables, (3) expand textual explanations for key equations (e.g., Eq. 10), and (4) correct the typo in Figure 3 ("Organge" $\rightarrow$ "Orange"). We appreciate the reviewer’s attention to these details.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, I will keep my positive score. | null | null | null | null | null | null | null | null |
Meta Optimality for Demographic Parity Constrained Regression via Post-Processing | Accept (poster) | Summary: This paper considers fair regression with respect to statistical parity under the attribute-aware setting. The main focus/contribution is on obtaining a minimax rate for learning fair regressors:
1. Taking the post-processing perspective/approach to achieving fairness, and leveraging the error decomposition result in prior work, this paper bounds the minimax rate for fair regression as (minimax rate of the unconstrained regression problem) + (sample complexity of learning the fair post-processor); the fair post-processor is the optimal transports to the barycenter of the output distributions of the unconstrained regressors.
2. The authors cast the problem of learning the barycenter/optimal transports as optimizing over the congruent potentials.
3. A sample complexity bound for learning the barycenter/optimal transports.
Claims And Evidence: Yes
Methods And Evaluation Criteria: N/A
Theoretical Claims: Did not check proof in detail, but the presented results are reasonable/expected.
Experimental Designs Or Analyses: N/A
Supplementary Material: No
Relation To Broader Scientific Literature: - Paper adopts the "post-processing" perspective of [Chzhen et al., 2020] and [Le Gouic et al., 2020] for fair regression, and leverages its error decomposition to derive the minimax rate (for which there is also prior work, though limited to specific data generation processes; see weakness 1).
- The barycenter/optimal transport estimator used in this paper is inspired by [Korotin et al., 2022], for which the authors derived the sample complexity bound.
Essential References Not Discussed: Le Gouic et al. Projection to Fairness in Statistical Learning. 2020.
Other Strengths And Weaknesses: Weaknesses.
1. Theorem 3.4 states a minimax rate in terms of the minimax rate of the unconstrained regression problem plus the sample complexity of learning the barycenter/optimal transports. A heuristic interpretation of the result is given in remark 3, which says that if the regression problem is *harder* than that of learning the transports ("Such a
situation may frequently happen"), then the minimax rate would be dominated by that of the regression problem. Unfortunately, no concrete example is provided to show when the minimax rate would be dominated by the first term.
In particular, the reviewer is curious about the complexity of learning the barycenter/optimal transport as conveyed in the second term. For example, on synthetic problems constructed in prior work that analyzes minimax optimality of fair regression, what do the constants $\alpha$ and $\beta$ instantiate to, and does the second term decay faster than the first?
2. Because Theorem 3.4 is derived from the *post-processing* perspective, the second term could have been replaced by any sample complexity bound for learning the barycenter/optimal transports. How does the derived bound compare (or why incomparable) to that in section 4 of [Chzhen et al., 2020], which was derived for a nonparametric estimate?
3. The result in Theorem 3.4 is obtained by taking the *post-processing* perspective, rather than analyzing the fair regression problem in an *end-to-end* manner (i.e., in-processing), so the caveats of post-processing vs. in-processing applies. In particular, post-processing can be suboptimal if the class of regressor is constrained, that is, constraining the $\inf_{f_n}$ in $\mathcal E_{n_s}$; see [Woodworth et al., 2017].
4. The algorithm in Section 4 basically takes the same form as that in [Chzhen et al., 2020], except that here the barycenter/optimal transports are estimated by optimizing the congruent potentials on the empirical problem. It is unknown how it compares to the original nonparametric estimator, and there are no experimental results in this paper, so the contribution of this section is unclear. What are the benefits of using the new formulation compared to the original?
Other Comments Or Suggestions: - line 180: incomplete sentence
- line 301: Dodley -> Dudley
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed feedback. Here, we address the main concerns:
W1:
A pertinent example is when $f^*_{\mu,s}$ is a composition of multiple functions, i.e., $f^*_{\mu,s} = g_q \circ ... \circ g_0$, within the Holder class as used by Schmidt-Hieber (2020), and $\vartheta^*_{\mu,s}$ belongs to the Sobolev space. In this scenario, $\mathcal{E}_{n_s}(\mathcal{P}_s) = \Theta(\max_i n_s^{-2\beta^*_i/(2\beta^*_i + t_i)})$, where $\beta^*_i$ represents the cumulative smoothness for $g_q, ..., g_i$, and $t_i$ indicates the dimensionality of the input of $g_i$. For the Sobolev space with smoothness $\gamma > 0$, Definition 3.3 is satisfied with $\alpha = 2\gamma$ and $\beta = 1$ by selecting $\Theta_j$ as a set of functions spanned by the first $j$ wavelet bases. The rate becomes $\max_i n^{-2\beta^*_i/(2\beta^*_i + t_i)} + n^{-2\gamma/(2\gamma + 1)}$, up to logarithmic factors, under the assumption $n_s \ge cw_s n$. If $g_i$ and $\vartheta^* _{\mu,s}$ share the same level of smoothness, i.e., $\beta^* \approx \gamma$, the first term dominates the second if $t_i > 1$ for some $i$. Here, $t_i$ denotes the dimensionality of the essential intermediate data representation. Such an intermediate representation may have multiple dimensions, i.e., $t_i > 1$, causing the first term from the conventional regression problem to dominate the rate.
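Restating the rebuttal's inline expressions in display form (same symbols as above):

```latex
\underbrace{\max_i\; n^{-2\beta^*_i/(2\beta^*_i + t_i)}}_{\text{regression term } \mathcal{E}_{n_s}(\mathcal{P}_s)}
\;+\;
\underbrace{n^{-2\gamma/(2\gamma + 1)}}_{\text{transport estimation}}
\qquad \text{(up to log factors, assuming } n_s \ge c\, w_s n\text{)}.
```

With $\beta^*_i \approx \gamma$, the exponent $2\beta^*_i/(2\beta^*_i + t_i)$ is smaller than $2\gamma/(2\gamma + 1)$ whenever some $t_i > 1$, so the regression term decays more slowly and dominates.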
We will incorporate this example after discussing the implications of the main theorem in the revised version.
W2, W4:
Our results offer significant advantages over the error bounds presented by Chzhen et al. (2020). Firstly, our results are derived under weaker assumptions. Their results necessitate the conventional regression algorithm to have a sub-Gaussian high-probability error bound, whereas our results only require a bound on the expected square error. Additionally, while they demand constant upper and lower bounds on the density of $\nu_i$, we only assume the Poincare-type inequality. This broadens the applicability of our meta-theorem compared to their findings.
Secondly, their results cannot achieve a rate faster than $n^{-1/2}$, as their upper bound on the estimation error of $\vartheta^*_{\mu,:}$ is $n^{-1/2}$ and dominates other terms. However, our results can achieve a rate faster than $n^{-1/2}$ by leveraging the smooth structure of $\vartheta^*_{\mu,:}$.
Thus, our results can demonstrate a faster rate under weaker assumptions compared to those provided by Chzhen et al. (2020). We will include this discussion as an implication of our main theorem in the revised version.
W3:
We wish to clarify that our results do not contradict those of (Woodworth et al., 2017), as their results pertain to equalized odds, not demographic parity. Our findings show that concerning sample size, post-processing can be optimal. However, it may be suboptimal for other parameters, such as the Lipschitz constant $L$ and the number of groups $M$. Analyzing optimality for these parameters is an important direction for our future work.
---
Rebuttal Comment 1.1:
Comment: The reviewer appreciates the authors' response, and has raised their score! | Summary: This paper studied the fair regression problem with demographic parity as a constraint. It claimed that existing minimax optimal regression algorithms are coupled with data generation methods, and proposed meta-theorems to validate the fair minimax optimality. Then they demonstrated that the optimal regression can be achieved through post-processing methods, which thus can be efficiently and flexibly achieved.
Claims And Evidence: The claim that existing analyses are coupled with data generation methods is not clear. I saw it mentioned in the introduction, but I cannot connect it to the main methodology.
Methods And Evaluation Criteria: This is a theoretical paper which provides analyses instead of new methods.
Theoretical Claims: I read through the theoretical analyses but only understand some parts of them.
Experimental Designs Or Analyses: No experiments were presented in the paper.
Supplementary Material: The appendices contain some theory proofs and no additional supplementary materials provided.
Relation To Broader Scientific Literature: I checked the related papers about OT map estimation in Wasserstein barycenter problems and minimax optimality in regression research. However, I cannot immediately identify the corresponding improvements over these works, although the authors have highlighted their differences.
Essential References Not Discussed: I have related comments below.
Other Strengths And Weaknesses: Strengths: I think the three listed contributions are important if they are correct. The meta-optimality result connects the minimax optimal error for fair regression with that of traditional regression. The authors demonstrated that optimal regression models can be made fair through a post-processing method, and they provide a convergence rate analysis for the transport map estimation.
Weaknesses:
1. The first thing that confused me is that in fairness research, minimax optimality also refers to minimizing the risk of the worst group [1], under the Rawlsian fairness concept. I found that the authors are working in a different area after I checked the work of Chzhen & Schreuder (2022) and Zeng (2024). The authors mention the worst-case error taken over a certain set of data generation models, but I did not understand how this limitation is addressed in this paper.
2. I found that the regressor f is group-wise (line 75). A conventional regressor, however, may be group-unaware. How would post-processing work in this case? Maybe I am framing the question incorrectly.
3. I realised that this work and some earlier ones focus on regression analyses. I am curious why only regression is considered. Will these analyses hold for classification as well? A recent paper [2] pointed out that loss/error is more sensitive than discrete predictions when considering risk distributions. Is this paper sharing a similar insight?
[1] Minimax Pareto Fairness: A Multi Objective Perspective. ICML 2020.
[2] Towards Harmless Rawlsian Fairness Regardless of Demographic Prior. NeurIPS 2024.
Other Comments Or Suggestions: Is the proposed post-processing algorithm a new method compared with existing post-processing fairness work? If so, would experiments on it help justify the advances?
Questions For Authors: I would consider it is a good paper if authors can address my above concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's comprehensive feedback and would like to address their main concerns:
W1: Rawlsian Fairness
We wish to clarify that Rawlsian fairness is fundamentally different from equality-based fairness concepts like demographic parity and equalized odds. Rawlsian fairness focuses on minimizing adverse effects on the most disadvantaged group, whereas equality-based fairness aims for equal treatment across groups. Due to these fundamental differences, results from Rawlsian and equality-based fairness approaches are not directly comparable.
Furthermore, we emphasize that minimax optimality is a well-established concept in statistical literature, used to define the best estimator for a statistical estimation problem. In our context, "minimax" involves maximizing over all possible underlying distributions and minimizing over all estimation algorithms. This concept differs from Rawlsian fairness, as we can define a minimax optimal Rawlsian fair learning algorithm that minimizes the worst-case error, considering both underlying distributions and groups.
Regarding data generation models, they represent potential underlying distributions in nature. Considering a specific set of data generation models is not a limitation of the learner's ability but rather reflects prior knowledge about the data. The sample complexity varies based on this prior knowledge. Extensive research in statistical literature has shown how prior knowledge fundamentally influences sample complexity. Our meta-theorem can accommodate various prior knowledge assumptions by leveraging these existing results.
W2: Group-wise Regressor
We use a group-wise predictor because it is commonly employed in fairness literature. Developing an optimal post-processing method for an unaware predictor is an important area for future research.
W3: Classification
In the context of minimax optimality, regression problems are often considered more fundamental than classification problems, as the minimax optimal error for classification is typically derived using regression techniques. Extending our work to classification problems is a significant direction for future research. | Summary: This paper investigates the theoretical properties of fair regression problems by leveraging optimal transport techniques. It provides important theoretical bounds in the context of fair regression, and designs regression algorithm matching the upper bound.
Claims And Evidence: Yes. The overall structure of this paper is logical and sound, with the logic flow clear.
Methods And Evaluation Criteria: Not applicable. This is a theoretical paper.
Theoretical Claims: Not in detail - looks reasonable.
Experimental Designs Or Analyses: Not applicable
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper provides new theoretical analysis for the problem of fair regression.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and for recognizing the importance of our theoretical contributions in fair regression. | null | null | null | null | null | null | null | null |
Preference Learning for AI Alignment: a Causal Perspective | Accept (poster) | Summary: This paper proposes a causal framework for preference learning in the context of aligning LLMs with human values. The authors argue that relying solely on observational data can lead to reward functions that pick up spurious correlations rather than true causal drivers of user preferences. To address this, the authors develop a model that incorporates causal inference principles such as potential outcomes, confounding, and latent treatments. Through theoretical analysis, they emphasize the critical assumptions needed for causal identification, most notably unconfoundedness and latent overlap, and discuss how these assumptions often fail in real-world data collection settings. Empirically, they demonstrate that strong correlations among latent features or user-specific objectives can cause overfitting and hinder the robustness of reward models under distribution shifts. The paper then proposes an “adversarial multi-objective reward model” to mitigate confounding effects, showing improved generalization, particularly in highly confounded scenarios.
## update after rebuttal
Thanks the authors for their detailed responses. My questions and concerns are fully addressed, and I will keep my original rating.
Claims And Evidence: Yes. The paper provides detailed explanation on the problem formulation, theoretical statements, and numerical experiments.
Methods And Evaluation Criteria: The paper’s methods and evaluation align well with its theoretical goals with well-known dataset like HH-RLHF and UltraFeedback, but they mainly use synthetically augmented datasets instead of real-world data.
Theoretical Claims: The propositions in the main paper seems fine to me.
Experimental Designs Or Analyses: The authors conducted case studies on two public datasets. The analyses look fine to me, but the synthetic task design remains my key concern.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper provides an interesting causal perspective over AI alignment, and discusses the key challenges critical assumptions for generalizing reward models to unseen texts and contexts.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
A key strength is its clear explanation of the preference learning problem and seamless integration of a causal framework. It effectively connects user preferences, latent features, and rewards, making it easy to see how spurious correlations arise and hurt generalization. The solid theoretical backing on latent overlap and confounding, supported by a case study, reinforces these ideas. Overall, it offers a well-structured causal perspective.
**Weaknesses**
Some arguments in the paper seem either insufficiently supported or somewhat overstated. For example, the concern over "unobserved confounding" suggests that latent user attributes (like professional background) might create misleading correlations, such as academic-style queries leading to a preference for rigorous answers. However, in many real-world cases, these correlations are exactly what a model should capture. Medical experts, for instance, genuinely need detailed and precise explanations. If most academic queries come from users who actually require rigorous answers, then learning the pattern “academic → rigorous” isn’t necessarily a problem.
Regarding the “low overlap” issue, the paper argues that rare combinations of latent features can hurt generalization under distribution shifts and suggests extensive interventions to ensure broad coverage. However, in practice, dedicating significant resources to extremely rare cases may not always be worthwhile—focusing too much on edge cases could come at the cost of improving more common scenarios. The paper doesn’t address this trade-off, making its push for systematically controlling all latent factors feel somewhat idealistic in real-world data collection.
Other Comments Or Suggestions: No.
Questions For Authors: See Weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and for engaging with the core ideas of our work. We appreciate your time and feedback, and we’d like to address your concerns in hopes of clarifying our contributions and positioning.
**The semi-synthetic design** We appreciate the reviewer’s attention to our experimental design and understand their concern about the use of semi-synthetic tasks. However, we’d like to clarify why semi-synthetic evaluations are a necessary and principled choice for studying causality in preference learning. Evaluation of causal inference methods demands knowledge of the true data-generating process (e.g., presence of confounders, the true set of causal variables), which is *unknowable* in purely observational real-world datasets. Semi-synthetic data allows us to:
- Control the degree of overlap.
- Inject controlled confounding (to simulate real-world biases).
- Precisely measure how well the competing methods recover true relationships.
Real-world datasets only show the observed human choices, not the counterfactual "what-if" responses needed to assess how a user would react if the same prompt were answered differently or if they acted according to a different objective. By augmenting these real-world public datasets with synthetic variations (e.g., simulating distribution shifts or conflicting objectives), we create a controlled testbed to isolate causal challenges—something impossible with static, observational data. Our approach aligns with established causal inference literature, where semi-synthetic data is the standard for evaluating methods [1, 2].
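A toy version of such a controlled confounding injection (parameters and variable names are ours, purely illustrative of the semi-synthetic design rationale):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, p_confound):
    """Toy preference data with a hidden user attribute u.

    With probability `p_confound`, u determines both a surface style
    feature of the prompt and the preference label; otherwise each is
    drawn at random. No causal link runs from style to label.
    """
    u = rng.integers(0, 2, n)
    style = np.where(rng.random(n) < p_confound, u, rng.integers(0, 2, n))
    label = np.where(rng.random(n) < p_confound, u, rng.integers(0, 2, n))
    return style, label

# Training regime: strong confounding -> style predicts the label well,
# so a reward model can latch onto it as a shortcut.
style_tr, label_tr = simulate(100_000, 0.9)
# Test regime: the confounding is switched off -> the shortcut collapses.
style_te, label_te = simulate(100_000, 0.0)
shortcut_acc_train = (style_tr == label_tr).mean()   # approx. 0.9
shortcut_acc_test = (style_te == label_te).mean()    # approx. 0.5
```

Because the true data-generating process is known here, one can measure exactly how much a trained reward model relies on the spurious style feature, which is impossible with purely observational data.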
**Useful vs. harmful correlations** We agree with your observation that correlations like “academic-style queries → rigorous answers" are often valid and desirable—indeed, in many cases, these patterns reflect genuine user needs. However, our focus is on the challenges such correlations pose when training reward models from observational data where confounding is present. This issue is particularly critical in the context of steerable alignment, where we aim to gain control over which objective an LLM aligns to at inference time, rather than letting the model infer it purely from its input. Our case study in Section 4 examines the problem of learning a multi-objective, steerable reward model, where the two objectives considered are learned from different data distributions. We demonstrate that the naive approach fails when the train reward model is applied to data distributions different than at training time, which can plausibly occur in the real world. We also show how insights from causal representation learning can help build more robust reward models.
**ACTION:** We acknowledge that not all correlations are harmful, and we will clarify this nuance in the revised version of our manuscript.
**Practicality vs. coverage** You also raise a valid concern about the practicality of ensuring broad coverage of latent factors. We agree that exhaustive interventions are often infeasible, and we do not advocate for unrealistic data collection burdens. However, our goal is not to advocate for rigid control but rather to bring attention to challenges in learning across heterogeneous user populations, merging datasets collected with different objectives, or collecting feedback directly from users who wrote the prompts—practices that are present in preference learning today. While training reward models in an unsupervised, uncontrolled manner is flexible and scalable, we argue that the community should consider seeking a better balance between practicality and robustness. We highlight cost-effective strategies such as active querying (Appendix C) and causal regularisation techniques (as exemplified with the adversarial model) that can improve robustness without excessive overhead.
**ACTION:** We will explicitly discuss the trade-offs between practicality and robustness in the final version, clarifying our position.
---
We hope this response alleviates your concerns about our paper’s positioning. Our work does not dismiss observational data but instead provides tools to diagnose its limitations—a step we find crucial to achieving scalable and robust alignment. We believe the paper’s insights are valuable for the community, especially as alignment research moves towards greater personalisation.
Thank you again for your time and feedback. We are happy to answer any further questions you may have and incorporate further revisions to ensure clarity in the final version.
[1] arxiv.org/abs/1606.03976
[2] arxiv.org/abs/1705.08821 | Summary: - This paper introduces a causal framework for preference learning in AI alignment, specifically focusing on reward models trained on LLM prompts and response pairs. The authors frame prompt-response-response tuples as treatment variables, with latent rewards for each prompt-response combination serving as mediators for the observed binary preference label (the outcome variable).
- Reward models aim to learn to predict these latent rewards as a function of text, essentially estimating a treatment effect. However, these models may fail if they don't account for the underlying causal structure. The authors particularly focus on cases where contextual variables also affects the reward (and in some cases, the prompt too). For example, if a user's domain knowledge determines both the prompt and their assessment of responses, this creates confounding that precludes causal interpretation without additional assumptions. The paper draws on standard causal inference techniques and insights from causal representation learning to identify these necessary assumptions.
## update after rebuttal: the authors have addressed all of my concerns and suggestions.
Claims And Evidence: - The paper's primary theoretical contribution is a causal framing of preference learning. The theoretical results about causal identification are standard results from the causal inference literature, adapted to the preference learning context and accounting for the latent structure of textual features. They claim to be the first to address confounding due to user-specific covariates, though they note previous work that has identified the existence of user-specific covariates and their effect on rewards [but not necessarily through confounding].
Methods And Evaluation Criteria: - The HH-RLHF study effectively addresses the motivating challenge of confounding in a semi-synthetic environment. The dataset is appropriate.
- See my comments under “Other Strengths And Weaknesses,” where I suggest that the UltraFeedBack experiment may be unnecessary or out of place.
Theoretical Claims: - Overall the theoretical claims are well-justified. But see my comments in “Other Strengths and Weaknesses.”
Experimental Designs Or Analyses: - The HH-RLHF study provides a very nice comparative analysis of reward models under confounded data.
- There could be more explanation of how the Multihead architecture incorporates the causal structure of the data. (The review suggests that the $\hat{z}$ being learned as a function of just $(x, y)$ means it doesn't use information from $c$, but it would be helpful to clarify that $c$ doesn't use information from $\hat{z}$ either, mirroring the conditional independence found in the underlying causal graph.)
- See my comments under “Other Strengths And Weaknesses,” where I suggest that the UltraFeedBack experiment may be unnecessary or out of place.
- The experiments don't address treatment effect heterogeneity, which is one of the paper's theoretical motivations.
Supplementary Material: - The anonymous code appears clear and comprehensive.
Relation To Broader Scientific Literature: - The related works section nicely situates this paper within the broader causal inference and reward modeling literatures.
Essential References Not Discussed: - In the discussion of causal representation learning on page 5 and in Appendix C, the authors should reference the fundamental challenges in learning causal representations from stationary data. Much theoretical work has been done demonstrating how identification of causal representations requires auxiliary labels (for instance, using paired examples where only a small number of ground-truth causal variables have been intervened upon). This is especially relevant to the paragraph “Active querying strategies and interventions” in Appendix C.
- See Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations by Locatello et al., https://arxiv.org/abs/1811.12359 and any more recent relevant work citing this paper.
Other Strengths And Weaknesses: - Strengths
* The paper is very readable with clear visual cues and figures.
* Extensive use of examples enhances understanding of the concepts.
- Weaknesses
- Overlap/Positivity
* Inconsistent terminology: The use of the term "overlap" in Assumption 3 and Assumption 5 to cover the case without conditioning on covariates is confusing, as this term is typically used to refer to overlap across covariate subpopulations (which the paper itself discusses on page 6, line 315). Even before this section, the paper describes the overlap assumption in the more standard "conditioning on covariates" way on page 4, line 202, noting that "The assumption of positivity requires that every user has a non-zero probability of being assigned any combination of texts given their covariates."
* Unnecessary for identification: The overlap assumption as defined in Assumption 3 and Assumption 5 is not actually necessary for the proofs of Proposition 1 or Proposition 2. What the paper appears to be getting at is that overlap is not needed for causal identification in the case without confounding, but it is needed if one wants to estimate those quantities in a non-parametric form. If we assume a parametric model (e.g., a linear model), then we don't actually need to observe all possible combinations of X, Y, Y', contrary to what the paper claims.
* Statistical vs. causal confusion: The UltraFeedback Case Study (Appendix D, referenced in Table 1) is fundamentally about misgeneralization due to failure of the IID assumption. The experiments show how models fail when correlations present during training disappear at test time, but this doesn't necessarily involve causal misconceptions; it's about the IID assumption failing. A truly causal problem would exist if, even with IID data, infinite data would not allow us to estimate a treatment effect of interest. Including this study dilutes the paper's causal focus and may confuse readers about which problems are genuinely causal versus purely statistical.
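To make the parametric-vs-non-parametric point concrete, here is a minimal sketch (hypothetical features and values, not from the paper): a parametric (linear) model can estimate preferences for a treatment combination that is never observed, whereas a purely non-parametric estimator (per-combination means) has no estimate for that cell, which is where positivity becomes essential.

```python
# Hypothetical illustration: with a parametric (linear) model, preferences
# for treatment combinations never seen in training can still be estimated,
# whereas a non-parametric estimator (per-combination averages) leaves them
# undefined when positivity fails for a cell.
import numpy as np

rng = np.random.default_rng(0)

# Two binary response features (e.g. "formal", "concise"); the true
# preference score is linear in them.
true_w = np.array([1.0, -0.5])

# Observed data covers only 3 of the 4 feature combinations:
# (0,0), (0,1), (1,0) -- the combination (1,1) is never observed.
X = np.array([[0, 0], [0, 1], [1, 0]] * 50, dtype=float)
y = X @ true_w + rng.normal(scale=0.1, size=len(X))

# Parametric estimate: least squares recovers the weights and can
# extrapolate to the unseen combination (1, 1), giving roughly 0.5.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
pred_unseen = np.array([1.0, 1.0]) @ w_hat

# Non-parametric estimate: per-combination sample means; the (1,1) cell
# has no data, so positivity is violated for it.
combos = {tuple(row) for row in X}
assert (1.0, 1.0) not in combos

print(round(pred_unseen, 2))
```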
Other Comments Or Suggestions: - Page 2, line 86: “maximising” should be corrected to “minimizing.”
- Page 4, line 194: “underepresentation” should be corrected to “underrepresentation.”
- Page 5, line 260: ensure clarity regarding the distinction between causal discovery and causal representation learning.
- Though some motivation for the causal framing is presented in the introduction, some additional comments here may be helpful, perhaps by emphasizing the downstream role of reward models in alignment.
Questions For Authors: * Could you clarify why the UltraFeedback case study is framed as a causal problem rather than a statistical generalization issue? What makes the correlation shifts in this experiment fundamentally causal rather than just IID violations?
* Your theoretical motivation discusses both confounding and heterogeneous treatment effects, but your experiments focus primarily on confounding. Do you have plans to extend your empirical analysis to demonstrate the benefits of your causal approach for handling heterogeneous treatment effects?
* Your paper mentions that "confounding due to user-specific covariates has not been addressed in prior works." Could you elaborate on the distinction between your contribution and prior work that has identified the existence of user-specific covariates (like Siththaranjan et al.'s work on "hidden context")?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful review. We appreciate the overall positive evaluation of our work. Below we address your comments and concerns:
**The Multihead architecture** We appreciate your suggestion. **ACTION:** We will clarify how the Multihead architecture incorporates the causal structure of the data in the camera-ready version.
**Additional References** Thank you for your suggestion regarding the additional references. **ACTION:** We suggest including [1, 2, 3, 4] as additional references strengthening our claims on page 5 and Appendix C.
**Overlap/Positivity.**
- **Terminology:** Preference learning conventionally assumes homogeneity of user preferences, so we stated identifiability assumptions in the *unconditional* case; this also reduces the notational clutter. However, we acknowledge that *overlap* is usually used conditionally. **ACTION:** We propose to rename Assumption 3 as *Unconditional Positivity* and Assumption 5 as *Unconditional Latent Positivity*. We will also clarify that we use *positivity* as shorthand for the unconditional case, while *Conditional Positivity (a.k.a. Overlap)* applies when user-specific covariates are considered (e.g., Example 4, Section 4).
- **Sufficiency**: Indeed, positivity is not necessary for identification but is required for non-parametric estimation as shown in the proofs of Proposition 1 and Proposition 2. Thank you for pointing this out. **ACTION**: We will adapt the presentation of these results to make this clear.
- **Statistical vs. causal confusion**: We agree with the reviewer that the UltraFeedback experiment primarily addresses statistical generalisation. The setup does not strictly violate the positivity condition, making the problem solvable in the limit of infinite data. Yet, a perfect correlation is not expected to occur in the real world so we focus on the practical challenge of *limited* latent positivity. It has been shown that near-violations of positivity inflate estimator variance and necessitate impractical amounts of data for robust inference [5, 6, 7]. Thus, while the observed performance degradation is a statistical issue, it stems from the weak satisfiability of the causal positivity assumption. **ACTION:** We appreciate the feedback and agree that the presentation of this experiment could be sharpened. We will explicitly frame this experiment as a statistical challenge arising from near-violations of the causal assumption and not a causal non-identifiability problem.
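As a minimal numerical sketch (ours, not taken from the cited works [5, 6, 7]) of how near-violations of positivity inflate estimator variance: with a binary treatment, the inverse-propensity-weighted (IPW) estimate of the treated-outcome mean stays unbiased as the propensity shrinks, but its weights 1/e explode and the estimator's variance across replications grows sharply.

```python
# Toy IPW experiment: the estimand is 1.0 for every propensity, but the
# spread of the estimate across replications blows up as positivity weakens.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def ipw_estimate(propensity):
    # Binary treatment T with P(T=1) = propensity; the outcome mean under
    # treatment is 1.0, so the IPW estimand equals 1.0 for any propensity.
    t = rng.binomial(1, propensity, size=n)
    y = 1.0 + rng.normal(size=n)
    return np.mean(t * y / propensity)

# Standard deviation of the IPW estimate across replications, for
# progressively weaker positivity.
sds = {}
for e in [0.5, 0.05, 0.005]:
    estimates = [ipw_estimate(e) for _ in range(200)]
    sds[e] = np.std(estimates)
    print(f"propensity={e}: sd of IPW estimate = {sds[e]:.2f}")
```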
**Other comments** Thank you for catching all the typos and your suggestions regarding the introduction. **ACTION:** we will implement them!
**Heterogeneity of user preferences** We would like to highlight that while the case study of Section 4 is presented as a study of confounding, the issues discussed are inherently a consequence of the heterogeneous preferences of the two user groups: the helpfulness-preferring and the harmlessness-preferring, whose objectives are often *conflicting*.
We find the investigation of more fine-grained variability in user preferences, going beyond the simplified setup of just two, an exciting direction for future work.
**Distinction from prior work with user-specific covariates** Siththaranjan et al. highlight the possibility of heterogeneity in human preferences, pointing out the existence of potentially unobservable, hidden contexts that can influence user preferences. They demonstrate the negative consequences of implicitly aggregating over these hidden contexts when performing preference learning under a homogenous BTL model. Using our notation, they show that applying $\mathbb{E}[L(x; y, y')]$ as a global estimate of the expected preference for all users (instead of $\mathbb{E}[L(x; y, y') \mid C = c]$ separately for each set of user-specific covariates $c$) can lead to misalignment, especially with respect to the minority groups.
However, their work does not address the issue of confounding, which arises when user-specific covariates $C$ not only influence the preference $L$ but also the distribution of the prompts $X$, making $C$ a confounder. More generally, the limited number of related works considering user-specific covariates in preference learning assumes that user-specific preference datasets are collected under a randomised distribution of treatments. Our work challenges this assumption and explicitly considers the scenario where this distribution is influenced by $C$ (which is the case for instance when the prompts $X$ are written by the users themselves), leading to confounding issues examined in the case study of Section 4.
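The confounding mechanism described above can be illustrated with a hypothetical toy simulation (our construction, with made-up probabilities): a user covariate C influences both which prompt type X a user writes and their preference label L, so the observational conditional E[L | X = x] differs from the interventional mean under randomised prompt assignment, which averages the C-conditional preferences with population weights.

```python
# Toy confounding model: C -> X and C -> L. Conditioning on X in
# observational data mixes the groups with the wrong weights, so the
# observational E[L | X=1] overstates the population-level preference
# that a randomised assignment of X would reveal.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

c = rng.binomial(1, 0.5, size=n)                 # user group (confounder)
# Group c=1 mostly writes prompts of type x=1; group c=0 of type x=0.
x = rng.binomial(1, np.where(c == 1, 0.9, 0.1))
# Preference probability depends on both the prompt type and the group.
p_l = 0.2 + 0.3 * x + 0.4 * c
l = rng.binomial(1, p_l)

# Observational conditional: dominated by group c=1, since that group
# wrote most of the x=1 prompts (P(C=1 | X=1) = 0.9 here).
obs = l[x == 1].mean()

# Interventional mean under do(X=1): average the group-conditional
# preferences with the *population* weights P(C), here 0.5 / 0.5.
interventional = 0.5 * (0.2 + 0.3 + 0.4) + 0.5 * (0.2 + 0.3)

print(f"observational E[L|X=1] ~ {obs:.2f}, interventional = {interventional:.2f}")
```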
---
We thank the reviewer for their positive evaluation of our work and are happy to answer any further questions!
[1] arxiv.org/abs/1811.12359
[2] arxiv.org/abs/2105.06422
[3] arxiv.org/abs/2209.11924
[4] arxiv.org/abs/2203.16437
[5] doi.org/10.2307/2998560
[6] doi.org/10.1111/1468-0262.00442
[7] doi.org/10.3386/t0330
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply! Your response addresses all of my comments and questions.
---
Summary: This paper uses a causal framework to articulate several assumptions commonly made when reward modeling from preference data. Namely, users are modeled as having implicit rewards which they assign to each response: hence, a preference label is formalized as being a function of these two potential rewards. The paper then investigates what assumptions are sufficient for the Bradley-Terry-Luce model to identify user preferences.
The first three assumptions are typical in causal inference (consistency, unconfoundedness, and positivity/overlap). Positivity is weakened to latent-overlap under an additional assumption of latent sufficiency. The paper suggests that these assumptions are not satisfied in practice by appealing to high-level, hypothetical examples (e.g. LLM responses aren’t expected to be simultaneously informal and professional).
There are two sets of experiments. First, the practical relevance of the latent-overlap assumption is demonstrated by showing that linear reward models fail to generalize OOD on the UltraFeedback dataset. Second, a suite of experiments demonstrates that confoundedness is by default an issue on the HH-RLHF dataset, which can be addressed by careful architecture design when training the reward model (especially via a causality-inspired adversarial multi-objective reward model).
## update after rebuttal
The authors addressed my concerns to my satisfaction. I particularly appreciated the final rebuttal comment which elaborated on how a reward model could overfit conditionally to the confounder in "multi-objective" datasets. Hence, I am convinced that this really is saying something interesting about realistic training setups where many datasets are used during post-training: the key insight is that the way in which these datasets are mixed is itself providing information on which the reward model can overfit!
I am raising my score to 4, with the expectation that the authors will include these details in particular in the main body or appendix.
Claims And Evidence: I worry that due to the presentation, many readers will misunderstand the significance of the assumptions in this paper as “necessary” when all that is shown is that they are “sufficient” for identification. Propositions 1 and 2 merely show that making these assumptions are sufficient, not necessary. The experiments show that under a particular training setup issues arise, but do not attempt to assess the scope of these issues “in the wild”: the experiments only demonstrate that misidentification can occur when these assumptions are violated on two datasets, the UltraFeedback dataset and HH-RLHF.
In particular, this paper should at least consider alternative assumptions which could provide identifiability guarantees while being more tenable for real-world preference learning setups. Specifically, consider the alternative assumption of “having access to observational data from multiple environments”, for which a rich literature demonstrates there is useful signal (for both the causal identification task of Section 3.1 and the causal representation learning task in “discovery of Z” in line 255 (right)). For instance, “Robust Agents Learn Causal World Models” https://openreview.net/pdf?id=pOoKI3ouv1, “Interventional Causal Representation Learning” https://arxiv.org/abs/2209.11924, Chapter 5 of “Identifiable Causal Representation Learning” https://arxiv.org/pdf/2406.13371, to name a few. This multi-environment assumption is more likely to be satisfied (thus providing identifiability) for leading reward models which train on multiple datasets, unlike the experiments here which only use singular datasets. This undermines the warnings for practitioners in e.g. Section 3.2.1 and Section 4 of this paper.
Methods And Evaluation Criteria: As discussed above, the experiments conducted here suffice for demonstrating that issues can arise, but are insufficient for claiming that these issues are *actually* observed for leading reward models and aligned LLMs. In practice, reward models are not trained on only a single dataset (as is the implicit assumption in the experiments).
Under the multi-environment assumption (that is, training a reward model across many different datasets simultaneously, as is done for leading reward models such as RLHFlow/ArmoRM-Llama3-8B-v0.1 https://arxiv.org/abs/2406.12845 which was trained on HelpSteer, UltraFeedback, BeaverTails-30k, CodeUltraFeedback, Prometheus, and various Argilla datasets), it seems quite possible that replicating the experiment in Section 4 will not yield the same negative results, even when inducing strong correlations in each dataset individually.
In summary: how much are the experimental results of Section 4 simply due to the restriction to single datasets?
Theoretical Claims: I checked the proofs of Propositions 1 and 2, which look fine: they use standard proof techniques from causal inference.
Experimental Designs Or Analyses: Please note how the experimental results in Section 4 do not localize confounding alone, as they also implicitly make a single-environment assumption. Do these results still hold even when models are trained on multiple datasets, as specified above?
Supplementary Material: I read Appendices A, B, and D carefully, and skimmed the other appendices. I did not check the anonymized codebase.
Relation To Broader Scientific Literature: The causal assumptions and latent discussion are well-established approaches in the literature on causal representation learning.
I am not aware of any work which applies it to reward models in precisely the way done in this paper. There are somewhat close papers, namely 1. “RATE: Causal Explainability of Reward Models with Imperfect Counterfactuals” https://arxiv.org/abs/2410.11348v2, which introduces a causal framework for understanding and evaluating the spurious correlations of reward models regardless of whether they identify the original user preferences and leverages a similar latent variable framing and 2. “Aligning Large Language Models with Counterfactual DPO” https://arxiv.org/abs/2401.09566 which uses counterfactual pairs to address spurious correlations during the alignment process.
Given this context, the main contribution of this paper is in clearly articulating several sufficiency assumptions for identification of preferences. The assumptions themselves are unsurprising to anyone versed in causal inference, but may be new to reward model developers. (Which is precisely why it is important that this paper address that the assumptions here are sufficient, not necessary, and also highlight competing assumptions like multi-environment settings).
Essential References Not Discussed: Line 173 (left): “To the best of our knowledge, confounding due to user-specific covariates has not been addressed in prior works.”
There is lots of work which view language models as inferring latent user attributes. e.g. “Language Models as Agent Models”: https://arxiv.org/pdf/2212.01681. Since in practice leading reward models are almost always fine-tuned LLMs, any work that discusses user-specific covariates in the context of LLMs is relevant for reward modeling.
Other Strengths And Weaknesses: The paper is clearly organized, and can serve as a good introduction for practitioners not already familiar with causal representation learning.
The adversarial multi-objective reward model was neat.
Weaknesses have already been sufficiently noted elsewhere in this review.
Other Comments Or Suggestions: Typo: line 20 (left) should be “causal” not “casual”.
Questions For Authors: Do the results of Section 4 still hold under a multi-environment assumption, that is, even when models are trained simultaneously on multiple datasets, e.g. HelpSteer, UltraFeedback, BeaverTails-30k, CodeUltraFeedback, Prometheus, and various Argilla datasets as used for leading real-world reward models? Leading reward models are not trained on single datasets anymore, so it is not clear that the base model’s worse performance is due to confounding (as claimed) or of only using a single environment.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: Thank you for your thorough and insightful evaluation of our work. We appreciate the time and effort you put into assessing our paper. Below, we address your questions and comments point by point.
**Presentation of assumptions** We acknowledge your concern regarding the potential misinterpretation of our assumptions as necessary rather than sufficient. **ACTION:** We will make appropriate adjustments in the phrasing in the revised version of our paper. In particular, we suggest that the first paragraph of section 3.1 is instead formulated as:
> Causal inference provides a framework to answer counterfactual, 'what if' questions even when only observational data is available. A key part of causal analysis involves ensuring that the causal quantity of interest can be estimated from observational data alone. The following commonly made assumptions, adapted to our preference learning setup, are sufficient to guarantee identifiability and non-parametric estimability:
>
**The multi-environment setup** We greatly appreciate your suggestion to consider multi-environment setups and acknowledge their importance in broadening identifiability guarantees. **ACTION:** To further emphasise the fact that the assumptions discussed in our paper are sufficient rather than necessary, we will elaborate that alternative setups, such as having access to data from multiple environments, under certain conditions may also enable identifiability. We will reference the related literature you highlighted and contextualise it within multi-dataset reward model training. While learning reward models from multiple datasets is an emerging research area, its causal implications warrant further investigation. We view extending our framework to multi-environment settings as a promising avenue for future work.
That said, our case study in Section 4 underscores a fundamental challenge in merging datasets collected under divergent objectives. Specifically, the HH-RLHF dataset comprises two independent subsets: the helpfulness subset and the harmlessness subset, which can be interpreted as distinct environments. Our experiments highlight potential pitfalls in training a multi-objective reward model when datasets exhibit distributional differences.
We acknowledge the relevance of investigating our results in larger-scale multi-environment settings, such as those considered in ArmoRM. However, the datasets involved in ArmoRM lack counterfactual preference labels—i.e., they provide preferences only under their respective training objectives but do not indicate how preferences would transfer across datasets. The lack of overlap in objectives between the datasets of ArmoRM makes it difficult to assess generalisation beyond the dataset-specific correlations present at training time. In our experiments, the assessment of the counterfactual scenarios is possible thanks to the provision of additional, counterfactual labels to the HH-RLHF by [1]. The presented results exemplify potential risks associated with training multi-objective, steerable reward models from a composition of datasets collected under different, potentially conflicting objectives. We hope this motivates the development of new datasets tailored for robust reward model training.
**Related work & contributions** We appreciate your pointers to related work. **ACTION:** we will incorporate the suggested references into our discussion of related literature. Additionally, we would like to point out that our contribution extends beyond articulating sufficiency assumptions for preference identification. A second key contribution of this work is the highlighting of the potential confounding effects due to user-specific covariates.
While there indeed exist prior works that aim to infer user-specific covariates (e.g., [2, 3]), they do not examine the confounding issue. The implicit assumption of prior works is that the covariates $C$ only influence the preference label $L$ but not the prompt distribution $X$. Our work considers cases where $C$ influences both $X$ and $L$, and thus it acts as a confounder. This is the case when users write the prompts $X$ and subsequently score the LLM’s generated responses to these prompts, which may in practice be the case (e.g. the HH-RLHF dataset).
**Minor corrections** Thank you for catching the typo on line 20, we will correct it!
---
We are grateful for the opportunity to refine our work based on your thoughtful feedback. Your suggestions have helped us improve the presentation clarity, depth and broader impact of our research. We hope our explanations and proposed revisions address your concerns. Thank you for your time and consideration.
[1] arxiv.org/abs/2312.08358
[2] arxiv.org/abs/2402.05133
[3] arxiv.org/abs/2409.11901
---
Rebuttal Comment 1.1:
Comment: > While learning reward models from multiple datasets is an emerging research area, its causal implications warrant further investigation. We view extending our framework to multi-environment settings as a promising avenue for future work.
That said, our case study in Section 4 underscores a fundamental challenge in merging datasets collected under divergent objectives. Specifically, the HH-RLHF dataset comprises two independent subsets: the helpfulness subset and the harmlessness subset, which can be interpreted as distinct environments. Our experiments highlight potential pitfalls in training a multi-objective reward model when datasets exhibit distributional differences.
Question: Can you elaborate on how your case study in Section 4 highlights potential pitfalls when datasets exhibit distributional differences, as you say HH-RLHF exhibits?
Note: I acknowledge that when RLHF was first introduced (https://arxiv.org/abs/1706.03741), reward models were trained on homogeneous data, but in practice all of the leading models on RewardBench have been fine-tuned using multi-objective data. So the relevance gap is very significant, and the paper would be much stronger if it addressed this. However, I acknowledge that this lies outside the current scope of the paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your question. Due to character constraints in the main rebuttal, we could only briefly reference this issue. We’re happy to elaborate below:
As described in [4], the HH-RLHF dataset was constructed by combining *two distinct datasets* collected under different objectives: *helpfulness* and *harmlessness*. As observed by [1], this resulted in the two datasets having distinct distributions of prompts:
> *"… we found that the distribution of prompts was quite different between the helpfulness and harmlessness splits of the dataset. In the helpfulness split, most prompts were harmless questions or requests for assistance. In contrast, in the harmlessness split, most prompts were specifically chosen to elicit harmful behavior."*
>
This data collection setup induced a correlation between the labelling objective $C$ (helpful vs. harmless) and the prompt type, which we denote by $\mathrm{type}(X)$. As a result, the objective $C$ acts as a *confounder*—any learned reward model could exploit the correlation between $C$ and $\mathrm{type}(X)$ to predict the objective-conditioned preferences: $\mathbb{E}[L(x; y, y') \vert C=c]$.
To study this effect, we use the synthetic counterfactual labels from [1], which allow each sample $(x, y, y')$ to be evaluated under both objectives: the “factual” ones where the prompt type agrees with the objective (as in the original dataset), and the “counterfactual” ones where $\mathrm{type}(X) \neq C$. By controlling the training time correlation $\rho := P(\mathrm{type}(X) = C)$, our experimental setup lets us isolate how well the reward models generalise beyond their training distribution.
Importantly, at $\rho = 1.0$—corresponding to the original HH-RLHF dataset—the reward models tend to overfit: they perform well when evaluated on instances and objectives for which $\mathrm{type}(X) = C$, but fail on the *counterfactual* ones where $\mathrm{type}(X) \neq C.$ As $\rho$ decreases, this effect is less severe. At $\rho=0.5,$ i.e. when the distribution of prompts for learning each objective is perfectly overlapping, all reward models perform well on both types of evaluations.
In practice, however, when learning from static, observational datasets, $\rho$ is not controllable—posing risks for the robustness of learned reward models to test-time distribution shifts. Our results highlight a potential challenge in multi-objective, multi-environment preference learning: if each objective is learned from a distinct distribution of prompts (effectively, different environment), reward models may overfit to spurious environment-specific features rather than correctly recognising the true underlying factors driving objective-conditioned preferences.
Our case study motivates, for example, the use of targeted interventions during data collection to increase overlap between prompt distributions across different objectives or the introduction of appropriate regularisers—such as in the Adversarial model presented in this study.
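A schematic analogue of this effect (a hypothetical construction, not the authors' HH-RLHF experiment) can be simulated directly: train a logistic reward predictor on a weakly informative "signal" feature plus a "shortcut" feature standing in for type(X), whose agreement with the label is controlled by ρ, and evaluate on counterfactual data where the shortcut disagrees. At ρ = 1.0 the model leans on the shortcut and fails counterfactually; at ρ = 0.5 it relies on the signal and generalises.

```python
# Spurious-correlation sketch: at train-time correlation rho=1.0 the model
# exploits the shortcut and collapses on counterfactual data; at rho=0.5
# the shortcut is uninformative and the model uses the true signal.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, rho):
    # Binary preference label; a weakly informative "signal" feature and a
    # "shortcut" feature (a stand-in for type(X)) that agrees with the
    # label with probability rho.
    l = rng.binomial(1, 0.5, size=n)
    signal = (2 * l - 1) + rng.normal(scale=2.0, size=n)
    agree = rng.binomial(1, rho, size=n)
    shortcut = np.where(agree == 1, 2 * l - 1, 1 - 2 * l).astype(float)
    return np.column_stack([signal, shortcut]), l

def fit_logreg(X, y, lr=0.1, steps=2000):
    # Plain gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Counterfactual evaluation set: the shortcut always disagrees with the label.
X_cf, y_cf = make_data(20_000, 0.0)

accs = {}
for rho in [1.0, 0.5]:
    X_tr, y_tr = make_data(20_000, rho)
    w = fit_logreg(X_tr, y_tr)
    accs[rho] = np.mean((X_cf @ w > 0) == y_cf)
    print(f"train rho={rho}: counterfactual accuracy = {accs[rho]:.2f}")
```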
We hope this response clarifies our reasoning. Thank you again for engaging with our work.
[1] *Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF* – https://arxiv.org/abs/2312.08358
[4] *Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback* – https://arxiv.org/pdf/2204.05862
---
Summary: This paper introduces a causal framework for preference learning in AI alignment, aiming to improve the robustness of reward models. Reward modeling from preference data is a crucial step in aligning large language models (LLMs) with human values. The authors propose integrating causality into reward modeling, arguing that this approach enhances model robustness.
A key challenge in preference learning is that pairwise preference datasets are often collected opportunistically. LLM users both evaluate model responses and generate prompts, introducing individual-specific confounders that bias the learning process. The paper illustrates this issue with examples and experimental results, demonstrating the vulnerability of naïve reward models to OOD (out of distribution) samples.
To address these challenges, the authors propose controlled, randomized experiments, where prompt-response pairs are allocated randomly across a representative population. Additionally, they suggest that preference data collection methods can be improved to infer user-specific objectives through:
- Explicit feedback: users provide rationales for their preferences, offering richer insights into their underlying objectives.
- Contextual information modeling: incorporating external context to refine preference interpretation.

This causal perspective provides a more systematic and reliable approach to reward modeling, ultimately strengthening AI alignment with human values.
Claims And Evidence: Please refer to the Questions For Authors Section.
Methods And Evaluation Criteria: Please refer to the Questions For Authors Section.
Theoretical Claims: I did not find any apparent errors in the author's mathematical derivations. However, the theoretical content in this paper is relatively light. I reviewed the derivations of Proposition 1 and Proposition 2, and they appear to be correct. That said, I recommend that the authors provide better citations to relevant literature and more discussion on key intuitions, especially for readers unfamiliar with causal representation learning.
For instance, in Proposition 1, it would be helpful to clarify why the expectation in Line 149 suggests that observed statistical associations have a causal interpretation. Providing additional context or references could strengthen the argument and make the results more accessible to a broader audience.
Experimental Designs Or Analyses: Please refer to the Questions For Authors Section.
Supplementary Material: I have checked the related work, proofs, extended discussion, and experiment setting. I did not reproduce the experiment results based on the provided code. However, since the code is very readable and very well written, I believe the experiment results are trustworthy.
Relation To Broader Scientific Literature: It opens new research directions, sparks discussions on integrating causal frameworks into reward modeling to enhance robustness, and provides valuable guidance on observational data collection processes. I believe this is a meaningful addition to the broader literature.
Essential References Not Discussed: Please refer to the Questions For Authors Section.
Other Strengths And Weaknesses: Strengths:
Please refer to the Relation To Broader Scientific Literature section.
Weaknesses:
Please refer to the Questions For Authors Section.
Other Comments Or Suggestions: Please refer to the Questions For Authors Section.
Questions For Authors: 1. I did not fully understand the counterfactual outcome associated with not receiving the (X, Y, Y') treatment. Could you provide an intuitive example? For instance, in the case of drug treatment effects, the counterfactual outcome represents the difference in health status of the same individual under treatment versus without treatment. Typically, we compare the observed outcome under treatment with the hypothetical (counterfactual) outcome the individual would have had if they had not received the drug. However, in this paper, it seems that the author defines the expected outcome L(X,Y,Y′) as the counterfactual outcome. This approach is not intuitive to me—could you clarify the reasoning behind this definition?
2. Lack of Supporting Evidence
In Line 14, the authors state:
"Moreover, pairwise preference datasets are often collected opportunistically, with LLM users both evaluating model responses and generating the prompts eliciting them."
Could you provide references to studies that actually collect data in this manner? This is a key claim, as the authors argue that current practices in preference data collection are problematic. However, without supporting evidence, it is difficult to assess the validity of this concern. I would appreciate citations or empirical examples to substantiate this argument.
Questions on examples and case studies
1. Example 1 shows the existence of confounding X. But why not model it as contextual dueling bandits? Such as the modeling of Nearly optimal algorithms for contextual dueling bandits from adversarial feedback?
2. Example 2, the latent effect model, is fine. However, I am uncertain about how this approach will balance model misspecification and robust generalization. This modeling framework seems to assume precise knowledge of the reward model, which raises the question: does this approach truly enhance robustness? Could the authors clarify how the proposed modeling method mitigates misspecification while still ensuring robust generalization? In the real world, model misspecification is very common, and it is possible that overfitting performs even better than identifying a "wrong" or "incomplete" causal relationship.
3. Table 1 highlights the issue, but I find the result unsurprising—OOD (out-of-distribution) performance degradation is expected. However, is there any evidence that modeling reward causally can actually alleviate this problem? Additionally, does similar behavior occur in other datasets, or is this effect specific to the dataset used? Could you clarify why UltraFeedback was chosen for evaluation? A broader justification or comparison with other datasets would strengthen the argument.
4. Case Study Results. I believe it is crucial to include experimental results for Direct Preference Optimization (DPO) as well. My main concern is that reward modeling is not necessarily required for effective preference learning. If DPO is used instead of reward modeling, would the same concerns about robustness and generalization still apply? Including a comparison with DPO would provide a stronger evaluation and help clarify whether reward modeling is essential in this context.
I am willing to reevaluate my rating if all my questions are clearly addressed.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the time taken to evaluate our work. We appreciate your recognition of our work as a meaningful addition to the broader literature. Below we address your comments:
**Presentation.** In the camera-ready version, we will use the additional space to clarify key intuitions, especially for readers less familiar with causal representation learning, and include citations to introductory causality resources. Thank you for this suggestion.
**Q1: Treatment effects.** Thank you for your question regarding our causal framework. Due to space constraints, we have provided an intuitive explanation [here](https://imgur.com/a/YqzO5KG).
**Q2: Evidence.** Thank you for prompting us to substantiate the claim about data collection practices. We are happy to provide supporting evidence:
1) the widely-used Anthropic HH-RLHF dataset provides concrete evidence of this. As described in [2], the dataset construction process explicitly involved users both writing prompts and providing preference judgments on LLM’s responses.
2) OpenAI's ChatGPT interface empirically demonstrates this collection approach. Users are frequently presented with pairs of responses to their own queries and asked to indicate preferences. OpenAI's privacy policy confirms these interactions are used for model improvement.
**Q1: Bandits.** Thank you for the suggestion to consider contextual dueling bandits. However, we clarify that the confounder in Example 1 is the unobserved user-specific variable $C$, which influences both the prompt $X$ (during dataset generation) and the preference labels **$L$**—not $X$ itself. Secondly, contextual dueling bandits focus on optimising a selection policy for choosing responses **$Y,Y'$** given the observable context $X$ (the prompt), whereas in our setup, these responses are passively sampled from an LLM, whose policy we do not control. Our work focuses on learning a reward model from static, observational data. While applying a contextual dueling bandit framework to jointly learn preferences and control the sampling policy is an interesting direction (e.g. [3]), our work focuses on biases in preference learning from observational data rather than on designing a strategy for sampling the candidate responses. Furthermore, even in a dueling bandit setting, preferences are typically modelled with the BTL model, which, as discussed, does not account for confounding. If an unobserved $C$ influences both the prompts $X$ and preference labels $L$, the learned preference model would remain biased.
**Q2: Example 2.** We would like to clarify that Ex. 2 is not intended to prescribe a fixed functional form for the reward model for learning. It is an illustration showing why conditioning on prompt-specific features ($Z^X$) is essential. The provided equation exemplifies a possible structure of the ground-truth reward function to highlight the plausibility of the effects of $Z^X$ being non-additive.
In practice, to learn a reward model we would not assume knowledge of the parametric form or the set of relevant latent features. Instead, as discussed in the paragraph *Discovery of Z*, the latent factors are learned from data. This flexibility, however, induces challenges associated with causal discovery under minimal supervision [4, 1], including overfitting to spurious correlations, as illustrated by our case studies.
**Q3: The UltraFeedback experiment.** This experiment highlights the impact of the latent overlap assumption—when strictly violated, causal identification is impossible. We show that weaker overlap makes the recovery of the ground-truth model significantly more challenging. This experiment highlights the inherent limitations of learning from observational data only, rather than promoting a specific approach.
UltraFeedback was chosen for its unique structure, allowing control over correlations between latent features. Unlike other datasets like Stanford Human Preferences or HH-RLHF, which label instances based on a single objective, UltraFeedback scores each prompt-response pair across four latent factors.
**Q4: DPO.** Thank you for this suggestion. While DPO bypasses the explicit reward modelling of RLHF, it defines an *implicit reward function* by shaping the policy’s logits to match preference probabilities according to the same BTL model as used in RLHF. Consequently, we expect DPO to inherit the same generalisation failures in OOD settings. While an empirical DPO comparison could offer additional insights, we prioritised analysing reward modelling, as RLHF remains the predominant approach in alignment and DPO is based on the same underlying assumptions.
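For reference, the implicit reward in question is the standard one from the DPO derivation (with $\beta$ the KL-regularisation strength and $\pi_{\mathrm{ref}}$ the reference policy; this restatement is for the reader's convenience, not a result of the present paper). Up to a prompt-dependent constant:

```latex
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
p_\theta(y \succ y' \mid x) = \sigma\big(r_\theta(x, y) - r_\theta(x, y')\big),
```

so any confounding that biases the BTL preference probability biases this implicit reward in exactly the same way as an explicit reward model.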
---
Thank you for your valuable feedback, which helped us improve the paper. We hope our answer resolves your concerns, and we’d be grateful if you could reconsider your initial rating to help us make a meaningful addition to the literature.
[1] arxiv.org/abs/2102.11107
[2] arxiv.org/abs/2204.05862
[3] arxiv.org/abs/2402.00396
[4] arxiv.org/abs/1811.12359
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed responses. Most of the reviewer's concerns have been addressed. However, the reviewer remains particularly concerned about the absence of DPO experimental results as a baseline. Including DPO would significantly strengthen the empirical evaluation, as it would help demonstrate the importance of reward modeling and the robustness of the proposed approach. The reviewer would increase his rating if such results were included.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you again for your feedback. We are happy to hear our response addressed your concerns. Following your suggestion, we attempted to include a DPO baseline. However, despite our best efforts, we encountered practical challenges that prevented us from obtaining meaningful results:
**Computational constraints.** DPO requires LLM policy fine-tuning, which is computationally intensive. Given the compute available at our institution, we were only able to perform LoRA fine-tuning with small batch sizes (8–16), which are significantly smaller than those we could use for training the reward models based on the pre-computed LLM embeddings (i.e., bs=128).
**Policy conditioning.** In our setup (Section 4), conditioning on the user objective $C$ is crucial. In reward modelling, this is achieved by passing $C$ at the input level—either by concatenating a one-hot encoded objective with the prompt-response embedding (Base model), or by directly incorporating $C$ into the model architecture (Multihead model). Enabling a similar form of conditioning in DPO is non-trivial and constitutes a significant challenge in itself. It can be, for instance, attempted at the input level ([5]) or within the parameter space of the LLM (e.g., via so-called steering vectors [6]). We tried adopting an input-level prompt-based approach (similar to [5]) to encode the objective:
```
Objective: <helpfulness / harmlessness>
User: <prompt>
Assistant: <response>
```
Nevertheless, in our setting, this method did not lead to successful convergence of DPO training under the available resources. This suggests that prompt-based conditioning may be suboptimal for multi-objective policy learning—especially in settings where the objectives are in many instances conflicting. It also highlights a broader point: identifying a suitable and practical conditioning method for conditional policy optimisation is a separate research question, which we consider beyond the scope of this work.
While we would have liked to include DPO as a baseline, we were unable to obtain conclusive results. Nonetheless, we note that the practical challenges we encountered—particularly around conditioning the LLM policy—are orthogonal to the core focus of our work: analysing the BTL model and preference learning practices from a causal perspective and identifying their potential pitfalls, such as the confounding effects exemplified in the Case Study. Our core theoretical arguments regarding preference learning remain conceptually applicable to any framework based on the BTL model.
We hope this clarifies our rationale for the empirical focus on RLHF in this work, and we believe the insights and experimental evidence provided remain valuable and informative.
Thank you for taking the time to engage with our submission, we really appreciate your feedback.
Kind regards,
The Authors
---
[5] *Aligning to Thousands of Preferences via System Message Generalization* (NeurIPS 2024)
[6] *Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization* (NeurIPS 2024) | null | null | null | null | null | null |
ML$^2$-GCL: Manifold Learning Inspired Lightweight Graph Contrastive Learning | Accept (poster) | Summary: This paper proposes a lightweight graph contrastive learning (GCL) framework that integrates manifold learning theory, i.e., Manifold Learning Inspired Lightweight Graph Contrastive Learning (ML^2-GCL), aiming to optimize embedding representations through geometric structural constraints while reducing the computational complexity of traditional GCL methods. The method demonstrates innovative design, supported by rigorous theoretical analysis and extensive experimental validation.
Claims And Evidence: The claims are well-supported by both theoretical analysis and extensive experiments.
Methods And Evaluation Criteria: The proposed ML^2-GCL method is both effective and lightweight for the problem of graph contrastive learning.
Theoretical Claims: I checked the correctness of the proof for Propositions 1-2 and Theorem 1.
Experimental Designs Or Analyses: I checked the soundness of the experimental designs and analyses.
Supplementary Material: This paper does not contain any supplementary material.
Relation To Broader Scientific Literature: The application of lightweight graph contrastive learning across multiple scientific domains demonstrates its versatility. For bioinformatics, this paper may integrate heterogeneous graph structures of multi-omics networks for gene-protein association prediction. For knowledge graphs, this paper can combine with knowledge-enhanced graph convolutional networks to enhance cross-entity reasoning accuracy.
Essential References Not Discussed: To my best knowledge, no essential related works are missing.
Other Strengths And Weaknesses: Strengths:
1. This approach aligns with recent trends in geometric property modeling of graph structures, emphasizing the intrinsic low-dimensional manifold characteristics of high-dimensional data. To my best knowledge, ML^2-GCL is the first to marry manifold learning with graph contrastive learning.
2. This paper has a clear research motivation and logical structure. It is well-written and easy to follow.
3. Solid theoretical analysis and extensive experimental results demonstrate the effectiveness and lightweight design of ML^2-GCL.
Weaknesses:
1. The first step of ML^2-GCL is the neighboring sampling, which includes anchor node reconstruction combination weights, and then the authors apply the graph encoder. Such procedure is different from general GCL. The authors should give more detailed and reasonable explanation.
2. There are many numbers or symbols in the charts that are not displayed correctly. The author should check and modify them carefully.
3. There are some suggestions for grammar revision and improvement:
- Sentence structure issues: Line 16, "basic principle that " --> " basic principle of", Line 17, "while pushing negative pairs" --> "and pushing negative pairs".
- Inconsistent parallel structures: Line 20, "failure of" --> "the failure of", Line 21, "rigidness" --> "and the rigidness"
Other Comments Or Suggestions: Have the authors tried other types of models as graph encoders, such as Graph Transformers?
Questions For Authors: What are the core technologies or solutions that play a critical role in modeling lightweight in this paper? Could these technologies or solutions be applied to other tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive feedback and careful reading. Below, we will provide a point-by-point response.
W1: See Q of **Reviewer m6R5**.
**W2: This may be caused by software incompatibility. You can try Adobe Acrobat 10.0 to open the PDF file.**
W3: We will modify this in the final version.
C: We did consider using other types of graph encoders, such as Graph Transformers (GT). However, our decision to adopt GCN as the encoder was ultimately driven by the following factors:
**1. Computational Efficiency and Model Lightweight**
One goal of this work is to develop a lightweight graph contrastive learning framework. Compared with Graph Transformers, GCN offers lower computational complexity and fewer parameters, making it easier to train on large-scale graph data while maintaining robust performance.
**2. Compatibility with Manifold Learning**
Our method is rooted in manifold learning, emphasizing the preservation of local linear relations. GCN’s hierarchical neighborhood aggregation mechanism inherently aligns with this design philosophy, enabling the learned embeddings to better retain local geometric information. In contrast, Graph Transformers primarily rely on global attention, which, while effective for capturing long-range dependencies, may dilute the fidelity of local geometric structures.
**3. Experimental Results and Comparability**
To ensure fair comparisons with mainstream GCL methods, we adopted the same encoder setup. Since most baseline models use GCN, maintaining consistency ensures experimental comparability and highlights the advantages of our framework itself.
Even so, we acknowledge the potential of Graph Transformers in modeling long-range dependencies. Future work could explore their integration into the ML²-GCL framework to further enhance global structural modeling capabilities.
Q: The lightweight modeling in this paper primarily relies on the following core technologies and solutions, which reduce computational complexity while ensuring model effectiveness and generalization capability:
**1. Manifold Learning-Driven Positive Pair Weight Calculation**
We compute the positive pairs weight matrix W_p with locally linear embedding, enabling direct learning of global nonlinear representations on the original graph structure without pairwise distance computation.
This strategy applies to other contrastive learning tasks requiring structural preservation, such as text and protein interaction networks. Additionally, it can be used for dimensionality reduction tasks, e.g., manifold-based image feature extraction.
**2. Closed-Form Solution Optimization**
We solve the positive pairs weight matrix W_p through a closed-form solution, which significantly reduces computational complexity compared with gradient descent-based optimization methods, making the approach more lightweight.
This closed-form solution can be generalized to unsupervised dimensionality reduction, spectral clustering, and other tasks, particularly suitable for large-scale datasets such as graph neural network pretraining.
**3. Single-View Modeling Avoiding Data Augmentation**
Traditional graph contrastive learning relies on data augmentation to generate multiple views. This study implements contrastive learning directly on a single view, reducing storage requirements while eliminating semantic bias introduced by augmentation.
This method applies to scenarios where data augmentation is difficult to define, such as social network analysis and biological graph applications, where complex data structures pose challenges in generating high-quality augmented views.
The lightweight design in this study is not task-specific but based on rational improvements in optimization strategies, sampling methods, and modeling approaches. It can thus be generalized to other graph learning tasks, visual contrastive learning, and embedding learning in natural language processing. We believe these strategies can inspire the development of more efficient and scalable unsupervised learning methods.
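To make the closed-form weight computation above concrete, below is a minimal NumPy sketch of the standard locally linear embedding reconstruction weights (in the style of Roweis & Saul); the exact neighbour selection, the regularisation constant `reg`, and any graph-structure masking used for $W_p$ in the paper are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    """Closed-form reconstruction weights for each anchor over its k nearest
    neighbours: minimise ||x_i - sum_j w_j x_j||^2 subject to sum_j w_j = 1.
    With Z the neighbours centred on x_i and G = Z Z^T, the solution is
    w proportional to G^{-1} 1, renormalised to sum to one -- no gradient
    descent and no global pairwise-distance optimisation is needed."""
    n = X.shape[0]
    W = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                         # exclude self-matches
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]
        Z = X[nbrs] - X[i]                    # centre neighbours on the anchor
        G = Z @ Z.T                           # local Gram matrix (k x k)
        G = G + reg * np.trace(G) * np.eye(k) # regularise when k > dim (singular G)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()              # enforce the sum-to-one constraint
    return W
```

Each anchor only solves a small k-by-k linear system, which is what makes the weight computation lightweight relative to optimising all pairwise similarities.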
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. As all of my concerns have been solved, I am glad to raise my score. | Summary: Recent years have witnessed a phenomenon that graph contrastive learning faces the balance between effectiveness and efficiency. In spite of its popularity and success, several potential risks including underlying semantic disturbance brought by augmentation strategies, failure of GCN in capturing long-range dependence, rigidness and inefficiency of node sampling techniques still remain to be solved. In this paper, authors provides a novel perspective for graph contrastive learning, called ML^2-GCL. It achieves global nonlinear structure recovery from locally linear fits, which can make up for the defects of GCN. The most amazing advantage is about the lightweight due to its closed-form solution of positive pairs weights and removal of pairwise distances calculation. A series of theoretical analysis and empirical performance completely demonstrate its superiority.
Claims And Evidence: Yes, the claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: The cross-disciplinary innovation of lightweight graph contrastive learning with other technologies drives scientific literature development includes integration with traditional supervised learning and synergy with transfer learning. Specifically, in weakly supervised scenarios, contrastive loss compensates for the scarcity of annotated data. Cross-domain feature alignment enables knowledge transfer and model generalization.
Essential References Not Discussed: No, all the essential related works have been adequately discussed
Other Strengths And Weaknesses: 1) Through introducing manifold geometric similarity metrics, such as local neighborhood preservation, the distributional consistency of positive and negative pairs in the latent space is improved.
2) Removal of both graph augmentation and pairwise distances calculation can reduce the computational costs significantly, which aligns with the current demand for efficient graph models.
3) Both theoretical analysis and empirical performance adequately verify the advantages in terms of effectiveness and lightweight.
4) I've noticed that the proposed novel contrastive loss function uses the closed-form solution of anchor node reconstruction combination weights. Authors had better give more detailed explanations.
5) In the Conclusion Section, future work is missing.
Other Comments Or Suggestions: The authors should check some typos in Lines 16-17.
Questions For Authors: What are the differences between personalized graph embedding and traditional graph embedding?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive feedback and attention. Here, we will provide a point-by-point response.
W1: Thank you for your attention. In fact, the proposed novel contrastive loss can be regarded as personalized graph embedding, where positive pairs with larger weights should be more similar, while negative pairs with smaller weights should be more different.
W2: In the future, we will investigate deep integration mechanisms between manifold learning and dynamic graph structures to address nonlinear evolution patterns in temporal data or dynamic networks. We can also extend such unsupervised contrastive paradigms to multimodal data for efficient computation in large-scale graph scenarios. The content will be supplemented upon acceptance of the paper.
C: We will modify "While existing works follow the basic principle that pulling positive pairs closer while pushing negative pairs far away..." to "While existing works follow the basic principle of pulling positive pairs closer and pushing negative pairs far away...".
Q: Personalized graph embedding is a more generalized framework which assigns each positive pair or negative pair its own weight. Traditional graph embedding can be regarded as a special form of personalized graph embedding. In Appendix A.3, we have briefly discussed the differences. | Summary: This paper explores an effective and lightweight graph contrastive learning method called ML^2-GCL, highlighting the need for a deeper understanding of graph contrastive learning methods from a manifold learning perspective. ML^2-GCL recovers global nonlinear structure from locally linear fits with closed-form solution of positive pairs weights, using which to design a novel contrastive loss function and update graph encoders. In addition, this paper proves the existence of the optimal closed-form solution and analyses the essence of ML^2-GCL. Extensive experiments verify both effectiveness and lightweight of ML^2-GCL.
Claims And Evidence: Yes. This paper explores the marriage of manifold learning to graph contrastive learning for the first time, develops a new paradigm of effective and lightweight contrastive learning, and derives a series of theoretical analysis. These theoretical analysis proves the existence of the optimal closed-form solution of positive pairs reconstruction weights and reveals its connection with graph embedding, which provide strong evidence for the claim.
Methods And Evaluation Criteria: Yes. The proposed ML^2-GCL method is proper for the problem of GCL.
Theoretical Claims: Yes. I checked the correctness of the proofs for theoretical claims, which proves the existence of the optimal closed-form solution and the correlation between ML^2-GCL and graph embedding.
Experimental Designs Or Analyses: Yes. I checked the validity of the experimental designs and analyses, both of which are sound and complete.
Supplementary Material: Yes. I have reviewed the detailed proofs of the theoretical analysis in the supplementary material.
Relation To Broader Scientific Literature: (a) Dynamic graph adaptation: Existing methods primarily focus on static graphs, necessitating exploration of temporal enhancement strategies.
(b) Interpretability improvement: Lightweight model simplification may compromise semantic interpretability, requiring optimization through causal inference and related methods.
Essential References Not Discussed: No. This paper does not omit any essential references.
Other Strengths And Weaknesses: Strengths:
(a) This paper proposes a novel and insightful method for GCL. By incorporating manifold learning theory, i.e., the manifold smoothness assumption, ML^2-GCL optimizes the geometric constraints of graph node embeddings and enhances the capability of low-dimensional representations to characterize complex graph structures.
(b) By simplifying the contrastive pair generation process, ML^2-GCL reduces the computational complexity of traditional graph contrastive learning methods, especially those exploiting graph augmentation strategies.
Weaknesses
(a) The Conclusion Section lacks an exploration of potential future research directions.
(b) Publicly releasing the code and experimental details may ensure the reproducibility of the results and enhance the impact of this work.
Other Comments Or Suggestions: (a) Spelling errors: Line 31, " lightweightness " --> " lightweight".
(b) The mathematical expressions in the framework diagram are partially truncated, affecting the readability of key equations.
Questions For Authors: In general, graph encoding is the first step in GCL. However, in this paper, the computation process of positive pairs reconstruction weights does not include graph encoders, could authors explain why?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the reviewer’s careful reading and valuable suggestion. In the following, we will provide a point-by-point response.
Wa: In the future, we will investigate deep integration mechanisms between manifold learning and dynamic graph structures to address nonlinear evolution patterns in temporal data or dynamic networks. We can also extend such unsupervised contrastive paradigms to multimodal data for efficient computation in large-scale graph scenarios. The content will be supplemented upon acceptance of the paper.
Wb: The code will be made publicly available upon acceptance of the paper. For experimental details, please refer to Appendix C.
Ca: Thanks for the careful reading. We will modify this in the final version.
**Cb: This may be caused by software incompatibility. You can try Adobe Acrobat 10.0 to open the PDF file.**
Q: Thank you for the question. Our method, ML²-GCL, intentionally excludes the graph encoder when calculating the reconstruction weights for positive pairs. This is a deliberate design choice, primarily motivated by the following reasons:
**1. Avoiding Encoder Interference to Ensure Geometric Consistency**
Traditional GCL methods typically rely on embeddings from graph encoders to measure node similarity and construct positive/negative pairs. However, this approach is susceptible to instability caused by encoder initialization, training states, and parameter updates, which can lead to inconsistent positive pairs selection.
In ML²-GCL, the calculation of positive pair weights is based on locally linear embedding-based geometric optimization, directly constructing the positive pairs weight matrix W_p in the raw feature space. This avoids biases introduced by graph encoders, ensuring that W_p is solely determined by the inherent geometric structure of the data, aligning with the principles of manifold learning.
**2. Enhancing Robustness of Contrastive Learning**
Computing W_p in the raw feature space can serve as prior information for the encoder’s input data, providing stable supervisory signals. In contrast, computing W_p after GCN processing would risk instability, as updates to GCN parameters could disrupt the construction of positive pairs. Furthermore, positive pair selection should depend on the graph structure and node features themselves, rather than the encoder’s initial state.
**3. Reducing Computational Complexity and Improving Efficiency**
Traditional GCL methods require similarity calculations in the encoder’s embedding space, whereas ML²-GCL computes W_p via a closed-form solution, significantly lowering computational costs and enabling a lightweight framework. Our method also avoids exhaustive pairwise similarity computations by focusing on local geometric structures, further enhancing efficiency.
**4. Ensuring Generalizability Across Graph Encoders**
Since W_p is encoder-independent, ML²-GCL can flexibly integrate with various graph neural networks, such as GCN, GAT, and GraphSAGE, without altering the positive pair construction strategy. This design ensures broad applicability, allowing ML²-GCL to adapt to diverse GNN architectures seamlessly.
In summary, our design is grounded in manifold learning theory, ensuring stable positive pair construction, computational efficiency, and method generalizability. The experimental results validate the effectiveness of this approach, demonstrating its theoretical and practical soundness. | Summary: As a mainstream and representative unsupervised learning method, contrastive learning has achieved great success in the field of computer vision. Inspired by such achievements, graph contrastive learning (GCL) has attracted much interest in the past few years. Despite its excellent performance, GCL suffers from underlying semantic disturbance, rigid and inefficient node sampling, etc. To address these issues, this paper develops ML^2-GCL, a Manifold Learning Inspired Lightweight Graph Contrastive Learning method. It is the first exploration to marry manifold learning with graph contrastive learning, which avoids the semantic disturbance and high computational complexity completely. Theoretical analysis and experimental results demonstrate its effectiveness and lightweight design.
Claims And Evidence: Yes, they are.
Methods And Evaluation Criteria: Yes, they do.
Theoretical Claims: Yes, I did.
Experimental Designs Or Analyses: Yes, I did.
Supplementary Material: Yes, I did.
Relation To Broader Scientific Literature: The application of lightweight graph contrastive learning in recommendation system has verified its universality. With lightweight representation of user product interaction graphs, it solves the problems of data sparsity and long tail recommendation.
Essential References Not Discussed: As far as I know, no key reference is missing.
Other Strengths And Weaknesses: Strengths:
1. The idea is novel. This paper combines manifold learning with graph contrastive learning and proposes a lightweight design, which is in line with the current research trend of contrastive learning in reducing computational costs.
2. The technique is reasonable. It designs a novel contrastive loss function with the closed-form solution of anchor node reconstruction combination weights to achieve both effectiveness and lightweight.
3. Theoretical analysis proves the existence of the optimal closed-form solution. Extensive empirical studies on benchmark datasets demonstrate that the proposed method achieves state-of-the art performance in terms of effectiveness and lightweight.
Weaknesses:
1. To improve readability, authors are encouraged to discuss the similarities and differences between manifold learning and graph contrastive learning.
2. Authors should carefully check and revise a few spelling and grammar mistakes. For example, "While existing works follow the basic principle that pulling positive pairs closer while pushing negative pairs far away...", where " principle that pulling ... while pushing ..." should make some adjustments.
3. There are minor errors in Table 7. For the GPU memory usage on Amazon-Photo dataset, GRACE is the second least and AFGRL is the third least. The color is reversed.
Other Comments Or Suggestions: See Weaknesses.
Questions For Authors: 1. Are the positive pairs weight matrix Wp and negative pairs weight matrix Wn of equal size?
2. From Tables 5-6, I observe that the optimal parameter k is relatively small. Could authors give more explanation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive feedback and valuable comments. Below, we will provide a point-by-point response to each comment.
W1: We have briefly discussed this in Introduction. Here, we will give a detailed discussion.
**Similarities**
**1. Consistency in Core Objectives**
Both aim to extract effective low-dimensional representations from complex data structures. Manifold learning reveals low-dimensional manifold structures embedded in high-dimensional data through dimensionality reduction, while graph contrastive learning learns discriminative embeddings through contrasting different graph structures or node relations.
**2. Focus on Local Structures**
Manifold learning algorithms preserve local geometric properties through constructing neighborhood relations. Graph contrastive learning enhances sensitivity to local structures through contrasting node or subgraph neighborhoods.
**3. Advantages in Handling Nonlinear Relations**
Manifold learning captures nonlinear low-dimensional structures of high-dimensional data. Graph contrastive learning excels at modeling non-Euclidean relations of graph data.
**Differences**
**1. Theoretical Foundations and Input Data Types**
Manifold learning is based on topological manifold theory, assuming high-dimensional data lies on low-dimensional manifolds. Its inputs are high-dimensional vectors, such as images and text. Graph contrastive learning is rooted in graph theory and contrastive learning frameworks. Its inputs are graph-structured data, such as nodes and edges.
**2. Methodological Approaches**
Manifold learning reduces dimensionality via constructing local neighborhoods or global constraints, while graph contrastive learning generates positive/negative pairs and optimizes models using contrastive loss functions to distinguish similar/dissimilar structures.
**3. Application Scenarios**
Manifold learning is primarily used for data visualization, denoising and feature extraction. Graph contrastive learning is applied to graph classification, node classification and graph generation, particularly in scenarios with missing or improvable graph structures.
**4. Mathematical Tools and Optimization Goals**
Manifold learning relies on graph Laplacian matrices, geodesic distance calculations, and minimizes reconstruction errors or preserves local geometry. Graph contrastive learning uses contrastive loss to maximize similarity of positive pairs and minimize similarity of negative pairs, focusing on discriminability of representation spaces.
W2: We will modify "While existing works follow the basic principle that pulling positive pairs closer while pushing negative pairs far away..." to "While existing works follow the basic principle of pulling positive pairs closer and pushing negative pairs far away...".
W3: We will modify this in the final version.
Q1: Yes, both of them are square matrices with dimension N.
Q2: We observe that the optimal range of k values is generally small across all datasets, which can be primarily explained from the following two perspectives:
**1. Theoretical Importance of Local Neighborhoods**
In ML²-GCL, k determines the scope of the positive pair set P_i, where the positive samples of an anchor node are derived from its k-hop neighbors. Theoretically, a smaller k makes positive pairs more localized, enhancing the preservation of local geometric structures while reducing noise interference from distant neighbors. Our optimization objective, based on manifold learning, constructs the positive pairs weight matrix W_p through locally linear embedding, making it more suitable for information propagation within small-scale neighborhoods. When k becomes excessively large, distant positive pairs may introduce excessive noise, compromising the fidelity of manifold structures and resulting in overly homogeneous representations that degrade contrastive learning performance.
**2. Empirical Observations on k-Sensitivity**
Our experiments reveal that larger k values lead to performance degradation and reduced model discriminability. This may occur because a larger k introduces excessive positive pairs, potentially amplifying noise and destabilizing local structures. When k increases, more nodes are included in the positive pair set. However, information from many of these nodes may already have been propagated through GCN layers, leading to information redundancy and diminished model discriminative power.
Based on the above analysis, we conclude that smaller k values allow ML²-GCL to better preserve manifold-localized structures while avoiding over-smoothing issues, thereby delivering superior experimental performance.
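To make the role of k concrete, here is a minimal sketch (our own illustration, not ML²-GCL's implementation; the graph and function name are hypothetical) of how the k-hop positive-pair set can be derived from an adjacency matrix. On a 5-node path graph, the number of candidate positive pairs grows quickly with k, which is consistent with the noise argument above:

```python
import numpy as np

def k_hop_positive_mask(adj, k):
    """Boolean mask of ordered node pairs within k hops of each other.

    adj: (N, N) binary adjacency matrix without self-loops.
    Entry (i, j) is True if j is reachable from i in at most k steps,
    i.e., j is a candidate positive sample for anchor node i.
    """
    n = adj.shape[0]
    reach = np.zeros((n, n), dtype=bool)
    power = np.eye(n, dtype=int)
    for _ in range(k):
        power = power @ adj         # count walks one step longer
        reach |= power > 0          # reachable within <= k steps
    np.fill_diagonal(reach, False)  # the anchor is not its own positive
    return reach

# 5-node path graph 0-1-2-3-4
adj = np.zeros((5, 5), dtype=int)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1

print(k_hop_positive_mask(adj, 1).sum())  # 8 ordered pairs
print(k_hop_positive_mask(adj, 2).sum())  # 14 ordered pairs
```

At k=1 only 8 of the 20 possible ordered pairs are positives; at k=2 this jumps to 14, so even one extra hop admits many more (and more distant) positives, matching the explanation that small k keeps P_i confined to tight local neighborhoods.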
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response. I have also reviewed the comments from the other reviewers and the corresponding replies from the author. I will maintain my score. | null | null | null | null | null | null |
Pessimism Principle Can Be Effective: Towards a Framework for Zero-Shot Transfer Reinforcement Learning | Accept (poster) | Summary: This paper studies transfer learning where one aims to learn a good policy for a target domain with data collected from multiple source domains. And they consider the distributed and decentralized setting, where one central server can only access partial data from source domains. The authors apply the principle of pessimism in this problem. Specifically, their algorithm estimates a pessimistic evaluator which lowers bounds the performance of the policy in the target domain by utilizing an average of robust Bellman operators. With this conservative estimator, they can provide performance guarantees of their learned policy. What's more, they provide a minimal pessimism operator with which they can mitigate the issue of negative learning which previous methods may suffer in transfer learning.
Claims And Evidence: One of the claims seems not clearly supported.
The authors made the following claims:
They introduce pessimism into transfer learning, and based on that principle they can construct a performance estimator for the learned policy that lower bounds its performance in the target domain. With this conservative estimator, they address two problems in transfer learning: first, the lack of performance guarantees for the learned policy in the target domain; second, the existence of source domains very distinct from the target domain, which may lead to negative transfer.
The claim on avoiding negative transfer is not supported clearly. They do show their algorithm can treat source domains differently, but I am expecting a theorem that quantifies how much performance is gained, or an example showing that treating all source domains equally causes negative transfer while the proposed algorithm avoids it.
Methods And Evaluation Criteria: Yes, it makes sense. The method proposed in this paper primarily builds on the idea of robust RL and q-learning. With robust RL, the authors build pessimistic performance estimator of the learned policy in the target domain which can lower bound the true performance. And with Q-learning, they propose practical algorithms to learn that pessimistic estimator.
Theoretical Claims: I checked the correctness of Lemma 4.1, Lemma 5.1, Theorem 5.2 and Theorem 5.5. They seem correct to me.
Experimental Designs Or Analyses: No, I cannot evaluate the soundness of the experimental designs. The authors implemented experiments on two tasks with different target domain uncertainty parameters and compared their algorithms with two baselines which are non-robust algorithms.
Supplementary Material: Yes, I reviewed Section A, which discusses additional related works, Section B, which presents the experiments conducted, and parts of Section C, which provides the proofs of the theorems in Section 5 of the main paper.
Relation To Broader Scientific Literature: The contributions in this paper are related to two main areas which are offline RL and robust RL.
First, the key contribution is the application of the principle of pessimism to the problem of transfer learning which is related to offline RL where pessimism is widely used to provide high-probability lower bound estimator of Q-functions. In offline RL, the estimator is constructed using some concentration inequality.
In this paper, the authors construct the lower bound estimator via robust RL. They assume an uncertainty set for each source domain in which the target domain lies. By finding the optimal policy over this uncertainty set, they can lower bound the performance in the target domain.
Another broad related area is Q-learning. The algorithm proposed in this paper is a modified version of Q-learning in the framework of distributed transfer learning.
Essential References Not Discussed: I don't see essential related works that are not discussed. The discussion of robust RL, which serves as the cornerstone of this paper, is placed in the Appendix.
Other Strengths And Weaknesses: Strengths:
1. The presentation is very clear which makes the paper easy to follow.
Weaknesses:
1. The results seem to build on the assumption that the reward function is identical across all domains, including the source domains and the target domain. This restricts the scenarios where the algorithm is applicable. Correct me if my understanding is wrong.
2. The authors comment in the main paper that their results can be directly extended to the case with different similarity parameters across source domains (with which I agree), but it would be better to at least mention and supplement this in the Appendix, which I did not find.
3. The results are built on the tabular setting because of the averaging step over Q-values from source domains. The claim in line 384 on scalability should therefore be more conservative, though the authors specify that the local update step (not the global step) can be implemented by any scalable robust RL algorithm.
Other Comments Or Suggestions: 1. The authors specify a parameter $E$ in their algorithm which trades off the convergence rate and communication cost, it would be interesting to see the experimental results on varying $E$.
Questions For Authors: 1. Is it true that the rewards of all source domains and the target domain are assumed to be same? If it is true, is there any way to get rid of it?
2. I can see the advantage of the proposed algorithms in the distributed setting since only Q-values are communicated. What if the communication of raw data between these source domains is safe and possible: do the proposed algorithms still outperform the estimators defined in lines 220-224?
3. In Theorems 5.6 and 6.4, when $E=1$, the similarity parameter $\Gamma$ does not affect the convergence rate on the right-hand side. Why does communicating at each round get rid of $\Gamma$? It sounds unreasonable to me; is there any explanation?
4. According to Theorem 5.6, the final $Q_{AO}$ is approaching $\frac{\sum_{k=1}^KQ _k}{K}$, why doesn't one directly learn a good $Q_k$ in each source domain since we have lots of data in each domain, and then take the average instead of doing it in an iterative way?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the time and feedback. Please refer to the link https://anonymous.4open.science/r/ICML-2663/README.md for our newly conducted experiments.
**Negative transfer**
Our initial environments were relatively simple, so negative transfer was not evident. To support our claims, we ran a new Cartpole experiment (Fig. 1). The source domains have randomly generated parameters (pole length), while the target is the default environment. As shown, MDTL-Avg suffers from negative transfer, while MDTL-Max successfully mitigates it. We will elaborate further in the final version.
**Experiments on E and robust baselines**
We validate the trade-off in Fig. 2 and Table 1. As E increases, convergence slows (under fixed step size) and performance declines. However, computational time also decreases, indicating reduced communication.
We also add two robust baselines (see Table 2 and reply to R rfmi).
**Identical reward assumption**
We made this assumption to simplify presentation and analysis, but our framework can extend to settings with differing rewards.
1. If the (unknown) target domain reward differs from the sources, additional knowledge of its relation to the source reward is required, e.g., $|r_t(s,a)-r_s(s,a)|\leq R(s,a)$; otherwise the target may have different goals from the source domains, making transfer impossible. With such knowledge, we additionally define a **reward** uncertainty set for each source, $\mathcal{R}^k_{s,a}=\{r': |r'-r(s,a)|\leq R(s,a) \}$, and modify the robust Bellman operator to $\mathbf{T}^k(Q)=(r-R)+\sigma(V)$, preserving pessimism and effectiveness.
2. If the source rewards also differ from each other, we can still apply a different local operator $\mathbf{T}^k(Q)=(r^k-R^k)+\sigma(V)$ to similarly ensure conservativeness.
We will discuss more in the final version.
**Extension to the case with different similarity parameters in source domains**
We thank the reviewer for this insightful point. While most proofs are independent of the radius, some technical adjustments are needed. For example, in extending Thm. 6.4, we can adjust (89) to yield a bound $\gamma\lambda\frac{E-1}{1-\gamma}\max_i\{\Gamma_i\}$. We will discuss this further in the final version.
**Claim on the scalability**
Thank you for this valuable comment. We will revise the claim. By scalability, we refer to the local updates, which can leverage scalable, model-free methods. While global aggregation is necessary in tabular Q-learning, our approach is not limited to this setting. Our method can be paired with function approximation or policy gradient approaches to improve scalability. For instance:
1. With function approximation for robust RL [1], or
2. Robust policy gradient [2], the global step only requires parameter aggregation (e.g., Jin et al., 2022), making it scalable in practice.
We further numerically verify this through experiments on Cartpole and DVRP (please see our response to #tyX8). We will revise the statement and leave further extensions for future work.
[1] Zhou, R. et al., Natural actor-critic for robust reinforcement learning with function approximation, 2023
[2] Wang, Y. & Zou, S., Policy gradient method for robust reinforcement learning, 2022
**Proxies in Line 220**
Our proxies outperform options (1)-(3) due to reduced conservativeness. Though proxy (4) is less conservative, it presents practical challenges:
1. Its uncertainty set lacks a ball structure, making standard robust RL methods inapplicable.
2. Intersection-based methods require accurate knowledge of each local model and set, which is not available via raw data communication alone.
Thus, our method has two advantages over (4): lower communication requirements and easier implementation.
**E=1 in Theorem 5.6/6.4**
When E = 1, the global Q-table is updated every step using $Q \leftarrow \frac{1}{K} \sum_k \mathbf{T}^k(V)$. This operator is a $\gamma$-contraction, so convergence rate is independent of $\Gamma$. Even in the presence of unbiased noise, the expected update preserves this rate. Technically, Eq. (34) bounds $\|\sigma(V) - \sigma(V')\| \leq \|V - V'\| \leq \frac{1}{1 - \gamma}$, independent of $\Gamma$. However, $\Gamma$ still affects the pessimism level $\zeta$, which impacts the **effectiveness**, though not the **convergence rate**.
When E > 1, agents perform multiple local updates with their own robust operators. The global update then reflects local differences, making it depend on $\Gamma$. A similar effect is observed in Wang et al., 2024 (Theorem 1), where heterogeneity does not affect convergence when E = 1.
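The E=1 argument can be checked numerically. Below is a toy scalar sketch (our own illustration with an assumed linear penalty form for each pessimistic operator, not the paper's construction): the averaged operator contracts the error by exactly $\gamma$ per step, so $\Gamma$ shifts the fixed point (the pessimism level) but never the rate:

```python
import numpy as np

gamma, Gamma = 0.9, 0.5
rewards = np.array([1.0, 0.4, 0.7])   # per-source rewards r_k (assumed values)

# Each source's pessimistic operator is modeled as T^k(Q) = (r_k - Gamma) + gamma*Q;
# the E=1 global update applies their average every step.
def averaged_operator(q):
    return np.mean(rewards - Gamma) + gamma * q

q_ao = (rewards.mean() - Gamma) / (1 - gamma)  # fixed point Q_AO

q, errors = 0.0, []
for _ in range(5):
    q = averaged_operator(q)
    errors.append(abs(q - q_ao))

ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(ratios)  # every ratio equals gamma = 0.9, independent of Gamma
```

Varying `Gamma` moves `q_ao` (a more pessimistic fixed point) while every contraction ratio stays at $\gamma$, mirroring the distinction drawn above between effectiveness and convergence rate.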
**Learn locally and average**
We apologize for the possible misunderstandings here. $Q_{AO}$ is the intended solution, and $Q_k$ is the algorithm’s output. The reviewer may have confused $Q_k$ with $Q^*\_k$, the robust value function for local domain k. Importantly, $Q_{AO}$ is different from $\frac{\sum Q^*\_k}{K}$, so simply learning locally and averaging does not yield the correct solution.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their efforts on the rebuttal, especially for providing additional experimental results. The elaborations are detailed. Some of my concerns are solved including different reward functions, the situation of $E=1$. Some remaining concerns or questions are:
1. You explained that $Q_k$ in Theorem 5.6 is the output of the algorithm instead of $Q_k^*$. I am still confused here, since Algorithm 1 outputs $Q_k$ in each round $t$, right? Then what value does $Q_{AO}$ converge to via $||E[Q_{AO}-\frac{\sum_{k=1}^K Q_k}{K}]||$ in Theorem 5.6? I may misunderstand something here. This is important for me to evaluate the results, especially the claim on negative learning, since Theorems 5.6 and 6.4 are almost the same except for the term within the expectation.
2. Thank the authors for providing additional results; however, the experiments on negative learning are not that evident if we look at the peak performance. Since the mitigation of negative learning is one of the contributions, I am expecting more convincing results. Also, can the authors explain more on Theorem 6.4 compared with Theorem 5.6: what values do MDTL-Max and MDTL-Avg converge to, and how does MDTL-Max converge to a better value? This may be closely related to my first question.
3. Please revise the claim of scalability to make it precise. It would not negatively affect my evaluation of this work.
4. Does the algorithm need any prior information about the uncertainty set?
I am keeping the score at the current stage but I am staying neutral and willing to increase the score based on further explanation of my above questions and discussions with other reviewers.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your response.
1. We apologize for the notational confusion. In Theorem 5.6, $ Q_{AO} $ denotes the fixed point of the average-based operator defined in eq(6). It is a static value and is not updated during training. Our algorithm is designed to estimate it and derive policy from it. At time step $t$, each local agent produces a local Q-table $Q_t^k$ ($Q_k$ in our previous notation), and the algorithm will take their aggregation $\bar{Q}\_t := \frac{1}{K} \sum\_{k=1}^K Q\_t^k$ as the output. Our convergence result measures how well this aggregated Q-table approximates $Q_{AO}$:
$|| \mathbb{E} [ \bar{Q}\_t- Q_{AO} ]|| \to 0.$ $Q^*_k$ is the robust value function of each local source domain; it is different from the quantities above and does not appear in the convergence result.
Similarly, in Theorem 6.4, $Q_{MP}$ is the fixed point of the max-based operator, which is the value our algorithm aims to estimate. The global output is then the max-aggregation $\max_k Q_t^k$,
and the convergence is $|| \mathbb{E} [ \max_k Q_t^k - Q_{MP}] || \to 0.$
Both Theorems 5.6 and 6.4 establish that the algorithm converges to its corresponding fixed point, enabling recovery of the respective transferred policy. These convergence guarantees are orthogonal to the study of negative transfer, which instead arises from comparing of $Q_{MP}$ and $Q_{AO}$ (as elaborated in the next point).
2. The mitigation of negative transfer through MDTL-Max is in fact captured by Theorem 6.1 (2).
We first highlight the fundamental advantage of our pessimism principle TL framework. As established in Section 4.2, the performance in the target domain is monotonic with respect to the conservativeness of the proxy—less conservative proxies yield better transferred policies (see Lemma 4.1).
We note that $V_{MP}\geq V_{AO}$ by Thm 6.1 (2), indicating that $V_{MP}$ serves as a **less conservative proxy** than $V_{AO}$. Therefore, while the MDTL-Max algorithm (which targets $V_{MP}$) may require additional effort due to max-aggregation, it ultimately leads to improved transfer performance compared to MDTL-Avg (which targets $V_{AO}$).
Furthermore, Theorem 6.1 (2) shows that $V_{MP}$ also outperforms **any individual local proxy** $ V\_{\mathcal{P}\_k} $. No such guarantee exists for $V_{AO}$. This result is crucial: it implies that MDTL-Max inherently mitigates negative transfer. Because of the monotonicity, MDTL-Max will perform at least as well as the best local source policy, effectively **ignoring misleading information from poorly aligned source domains**.
To better illustrate this, consider a setting where only one source domain is similar to the target, while the others are significantly different. An ideal transfer method would selectively leverage the similar domain and ignore the rest. However, in practice, we cannot identify the "good" source domain in advance. If we treat all source domains equally—as in MDTL-Avg—this can lead to **negative transfer**, where the poor-quality sources degrade the overall performance. In contrast, MDTL-Max is designed to be **robust to such variation**: it guarantees performance at least as good as the best source, effectively and automatically filtering out the harmful domains and preventing negative transfer.
We further develop 2 experiments to verify this, please refer to https://anonymous.4open.science/r/ICML-2663/README.md.
In the Robot Recycle environment (fig 5), we set the target domain to have parameters $\alpha=\beta=0.8$. The source domains are configured as $\alpha_k=\beta_k \approx 0.1$ for $ k = 1,...,K-1$, and $\alpha_K = \beta_K=0.7$. Clearly, domain $K$ is much more similar to the target, and an ideal transfer should primarily rely on it while ignoring the others. However, MDTL-Avg treats all source domains equally, resulting in an overly conservative policy due to influence from dissimilar sources. In contrast, MDTL-Max achieves significantly better performance than all source domains, including domain $K$, thereby avoiding negative transfer.
We observe similar results in the Frozen-Lake environment under an analogous setup (fig 4). For each algorithm, we run experiments across 10 random seeds. For each seed, the policy is evaluated over 10 independent episodes to compute the average return. Result shows that MDTL-Max consistently outperforms MDTL-Avg and successfully mitigates negative transfer.
3. We sincerely appreciate your suggestion and will revise the claim to:
Our algorithms can also be implemented in a model-free manner to enhance computational and memory efficiency, where any model-free algorithm for robust reinforcement learning can be integrated into the local update step.
4. The algorithms require knowledge of the uncertainty set radius, which represents our prior knowledge of task similarities. In general, effective transfer cannot be expected without such knowledge. Please also refer to our response to Reviewer rfmi for a more detailed discussion. | Summary: The paper introduces a novel pessimism-based transfer learning framework to address critical challenges in zero-shot transfer RL. The authors propose constructing conservative proxies—via robust Bellman operators and novel aggregation schemes (both averaged and minimal pessimism operators)—that yield lower bounds on the target performance. In addition, they develop distributed algorithms with convergence guarantees.
**Update after rebuttal**--I appreciate the authors’ thorough revision and the additional experiments addressing my concerns. I hope those will be implemented in the revised draft. I have decided to keep my original score.
Claims And Evidence: The paper claims that incorporating a pessimism principle can yield a conservative proxy and the proposed distributed algorithms are computationally efficient and scalable. These are supported by theoretical proofs of contraction properties. However, empirical evidence is deficient to some extent.
Methods And Evaluation Criteria: This paper would benefit greatly from widely used benchmarks. One example could be contextual MDP benchmarks.
Theoretical Claims: I haven’t checked all the details in proof, but the theoretical contributions are both novel and rigorous. Some proofs could benefit from additional intuition or summary comments to aid understanding.
Experimental Designs Or Analyses: The experimental design is sound; however, a discussion on hyperparameter sensitivity and empirical analysis in many different benchmarks could further strengthen the work.
Supplementary Material: Yes, I did. It includes detailed proofs for all theoretical claims and detailed experimental results.
Relation To Broader Scientific Literature: The paper is well situated within the broader literature:
- It builds on established ideas in robust RL, domain randomization, and multi-task learning.
- It addresses limitations in existing transfer RL methods (no guarantee in safety and performance when transferred) by providing performance guarantees and methods to avoid negative transfer.
Essential References Not Discussed: There are some papers discussing zero-shot transfer in the context of contextual reinforcement learning. I highly recommend the authors to include them in related works. Furthermore, a brief discussion of recent advances in safe RL and risk-sensitive RL (which also aim to avoid over-optimistic policies) could further contextualize the pessimism principle. However, the omission does not detract significantly from the paper’s contributions.
Other Strengths And Weaknesses: - The paper offers a thorough theoretical justification with detailed proofs. However, some proofs and algorithmic descriptions could benefit from additional intuition or summarizing remarks.
- The distributed algorithm design and privacy-preserving updates are well aligned with modern large-scale and decentralized applications. While simulations are convincing, experiments on real-world tasks would further strengthen the empirical claims.
Other Comments Or Suggestions: - I didn’t find any significant typos.
- The font size in the figures is too small.
Questions For Authors: - In Table 2, it’s hard to see difference between MDTL-Avg and MDTL-Max. Please provide other examples that could distinguish these two.
- Actually, more experiments should be done to support this.
- I’m curious how critical the unbiased estimation assumption is and how robust the method is to violations of it.
- I’m curious about the comparison with other methods that authors mentioned throughout the papers. How does this approach perform better than others both in your theory and experiments?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and feedback, and appreciate the reviewer identifying our contribution. Please refer to the link https://anonymous.4open.science/r/ICML-2663/README.md for our newly conducted experiments.
**Hyperparameter sensitivity; Benchmark like contextual MDPs.**
We appreciate the reviewer’s suggestion.
Two hyperparameters are included: E and $\lambda$.
1. We implement algorithms with different E in Fig 2, Table 1. A larger E results in slower convergence but less computation, matching our theoretical results.
2. We run experiments with $\lambda=0.2$ in Table 2. Together with the results in our paper for 0.1, this shows our methods are uniformly better and robust to this hyperparameter.
We present two more experiments. (1) Cartpole Experiment (Fig. 3): We apply our methods to the Cartpole environment, a common example of a contextual MDP. Results show that even when updates introduce bias, our methods outperform the baseline, confirming robustness. (2) Dynamic Vehicle Routing Problem (DVRP): We design a real-world scenario, DVRP [Iklassov, Z., et al., Reinforcement Learning for Solving Stochastic Vehicle Routing Problem, 2024], which extends the classic vehicle routing problem with real-time requests and environmental uncertainty. Three important objectives are considered: minimizing overall routing cost, enhancing route smoothness, and improving route stability. As shown in Table 3, our methods consistently outperform baselines across all criteria and are more effective.
We will add more experiments in the final version.
**Additional references.**
We thank the reviewer for recognizing our contributions and providing constructive suggestions.
We briefly discussed contextual RL (cRL) for zero-shot generalization in Appendix A. cRL seeks to optimize performance under a contextual distribution to enable transfer without fine-tuning. However, theoretical guarantees remain limited. For instance, [1] provides a suboptimality bound, but it requires samples from the contextual distribution, potentially including target domain data. Furthermore, it remains unclear whether cRL can mitigate negative transfer, especially when the context distribution is assumed to be uniform.
We also appreciate the reviewer’s mention of safe and risk-sensitive RL, which aim to optimize reward while maintaining low cost under specific criteria [2]. However, most of these works are developed for single-environment settings, and do not address transfer scenarios. We will incorporate a more thorough discussion in the final version.
[1] Wang, Z., et al., Towards Zero-Shot Generalization in Offline Reinforcement Learning, 2025
[2] García, J., et al., A Comprehensive Survey on Safe Reinforcement Learning, 2015
**Intuition or summarizing remarks.**
Thank you for the suggestion. Our algorithm consists of two main parts: local updates and global aggregation. Each agent independently updates its local Q-table using its data, and after every E steps, a global aggregation step unifies the local Q-tables. Our convergence proof mirrors this structure: it is decomposed into a local part (governed by the local Bellman operator) and a global part (governed by the aggregation mechanism). We will include further discussion in the final version.
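The two-part structure described above (local robust updates, then global aggregation every E steps) can be sketched as follows. This is an illustrative toy, not the paper's Algorithm 1: the robust inner minimization $\sigma$ is replaced by a simple constant penalty `Gamma`, and all environment parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
K, E, T, gamma, Gamma, lr = 3, 4, 200, 0.9, 0.1, 0.5
n_states, n_actions = 4, 2

# Synthetic per-source rewards and transition kernels (illustrative only)
rewards = rng.uniform(0, 1, size=(K, n_states, n_actions))
P = rng.dirichlet(np.ones(n_states), size=(K, n_states, n_actions))

def robust_backup(Q, k, s, a):
    """Pessimistic one-step target: penalize the next-state value by Gamma,
    a stand-in for the robust inner minimization over the uncertainty set."""
    v_next = Q.max(axis=1)                     # greedy value per state
    return rewards[k, s, a] + gamma * (P[k, s, a] @ v_next - Gamma)

Q_local = np.zeros((K, n_states, n_actions))
for t in range(T):
    for k in range(K):                         # local robust Q-updates
        for s in range(n_states):
            for a in range(n_actions):
                target = robust_backup(Q_local[k], k, s, a)
                Q_local[k, s, a] += lr * (target - Q_local[k, s, a])
    if (t + 1) % E == 0:                       # global aggregation every E steps
        Q_global = Q_local.mean(axis=0)        # MDTL-Avg style averaging
        Q_local[:] = Q_global                  # broadcast back to all agents

print(np.round(Q_global, 3))
```

Setting E=1 recovers per-step aggregation, while larger E trades communication for slower convergence; swapping `mean` for a max-aggregation would correspond to the MDTL-Max variant.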
**Unbiased Estimation Assumption.**
The unbiased estimation assumption is introduced to simplify convergence analysis. However, our convergence results can be extended to settings with controlled bias, such as the bias introduced by max aggregation. This bias can be mitigated using techniques like threshold-MLMC [1].
Importantly, even in the presence of bias, if the expected proxy remains pessimistic, our transfer learning framework and theoretical guarantees still hold. We thus argue that our method is robust to such biases.
To support this, we include an experiment on Cartpole (Fig. 3) analyzing the impact of bias. Results show that our proxy remains conservative, and the pessimism-based framework continues to outperform the baseline, confirming robustness.
[1] Wang, Y., et al., Model-Free Robust Reinforcement Learning with Sample Complexity Analysis, 2024
**Comparisons with baselines.**
As noted in the paper, our primary advantage lies in theoretical guarantees: we optimize a lower bound on the target domain’s performance, avoiding overly optimistic decisions that may fail under domain shift. In contrast, domain randomization (DR) lacks such guarantees and can be overly optimistic, especially when the randomized training environments fail to capture true uncertainty.
We acknowledge that when the uncertainty set is constructed using inaccurate prior knowledge, our method may become overly conservative, potentially leading to suboptimal performance compared to DR. Nonetheless, the ability to guarantee robust performance across all scenarios is a central contribution—especially important for safety-critical or high-stakes applications, where conservativeness is not only expected but necessary. | Summary: This paper studies zero-shot transfer reinforcement learning. The authors incorporate a pessimism principle into transfer learning to serve as a lower bound to conservatively estimate the target domain’s performance. The authors propose and analyze two types of conservative estimates, rigorously characterizing their effectiveness, and develop distributed, convergent algorithms to optimize them.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: No.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper is related to transfer learning and federated learning. The authors propose to optimize a lower bound of the objective in the transfer reinforcement learning, and utilize some ideas from the federated learning literature.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. This paper proposes a new framework based on the pessimism principle, which constructs and optimizes a conservative estimation of the target domain’s performance.
2. The paper is well-written and easy-to-follow.
3. The authors provide some upper bounds for the performance of their proposed algorithms. They also provide experimental results to support the theoretical findings.
Weaknesses:
1. The main concern is that the suboptimality gap indeed depends on $\max_{\pi}\zeta^\pi$. Though optimizing the lower bound is a good solution and the authors provide convergence results to the lower bounds, the gap $\max_{\pi}\zeta^\pi$ seems uncontrollable, meaning that there is no theoretical upper bound on the actual suboptimality gap.
2. Section 2.1 does not formally define the problem setting. It would be better to specify the setting and the required assumptions explicitly in this section.
3. Theorem 7.2 appears too abruptly; the quantity $\Delta_T$ is used without definition.
4. The proposed methods borrow some ideas from federated learning; it would be better to briefly introduce related work in federated learning in the Related Work section (Appendix A, Additional Related Works).
Other Comments Or Suggestions: See the weaknesses above.
Questions For Authors: 1. Regarding my concern in the weaknesses 1. above, could you please give some explanations?
2. In Proposition 5.4., you prove the results under total variation or Wasserstein distance, do similar results hold under other metrics?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and feedback, and appreciate the reviewer identifying our contribution. Please refer to the link https://anonymous.4open.science/r/ICML-2663/README.md for our newly conducted experiments.
**Upper bound on $\max_\pi\zeta^\pi$.**
We first clarify that $\|\zeta\|$ has an upper bound and can be controlled via the construction of the uncertainty sets. With our proxies, it holds that $\|\zeta\|\leq \mathcal{O}((1-\gamma)^{-2}\Gamma)$. Moreover, when adapting our Max proxy with different radii $\Gamma_i$, it holds that $\|\zeta\|\leq \mathcal{O}((1-\gamma)^{-2}\min\{\Gamma_i\})$. Hence the suboptimality gap can be upper bounded in terms of the domain similarities, which is reasonable and inherent to the nature of the problem. This dependence also motivates one of the potential extensions of our framework -- a safe-to-improve principle: once a conservative proxy is constructed, any improvement to it directly improves target performance, without loss of theoretical performance guarantees. For instance, if additional information is available (e.g., a small set of target-domain data or allowance for exploration), we can further reduce the radii $\Gamma$ to ensure better transfer performance.
**Formally define the problem setting. It would be better to specify the setting and the assumptions needed explicitly in this section.**
Our formulation is as follows. Consider the target domain $\mathcal{M}\_0 = (\mathcal{S},\mathcal{A},P\_0,r,\gamma)$ with the unknown target kernel $P_0$. We have no data from it, but instead have access to multiple source domains $\mathcal{M}\_k = (\mathcal{S},\mathcal{A},P\_k,r,\gamma)$. Our goal in zero-shot multi-domain transfer learning is to optimize the performance $V_{P_0}^\pi$ under $\mathcal{M}_0$. For simplicity, we consider the case of identical reward (please see our response to Reviewer QZdx for extensions).
The only assumption is that there exists an upper bound $\Gamma\geq D(P_0||P_k)$ that is known to the learner. This assumption is reasonable, as such information is essential for obtaining performance guarantees. Even in the worst case, we can set $\Gamma=1$ (in total variation) to construct an (overly) conservative proxy, which still avoids decisions with severe consequences in the target domain, as preferred in transfer learning settings.
**Theorem 7.2. seems to be too abrupt to appear, the $\Delta_T$ is used without definition.**
We apologize for missing the definition of $\Delta_T$. It is the difference between the algorithm output and the fixed points of the aggregated operators: $\Delta_T := Q_{\rm AO} - Q^T$ for MDTL-Avg, and $\Delta_T := Q_{\rm MP} - Q^T$ for MDTL-Max. We will clarify these.
**The proposed methods borrow some ideas from federated learning, it would be better to briefly introduce some related work in federated learning in the Related Work section (in appendix A. Additional Related Works).**
We thank the reviewer for the helpful suggestion. We will include related work on federated reinforcement learning (FRL). We note, however, that directly extending FRL to our setting poses non-trivial challenges: (1) FRL has linear update rules (non-robust operators), while ours involve non-linear robust operators; (2) FRL focuses on optimizing average performance across local environments, whereas we address the more challenging max-based multi-domain transfer; and (3) more importantly, FRL is not designed for transfer learning, while our pessimism principle enables transfer with theoretical guarantees.
**In Proposition 5.4., you prove the results under total variation or Wasserstein distance, do similar results hold under other metrics?**
According to our proof, the result holds as long as $\sigma\_{\bar{\mathcal{P}}}(V) - \frac{1}{K}\sum_{k=1}^K\sigma\_{\mathcal{P}\_k}(V) \leq 0$. We note additionally that for $l_p$-norms, the support function admits a dual form (Clavier et al., 2024. Near-Optimal Distributionally Robust Reinforcement Learning with General $L_p$ Norms.): $\sigma(V)=\max_{\alpha} ( PV\_{\alpha}+f(\Gamma,V\_{\alpha}))$ for some function $f$. Thus $\sigma\_{\bar{\mathcal{P}}}(V) - \frac{1}{K}\sum_{k=1}^K\sigma\_{\mathcal{P}\_k}(V)=\max\_{\alpha} ( \bar{P}V\_{\alpha}+f(\Gamma,V\_{\alpha}))-\frac{1}{K}\sum_k \max_{\alpha} ( P_kV\_{\alpha}+f(\Gamma,V\_{\alpha}) )\leq 0$. Extending to other uncertainty sets is of future interest. | Summary: The paper introduces a novel framework for zero-shot transfer reinforcement learning (RL) based on the pessimism principle. The key idea is to construct a conservative proxy for the target domain's performance, ensuring that the transferred policy achieves a robust lower bound on performance while avoiding negative transfer. The authors propose two types of conservative proxies:
(1) an averaged operator-based proxy and (2) a minimal pessimism proxy. They develop distributed algorithms for optimizing these proxies, with convergence guarantees. The framework is shown to be robust against model uncertainties and scalable to large-scale problems. Experiments on the recycling robot and HPC cluster management problems demonstrate that the proposed methods outperform non-robust baselines, especially in scenarios with domain shifts and model uncertainty.
Claims And Evidence: - **Claim 1**: The pessimism principle ensures a robust lower bound on target domain performance and avoids negative transfer.
- **Evidence**: Theoretical analysis (**Lemma 4.1**) shows that the sub-optimality gap depends on the level of pessimism $\|\zeta\|$, and in the experiments it consistently outperforms non-robust DR baselines (Figure 3).
- **Claim 2**: The proposed algorithms are efficient and scalable, enabling decentralized learning across multiple source domains (partially supported).
- Theorems 5.5 & 6.4: Provide convergence guarantees for both MDTL-Avg (Averaged Proxy) and MDTL-Max (Minimal Pessimism Proxy).
- Algorithm 1 is designed for distributed training, ensuring privacy-preserving updates.
- **Concern**: The multi-level Monte Carlo (MLMC) method is computationally expensive, which may limit scalability in high-dimensional RL tasks.
- **Claim 3**: The framework is robust to model uncertainty.
- Proposition 7.1 suggests that pessimistic proxies provide robustness against perturbations in the target domain, implying distributional robustness.
- **Concern**: No explicit robustness tests (e.g., varying noise levels, adversarial perturbations) are conducted.
Methods And Evaluation Criteria: The authors propose two robust transfer proxies:
- Averaged Operator-Based Proxy (AO): Uses a weighted combination of source domains' robust value functions to construct a conservative but smooth estimate.
- Minimal Pessimism Proxy (MP): Uses a max-aggregation approach to prioritize similar domains while maintaining pessimistic guarantees.
The algorithms are federated-style, designed for distributed policy learning without direct data sharing, making them privacy-friendly and applicable in decentralized systems.
The experiments are conducted on two specific problems:
- **Recycling Robot Problem**: A classic RL problem where a robot with a rechargeable battery must decide whether to search for cans or wait for someone to bring them. This is a relatively simple environment with discrete states and actions.
- **HPC Cluster Management Problem**: A task where a cluster manager must decide whether to allocate incoming tasks immediately or enqueue them for later processing. This is also a discrete environment.
The evaluation focuses on performance in the target domain and robustness to model uncertainty.
The paper compares against domain randomization (DR) methods, which is a reasonable baseline for zero-shot transfer RL. However, it ignores other relevant robust RL approaches, such as:
- Adversarial robust RL (e.g., worst-case minimax RL), which directly optimizes for robustness against worst-case scenarios.
- Distributionally robust RL (DRO) approaches, which handle uncertainty by optimizing policies under distributional shifts.
Theoretical Claims: I verified the key theoretical claims, particularly:
**Lemma 4.1** (Pessimism Gap & Sub-Optimality Bound)
- Correctly characterizes the effect of pessimism on target policy performance.
- Uses standard value function contraction properties to establish robustness guarantees.
**Theorem 5.2** (Averaged Proxy is Conservative)
- Proves that the AO proxy remains a valid lower bound to the target domain’s value function.
**Theorem 6.1** (Minimal Pessimism Proxy is More Effective)
- Assumes prior knowledge of domain similarity, which is non-trivial in real-world applications.
Experimental Designs Or Analyses: The experiments are conducted on two specific problems:
- **Recycling Robot Problem**: A classic RL problem where a robot with a rechargeable battery must decide whether to search for cans or wait for someone to bring them. This is a relatively simple environment with discrete states and actions.
- **HPC Cluster Management Problem**: A task where a cluster manager must decide whether to allocate incoming tasks immediately or enqueue them for later processing. This is also a discrete environment.
### **Strengths**
- Well-structured comparison against proximal domain randomization (DR) baselines.
- Clear ablation study on uncertainty levels (Table 1, Figure 2).
### **Concerns**
- No explicit negative transfer tests: Does not include cases where misleading source domains degrade performance.
Supplementary Material: **Reviewed:**
- Appendix C, D, E (Theoretical proofs).
- Appendix G (Model-Free Algorithm Variant).
Relation To Broader Scientific Literature: This work connects to several research directions:
1. The framework extends **robust RL** (Iyengar, 2005; Nilim & El Ghaoui, 2004) and **transfer RL** (Chen et al., 2021) by incorporating pessimism-based transfer guarantees.
2. The **federated learning** approach aligns with distributed RL (Jin et al., 2022; Wang et al., 2024).
Essential References Not Discussed: 1. **Adversarial Robust RL:**
- Rajeswaran et al., 2017. "EPOpt: Learning Robust Neural Network Policies Using Model Ensembles."
- Tessler et al., 2019. "Action Robust Reinforcement Learning."
2. **Distributionally Robust RL:**
- Wiesemann et al., 2014. "Distributionally Robust Convex Optimization."
- Panaganti & Kalathil, 2022. "Sample Complexity of Robust RL with a Generative Model."
Other Strengths And Weaknesses: **Strengths:**
- Novel use of pessimism in zero-shot transfer RL, the pessimism-based approach is well-motivated
- Strong theoretical contributions with formal performance guarantees.
- Federated-style implementation ensures privacy in multi-domain learning.
**Weaknesses:**
- Computational cost of MLMC aggregation not addressed
- Assumes prior knowledge of domain similarity (which may not hold in real-world applications).
- No explicit negative transfer experiments.
Other Comments Or Suggestions: The paper covers the most relevant experiment but could benefit from performing negative transfer experiments to validate the robustness of pessimistic aggregation.
Questions For Authors: 1. How does the MLMC-based pessimistic aggregation scale to large state-action spaces?
2. Can the pessimism level \( \zeta \) be adaptively tuned instead of being fixed?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and feedback, and appreciate the reviewer identifying our contribution and novelty. Please refer to the link https://anonymous.4open.science/r/ICML-2663/README.md for our newly conducted experiments.
**Other relevant robust RL approaches**
We thank the reviewer for pointing these methods out. We first want to highlight that our method is indeed based on distributionally robust RL, where our uncertainty set is constructed to account for the potential distributional shift. The adversarial robust RL, although commonly studied in experiments, generally lacks theoretical guarantees on, e.g., convergence, and tends to be overly conservative. Our methods, however, enjoy both convergence and performance guarantees. We will include more discussion in our final version.
We then numerically verify that our method is effective, and present the results for the recycling robot problem in Table 2 in the above link. We use adversarial action-robust RL [1] and distributionally robust RL [2] as new baselines. As the results show, although adversarially robust and distributionally robust RL outperform non-robust methods, their performance remains significantly inferior to ours.
[1] Tessler et al., 2019. "Action Robust Reinforcement Learning and Applications in Continuous Control."
[2] Panaganti \& Kalathil, 2022. "Sample Complexity of Robust RL with a Generative Model."
**Cost of MLMC**
MLMC primarily serves our proofs in the tabular setting and is essential for obtaining an unbiased update. As we discussed, the computational cost of MLMC can be viewed as a trade-off for transfer performance, so such a cost may be acceptable in safety-critical applications such as robotics and autonomous driving.
Its cost can be reduced along several potential directions: (1) The level number $N$ of MLMC can be controlled through techniques like threshold-MLMC [1], which applies an upper bound on $N$. Although it results in a biased estimation, the bias can be controlled and hence still implies convergence (numerically verified in Fig 3 in the link). (2) Techniques to reduce the computational cost of the robust Bellman operator can also be applied. One potential approach is to relax the uncertainty set constraint, which would result in a conservative and efficient solution [2]. (3) Reducing the aggregation frequency by increasing $E$ is also another way to reduce the computational cost, but introduces a trade-off in the convergence rate, as we have shown in our theoretical results (further numerically shown in Fig 2 of the link).
[1] Wang, Y. et al. Model-Free Robust Reinforcement Learning with Sample Complexity Analysis, 2024
[2] Kumar, N.et al. Policy gradient for rectangular robust Markov decision processes, 2023
**No robustness tests** We included robustness-testing experiments in the Appendix, where our methods are shown to be more robust to target-domain uncertainty. We also conduct additional experiments to verify robustness. First, as the results in Table 2 in the link show, our methods are more robust to uncertainty in the target domain: under different levels of uncertainty, they outperform the baselines. Second, in Fig 3, we show that even with noise or bias in the update, our algorithms still outperform the baseline and remain robust.
We will include more robustness tests in the final version.
**Assuming prior knowledge of domain similarity** Generally, such knowledge can be obtained through domain experts, or estimated from a small amount of target data.
Moreover, our method remains applicable even without prior knowledge, by setting $\Gamma$ to a known upper bound on distributional distance (e.g., total variation between any two distributions is at most 1). While this yields an over-conservative proxy, it still helps prevent significant drops in transfer performance due to our built-in pessimism principle. We believe this is the most reliable approach in settings where no similarity information is available.
We also note that this limitation applies to other methods as well. For instance, domain randomization (DR) assumes prior access to a set of environments that includes the target domain (Chen et al., 2021). Without such prior knowledge, our pessimism principle helps prevent undesirable outcomes, whereas DR provides no such guarantees.
**Can ($\zeta$) be adaptively tuned?**
We first note that $\zeta$ depends on $\Gamma$ in our algorithm. Since we consider a zero-shot setting, $\Gamma$ is pre-set and fixed, and it cannot be tuned without any additional information. However, if any additional knowledge is available, e.g., a small amount of target domain data, or we are allowed to fine-tune via exploration, we can use it to shrink the uncertainty radius and enhance the effectiveness.
**Negative transfer** We refer the reviewer to our response to Reviewer QZdX. The results are in Fig 1.
Subobject-level Image Tokenization | Accept (poster) | Summary: This paper presents a method to encode an image at sub-object level. Specifically, it first detects edges and boundaries in the image with a small model, then utilizes the watershed algorithm to segment the image into sub-object parts. The authors conduct both intrinsic evaluations to validate the segment quality and extrinsic evaluations to verify the derived sub-object token embeddings.
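The two-stage pipeline this summary describes (boundary prediction, then watershed flooding into sub-object regions) can be illustrated with classical tools. A minimal NumPy/SciPy sketch, where a gradient-magnitude edge map stands in for the paper's learned boundary model and the low-edge seeding heuristic is our own assumption:

```python
import numpy as np
from scipy import ndimage as ndi

def subobject_segments(gray, edge_thresh=0.1):
    """Partition a grayscale image into subobject regions by flooding an edge map."""
    gy, gx = np.gradient(gray.astype(float))
    edges = np.hypot(gx, gy)  # stand-in for the learned edge/boundary predictor
    edges_u8 = (255 * edges / (edges.max() + 1e-8)).astype(np.uint8)
    # Seed markers: connected components of low-edge (region-interior) pixels
    markers, _ = ndi.label(edges < edge_thresh)
    # Watershed floods outward from the seeds, stopping at high-edge ridges
    return ndi.watershed_ift(edges_u8, markers.astype(np.int16))
```

Each positive label in the output is one candidate segment, whose pixels could then be pooled into a single token embedding.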
## update after rebuttal
Thank the authors for the detailed response. However, the rebuttal did not satisfactorily address my concern regarding the lack of VQA evaluation. Given that the primary objective of token segmentation is to improve token embeddings for visual understanding, it is crucial to assess performance on diverse understanding-oriented VQA benchmarks, not just simple image captioning tasks. I am NOT requesting that the authors develop a SOTA VLM, but rather that they provide a fair comparison with CLIP embeddings that do not involve token segmentation on VQA benchmarks. The authors’ reluctance to provide such an evaluation leads me to question the true effectiveness of the proposed method. Therefore, I will maintain my rating as weak reject.
Claims And Evidence: The claims in this paper supported by experimental results or prior studies.
Methods And Evaluation Criteria: The extrinsic evaluation benchmarks used in the paper are not common for the VLM evaluation.
Theoretical Claims: There is no proof or theoretical claim in this paper.
Experimental Designs Or Analyses: The experimental designs are solid and fair.
Supplementary Material: I have checked the supplementary material.
Relation To Broader Scientific Literature: This paper studies the image tokenization problem from a novel perspective, i.e., at sub-object level, which is analogous to sub-word tokenization in NLP.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
- The idea of tokenizing an image at the sub-object level is intuitive and intriguing. It reduces the number of tokens required to encode an image compared to traditional patch-wise tokenization, and facilitates faster convergence in downstream tasks.
- The paper is well-written and easy to follow.
**Weaknesses:**
- Sub-object image tokenization involves two stages: first, partitioning an image into sub-object parts, and second, encoding each part into tokens. This paper overlooks the design of the latter stage. It still relies on a patch-wise encoder to generate 2D feature maps, and then pools the features corresponding to each segment. This design could fail to fulfill the monosemanticity of sub-object tokens.
- The extrinsic evaluation benchmarks used in this paper are not common in VLM evaluation. The authors are encouraged to conduct evaluation under standard settings, i.e., the LLaVA framework.
Other Comments Or Suggestions: I suggest that the authors move the introduction of 'Token Embedding for Adaptive Segmentation' (Section 5.1) to the 'Method' section (Section 3), as deriving sub-object token embeddings should be an indivisible part of the proposed method.
Questions For Authors: How does the throughput of the sub-object tokenizer compare to that of the patch-wise tokenizer (e.g., CLIP or DINOv2) for a similar model size, including the phase of deriving token embeddings?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback! We realize that there are some concerns that arise from the interplay between token segmentation and token embedding. Therefore, before addressing your comments point-by-point, we first clarify how and why we disentangle these two components in our paper:
In this study, we explicitly separate the image tokenization into two stages: 1. **token segmentation** and 2. **token embedding**. Our paper emphasizes the segmentation step, which has been largely overlooked in existing research focusing primarily on improving embeddings. **Our claim** is that: **adaptive segmentation** facilitates better image understanding, and this is consistent across **different embedding** methods.
We adopted this separation due to several considerations:
- Segmentation and embedding have fundamentally different roles and characteristics. They are agnostic to each other and can be flexibly combined.
- Disentangling these two components allows controlled experiments and clearer attribution of improvements.
- This separation aligns well with previous works investigating token segmentation methodologies, both in language modeling and image understanding literature.
Below, we provide detailed responses addressing each of your specific comments.
---
### 1. **Token Embedding for Subobject Segmentation** (Weakness 1)
> **TL;DR** Rather than overlooking token embeddings, our work demonstrates that adaptive segmentation consistently enhances image understanding **across various embedding methods**.
Existing studies typically focus on improving token embedding quality while defaulting to simple, uniform patch-based segmentation. In contrast, our paper explicitly addresses the overlooked aspect of adaptive token segmentation. This segmentation method is designed to remain **agnostic** of embedding choices, facilitating a controlled evaluation of segmentation effectiveness across diverse embedding schemes, as in Fig.6 (left).
Although popular ViT-based encoders are convenient and commonly used, they are by no means mandatory for our method. Indeed, as in Fig. 5(e), our EPOC achieves **stronger advantages** when coupled with **convolutional VAE** encoders or using raw **RGB pixels**.
---
### 2. **Extrinsic Evaluation Settings** (Weakness 2)
> **TL;DR** We indeed largely adopt the LLaVA framework, but with some minimal sufficient modifications. Without these changes, the original LLaVA framework **cannot provide meaningful evaluation** for adaptive token segmentation methods.
The goal of this extrinsic evaluation is to probe the effectiveness of adaptive segmentation, not to propose any new VLM framework. The VLM evaluation setup in our paper closely follows common practices already established in the literature. The commonalities includes:
- We adopt the standard "Visual Encoder → MLP Connector → LLM" architecture from LLaVA.
- We adopt visual encoders (CLIP, DINOv2, VAE), MLP settings (2-layers), and LLM (autoregressive decoder-only Transformer) that is well aligned with existing studies.
- We adopted ShareGPT4V and Pixmo-cap, which are both widely used for pretraining VLMs.
However, minimal changes were necessary:
- Standard LLaVA assumes fixed-size and raster-ordered patches, thus omitting positional embeddings. We introduced positional embeddings specifically to handle adaptive tokenization, which involves irregular segment shapes and arrangements.
- Standard VLM benchmarks commonly evaluate visual reasoning through tasks like VQA, reflecting both vision and LLM capabilities. Since our goal is exclusively to measure improvements in image understanding capability, we only measure the caption quality—thereby minimizing confounding from LLM knowledge and reasoning factors.
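For concreteness, the positional-embedding modification could be sketched as the following illustrative PyTorch module (not the authors' exact implementation; feeding normalized segment centers through a learned linear layer is our own assumption about how segment positions are encoded):

```python
import torch
import torch.nn as nn

class SegmentConnector(nn.Module):
    """Two-layer MLP connector (as in LLaVA) plus a positional embedding for
    irregularly shaped and arranged segment tokens, which fixed-size,
    raster-ordered patches do not need."""

    def __init__(self, vis_dim: int, llm_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.pos = nn.Linear(2, llm_dim)  # (x, y) segment center in [0, 1]

    def forward(self, seg_feats, seg_centers):
        # seg_feats: (B, N, vis_dim) pooled segment embeddings
        # seg_centers: (B, N, 2) normalized segment centers
        return self.mlp(seg_feats) + self.pos(seg_centers)
```

With standard patches the positional term is unnecessary, which is why the original LLaVA connector omits it.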
---
### 3. **Throughput Comparison** (Questions For Authors)
According to the disentanglement between token segmentation and token embedding, we interpret *“patch-wise tokenizer”* in the question as combining patch segmentation with CLIP or DINOv2 embeddings. A fair throughput comparison thus requires combining the same embedding backbones with subobject segmentation. Since **embedding methods do not impact segmentation latency**, the throughput conclusions from Table 1 apply directly – *although patch segmentation has negligible latency, the overhead introduced by our EPOC is minimal and practically insignificant relative to overall VLM training*.
---
### 4. **Structure and Clarity** (Other Comments Or Suggestions)
Thank you for this constructive suggestion. While our original intention was to clearly separate the token segmentation from embedding strategies for conceptual clarity, we acknowledge that they are methodologically relevant. We will reorganize our paper in the final version to move "Token Embedding for Adaptive Segmentation" (Section 5.1) into the "Method" section (Section 3) as suggested.
---
Once again, we sincerely appreciate your detailed review and valuable suggestions!
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses, which address some of my concerns. However, I disagree that token segmentation and token embedding should be disentangled, because in this case, token embedding still suffers from fused multiple semantics within a single token (polysemanticity), an undesirable property that hinders effective learning of token representations, as mentioned in the introduction. Applying token segmentation on top of the fused token embedding is less effective than deriving segmented token embedding in a bottom-up manner.
Besides, the author's response did not convince me for not using VQA benchmarks for evaluation. As suggested in Cambrian-1, combining VLM frameworks with VQA benchmarks is considered a modern evaluation protocol for visual representations.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response and thoughtful follow-up comments. Below, let us clarify the points with additional evidence:
---
### 1. **Token Embedding Brings Polysemanticity?**
To clarify, our claim is that adaptive token segmentation achieves **better** monosemanticity, rather than **perfect (100%)** monosemanticity. Here we provide additional analysis demonstrating that adaptive segmentation indeed **reduces feature variance**, which directly measures polysemanticity within each visual token.
We extracted feature maps (of shape HxWxC) and generated token segmentations from 1k samples per dataset. We then downsampled the token segmentations (to HxW) to align them with the feature maps and retrieved the corresponding token embedding vectors (each of dimension C). We calculate the intra-segment feature variance to measure polysemanticity. Results confirm that adaptive token segmentation consistently reduces feature variance compared to patch-based segmentation, with EPOC achieving the lowest variance:
| Dataset (Embedding) | Metric | Patch 10x10 (baseline) | Mask2Former (object-level) | EPOC (subobject-level) |
|---|---|---|---|---|
| ImageNet (VAE) | Variance | 0.504 | 0.473 | **0.442** |
| ImageNet (DINOv2) | Variance | 6.287 | 6.104 | **6.019** |
| | # Tokens | 100 | 5.0 | 50.5 |
| Pixmo-cap (VAE) | Variance | 0.560 | 0.474 | **0.465** |
| Pixmo-cap (DINOv2) | Variance | 6.296 | 6.040 | **5.999** |
| | # Tokens | 100 | 5.1 | 63.0 |
| ShareGPT4V (VAE) | Variance | 0.579 | 0.465 | **0.457** |
| ShareGPT4V (DINOv2) | Variance | 6.450 | 6.132 | **6.047** |
| | # Tokens | 100 | 7.7 | 71.6 |
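The intra-segment variance measurement described above can be sketched in a few lines (a minimal NumPy illustration; the function name and the skipping of single-pixel segments are our own choices):

```python
import numpy as np

def mean_intra_segment_variance(features, segments):
    """Average per-segment feature variance, a proxy for token polysemanticity.

    features: (H, W, C) feature map; segments: (H, W) integer segment ids,
    already resized to the feature-map resolution.
    """
    variances = []
    for seg_id in np.unique(segments):
        vecs = features[segments == seg_id]  # (n_pixels, C) embedding vectors
        if len(vecs) > 1:
            # Per-channel variance within the segment, averaged over channels
            variances.append(vecs.var(axis=0).mean())
    return float(np.mean(variances)) if variances else 0.0
```

Lower values indicate that each segment covers a more homogeneous (more monosemantic) region of the feature map.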
Additionally, the method of **“deriving segmented token embedding in a bottom-up manner”** mentioned in your comment is already discussed in our submission. Its limitations are outlined on page 8, Section 6 of our submission:
*“… **Feature-based methods, such as slot attention [1] and various token pooling methods [2-6], suffer from low-resolution feature maps which limit fine-grained segmentation, and unreliable segmentation quality in early training stages.** Our approach of using a separate segmentation model, EPOC, avoids these issues …*”
> [1] Object-centric learning with slot attention. NeurIPS’20
>
> [2] Which tokens to use? investigating token reduction in vision transformers. ICCV’23
>
> [3] Vision transformers with mixed-resolution tokenization. ICCV’23
>
> [4] Token pooling in vision transformers. WACV’23
>
> [5] Efficient vision transformer via token merger. IEEE TIP, 2023
>
> [6] Vision transformer with super token sampling. CVPR’23
---
### 2. **Evaluating Token Segmentation via VQA?**
The focus in Cambrian-1 study is to evaluate different **token embeddings** (e.g., CLIP, SigLIP, DINOv2, MAE) for their effectiveness in providing **general-purpose visual representations** across a wide range of **AI assistant tasks**, including knowledge, OCR, chart, etc.
Our paper tackles a fundamentally different research question: whether **adaptive token segmentation** facilitates better learning of image understanding models. We clearly scoped our work and did not aim to present a state-of-the-art VLM. Instead, we provided thorough intrinsic and extrinsic evaluations to directly and empirically support our claim.
The tasks of image captioning and VQA are closely related [7]. To illustrate, the following are some VQA examples from CLEVR, which shares the same visual source with our CLEVR-cap:
**Examples in CLEVR (VQA)**
- *Are there an equal number of large things and metal spheres? Yes.*
- *What size is the cylinder that is left of the brown metal thing that is left of the big sphere? Big.*
- *How many objects are either small cylinders or red things? 5.*
**Example in CLEVR-cap**
- *Total 10 objects: a small green rubber cylinder, a large blue metal cube, a small red rubber cylinder…*
As it can be seen, both task formulations **assess overlapping capabilities**, including identification of attributes (shape, size, material, color), counting, and spatial relationships. Although VQA additionally evaluates logical reasoning capability required for producing the final answer, this reasoning capability is fundamentally **agnostic to image token segmentation**.
We empirically demonstrate below that under the same LLM, the conclusions (i.e., performance ranking) drawn from token segmentation effectiveness using captioning datasets **translate directly and consistently to the VQA evaluation setting**:
| | CLEVR (VQA accuracy%) | CLEVR-cap (ratio of fully matched captions%) |
|---|---|---|
| **Patch** (10x10) | 21.5 - 3rd | 67.9 -3rd |
| **Superpixel** (subobject-level) | 40.3 - 2nd | 75.5 - 2nd |
| **EPOC** (subobject-level) | 48.0 - 1st | 80.9 -1st |
> [7] All You May Need for VQA are Image Captions. NAACL’22
---
Thank you again for your detailed feedback and constructive suggestions. We warmly welcome any further suggestions or questions you may have! | Summary: Tokenization is an important step for any transformer-based model. For the vision transformers, it is often performed on a patch-level, where locally neighboring parts of the image are tokenized together in the form of small square patches. However, patch-based tokenization is not adaptive, that is it tokenizes any given patch equivalently. This work proposes an adaptive image tokenization strategy based on sub-object level features, aiming to address the drawbacks of the existing image tokenization strategies. The work further includes intrinsic (with respect to image) and extrinsic experimental analysis for highlighting the effectiveness of the proposed approach.
Claims And Evidence: The proposed tokenization method claims to improve the patch-based tokenization with respect to token polysemanticity and token redundancy. In addition, it also claims to improve other adaptive tokenization methods with respect to computational efficiency and effective segmentation of subobject level regions.
With respect to the first claim, the authors present compelling verbal arguments, such as the issue of large patches encompassing multiple concepts (object classes, sub-object level details) and the redundancy coming from patchifying everywhere equivalently, e.g., dividing a background region into multiple tokens even though it is not very informative. In addition, the visualizations provided in Figure 3 and the quantitative analysis in Figure 4 demonstrate these issues more concretely. Given the fact that these arguments are also well-discussed and demonstrated in both the language and vision literatures, these claims are well-supported.
For the second claim, the authors present extrinsic evaluation results while utilizing an adjusted encoder-decoder model to accommodate the dynamic nature of tokens following their tokenization process. While the discussion of the experiments is brief (and there is a lack of clarity in the presented figures, such as Figure 5), it can be seen that the proposed approach achieves better validation perplexity over certain alternatives while matching that of superpixel tokenization. In addition, the authors discuss the efficiency of the proposed approach and provide quantitative results backing this part of the claims. However, as detailed in the methods and evaluation criteria, there are several points that are preventing me from stating that this claim is overall well-supported.
Methods And Evaluation Criteria: - While it is understandable that it is non-trivial to utilize dynamically tokenized patches with off-the-shelf VLMs, I find the proposed methodology in Section 5 to be confusing, especially the rationale behind using a completely separate visual encoder and a VLM on top. Given the plethora of models with their own visual encoder and language decoder on top [A, B, C, D], I am not fully sure why the authors performed the analysis the way it is, instead of performing small adjustments on the aforementioned models.
- In addition, only the validation perplexity is presented for the utilized benchmarks. However, various other metrics associated with these benchmarks could have been reported (e.g., BLEU scores [E] and CIDEr [F] for captioning-based ones, classification accuracy for ImageNet) but were instead omitted.
Other than these points, the methodology seems reasonable.
[A] Liu, Haotian, et al. "Visual instruction tuning." Advances in neural information processing systems 36 (2023): 34892-34916.
[B] Li, Junnan, et al. "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation." International conference on machine learning. PMLR, 2022.
[C] Yu, Jiahui, et al. "Coca: Contrastive captioners are image-text foundation models." arXiv preprint arXiv:2205.01917 (2022).
[D] Tschannen, Michael, et al. "Image captioners are scalable vision learners too." Advances in Neural Information Processing Systems 36 (2023): 46830-46855.
[E] Papineni, Kishore, et al. "Bleu: a method for automatic evaluation of machine translation." Proceedings of the 40th annual meeting of the Association for Computational Linguistics. 2002.
[F] Vedantam, Ramakrishna, C. Lawrence Zitnick, and Devi Parikh. "Cider: Consensus-based image description evaluation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
Theoretical Claims: The paper does not include detailed theoretical discussions or theorems. However, the intuitions and motivations behind the work are clearly explained.
Experimental Designs Or Analyses: The experimental analysis presented seems sound, though most of the results follow a rather ad-hoc structure for the work. Specifically, the intrinsic evaluation only discusses the edge-related metrics while omitting mask-level ones. While this is partially understandable given the nature of the work, it is still expected that the patch-based tokenization methods, or those which do not necessarily rely on edges, may not perform as well as the presented method. As detailed under the methods and evaluation criteria part, the extrinsic evaluation also contains several ad-hoc choices.
Supplementary Material: The supplementary material includes more details on the utilized models, more visualizations and more details on intrinsic and extrinsic evaluations. The results in the supplementary materials also echo the strengths and the aforementioned concerns of the results presented in the main paper. Finally, the authors provide a brief limitations section.
Relation To Broader Scientific Literature: The work aims to tackle a timely problem in the vision literature, namely adaptive tokenization for vision transformers, for allowing efficient allocation of computational resources to more informative areas of images. Given the prominence of recent similar approaches in the language domain [A], if evidenced strongly, the work could be interesting to the broader transformers/deep learning community.
[A] Pagnoni, Artidoro, et al. "Byte latent transformer: Patches scale better than tokens." arXiv preprint arXiv:2412.09871 (2024).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The proposed idea based on the watershed algorithm and edge detection is interesting and novel for image tokenization. The goals of the work are also significant and very timely. With respect to clarity, I think the work could benefit from a diagram or a more fluent description of the extrinsic evaluation set-up; otherwise, the ideas are presented fluently.
Other Comments Or Suggestions: - Minor typo on L105: toknizers -> tokenizers
- Minor typo on L369: coverts -> converts
- The Figures 4 and 5 are not very easy on the eye and do not quickly convey the message of the work. I think that the work could benefit from moving some of the sub-figures to Appendix and emphasizing the most significant results from them.
Questions For Authors: - I am not sure what Figure 6 is presenting: Is it just the convergence with respect to training loss? Would it not benefit from the presentation of generalization performance too?
- As a very minor question, where do you think this work stands in comparison to [A] and [B]?
[A] Kim, Young Kyung, J. Matías Di Martino, and Guillermo Sapiro. "Vision transformers with natural language semantics." arXiv preprint arXiv:2402.17863 (2024).
[B] Aasan, Marius, et al. "A Spitting Image: Modular Superpixel Tokenization in Vision Transformers." arXiv preprint arXiv:2408.07680 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your detailed and comprehensive review. We sincerely appreciate the considerable effort and depth of analysis you provided! Below, let us address each of your concerns point-by-point and outline concrete steps we will take to improve the final manuscript:
---
### 1. **VLM Architecture in Extrinsic Evaluation** (in “Methods And Evaluation Criteria”)
> **TL;DR** Our approach is precisely a **minimal adjustment** of the LLaVA architecture. The modification involves adding positional embeddings to **support adaptive token segmentation**.
We appreciate your comment and realize that our original description might have caused confusion. To clarify, our VLM architecture directly builds upon the well-established **LLaVA framework** (visual encoder → MLP connector → LLM). We do not introduce any new paradigm, such as *"separate visual encoder and a VLM on top"* as suggested. In the final version, we will explicitly emphasize this fact in Sec 5.
The sole reason for the modification is to handle **adaptive token segmentation**, which inherently involves variations in token size, position, and arrangement—an aspect not supported by existing architectures. Apart from this, our training settings and datasets (e.g., using ShareGPT-4V, Pixmo-cap) **align well with common practices** in VLM pretraining.
---
### 2. **Perplexity-based Metric in Extrinsic Evaluation** (in “Methods And Evaluation Criteria”)
We appreciate this important suggestion. To clarify, we used validation perplexity mainly to **maintain consistency and clarity** across different datasets. We fully agree that additional metrics are valuable. For all data points in Fig. 5, we calculated their accuracy or BLEU as suggested, and measured their correlation with perplexity. As shown below, the two metrics **strongly correlate** with each other. We will include these results in the appendix of the final manuscript:
| |CLEVR-cap | ImageNet | ShareGPT4v | Pixmo-cap |
|---|---|---|---|---|
| Metric | Accuracy | Accuracy | BLEU | BLEU |
| Pearson Correlation with Perplexity | -0.86$^*$ | -0.97 | -0.93 | -0.91 |
| p-value | p < 0.0001 | p < 0.0001 | p < 0.0001 | p < 0.0001 |
$^*$The correlation on CLEVR-cap seems weaker compared to others. This is because they exhibit a log-linear relationship according to our visualization. The correlation between perplexity and log(accuracy) is -0.95.
---
### 3. **Boundary-based Metric in Intrinsic Evaluation** (in “Experimental Designs Or Analyses”)
> **TL;DR** Mask-based metrics are fundamentally unsuitable for evaluating image tokenizers. Boundary-based metrics accurately assess tokenization quality in **class-agnostic** and **multi-granularity** settings and have been widely adopted in CV and NLP.
Mask-based metrics, such as mIoU, depend heavily on clearly defined semantic classes, which makes them inappropriate for evaluating our tokenizer. They also struggle when segmentation granularities differ between predictions and ground truth. Accurately subdividing an annotated object into several meaningful subparts may produce **misleadingly low IoU scores**.
Boundary-based metrics naturally overcome these limitations. These metrics have also long been standard in boundary detection literature and have been used by SAM-related studies. They are also applied in NLP tokenizer evaluation, assessing the alignment to linguistic morphological boundaries.
Additionally, the **Token Monosemanticity Score** is essentially a mask-based metric (see response 1 to reviewer Ev7G).
---
### 4. **Reporting Training Loss in Fig.6.** (in “Questions For Authors”)
The current Fig. 6 presents the **average training loss**, which is an indicator of convergence speed. To align with Fig. 5, we will update it to report **validation perplexity** following your suggestion. In fact, the two metrics closely correlate with each other, as can be seen in Fig. 15. With validation performance in Fig. 6, the relative performance relationships stay **exactly the same** and the conclusions in Sec. 5.3 remain unchanged.
---
### 5. **Relation to Recent Works** (in “Questions For Authors”)
Thank you for highlighting these strongly related recent works. The submission already included [B] (Aasan et al., 2024), and we will cite and discuss [A] in our final version. Both studies explore **adaptive token segmentation**, closely aligning with our research, yet with important differences:
- Compared to EPOC, the superpixel approach in [B] performs bottom-up pixel grouping without **semantic understanding**, and the SAM-based segmentation in [A] cannot ensure **efficient** and **panoptic** segmentation. Both are already included as baseline methods in our paper.
- These works focus on **ViTs** and **image classification**, while our approach extends to **VLMs** and evaluates performance on the more challenging task of **detailed image captioning**.
---
Once again, thank you very much for your comprehensive and insightful comments!
---
Rebuttal Comment 1.1:
Comment: I apologize to the authors for my late reply and appreciate their detailed response. In particular, I appreciate that the authors presented brief results regarding how BLEU correlates with their measures and further clarifications on the architecture and relations to recent works. I encourage the authors to integrate these to their work as well.
While I still have concerns (e.g correlation is good, but insufficient, what about the actual numbers for BLEU4/CIDEr?), the rebuttal presented by the authors both to my review and other reviewers' is sufficient to tilt my decision towards acceptance. Accordingly, I will be raising my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful follow-up and for reconsidering our manuscript! We greatly appreciate your valuable suggestions. As recommended, we will integrate detailed results (including many exact numbers, which cannot be fully enumerated in the response above due to the 5k character limit) into the final manuscript to support our findings.
Your feedback has significantly helped us improve the clarity and rigor of our paper. We're delighted that our rebuttal addressed your concerns and greatly appreciate your updated evaluation. If you have any further comments or suggestions, please feel free to let us know—we warmly welcome additional discussion! | Summary: This paper proposes sub object-level image tokenization, which tokenize image based on the morphological structure of the image. Compared to other potential subobject tokenizers, EPOC improves efficiency. Experiments on multiple VLMs demonstrate the advantages of the subobject tokenizer.
## update after rebuttal
During the rebuttal, the authors provided additional experiments that addressed my and other reviewers' concerns. I keep my score as accept.
Claims And Evidence: All the claims made in the submission supported by clear evidence.
Methods And Evaluation Criteria: Regarding the question on extrinsic evaluation, why is the comparison only made on caption data and not on general VQA benchmarks? Is it because it doesn't perform as well?
The second question concerns Section 5.3, "Results and Discussions - Compatibility to Different Token Embeddings." It mentions that when using the subobject tokenizer, the DINO model performs better than CLIP. However, in mainstream models, DINO does not outperform CLIP. The explanation provided in the paper is that this is due to the lower resolution. Does the author believe that, for now, the subobject tokenizer still has limited practical value?
Theoretical Claims: Yes, I checked the correctness of the proofs in the paper.
Experimental Designs Or Analyses: Please see Methods And Evaluation Criteria section
Supplementary Material: yes, I review all the supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The writing is clear and convincing.
Other Comments Or Suggestions: N/A
Questions For Authors: In my view, under image-LLM, the image tokenizer doesn't really need significant improvements, as even ultra-high resolutions are still manageable for current GPUs. I believe the subobject tokenizer is more suitable for use in video.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your encouraging review and thoughtful questions. Your recognition of the strengths of our method is greatly appreciated. Below are our clarifications on your valuable questions:
---
### 1.**Extrinsic evaluations on caption data rather than general VQA** (in “Methods And Evaluation Criteria”)
Our main goal was to clearly isolate image understanding performance from complex reasoning tasks. Standard VQA datasets tend to fuse perception with significant **knowledge** and **reasoning** capabilities of LLMs. For example, questions like “Where is he looking?” or “What time is it on the clock?”, which commonly occur in VQA data, require the VLM to understand concepts like “third-person pronoun”, “sense of direction”, and “time”, which can potentially **confound the assessment of the effectiveness of adaptive token segmentation**. Thus, we focused on detailed captioning datasets to precisely evaluate token-level perception capability.
---
### 2. **Practical value regarding the comparison between DINO and CLIP** (in “Methods And Evaluation Criteria”)
The observed advantage of DINO over CLIP in our experiments primarily results from the higher resolution and dense supervision available in DINO embeddings. CLIP, especially the original OpenAI version, lacks resolution flexibility with a 7x7 feature map, limiting fine-grained perception that is essential for subobject tokenization. We agree CLIP still holds significant practical value, especially when high-resolution features are integrated, which can further enhance subobject tokenization.
---
### 3. **Future direction towards video tokenization** (in “Questions For Authors”)
Indeed, as we stated in Appendix E, subobject-level tokenization's greatest potential may lie in video understanding, where computational efficiency and token management are even more critical due to the temporal dimension. Our work provides foundational insights that naturally extend to video data, and we consider this a key future direction.
Thank you once again for your valuable input and for clearly identifying the broader applications of our method.
---
Rebuttal Comment 1.1:
Comment: I have read the other reviewers' comments (including the negative remarks by Reviewer 7brk D5bb) as well as the authors' rebuttal. While I acknowledge some of the shortcomings raised, I still find the positive aspects outweigh the negatives. In particular, I appreciate the proposed sub-object-level image tokenization approach. I thank the authors for their response and maintain my acceptance rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful consideration and continued support for our paper! We are trying our best to provide additional evidence to address the concerns raised by other reviewers. Your positive feedback has greatly encouraged us, and we warmly welcome any further suggestions or discussions! | Summary: This paper introduces **Subobject-level Image Tokenization**, a novel adaptive image tokenization strategy inspired by subword tokenization in NLP. Previous patch-based image tokenization methods suffer from inefficiencies and polysemanticity. To address these limitations, the paper proposes a new tokenizer called **Efficient and PanOptiC (EPOC)**, which combines boundary detection and watershed segmentation to guarantee comprehensive segmentation and computational efficiency.
Main Contributions:
1. **Panoptic Segmentation Improvement** The proposed EPOC integrates boundary detection and watershed segmentation.
2. **Computational Efficiency** The proposed EPOC achieves better token efficiency.
Main Results:
1. Intrinsic Evaluation: Evaluations on five datasets demonstrate that EPOC tokens align well with human semantic annotations, achieving higher monosemanticity scores.
2. Extrinsic Evaluation(VLMs): EPOC-based tokenization achieves faster convergence, improved generalization, and better token efficiency.
Claims And Evidence: The claims made by the paper are strongly supported by a clear proposed method (EPOC), well-designed experiments (intrinsic and extrinsic evaluations), and quantitative and qualitative experimental results.
Methods And Evaluation Criteria: The proposed methods make clear sense for the problem of image tokenization. The paper carefully compares the proposed methods with the previous one (patch-based tokenization and object-level segmentation).
Theoretical Claims: There are no theoretical claims that require correctness in this paper.
Experimental Designs Or Analyses: The experiments in this paper are well-designed.
1. Intrinsic evaluations are performed across five datasets. The authors focus on boundary precision-recall metrics and monosemanticity score.
2. Extrinsic evaluations are performed on four datasets which are high-quality and widely-used. Besides, the authors use different feature embeddings to demonstrate the robustness of the proposed method.
Supplementary Material: I reviewed all the supplementary material and mainly focused on the extended intrinsic and extrinsic evaluations.
Relation To Broader Scientific Literature: 1. This paper is related to vision transformers using patch-based image tokenization.
2. This paper is related to the image segmentation models.
Essential References Not Discussed: The essential references are well discussed in Section 6.
Other Strengths And Weaknesses: Paper Strengths:
1. The paper is well-written and easy to follow.
2. The paper introduces a novel analogy from NLP to CV, which is conceptually innovative.
3. The proposed EPOC achieves computational efficiency, which could lead to high-resolution images with fewer computational resources.
Weaknesses:
1. Limited novelty.
2. Limited ablation studies on position and content embeddings.
Other Comments Or Suggestions: Here are some comments and suggestions for the authors:
- Highly recommend reorganizing the highlighted or underlined words in section 5.2.
- As for Figure 5, I highly recommend consistently plotting with grids.
Questions For Authors: Questions for authors:
1. Can the authors formulate the token monosemanticity score?
2. Why do you choose **SegFormer** as a boundary prediction model? Did the authors try to use other models to perform the same role, and what could be the performance? Can the authors provide some experiment with other boundary prediction models?
3. Can the authors provide some ablation on position and content embeddings?
4. Can the EPOC be scaled up?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thorough and insightful review. Below, we provide detailed responses addressing your specific questions and suggestions:
---
### 1. **Formulating Token Monosemanticity Score** (Question 1)
Yes. Below is the explicit formal definition, which will be added to the final version:
**Definition 1 (Token Monosemanticity Score):** Given an image \\( \\mathbf{X}\\in\\mathbb{R}^{H\\times W\\times 3} \\), a predicted token segmentation \\( \\mathbf{M}\\in\\{0,\\dots,N-1\\}^{H\\times W} \\), and a ground-truth segmentation \\( \\mathbf{M}^{\\ast}\\in\\{0,\\dots,K-1\\}^{H\\times W} \\), define the \\( i \\)-th predicted token \\( \\mathbf{t}_i \\) as the set of pixel coordinates assigned token index \\( i \\), i.e., \\( \\mathbf{t}_i=\\{(h,w)\\mid \\mathbf{M}(h,w)=i\\} \\).
We say a token \\( \\mathbf{t}_i \\) is **monosemantic** if it lies entirely within exactly one ground-truth semantic region. Formally, the indicator function of token monosemanticity is defined as follows:
\\[
\\mathbb{I}_{\\text{mono}}(\\mathbf{t}_i)=
\\begin{cases}
1, & \\text{if } \\exists k \\text{ such that } \\forall (h,w)\\in \\mathbf{t}_i, \\mathbf{M}^{\\ast}(h,w)=k \\\\
0, & \\text{otherwise.}
\\end{cases}
\\]
Then, the **Token Monosemanticity Score** is defined as the fraction of all predicted tokens that are monosemantic:
\\[
\\text{monosemanticity}(\\mathbf{M}, \\mathbf{M}^{\\ast})=\\frac{1}{N}\\sum_{i=0}^{N-1}\\mathbb{I}_{\\text{mono}}(\\mathbf{t}_i).
\\]
Intuitively, this metric quantifies how effectively the predicted segmentation avoids polysemantic tokens, which span multiple semantic regions.
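For concreteness, Definition 1 can be computed directly from two integer label maps. The following NumPy sketch is illustrative only — the function name and the toy arrays are ours, not from the paper's released code:

```python
import numpy as np

def monosemanticity(M: np.ndarray, M_star: np.ndarray) -> float:
    """Fraction of predicted tokens lying entirely within one ground-truth
    region, per Definition 1. M and M_star are HxW integer label maps
    (predicted token indices and ground-truth segment indices)."""
    token_ids = np.unique(M)
    mono = 0
    for i in token_ids:
        gt_labels = M_star[M == i]             # ground-truth labels under token i
        if np.all(gt_labels == gt_labels[0]):  # token covers a single GT region
            mono += 1
    return mono / len(token_ids)

# Toy example: a 2x2 image with two predicted tokens and two GT regions.
M      = np.array([[0, 0],
                   [1, 1]])
M_star = np.array([[0, 0],
                   [0, 1]])
# Token 0 is monosemantic (all GT label 0); token 1 spans labels {0, 1}.
print(monosemanticity(M, M_star))  # prints 0.5
```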
---
### 2. **Boundary Detector Backbone** (Question 2 & 4)
We chose SegFormer primarily due to its popularity, simplicity, and computational efficiency. However, EPOC is definitely not confined to SegFormer. Boundary detection itself is a historically well-studied and relatively straightforward task, effectively handled by various convolutional or Transformer backbones [1-4].
> [1] DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection. CVPR’15
>
> [2] Richer convolutional features for edge detection. CVPR’17
>
> [3] EDTER: Edge detection with transformer. CVPR’22
>
> [4] DiffusionEdge: Diffusion Probabilistic Model for Crisp Edge Detection. AAAI’24
To directly address your questions regarding scalability and alternative backbones, we conducted an additional scaling experiment comparing the original SegFormer-b0 to a substantially larger SegFormer-b5. Both models were trained on the SA-1B dataset for 2 epochs; the boundary recall results are as follows:
| Model | Parameters | Object-level (ADE20k) | Subobject-level (PPP) |
|---|---|---|---|
| SegFormer-b0 (Submission) | 3.7M | 58.88 | 69.22 |
| SegFormer-b5 (New) | 87.5M (x24) | 65.73 (+6.85%) | 71.46 (+2.24%) |
These results clearly show that while EPOC can indeed scale up with larger models, the performance gains are marginal relative to the substantial increase in computational overhead. This further justifies our original choice of the efficient SegFormer-b0.
---
### 3. **Ablation on Position and Content Embeddings** (Question 3)
We agree that an ablation study on position and content embeddings provides valuable insights into how VLMs utilizing adaptive token segmentation interpret images. We conducted an experiment with our EPOC-based VLM trained on CLEVR-cap, where we shuffled positional or content embeddings across visual tokens, and then evaluated the impact on performance.
Since CLEVR-cap is synthetic and the captions follow a structured template—e.g., *“Total 5 objects: a small gray metal cube, a small red rubber sphere, …”*—it allows us to parse the generation and calculate average accuracy (%) for individual attributes. The results are summarized below:
| | Count | Size | Color | Material | Shape |
|---|---|---|---|---|---|
| No Ablation | 57.0 | 71.4 | 59.3 | 77.7 | 69.3 |
| Shuffle Position Embeddings | 35.0 | 49.9 | 16.8 | 47.3 | 50.5 |
| Shuffle Content Embeddings | 48.0 | 66.4 | 20.8 | 52.0 | 59.3 |
| Chance | 10.0 | 50.0 | 12.5 | 50.0 | 33.3 |
These results show that both positional and content embeddings contribute to accurately capturing semantic attributes and correctly associating them with their corresponding objects, confirming their complementary roles in adaptive token segmentation-based image understanding.
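Procedurally, this ablation amounts to permuting one embedding stream across visual tokens while keeping the other fixed before decoding. A hypothetical NumPy sketch — the token count, embedding dimension, and array names are our own illustration, not the actual model configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-token embeddings: 16 visual tokens, dimension 8.
content  = rng.normal(size=(16, 8))   # content embeddings of each token
position = rng.normal(size=(16, 8))   # positional embeddings of each token

perm = rng.permutation(16)            # random reassignment across tokens
tokens_shuffle_pos     = content + position[perm]  # ablate positional info
tokens_shuffle_content = content[perm] + position  # ablate content info
```

The attribute accuracies in the table above are then measured on generations produced from each shuffled token sequence.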
---
### 4. **Comments and Suggestions on Section 5.2 and Fig. 5**
Thank you very much for these practical suggestions. Indeed, Section 5.2 is densely packed due to the strict page limits of the ICML submission format. The final version is allowed an extra page, enabling us to reorganize the presentation. Additionally, we will ensure Figure 5 includes consistent grid lines across all subplots.
---
Finally, thank you again for your constructive and detailed feedback, which significantly improves the quality and clarity of our paper.
---
Rebuttal Comment 1.1:
Comment: I apologize to the authors for my late reply and appreciate their detailed response. I appreciate that the authors respond to my main concerns about the paper., especially the ablation on position and content embeddings. I will be raising my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful follow-up and for reconsidering our manuscript! We deeply appreciate your acknowledgment of our efforts! If you have any additional suggestions or thoughts, please feel free to let us know—we warmly welcome further discussion! | null | null | null | null | null | null |
Multi-agent Architecture Search via Agentic Supernet | Accept (oral) | Summary: This paper introduces the concept of an “agentic supernet”, which transforms the automatic LLM-based multi-agent design paradigm from a static, one-size-fits-all approach to a dynamic and adaptive framework. Their MaAS framework samples components from the supernet to assemble appropriate multi-agent systems according to the difficulty and domain of the given query. Extensive experimental results demonstrate their method effectively reduces API costs while improving performance across a wide range of tasks and LLM backbones.
Claims And Evidence: The paper is well-organized and well-written, enhancing the clarity and impact of their proposed method and findings.
Methods And Evaluation Criteria: This work presents a meaningful advancement by dynamically designing multi-agent system based on different queries through optimization of probabilistic, continuous distribution of agentic architectures, and the MaAS framework's pipeline is theoretically well-defined.
The paper lacks clarity regarding the parameter ϕ in Eq(6), where "Q_ϕ is parameterized by ϕ." The initialization and nature of this controller parameter remain unexplained.
Why is the threshold value thres in Eq(9) set to 0.3? The selection of this parameter would benefit from theoretical justification or empirical analysis, similar to the Sensitivity Analysis presented in Section 4.5.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The authors conducted extensive experiments across multiple benchmarks to evaluate their proposed MaAS, the amount of their experiments is fair and convincing.
There appears to be a discrepancy in the reported performance improvements. The authors mention MaAS “surpassing existing handcrafted or automated multi-agent systems by 0.54%∼11.82%” in terms of performance, but these statistics are not consistent with the results shown in Table 1; I wonder where the value 11.82% is derived from. Besides, the improvements highlighted in red are derived from comparisons with the Vanilla baseline in Table 1, while the actual improvements over “handcrafted or automated multi-agent systems” baselines such as AFlow appear modest.
Supplementary Material: didn't see additional material
Relation To Broader Scientific Literature: improve the performance on benchmark dataset
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Please refer to the weaknesses outlined above. Besides, there are some minor typos in this paper:
1. Caption of Table 1: get-4o-mini -> gpt-4o-mini
2. L81: MATh benchmark -> MATH benchmark
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your careful comments and thorough understanding of our paper! Here we give point-by-point responses to your comments and describe the revisions we made to address them.
---
>**`Weakness 1`: Clarification on parameter $\phi$** The paper lacks clarity regarding the parameter $\phi$ in Eq(6), where "$Q\_\phi$ is parameterized by $\phi$."
Thank you for pointing this out! $\phi$ denotes the parameters of the controller network $\mathbb{Q}\_\phi$, which is essentially composed of the per-layer sampling functions $\pi\_\ell: q \rightarrow \mathcal{V}\_\ell$ (the definition of $\pi\_\ell$ is given in Eq. (9)). These are MoE-style networks that select the activated operators for the $\ell$-th layer based on the query. Therefore, $\phi$ can also be expressed as $\phi = \\{\pi\_1, \cdots, \pi\_\ell\\}$.
We hope this clarifies the nature of the controller network.
---
>**`Weakness 2: The selection of threshold value`** What’s the reason the threshold value thres in Eq(9) is set as 0.3?
We supplement the parameter sensitivity analysis of threshold value in Eq. (9) as follows:
|Dataset|Metric|0.1|0.2|0.3|0.4|0.5|0.6|0.7|
|-|-|-|-|-|-|-|-|-|
|HumanEval|Perf.|90.07|91.60|92.85|93.02|92.36|92.36|90.83|
||Avg. Cost|0.69|0.92|1.01|1.121|1.113|2.590|4.300|
|GSM8K|Perf.|90.99|91.46|92.30|92.22|92.70|92.09|91.75|
||Avg. Cost|0.38|0.44|0.48|0.55|0.70|0.77|0.92|
We observe that while increasing the threshold value leads to some performance gains, the improvement plateaus beyond 0.3. Additionally, a higher threshold increases inference costs due to the activation of more operators per layer. Therefore, we ultimately set $thres = 0.3$.
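For intuition, the thresholded selection described around Eq. (9) can be sketched as follows. This is a hypothetical illustration only — we assume the controller yields a softmax distribution over each layer's operators and activates those whose probability exceeds `thres`; the function and variable names are ours:

```python
import numpy as np

def select_operators(operator_scores: np.ndarray, thres: float = 0.3) -> np.ndarray:
    """Hypothetical sketch of Eq. (9)-style selection: softmax the
    controller's per-layer operator scores, then activate every operator
    whose probability exceeds the threshold (keeping at least one)."""
    exp = np.exp(operator_scores - operator_scores.max())  # stable softmax
    probs = exp / exp.sum()
    active = np.flatnonzero(probs > thres)
    if active.size == 0:                   # fallback: keep the top-1 operator
        active = np.array([probs.argmax()])
    return active

scores = np.array([2.0, 1.5, 0.1, -1.0])   # controller logits for 4 operators
print(select_operators(scores, thres=0.3))  # prints [0 1]
```

A higher `thres` would admit more operators per layer, which matches the observed growth in average inference cost in the table above.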
---
>**`Weakness 3: Performance discrepancy`** There appears to be a discrepancy in the reported performance improvements.
Thank you for your detailed review! After thorough inspection, we found that 11.82% was a typo, as the main text and experimental results were not properly synchronized. It should be corrected to 16.89%, which is derived from the performance difference between MaAS and MacNet on the MBPP dataset.
---
>**`Weakness 4`** the improvements highlighted in red are derived from comparisons with the Vanilla baseline in Table 1, while the actual improvements over “handcrafted or automated multi-agent systems” baselines such as AFlow appear modest.
Thank you for your insightful suggestion! We chose to present the improvements over vanilla LLMs in Table 1 following prior practices in GPTSwarm (ICML 2024) and AgentPrune (ICLR 2025), which adopt the same approach.
To address your concerns, we have:
1. Provided a version of Table 1 with standard deviations, replacing the subscript values with the standard deviations from three runs, as shown in **Table1-stdev** (https://anonymous.4open.science/r/maas-rbt/table1-stdev.png).
2. Explained the key advantages of MaAS over the SOTA baseline AFlow:
- Although MaAS achieves moderate improvements over AFlow on certain benchmarks, it shows clear advantages on a broader set of benchmarks (e.g., +1.14% on GSM8K, +2.58% on MultiArith).
- MaAS achieves these gains with significantly lower computational costs—its training cost is only 15% of AFlow’s, and inference cost is just 25%.
We sincerely hope this demonstrates MaAS’s superiority over AFlow in both cost efficiency and performance.
---
>**`Minor Typos`**
Thank you for the meticulous review! We have fixed the mentioned typos in our revised manuscript.

---
Review 2:
Summary: This paper introduces MaAS (Multi-agent Architecture Search), an innovative framework for automating the design of multi-agent systems powered by Large Language Models (LLMs). MaAS addresses the limitations of existing methods that seek to identify a single, static, and complex multi-agent architecture, which often fails to dynamically allocate resources based on the difficulty and domain of each query. Instead, MaAS proposes the concept of an "agentic supernet", a probabilistic and continuous distribution of multi-agent architectures that can be sampled to tailor the system to specific queries.
Claims And Evidence: The following primary claims in this paper are well-supported:
1. **Pursuing an agentic supernet rather than a one-size-fits-all MAS.** The authors advocate for optimizing their proposed agentic supernet instead of the previously attempted giant, high-latency MAS. Their experiments support this claim: MAAS outperforms the SOTA baseline AFlow with only 15% of the training cost and 25% of the inference cost. The task adaptiveness emphasized by MAAS aligns well with intuition.
2. **Comprehensive automation is essential for MAS.** The authors provide a clear and insightful overview in Section 2 of the evolution of MAS from fully manual setups to partial automation and, finally, to full automation. The mapping of the technical trajectory from neural architecture search to automated MAS is particularly interesting. The MAAS framework achieves comprehensive automation, from prompts to communication topologies.
Methods And Evaluation Criteria: The concept of the agentic supernet introduced in MAAS is novel. While it appears to be inspired by works like DARTS in NAS, it follows a completely different technical approach. The probabilistic sampling and the introduction of the early-exit operator effectively fulfill the authors’ vision of a task-dynamic MAS.
Theoretical Claims: N/A
Experimental Designs Or Analyses: MAAS is evaluated across six benchmarks, covering domains such as mathematics, coding, and tool usage. The authors also emphasize resource consumption metrics, including token count, API cost, and wall-clock time. MAAS is comprehensively demonstrated, and I find no significant flaws in the evaluation.
Supplementary Material: I reviewed the Technical Details section on operator space and baseline setup.
Relation To Broader Scientific Literature: Automated MAS is an emerging and highly relevant research direction, with works like AgentVerse, GPTSwarm, and later ADAS and AFlow falling under the category of one-size-fits-all MAS. MAAS represents a new paradigm in this field and is closely connected to broader areas such as collaborative AI and autonomous AI.
Essential References Not Discussed: I recommend that the authors include Flow[1] (ICLR 2025), which also focuses on automating agentic workflows.
[1] Flow: Modularized Agentic Workflow Automation, ICLR 2025
Other Strengths And Weaknesses: Strength:
1. The proposed paradigm shift is innovative and significant for this field. Regardless of whether the agentic supernet becomes the mainstream form of MAS in the future, I believe this is an important contribution.
2. The evaluation is thorough, covering performance, token cost, API cost, and time consumption.
Weakness:
I find this work well-organized and convincing, with no apparent shortcomings. One thing I can suggest is to test on additional benchmarks such as ALFWorld, SciWorld, and ToolBench.
Other Comments Or Suggestions: N/A
Questions For Authors: See weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We would like to express our sincere respect for your insightful review! In response to your comments, we have carefully prepared a point-by-point reply:
---
>**`Essential References Not Discussed`**
Thank you for the valuable supplement! We have added this important citation in our revised manuscript.
---
>**`Weakness 1: Additional benchmarks`** One thing I can suggest is to test on additional benchmarks such as ALFWorld, SciWorld, and ToolBench.
Thank you immensely for the instructive advice! Our performance on ALFWorld is summarized in the table below. Since ALFWorld involves multiple trials, MaAS samples an architecture for each episode, meaning the same agentic workflow is used across multiple trials. Other trainable baselines, such as GPTSwarm and AFlow, are implemented similarly.
| Method | Perf. (Max_trial=20) | Perf. (Max_trial=30) | Avg. Cost (10^-3 $) |
|--------|---------------------|---------------------|-------------------|
| Vanilla GPT-4o-mini | 48.71 | 50.12 | 4.68 |
| CoT | 49.92 | 51.82 | 4.95 |
| LLM-Debate | 54.68 | 56.90 | 17.30 |
| GPTSwarm | 53.19 | 57.40 | 12.14 |
| AgentSquare | 66.42 | 69.75 | 7.66 |
| AFlow | 59.16 | 60.81 | 13.02 |
| **MaAS (Ours)** | **68.14** | **72.66** | 9.15 |
As observed, MaAS achieves a performance improvement of up to 22.54% on the embodied task ALFWorld while maintaining a relatively low average cost, demonstrating its cost-effectiveness.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed response and for providing additional experiments that resolved my doubts. I have decided to increase my score and recommend accepting the paper.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer `hocx`**,
Thank you for your thoughtful feedback and for taking the time to review our responses. We truly appreciate your constructive insights and your willingness to engage with our clarifications. Your support of our work, especially **the recognition of our vision toward fully automated, task-dynamic MAS**, means a lot to us.
Best Regards,
Authors

---
Review 3:
Summary: The paper proposed a novel automated multi-agent framework through an agentic supernet (MaAS), which delivers both satisfactory performance and resource-allocation efficiency for user queries across different domains. The framework was comprehensively evaluated on six benchmark tasks with comparison to about 15 baselines and state-of-the-art agentic systems. Coding scripts are also provided through an anonymous GitHub repository.
Claims And Evidence: Yes, most claims are well-supported by either reference literature or experiment results.
The only complaint lies in the lack of variance estimation or statistical significance for the performance results in Table 1; given that some of the best results and runner-ups are relatively close to each other, this might weaken the claim that the proposed MaAS wins over all other baselines. The experiments are not cross-validated or randomized with multiple attempts.
Methods And Evaluation Criteria: Yes. The proposed methods and evaluation criteria make sense for the problem. Evaluation metrics of benchmark datasets and tasks are well referenced. The benchmarks cover various datasets and tasks. The baseline methods include single-agent systems, hand-crafted multi-agent systems, and autonomous multi-agent systems.
Evaluation is comprehensive including not only task performances and computation costs of training and inference, but also parameter sensitivity analysis and ablation analysis.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Yes.
- It seems the experiments are not cross-validated (a single train-test split) or randomized with multiple attempts. Thus the results in table 1, 2 and 3 lack the confidence intervals. The paper is very close to strong accept if CIs are provided.
- It seems the optimal parameter set L, $\lambda$, and K are set the same for all tasks. Could you pls clarify which benchmark task yield the optimal parameter set in Figure 7? And are the optimal parameters subject to change given a different task?
Supplementary Material: Yes. I mostly checked Part A for the notations and Part C for the additional experimental results that support the claims in the main paper, especially the interesting transferability analysis and inductive analysis.
Relation To Broader Scientific Literature: The key contributions of the paper lie in proposing a novel multi-agent framework with agentic supernet, along with comprehensive evaluations with regards to state-of-the-art agentic AI frameworks, which could potentially be a paradigm shift from seeking a single optimal system (e.g. CoT, ComplexCoT) to dynamically and autonomously optimizing a distribution of agentic architectures (various LLMs and tools). The computational efficiency of the proposed framework is also of great practical value, which could be inspiring future agentic AI research.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper is well-structured with a logical flow and nice graph illustrations. I enjoyed reading it very much. The questions I raised during reading are well addressed in the subsequent sections. The literature review and existing state-of-the-art methods are well summarized in a comparative fashion with close relation to the proposed framework. The evaluation is comprehensive, with comparison to meaningful baseline methods on popular benchmark tasks. Analysis and experiment results in the supplementary also provide an inspiring viewpoint for interpreting the results.
Other Comments Or Suggestions: Comments on typos and minor issues:
- Definition 3.2, missing a left parenthesis
- Missing underlines for some tied runner-ups in Table 2.
Suggestions:
- It seems the cost analysis (Figure 4, Table 3), parameter sensitivity analysis (Figure 7), and ablation analysis (Table 4) are limited to the MATH and HumanEval benchmarks only. If that is the case, it would be great if the authors could share the corresponding analyses for all benchmarks in the supplementary for a thorough and complete benchmark analysis. If not, please clarify in the paper whether they were averaged across benchmarks.
- Some analysis in Part 4.5 Framework analysis might be too compressed even with the supplementary materials, especially transferability analysis and inductive analysis. If more details could be provided on experiment settings in supplementary, it'll be a good reference for other researchers.
Questions For Authors: - How will the proposed framework scale with the number of agentic operators? Could you elaborate more on the potential impact on search efficiency?
- In Figure 7, the performance over sampling times K does not increase monotonically, e.g., there is a dip at K=6. What's the possible reason and indication?
Code Of Conduct: Affirmed.
Overall Recommendation: 5

Rebuttal 1:
Rebuttal: Sincere thanks for the thoughtful and constructive reviews of our manuscript! Based on your questions and recommendations, we give point-by-point responses to your comments.
---
>**`Weakness 1: Lack of variance estimation`**
Thank you for your insightful suggestion! In fact, all results in Table 1 represent the average of three runs. Following prior practices in GPTSwarm (ICML 2024) and AgentPrune (ICLR 2025), we chose to present the performance difference from the vanilla LLM in the main result table (rather than reporting standard deviations).
To further address your concern, we provide a version of Table 1 with standard deviations, as shown in **Table1-stdev** (https://anonymous.4open.science/r/maas-rbt/table1-stdev.png). We sincerely hope this resolves your concern.
---
>**`Weakness 2`** It seems the experiments are not cross-validated (a single train-test split) or randomized with multiple attempts.
Thank you! The results are reported as the average of three runs, and we have supplemented them with standard deviations in our response to Weakness 1.
---
>**`Weakness 3: Clarification on Hyperparameters`** Could you pls clarify which benchmark task yield the optimal parameter set in Figure 7? And are the optimal parameters subject to change given a different task?
Thank you for your insightful inquiry! In fact, our proposed MaAS has not undergone extensive hyperparameter tuning, and the current setting ($L=4, K=4$) is not necessarily the optimal one in terms of performance. As shown in Figure 7, increasing $L$ from 4 to 8 leads to a 0.9% improvement in pass@1.
However, we have consistently observed the following trends across multiple benchmarks:
- Beyond $L=4$, the performance gain from increasing the supernet depth becomes marginal.
- Beyond $K=4$, the performance gain from increasing the sampling times also plateaus.
Above is the rationale behind our choice of $L=4, K=4$. Since our current sensitivity study is based on HumanEval, we further provide a sensitivity analysis on GSM8K to substantiate our findings:
|$L$|2|4|6|8|
|-|-|-|-|-|
|Perf.|89.09|92.30|92.88|93.50|
|Inf. Cost|0.34|0.48|0.67|0.89|

|$K$|2|4|6|8|
|-|-|-|-|-|
|Perf.|91.6|92.30|92.35|92.83|
|Inf. Cost|0.48|0.48|0.49|0.49|
---
>**`Comment 1: Typos and Minors`**
Thanks immensely for pointing out! We will add the missing left parenthesis and underline the tied runner-up in Table 2.
---
>**`Suggestion 1: Ablation & Sensitivity analysis on other benchmarks`**
Thank you for your valuable insights! In our response to **Weakness 3**, we have supplemented the sensitivity analysis for GSM8K. Due to time constraints and API resource budget, we were unable to promptly provide ablation/sensitivity analyses for all datasets. However, we sincerely commit to including the corresponding analyses for other datasets in the appendix of the camera-ready version. Once again, we truly appreciate your feedback!
---
>**`Suggestion 2: More details on transferability and inductive analysis`**
Thank you for your valuable feedback! We sincerely commit to including additional details on transferability and inductive analysis in the appendix.
---
>**`Question 1`** How will the proposed framework scale with number of agentic operators? Could you elaborate more on the potential impact of search efficiency?
To address your question, we gradually increase the number of operators on HumanEval and report the performance, cost, and time-related metrics as follows:
|Operator num|Score|Cost ($)|Infer. time|Train time|
|-|-|-|-|-|
|3|89.31|0.00093|6min47s|18min|
|4|91.6|0.00088|8min|20min|
|5|92.13|0.00121|8min|22min|
|6|92.36|0.00117|7min|25min|
|7|92.85|0.00101|10min|26min|
|8|93.89|0.00113|11min|29min|
As observed, with more operators, performance exhibits a steady improvement, while cost remains largely stable, and training/inference time does not increase significantly. We believe this demonstrates the efficiency advantage of the agentic supernet.
---
>**`Question 2`** In Figure 7, the performance of sampling times K does not monotonically increasing, e.g. dip at K=6. What's the possible reason and indication?
Your keen insight is truly appreciated! To further investigate this phenomenon, we conducted a finer-grained hyperparameter analysis on $K$, with results summarized in the table below:
|K|2|3|4|5|6|7|8|9|
|-|-|-|-|-|-|-|-|-|
|Perf. (3-run avg.)| 89.50 | 91.04 | 92.28 | 91.38 | 91.17 | 92.01 | 92.45 | 91.55 |
| Perf. (5-run avg.) | 89.13 | 91.10 | 92.05 | 91.30 | 91.71 | 92.10 | 92.05 | 92.14 |
The results indicate that when $K\geq4$, MaAS's performance stabilizes within the range of 91.20–92.20 on HumanEval, suggesting that the benefit of additional sampling saturates at $K=4$, with subsequent fluctuations remaining within a normal range. We hope this properly addresses your concern.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed responses with additional experiment results for reference. The paper might be of high contribution to the agentic AI research. I have updated my overall recommendation to strong accept.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer `43f8`**,
Thank you for your thoughtful feedback and strong support of our work! We greatly appreciate your constructive insights, particularly regarding variance estimation, hyperparameter configuration, and scalability. As per your suggestions, we will incorporate additional ablation studies and sensitivity analyses on more datasets in the revised manuscript. We are also sincerely grateful for your recognition of MaAS's contribution to the agentic AI community and its practical value.
Thank you once again for your time, expertise, and constructive review!
Best regards,
Authors

---
Review 4:
Summary: This paper introduces a novel mechanism called the "Agentic Supernet" to enable dynamic inference within multi-agent systems. Unlike traditional fixed agentic systems, the supernet instantiates its subnet agents through parameterized sampling, allowing for adaptive inference across a variety of tasks and difficulty levels. This represents a significant contribution, as it attempts to address a core limitation of workflow-based agentic systems, which often lack sufficient adaptability to diverse tasks and complexities. Extensive experimental results demonstrate the method's effectiveness, showcasing its ability to deliver both high-performing and cost-efficient agentic systems owing to its adaptive nature.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No proofs.
Experimental Designs Or Analyses: yes. All experimental designs are sound and valid.
Supplementary Material: Yes. I went through all of them.
Relation To Broader Scientific Literature: The work is related to the recently emerging research topic of automating agent design. Additionally, the concept of the Agentic Supernet draws inspiration from supernet techniques in Neural Architecture Search (NAS).
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
- The proposal of an Agentic Supernet to facilitate adaptive inference in multi-agent systems across diverse tasks and difficulty levels is both novel and compelling. This approach introduces a fresh perspective on addressing the adaptability challenges inherent in traditional agentic frameworks.
- The experimental design is sound, following practices from prior work while expanding the scope through the inclusion of additional benchmarks and comprehensive analyses.
- The results and examples effectively demonstrate the method’s capability to adapt agent configurations to varying task difficulties, achieving high performance while maintaining cost efficiency. This underscores the practical utility of the proposed framework besides its novelty in scientific ideas.
**Weaknesses**
1. Insufficient Clarity in Presentation
- The description of the supernet’s layer structure lacks clarity, particularly regarding whether each layer contains all operators from the operator space.
- The process of handling multiple activated operators within a layer and how their outputs are integrated as inputs to subsequent layers remains ambiguous.
2. Drawing from supernet-related work in NAS, such as Liang et al. "DARTS+", some learned lessons can be discussed:
- Collapse Issue: While the absence of "skip" operations in the proposed supernet may avoid the collapse issue, this could be validated through longer runs to confirm its robustness.
- Overfitting is also a known challenge in NAS supernets like DARTS. Seemingly the adaptive sampling network to the query will mitigate this issue. But some discussions on it will be interesting.
- It is also known that DARTS has a biased sampling issue (e.g. discussed in Chu et al. "Fair DARTS"). I.e. Early over-sampling of certain paths (nodes) could bias the optimization process. Because these paths (nodes) are sampled and updated at the early stage, the subsequent sampling will further favor those paths (nodes), even other paths (nodes) may have more potential. A discussion of this risk, along with potential investigation and mitigation strategies as future work, would enhance the paper’s depth.
3. The ability of the resulting agentic system to dynamically adapt to multiple tasks is a highly appealing feature, briefly evidenced in Table 8 through transferability experiments. However, further elaboration and concrete examples would provide deeper insight into this capability. Additionally, an experiment training the supernet across multiple domains simultaneously—rather than a single domain—could be interesting to reveal whether such a setup improves generalization compared to domain-specific training.
4. The paper presents two mechanisms that enable adaptive inference in agentic systems: (1) the Agentic Supernet and (2) query-based sampling during inference. The current evaluation showcases the combined effect of both (1+2), suggesting that the supernet’s effectiveness may depend on query-based sampling. However, an alternative approach—relying solely on (2) with an archive of agentic systems tailored to specific queries—could also achieve adaptive inference without the supernet. This raises a critical question: Is the supernet the primary driver of the method’s superior performance, or does the query-based sampling mechanism play the dominant role? The paper would benefit from a discussion or ablation study disentangling these contributions. Such an analysis would enhance the work’s depth and bolster the persuasiveness of the central claim regarding the supernet’s significance.
Other Comments Or Suggestions: - In NAS, supernets are primarily employed to reduce evaluation costs during architecture optimization. In contrast, this work leverages the supernet concept to enable dynamic inference tailored to varying queries. A detailed discussion highlighting this distinction—particularly how it shifts the focus from cost reduction to adaptability—would enrich the paper’s contribution and clarify its novelty within the broader literature.
- The choice of a feed-forward supernet is interesting. Potentially, as the information flow does not appear to strictly depend on a layer-by-layer structure, it might be also interesting to have a directed acyclic graph as a "supergraph". Including ideas like this, some future works can be discussed to inspire other researchers.
- Several baselines lack specificity regarding the underlying LLM used.
- In Figure 4, the meaning of circle size is not explained.
- Table 4 reveals an intriguing result: removing the textual gradient leads to a significantly lower-cost agentic system on HumanEval, despite a performance drop. Discussing more on this could provide valuable insights into the method’s cost-performance dynamics.
- The learning progress of the proposed algorithm is not shown. Including a visualization or analysis of the optimization process (e.g., performance convergence, operator sampling trends over time) would offer a deeper understanding of how the Agentic Supernet evolves and adapts.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 5

Rebuttal 1:
Rebuttal: >**`Weakness 1: Insufficient Clarity in Presentation`**
Thank you for the insightful comment! Each layer shares the same set of operators, except for the first layer, where the early-exit operator is excluded. The operators in each layer produce outputs in parallel, which are then concatenated and passed as input prompts to the activated operators of the next layer, following standard practices in Mixture-of-Agents (ICLR 2025) and GPTSwarm (ICML 2024).
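To illustrate that dataflow, here is a toy sketch (the operator bodies and plain string concatenation are our stand-ins, not the actual LLM-backed implementation): the activated operators of one layer run on the same input, and their concatenated outputs become the next layer's prompt.

```python
# Toy operators standing in for LLM-invoking agents; each maps a prompt to text.
def cot(prompt): return f"[CoT reasoning on: {prompt}]"
def debate(prompt): return f"[Debate transcript on: {prompt}]"
def refine(prompt): return f"[Refined answer for: {prompt}]"

def run_layer(active_ops, prompt):
    # Activated operators produce outputs in parallel on the same input...
    outputs = [op(prompt) for op in active_ops]
    # ...which are concatenated and become the next layer's input prompt.
    return "\n".join(outputs)

layers = [[cot, debate], [refine]]  # operator subsets sampled per layer
prompt = "query: solve x + 1 = 3"
for active_ops in layers:
    prompt = run_layer(active_ops, prompt)
print(prompt)
```

In a real system each operator call would be an LLM invocation, so the per-layer calls can genuinely run concurrently.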
---
>**`Weakness 2: Lessons from DARTS+`**
Thank you for this highly insightful discussion and enlightenment!
- **Collapse Issue** We empirically demonstrate that the agentic supernet does not encounter the collapse issue observed in traditional DARTS, as presented in **Table-collapse** (https://anonymous.4open.science/r/maas-rbt/table-collapse.md).
- **Overfitting Issue** Intuitively, we argue that MaAS's agentic supernet is inherently resistant to overfitting due to two key factors: (1) **Query-aware sampling**. In fact, prior work on customizable NAS like GRACES [1], has demonstrated the advantage of input-dependent supernet in out-of-distribution generalizability. (2) **Cross-domain training data**. The training data itself can span multiple domains (e.g., GAIA benchmark includes web searching and file analysis), inherently promoting cross-domain generalization.
- **Biased Sampling Issue** We commit to incorporating this intriguing discussion in our revised manuscript, borrowing lessons from FairNAS/DARTS-/DARTS-PT.
[1] Graph Neural Architecture Search Under Distribution Shifts, ICML'22
---
>**`Weakness 3.1: Transferability of agentic supernet`**
Following your suggestion, beyond the numerical transferability study in Table 8, we further visualize the underlying mechanism of MaAS’s transferability, with results and analysis presented in **Figure-transfer** (https://anonymous.4open.science/r/maas-rbt/figure-transfer.md).
---
>**`Weakness 3.2: Cross-domain optimization of agentic supernet`**
We would first like to point out that, the GAIA benchmark we used inherently falls under cross-domain optimization (web searching + file reading). To further address your concerns, we report the results of training the agentic supernet under a math/coding cross-domain setting:
*(M->MATH, G->GSM8K, H->HumanEval)*
|Train on|Test on|Perf.|
|-|-|-|
|M|M|51.82|
||G|92.80|
|M+G|M|51.66|
||G|93.70|
|H|H|92.85|
||M|50.27|
|H+M|H|93.05|
||M|52.69|
Notably, when trained on a cross-domain mixture (MATH+GSM8K or HumanEval+MATH), the performance on each domain surpasses that of training on the corresponding single domain alone.
---
>**`Weakness 4: Ablation on (1) supernet and (2) query-based sampling`**
To validate that the driving force of MaAS relies not only on query-based sampling but also on the agentic supernet, we construct a baseline called Agent-Archive, which consists of an agentic system archive populated with operators and agentic workflows. Results are in **Table-archive** (https://anonymous.4open.science/r/maas-rbt/table-archive.md).
---
>**`Comment 1`** How agentic supernet shifts the focus from cost reduction to adaptability
We respectfully state that MaAS's adaptability stems from its query-aware supernet sampling mechanism.
By analogy, MaAS removes DARTS's final one-shot pruning step that determines a fixed CNN. Instead, during actual usage, it dynamically customizes each layer’s kernel size/skip connections/pooling operators, as well as the network depth, based on each input (e.g., image). This allows MaAS to retain DARTS's advantage of reducing training costs while simultaneously enhancing adaptability in agentic scenarios.
---
>**`Comment 2: Possibility of supergraph`**
Thank you for your inspiring thoughts! We will incorporate this interesting discussion in our updated manuscript.
---
>**`Comment 3: Specifying underlying LLM of baselines`**
Thank you! This has been specified in Line 277 and Appendix C.2. Besides, in Table 2, GPT-4o-mini is used for TapaAgent/Sibyl and GPT-4 for AutoGPT.
---
>**`Comment 4: Circle size in Figure 4`**
Thank you! The circle size in Figure 4 is proportional to the corresponding y-axis value.
---
>**`Comment 5: Removal of textual gradient in Table 4`**
After carefully reviewing Table 4, we identified a typo that exaggerated the impact of removing textual gradient on inference cost. Specifically, 0.09 should be corrected to 0.90. To provide a more detailed analysis, we present Table 4 with standard deviation included for finer observation (https://anonymous.4open.science/r/maas-rbt/table4-stdev.md). We believe the cost reduction occurs because textual gradient increases the length of prompts in MaAS.
---
>**`Comment 6: Learning visualization of agentic supernet`**
Following your insightful suggestion, we have visualized the evolution of operator sampling trends as the sampling count increases (https://anonymous.4open.science/r/maas-rbt/figure-learning.pdf). It learns to avoid overly confident early stopping and instead prioritizes testing and self-refinement in deeper layers.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response to my questions. The detailed reply, along with the additional experiments, has addressed my concerns effectively. I believe this paper advances the field significantly and could inspire many exciting future research. I will raise my score and recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer `ZEuX`,**
Thank you for your thoughtful feedback and generous support of our work! We truly appreciate your meticulous review and high-caliber suggestions, including **insights from NAS, cross-domain optimization**, and **advanced visualization**. These have significantly enriched the depth and quality of our manuscript. It has been an honor for us to incorporate your recommendations and suggestions into our revised manuscript.
Sincerely,
The Authors
---
Title: An analytic theory of creativity in convolutional diffusion models
Decision: Accept (oral)

Review 1:
Summary: The paper proposes a formula to predict images generated by convolutional diffusion models. The analysis suggests that biases in convolutional neural networks, such as locality and translational equivariance, prevent diffusion models from learning a perfect score function, encouraging them to generate samples that were not present in the training data.
Claims And Evidence: The evidence is promising in the cases of MNIST and FashionMNIST. However, the evaluation based on CIFAR-10 is less conclusive. This is still acceptable because the ResNet-based diffusion model itself still struggles to generate high-quality images.
Methods And Evaluation Criteria: r^2 measures how well the predicted outputs (ELS machine) correlate with the actual outputs (diffusion model).
This is a pixel-level measurement. For image generation, there are multiple metrics that can be used alongside pixel-level metrics, such as FID and Inception Score. The only concern is that this work does not include these metrics.
Theoretical Claims: Although I did not go into the details of proofs, the high-level idea and motivation sounds promising.
Experimental Designs Or Analyses: Experimental design is straight-forward so there is no issue with it.
Supplementary Material: part A
Relation To Broader Scientific Literature: There are public debates about whether diffusion models copy art from humans.
This work theoretically demonstrates that diffusion models can be creative and generate something no human has seen before. This work is important in relation to the broader scientific literature, as it has the potential to extend the framework to other generative AI models, such as large language models (LLMs).
Essential References Not Discussed: -
Other Strengths And Weaknesses: I cannot think of the weakness.
Other Comments Or Suggestions: -
Questions For Authors: No question
Code Of Conduct: Affirmed.
Overall Recommendation: 5

Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions. Below, we hope to carefully address a few of their concerns:
>The evidence is promising in the cases of MNIST and FashionMNIST. However, the evaluation based on CIFAR-10 is less conclusive. This is still acceptable because the ResNet-based diffusion model itself still struggles to generate high-quality images.
We would like to emphasize that the primary objective of our paper is to find a predictive theory for small convolutional neural networks, rather than a predictive theory of high-performance networks, which require more elements like transformers. While it is true that these networks are limited in their performance, particularly on CIFAR-10 (as the reviewer notes), this is a feature of the models we are attempting to study, and a theoretical model that obtained higher performance would not be an accurate description of the model class. Predicting the defects of models is in fact part of the objective of our paper; for instance, we are interested in understanding the origin of common spatial consistency problems in diffusion models, as shown e.g. in figure 4.
>r^2 measures how well the predicted outputs (ELS machine) correlate with the actual outputs (diffusion model). This is pixel-level measurement. In the image generation, there are multiple choices of metrics that can be used along with pixel-level metrics, such as FID and Inception Score. The only concern is this work does not include these metrics.
While FID and Inception Score are commonly used metrics, they are measures of distributional similarity between large sets of unpaired images. However, the primary task in our paper is to measure the similarity between individual pairs of images, and thus FID and Inception Score are not appropriate for the task. Indeed, any *distributional* distance metric is too weak for our purpose: we don’t just want to show that the ELS machine and the trained UNet/Resnet sample from similar distributions; we want to show a much stronger result: that for every single noise realization, the particular image generated by the ELS machine and the corresponding paired image generated by the UNet/ResNet are very similar.
Feature-wise distances might be considered, but there is less of a standard consensus on the features to use for small black-and-white datasets such as MNIST and FashionMNIST (InceptionV3, from which FID is derived, was trained on ImageNet, whose statistics do not resemble those of MNIST and whose features are thus unlikely to capture the structure of that dataset). Rather than look for a nonstandard feature space, we felt it was better to stick with a metric in the underlying pixel space, where the resulting numbers would be clearly interpretable.
There are other pixelwise distance metrics that are standard in the literature that could be considered, such as L2 distance and cosine similarity. The former is not invariant to rescaling, and since we wanted to have a comparison between the performance of the ELS machine on models trained on image datasets with different statistics, we felt it was better to choose a scale-invariant metric such as r^2 or cosine similarity. These two metrics are very similar quantitatively, and we felt it would be redundant to include both. The choice of r^2 does not privilege our model; the values of the median cosine similarities for various model configurations are presented below for comparison:
| Model | ELS/CNN Cosine Similarity |
|-----------------------------|---------------------------|
| MNIST/UNet/Zeros | 0.93 |
| CIFAR10/UNet/Zeros | 0.82 |
| FashionMNIST/UNet/Zeros | 0.88 |
| MNIST/ResNet/Zeros | 0.97 |
| MNIST/ResNet/Circular | 0.82 |
| CIFAR10/ResNet/Zeros | 0.89 |
| CIFAR10/ResNet/Circular | 0.90 |
| FashionMNIST/ResNet/Zeros | 0.92 |
| CIFAR10/UNet+SA | 0.84 |
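For concreteness, here is a minimal numpy sketch of the two paired-image metrics discussed above. We assume the rebuttal's $r^2$ denotes the squared Pearson correlation of pixel values, which matches the scale-invariance claim; the function names are illustrative, not the authors' code:

```python
import numpy as np

def pixelwise_r2(a, b):
    # Squared Pearson correlation between the flattened pixel values of two
    # images; invariant to rescaling or shifting either image.
    r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return r ** 2

def cosine_similarity(a, b):
    # Cosine of the angle between the flattened images; invariant to
    # rescaling (but, unlike r^2, not to adding a constant offset).
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A benchmark score in the style of the tables above would then be the median of one of these metrics over noise-paired (ELS machine, CNN) sample pairs.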
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I think this paper should be accepted without any concerns. I increase the score to be strong accept. | Summary: This work proposes that biases of translation-equivariance and locality are sufficient to explain novel image generation in fully convolutional diffusion generative models. It does so by showing that a closed-form score model subject to those constraints qualitatively recapitulates the images generated by trained CNNs.
## Update after rebuttal
The authors' rebuttal answers my questions and concerns, and I remain strongly in favor of acceptance.
Claims And Evidence: The claims are largely supported by convincing evidence. Given the broad interest in diffusion generative models, it goes without saying that this paper should be of substantial interest to the ICML audience, and may well be influential.
I would encourage the authors to reconsider their use of the term "creative", as I think "novel" or "original" would be somewhat more precise, and would be less burdened with humanistic baggage. For instance, the paper could alternatively be titled "An analytic theory of novel image generation in convolutional diffusion models". However, this is at least partially a matter of taste, and I leave the decision to the authors' discretion.
Methods And Evaluation Criteria: The methodology seems sound.
Theoretical Claims: Most of the derivations in this paper are quite straightforward. The main theoretical result is Theorem 4.1; its proof appears correct.
Experimental Designs Or Analyses: The experiments are largely well-designed. One concern is that all quantitative comparisons use pixelwise $r^2$, and no clear justification is provided for this choice.
Supplementary Material: I have skimmed the Supplementary Material, which is clearly written. Moreover, the substantial collection of additional image examples adds further support to the main claims.
Relation To Broader Scientific Literature: This paper connects to the broader literature through the central position that diffusion models occupy within the machine learning field.
Essential References Not Discussed: I think the authors do a good job addressing relevant literature on theories for how diffusion models generate novel images; no omissions stood out.
Other Strengths And Weaknesses: I think the authors should state more clearly in the Introduction that the excellent predictions resulting from their closed-form model require calibration of the time-dependent patch scale based on the quality of the predictions. I don't think this is a substantial limitation - after all this is only a few parameters - but I do think it's important to mention clearly, if only to highlight that the factors determining the schedule of scale decreases is an interesting topic for future work.
Other Comments Or Suggestions: - There are a number of in-text citations for which \citet should be used in place of \citep, e.g. when citing Kadkhodaie et al. in Line 98.
- Please use a 1:1 aspect ratio in Figures 8-10.
- It would be helpful if the authors swapped the column ordering of Figure 19 so that the ELS machine is on the right like in other figures.
Questions For Authors: - The finding that imposing circular boundary conditions leads to more texture-like generated images is interesting. Can you provide any intuition for why that is? Does it arise purely through interactions between the artificially-joined boundary patches?
- What happened in the ELS-generated image in the eighth row of the second column of Figure 17? Here the model has produced an image very different from the convolutional model, and indeed one that lacks the expected localized features.
- This is something that could be left to future work, but I am curious if the ELS model could recapitulate the results of the forward-backward experiments performed by Sclocchi, Favero, & Wyart. In particular, could some of the transition points they detect be linked to changes in the patch scale?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their highly detailed feedback and insightful suggestions. Below, we carefully address several of the points that they raised.
>I would encourage the authors to reconsider their use of the term "creative", as I think "novel" or "original" would be somewhat more precise, and would be less burdened with humanistic baggage […] However, this is at least partially a matter of taste, and I leave the decision to the authors' discretion.
We appreciate the reviewer’s concern about this choice of nomenclature. While a matter of taste, we did arrive at the term by trying to encapsulate a particular technical phenomenon; roughly, “originality subject to constraints embedded in the data.” We felt that it was important to distinguish this behavior from the trivial originality type of e.g. a white noise sampler. In order to make this case clearer, we will add language in our revisions that makes more explicit the particular characteristics that we are attempting to capture with this word choice.
The term that we felt was otherwise nearest to the concept we were trying to describe is “generalization.” While a less humanistic word choice, we felt that this term too had baggage that we were not totally comfortable with. Generalization is often used to mean something along the lines of “correct generalization” or the gap between training and test set performance. It was not clear to us that this was the right notion for the phenomena in our paper (e.g. the outputs of Figure 5 are "incorrect"); as such, we elected to sidestep this by instead using a nontechnical term.
>One concern is that all quantitative comparisons use pixelwise r2, and no clear justification is provided for this choice.
The reasoning behind the choice of the r2 metric is as follows. The task we are trying to benchmark is a paired image comparison task, and thus taking the median of some image distance metric across the sample set seemed appropriate. Many metrics commonly used in the generative image model literature, such as FID, are *distributional* distance metrics and so we did not consider them as they were too weak for our purpose. Distances between pairs of images in the underlying pixel space seemed both most natural to us and also the most stringent possible test for our model. We considered using the L2 distance directly, however, this metric is sensitive to the overall scale of the image distribution; since we wanted a metric that would allow us to compare the performance across different datasets that have different image statistics, the metric should be invariant to overall scale. The r2 metric is a standard metric which satisfies these criteria. We also felt that the 0-1 scores that the r2 metric assigns were more intuitively understandable than the raw l2 scores.
Another alternative metric choice that satisfied the criteria outlined above was cosine similarity. This metric is very similar to the r2 metric, and the results are given below:
|Model|ELS/CNN Cos|
|---|---|
|MNIST/UNet|0.93|
|CIFAR10/UNet|0.82|
|FMNIST/UNet|0.88|
|MNIST/ResNet/Zeros|0.97|
|MNIST/ResNet/Circular|0.82|
|CIFAR10/ResNet/Zeros|0.89|
|CIFAR10/ResNet/Circular|0.90|
|FMNIST/ResNet/Zeros|0.92|
|CIFAR10/UNet+SA|0.84|
Ultimately, rather than presenting two separate but highly related metrics, we made the choice to use the r2 metric alone.
>The finding that imposing circular boundary conditions leads to more texture-like generated images is interesting. Can you provide any intuition for why that is? Does it arise purely through interactions between the artificially-joined boundary patches?
The intuition for the texture-like behavior is that, without any global positional information, local models are not able to robustly coordinate their generation in order to produce coherent images. Roughly, each independent part of the denoised image spontaneously decides to resemble a totally random location in a training image; this naturally produces a bit of a jumble. Adding borders helps coordinate this generation process by ‘pinning down’ the boundary. This helps especially for datasets such as MNIST and FashionMNIST, where the boundary is very regular (a stereotyped black background).
>What happened in the ELS-generated image in the eighth row of the second column of Figure 17?
This was a plotting bounds issue; the image should appear visually monochromatically black and has been fixed.
>This is something that could be left to future work, but I am curious if the ELS model could recapitulate the results of the forward-backward experiments performed by Sclocchi, Favero, & Wyart.
We thank the reviewer for this interesting suggestion. We suspect that there may be a connection, and we are curious in particular about whether the ELS machine framework might shed light on what the “natural” analogue of the hierarchy of variables that they study might be in real datasets. However, we will leave detailed investigation up to future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, which addresses my concerns. I will maintain my score, as this paper should certainly be accepted. | Summary: This paper develops a theory for why convolutional diffusion models fail to learn the ideal score function. It is theorized that this is due to locality from small receptive fields and translational equivariance. Under these assumptions, an optimal minimum MSE approximation to the ideal score function is derived subject to locality and broken translational equivariance constraints. The so-called equivariant local score machine shows that convolutional diffusion models compose different patches from training examples together, which the papers calls a locally consistent patch mosaic. There is very high quantitative (r^2 values) agreement between outputs from convolutional diffusion models and the ELS machine, suggesting that the ELS provides a plausible explanation for the mechanisms of convolutional diffusion models. Some of the theory also holds up when self-attention is introduced.
## Update after rebuttal
I maintain my score of strong accept. I thank the authors for their effort in addressing my comments, especially the Celeb-A experiments.
Claims And Evidence: The claims are strongly supported by theory and empirical results, showing strong agreement between outputs from the ELS machine and from convolutional diffusion models.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. r^2 is measured between the outputs from the ELS machine and convolutional diffusion models to measure agreement.
Theoretical Claims: The theoretical claims seem to be correct. Ultimately, the empirical results show that the derived analytic solution closely matches with the true outputs from convolutional diffusion models.
Experimental Designs Or Analyses: The experimental designs and analysis are sound. The analysis of spatial inconsistencies from excess late-time locality is especially insightful, providing an explanation for the well-known phenomenon that diffusion models struggle in generating limbs.
Supplementary Material: I have reviewed all of the supplementary material. Details on notation, background on diffusion models, in-depth derivations, details on the models and padding methods, and generated samples are provided.
Relation To Broader Scientific Literature: The paper gives care to discussing previous work that shows that the ideal diffusion model should memorize its data. This provides a nice setup for the contributions of this paper, which explain why convolutional diffusion models don't in fact memorize their data, and essentially compose patches from different training examples.
Essential References Not Discussed: The paper has done due diligence to cite related works such as Kadkhodaie et al., 2023a and concurrent work (Niedoba et al., 2024). So, I do not see any issues in missing essential references.
Other Strengths And Weaknesses: **Strengths**
* I think this is a very strong paper that can have a big impact since it provides a theory for why convolutional diffusion models can generalize. This has implications for data attribution, fixing artifacts from diffusion model samples, improving adherence to conditioning, etc. The paper is also very well-written. As a non-theorist, the development of the ELS machine was very intuitive, and I appreciated the illustrations such as Figure 3 to build an intuition. I provide some small suggestions below.
**Suggestions**
* How important is each constraint in the analytic solution? It is mentioned in the paper that the equivariance constrained machine can only "generate training set images globally translated to any other location." Is it possible to ablate each constraint and observe the correlation with the diffusion samples?
* I think Fig. 1 can be further improved by including a column of the input noise. This will emphasize the message that the analytic theory predicts almost the same output as a convolutional diffusion model given the same input noise.
* I understand that the theory can mainly explain only toy, small-scale settings such as MNIST, CIFAR, and Fashion-MNIST. But there is not much compositionality in these datasets. CIFAR is a difficult dataset on which to generate cohesive images, so it is hard to see any "creativity" since it is difficult to see any semantics in the images. If possible, I think applying the ELS machine on CelebA-64x64 (or even 32x32 and grayscale) can produce very convincing results. It is a fairly homogeneous dataset, so fitting a convolutional diffusion model should not be difficult. Also, there is a lot of room for compositionality since facial expressions (e.g., smile) or accessories (e.g., glasses) can be patches that are combined with other patches to demonstrate "creativity." I think this would make it more convincing beyond the theory community.
Other Comments Or Suggestions: In the Supplementary Section D. Samples, a blank page is followed by the samples on the next page.
Questions For Authors: Please see **Suggestions** above.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and suggestions. We address some of these below:
>How important is each constraint in the analytic solution? […] Is it possible to ablate each constraint and observe the correlation with the diffusion samples?
An equivariant score machine on its own will only generate memorized training examples, similar to the ideal score machine, but translated by a random amount. This is highly discrepant from the observed data, especially in the presence of boundary conditions that fix the spatial position of at least the edges of the image. Due to time constraints we have not yet performed the ablation of locality; however, for the reason outlined above, we expect a priori that it will perform uniformly worse across all categories than even the ideal-score baseline.
To address the part of the reviewer’s question about the performance of the locality constraint without equivariance, we discuss several aspects. Firstly, fully-equivariant circularly-padded models display features clearly inconsistent with patches knowing their position. This is shown clearly in the samples from the circularly-padded ResNet on MNIST, which include line segments that intersect the boundary, a feature that is not present in any training example in MNIST and cannot be elicited from the LS Machine.
Secondly, we evaluated the correlation between the outputs of each model we studied in the paper and the outputs of a corresponding LS Machine. The quantitative results of these results are summarized in the table below:
|Model|LS/CNN r^2|ELS/CNN r^2|
|---|---|---|
|MNIST/UNet/Zeros|0.83|0.84|
|CIFAR10/UNet/Zeros|0.80|0.82|
|FashionMNIST/UNet/Zeros|0.91|0.91|
|MNIST/ResNet/Zeros|0.84|0.94|
|MNIST/ResNet/Circular|0.33|0.77|
|CIFAR10/ResNet/Zeros|0.86|0.90|
|CIFAR10/ResNet/Circular|0.80|0.90|
|FashionMNIST/ResNet/Zeros|0.88|0.90|
|CIFAR10/UNet+SA|0.74|0.75|
We find that the ELS machine uniformly outperforms the LS machine for ResNets, and performs above or at par with the LS machine for UNets, but the discrepancy is small for zero-padded models, indicating that locality is the main factor. Qualitatively, however, the LS machine samples are "grainy" and the ELS samples are visually much better, which is not reflected in this metric.
We found one outcome, in response to the suggestion that we study CelebA, which did not follow the trend described above. We describe the results below.
>If possible, I think applying the ELS machine on CelebA-64x64 (or even 32x32 and grayscale) can produce very convincing results.
We strongly thank the reviewer for this suggestion. To address it, we trained a ResNet and a UNet model on CelebA-32x32 grayscale. Our analysis is not yet complete due to the large computational expense required for the calibration and generation process, but we have been able to perform a preliminary analysis of the LS Machine and ELS Machine, using a) a reduced dataset and b) manually calibrated scales. In these experiments, we have so far found the following.
1) The ELS machine matches the ResNet model, with median r^2 ~ 0.96. However, qualitatively, the samples produced by the ResNet are of poor quality, similarly to CIFAR10.
2) The UNet model trained on this dataset is able to produce recognizable faces. We found, to our surprise, that it appears to be better fit by the *LS Machine* (r^2 ~ 0.92) as opposed to the ELS Machine (r^2 ~ 0.90). The LS Machine captures the placement of the nose, eyes, and mouth in the image; these details are not captured by the ELS machine, which generates similar images overall but which lacks these key human-interpretable features.
We believe that the explanation for this behavior is as follows. The ResNet is shallow and has a *hard* locality constraint. The UNet receptive field size is formally very large and includes the image border everywhere; the fact that it exhibits local and equivariant behavior is an *emergent phenomenon* that requires pixels to use less information than maximally possible. Restoring positionality while preserving locality is akin to reintroducing some but not all of the information that is discarded.
More experiments will be needed to study this capability, but we believe that this will not be fully achievable within the timeframe of the revisions and defer this to future work. However, we intend to incorporate the additional findings on CelebA into our paper in the final revision.
>I think Fig. 1 can be further improved by including a column of the input noise.
We could in principle pick images for the figure such that the same initial noises could be used for black and white images; however, CIFAR10 images have 3 channels, while FMNIST and MNIST have only one, so we would always need at least two additional columns of noise. We feel that the addition of these multiple columns would dilute the visual impact of the initial figure; thus, if the reviewer agrees, we would like to retain the figure design for figure 1.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. The CelebA results will be a nice addition to the paper. While more experiments to validate your hypothesis for why the LS Machine outperforms the ELS Machine for the UNet are not necessary, the speculation on why this phenomenon may be occurring is still appreciated.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their comment.
We would like to make two additional comments with regards to recent results that we found while performing an additional ablation experiment during the recent revisions process.
Firstly, we were able to achieve high performance on CNNs trained on CelebA-32x32 with color, and intend to report those results instead of the results on the grayscaled dataset. We are unfortunately unable to report the final correlation numbers for this dataset at this point pending completion of calibration, but preliminarily it looks like the performance of the ELS and LS models on ResNet/UNet respectively should be in line with the performance on the CIFAR10 models.
Secondly, we had initially picked a relatively short set of timesteps (20) for our reverse process integration, and used the same timesteps for both the ELS machine and the compared CNNs. The primary justification for this decision was the large computational cost of running the ELS machine. We assumed that the highest fidelity theory/experiment agreement would be between the trajectories simulated with the same number of timesteps for both the ELS and the CNN model.
We found however in a recent ablation that we could increase the median theory/experiment correlations across the board, including by up to 8%, using a much finer discretization (150 timesteps) for the CNNs, while keeping the ELS timesteps fixed at 20. We have not attempted to compute ELS machine outputs with a similarly large number of timesteps, which would require a large amount of computational resources.
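As a toy illustration of why the reverse-process step count matters, here is a minimal Euler integrator for the variance-preserving probability-flow ODE; this is a generic sketch under standard DDPM conventions, and the function name and unit-step time grid are our own, not the paper's sampler. Coarser time grids incur larger integration error, consistent with the gains reported for 150 vs. 20 steps:

```python
import numpy as np

def reverse_probability_flow(score, x, betas):
    # Euler discretization of the variance-preserving probability-flow ODE,
    #   dx/dt = -1/2 beta(t) x - 1/2 beta(t) score(x, t),
    # integrated backward from t = T to t = 0 with one unit step per index.
    # Fewer steps means a coarser grid and larger integration error.
    for t in reversed(range(len(betas))):
        b = betas[t]
        drift = -0.5 * b * x - 0.5 * b * score(x, t)
        x = x - drift  # step backward in time (dt = -1 per index)
    return x
```

As a sanity check, when the data distribution is standard normal the exact score is `-x`, the drift cancels exactly, and the trajectory is constant regardless of the discretization.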
The specific quantitative values are as follows:
|Model|ELS/CNN r^2 (150 steps)|ELS/CNN r^2 (20 steps)|
|---|---|---|
|MNIST/UNet/Zeros|0.89|0.84|
|CIFAR10/UNet/Zeros|0.90|0.82|
|FashionMNIST/UNet/Zeros|0.93|0.91|
|MNIST/ResNet/Zeros|0.94|0.94|
|MNIST/ResNet/Circular|0.77|0.77|
|CIFAR10/ResNet/Zeros|0.95|0.90|
|CIFAR10/ResNet/Circular|0.94|0.90|
|FashionMNIST/ResNet/Zeros|0.94|0.90|
|CIFAR10/UNet+SA|0.77|0.75|
We intend to report both the older numbers as well as these newer "high-compute" numbers. | Summary: This paper presents an analytic theory of generalization in convolutional diffusion models. It identifies that, given a finite empirical dataset, the optimal score function produces a perfect reverse diffusion process, leading to replicas of training samples. The paper then hypothesizes that the creativity of real trained diffusion models arises from the inductive biases of parametric models (e.g., convolutional neural networks). Assuming equivalence and locality properties of the score function in image generation, the authors derive an analytic score model. This enables a case-by-case study of parametric score models (e.g., CNNs, ResNets), showing impressive correlation with actual trained models. This work provides a transparent interpretation of how diffusion models generate novel images.
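The "optimal score function" that this summary refers to has a well-known closed form: under a DDPM-style forward process $x_t = \sqrt{\bar\alpha_t} x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$, the smoothed marginal is a mixture of Gaussians centered at rescaled training points, and its score is a softmax-weighted pull toward those centers. A minimal numpy sketch (illustrative, not the paper's code):

```python
import numpy as np

def ideal_score(x, train, abar):
    # Score of the Gaussian-smoothed empirical distribution
    #   p_t(x) = (1/N) sum_i N(x; sqrt(abar) * x_i, (1 - abar) I):
    # a softmax-weighted pull toward the rescaled training points.
    s2 = 1.0 - abar                         # noise variance at this time
    mu = np.sqrt(abar) * train              # (N, D) rescaled training points
    d2 = ((x[None, :] - mu) ** 2).sum(1)    # squared distance to each center
    logw = -d2 / (2.0 * s2)
    w = np.exp(logw - logw.max())
    w = w / w.sum()                         # softmax responsibilities
    return (w[:, None] * (mu - x[None, :])).sum(0) / s2
```

Following the reverse process with this score drives each sample toward one rescaled training point, which is why the ideal model reproduces replicas of training samples.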
Claims And Evidence: The claims in this paper are clear and supported by theoretical derivations as well as convincing empirical evidence.
Methods And Evaluation Criteria: The evaluation primarily relies on $r^2$, as the main goal of this paper is an analytic interpretation of generation in diffusion models. The $r^2$ metric, along with visual comparisons, shows a strong correlation between the theory-predicted results and actual generations from trained parametric diffusion models.
Theoretical Claims: Yes, I checked most of the proofs (e.g., the derivation of the ELS machine), which appear correct.
Experimental Designs Or Analyses: The experimental design and analyses are sound. Perhaps one additional aspect could be explored: in real scenarios, two factors influence the final generation of diffusion models—the inductive biases inherent in the model family (e.g., U-Net) and those introduced by training dynamics (e.g., initialization, optimizers). I wonder whether there is a non-monotonic trend in $r^2$ during the training process, which could provide insight into how optimization choices introduce implicit inductive biases.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This work focuses on the generalization of generative models, which is a core topic in generative AI.
Essential References Not Discussed: Several existing works share the intuition of memorization in ideal score functions:
- Gu, Xiangming, et al. "On memorization in diffusion models." arXiv preprint arXiv:2310.02664 (2023).
- Somepalli, Gowthami, et al. "Diffusion art or digital forgery? investigating data replication in diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
Other Strengths And Weaknesses: This work is particularly interesting as it systematically investigates the failure of the ideal score function and studies the problem from a principled perspective. It also suggests the possibility of nonparametric generative models for high-dimensional data—provided we identify the correct inductive biases. The paper is well-written and convincing, with well-designed notations (although not consistent with common conventions).
One weakness is that while the theory predicts generation to some extent, there remains a clear quality gap. It is unclear whether an (approximate) theoretical solution is still feasible when considering more complex inductive biases, such as attention mechanisms.
Other Comments Or Suggestions: Typos:
- In Eq. (5): $\sqrt{\bar{\alpha}_t}\varphi(x)-\phi$ should be $\sqrt{\bar{\alpha}_t}\varphi(x)-\phi(x)$.
Questions For Authors: The method for determining patch sizes (e.g., Figure 4) appears somewhat heuristic. Did you attempt to derive it theoretically or formulate it as an optimization problem (e.g., minimizing the score-matching loss through cross-validation)? This could eliminate the need to rely on studying the saliency map of a pretrained model.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s feedback and the thoughtful suggestions that they made for our paper. Below we address some of these comments and suggestions.
>Perhaps one additional aspect could be explored: in real scenarios, two factors influence the final generation of diffusion models—the inductive biases inherent in the model family (e.g., U-Net) and those introduced by training dynamics (e.g., initialization, optimizers). I wonder whether there is a non-monotonic trend in r2 during the training process, which could provide insight into how optimization choices introduce implicit inductive biases.
In response to the reviewer’s questions about the importance of training dynamics and initialization in determining model outputs, we retrained our models several times with identical data but different initial seeds, and evaluated the dynamics of the r2 metric with the ELS outputs.
We found that the resulting generated outputs did not significantly vary between initialization seeds, with the median CNN-CNN pixelwise r2 between different post-training models achieving 0.9-0.97 on different datasets and architectures. We found that we could improve this somewhat by continuing training further and continuing to anneal the learning rate; e.g., training 600 epochs instead of 300 while continuing to decay the learning rate produced correlations typically > 0.95.
We found that the ELS/CNN r2 was somewhat oscillatory over short time ranges (if learning rates remained too high near the end of training), but the overall trend was monotonically upwards across the training process. The oscillations we attributed to an instability in the overall image intensities of the model-generated output during the training process, which eventually disappeared as we annealed the learning rate appropriately. This dynamical effect did not seem to affect the output of the optimization process provided the learning rate was decayed sufficiently over the course of the training process. We intend to defer a more detailed study to further work. But overall there is no strong indication that early stopping consistently helps with achieving a higher r^2 for these datasets.
>The method for determining patch sizes (e.g., Figure 4) appears somewhat heuristic. Did you attempt to derive it theoretically or formulate it as an optimization problem (e.g., minimizing the score-matching loss through cross-validation)? This could eliminate the need to rely on studying the saliency map of a pretrained model.
While we considered alternative options, we observed in our work (as shown in figure 4b) that the optimal patch size differed between the UNet model and the ResNet models. This showed that the optimal patch size for each model could not be an intrinsic statistical characteristic of the dataset, and therefore could not be deduced from the dataset alone, without additional characteristics of the post-training models.
We would like to emphasize that the approach we took to calibrate our theory to the model did not actually use the saliency maps of the post-training models, which were shown in Figure 4a primarily as ancillary evidence for the observed multi-scale behavior. Rather, we performed a direct fit by selecting the maximally performant ELS scale at each time step on a separate validation set. Performance was defined as the correlation between the ELS machine’s predicted score function at each time and the UNet’s or ResNet’s score function. We chose the patch-size scale of the ELS machine to maximize this correlation at each separate time. Details can be found in our Appendix. We hope this constitutes a principled way to calibrate the time-dependent scale of the ELS machine.
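The per-time calibration described above can be sketched as follows; the function name `calibrate_scales`, the data layout, and the variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def calibrate_scales(els_scores, target_scores):
    """Per time step, pick the ELS patch scale whose predicted score
    field best correlates (Pearson r) with the trained network's score
    on a validation set.

    els_scores:    dict mapping scale -> array (T, N) of flattened
                   ELS score predictions (hypothetical layout)
    target_scores: array (T, N) of UNet/ResNet score predictions
    """
    chosen = []
    for t in range(target_scores.shape[0]):
        best_scale, best_r = None, -np.inf
        for scale, preds in els_scores.items():
            r = np.corrcoef(preds[t], target_scores[t])[0, 1]
            if r > best_r:
                best_scale, best_r = scale, r
        chosen.append(best_scale)
    return chosen
```

The result is one scale per time step, which is what "maximize this correlation at each separate time" amounts to.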
BaxBench: Can LLMs Generate Correct and Secure Backends? | Accept (spotlight poster) | Summary: This paper introduces BAXBENCH, a novel benchmark for evaluating large language models' (LLMs) capabilities in generating correct and secure backend applications. The benchmark consists of 392 tasks spanning 28 scenarios implemented across 14 popular backend frameworks in 6 programming languages. BAXBENCH evaluates two key aspects: (1) functional correctness through comprehensive test cases and (2) security vulnerability through end-to-end exploitation attempts. The authors evaluate 10 state-of-the-art LLMs, including flagship models like OpenAI's o1 and Claude 3.5 Sonnet. The results reveal significant limitations: even the best-performing model (OpenAI's o1) achieves only about 60% on functional correctness, and more than half of the functionally correct programs generated by LLMs were found to be vulnerable to security exploits. Performance further degrades when using less popular backend frameworks. The authors position BAXBENCH as a rigorous evaluation of LLMs' ability to generate deployment-ready code that is both functionally correct and secure.
Claims And Evidence: 1. LLMs struggle with backend generation tasks: The evaluation across multiple models shows that even top models achieve at most 60% on functional correctness, providing strong evidence for this claim.
2. Security is a major concern: The authors demonstrate that over half of functionally correct solutions are vulnerable to security exploits, supporting their claim that security evaluation is critical and current LLMs are not yet ready for autonomous coding.
3. Framework familiarity matters: The results clearly show performance variations across different frameworks, with models performing better on popular frameworks and languages.
Methods And Evaluation Criteria: 1. Task diversity: The 28 scenarios across 14 frameworks provide a fine coverage of real-world backend development tasks.
2. Dual evaluation: Assessing both functional correctness and security is important, as both aspects are critical for deployment-ready code.
3. Metrics: The pass@k and sec_pass@k metrics are appropriate for measuring both correctness and security.
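For reference, pass@k is conventionally computed with the unbiased estimator of Chen et al. (2021); sec_pass@k presumably applies the same estimator with only generations that are both functionally correct and secure counted as passing. A minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k samples, drawn without replacement from n
    generations of which c pass, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# sec_pass@k would reuse the same estimator, with c counting only
# generations that are BOTH functionally correct and secure.
```

For example, with n = 10 generations of which c = 3 pass, pass@1 = 0.3 exactly.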
Theoretical Claims: The paper does not make theoretical claims requiring formal proofs. The work is primarily empirical in nature.
Experimental Designs Or Analyses: The experimental design is rigorous and appropriate:
1. Model selection: The authors evaluate 10 diverse state-of-the-art LLMs, including both open and closed-source models.
2. Task framework: The evaluation setup tests models on realistic backend development tasks requiring both functional correctness and security.
3. Prompt variations: The comparison between functionality-only and functionality+security prompts provides insights into how explicit instructions affect performance.
4. Exploit verification: The security exploits were iteratively refined on both LLM-generated and human-written solutions, increasing their reliability.
5. Analysis depth: The authors analyze performance variations across frameworks, languages, and scenario complexity, providing nuanced insights.
One minor concern is that the benchmark's exploits might not cover all possible security vulnerabilities, potentially making the sec_pass@k an overestimate of true performance. However, the authors acknowledge this limitation explicitly, stating their measured metrics provide an upper bound.
Supplementary Material: Yes, it contains the scaffolding code of the dataset.
Relation To Broader Scientific Literature: The paper positions itself well within the existing literature on LLM code generation capabilities. It addresses limitations in existing benchmarks like HumanEval, which focus on function-level code or algorithmic tasks rather than end-to-end applications.
Essential References Not Discussed: The paper has good coverage of relevant literature, but there's one significant recent work that should be discussed:
- SWE-Lancer (Miserendino et al., 2025): This benchmark evaluates LLMs on real-world freelance software engineering tasks worth $1 million in actual payouts. While BAXBENCH focuses specifically on backend applications, SWE-Lancer takes a broader approach to full-stack engineering with end-to-end tests. A comparison would strengthen the paper, as both aim to evaluate deployment-ready code generation capabilities.
BAXBENCH does have some unique advantages over SWE-Lancer, particularly in its security-focused evaluation and the systematic coverage of multiple backend frameworks. Acknowledging this related work and positioning BAXBENCH relative to it would strengthen the paper's contribution claims.
Other Strengths And Weaknesses: BAXBENCH makes valuable contributions by focusing on backend applications and combining functional correctness with security evaluation. This focus is distinct from other benchmarks and addresses important real-world concerns. Compared to SWE-Lancer, BAXBENCH offers several unique advantages:
- Dedicated focus on backend applications, which are security-critical
- Systematic evaluation of security vulnerabilities through real exploits
However, SWE-Lancer does offer some complementary strengths that BAXBENCH could learn from:
- Mapping to real economic value through actual freelance payments
- Broader coverage of full-stack engineering tasks
- End-to-end tests validated by professional engineers
Other Comments Or Suggestions: N/A
Questions For Authors: Given that SWE-Lancer also evaluates real-world software engineering capabilities using end-to-end tests, how do you position BAXBENCH's contribution relative to SWE-Lancer?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful review and overall positive assessment of our paper. We address their questions below.
**Q1: Can you discuss how SWE Lancer relates to BaxBench?**
We thank the reviewer for pointing us to SWE Lancer and we will gladly add a discussion on it in the next revision of our paper. However, we would like to highlight that the first public copy of this paper has only been made available one month *after* the ICML submission deadline. Thus, it was impossible for us to include it in the submission.
First and foremost, the key difference between BaxBench and SWE Lancer is that BaxBench does security evaluation, an aspect that is not explicitly considered by SWE Lancer, and especially not using practical exploits.
Regarding the other differences between the papers, we largely agree with the reviewer’s analysis, except that: (i) BaxBench, like SWE Lancer, also uses end-to-end tests verified by professional software engineers for functionality testing (and not only function-level unit tests as other benchmarks), and (ii) BaxBench’s backend tasks, like SWE Lancer’s, are also highly diverse—in fact, while SWE Lancer sources all of its tasks from the Expensify repository, making all tasks correlated, BaxBench scenarios are highly diverse and constructed independently from scratch.
Finally, BaxBench is fully contamination-free, as the scenarios have been constructed manually from scratch. In contrast, SWE Lancer’s tasks stem from an existing open-source repository that could have very well been in the training data of the models.
We will include this discussion in the next revision of the paper. | Summary: This paper introduces BaxBench, a benchmark for evaluating LLMs' ability in generating functionally correct and secure backends.
It evaluates LLMs in 28 scenarios and 14 frameworks and show that generating secure and correct backends is still challenging.
Claims And Evidence: Yes, I find the claims in the submission clear and convincing.
Methods And Evaluation Criteria: I find the evaluation pipeline a bit insufficient.
1) The number of scenarios is 28, which seems too few to me. I understand that by multiplying it with the number of frameworks, there are 392 tasks. However, I think the ability to deal with semantically different scenarios is also important and should be measured with more scenarios.
2) Lack of evaluation of agentic approaches. I understand that the authors are trying to evaluate the ability of LLMs. However, lots of evidence shows that agentic approaches can be much better at multi-file and larger-scale software engineering tasks. It would be great if the authors could provide some insights into how agentic approaches perform on BaxBench.
Theoretical Claims: N/A
Experimental Designs Or Analyses: I find it sound.
Supplementary Material: Yes. I took a closer look at the example of calculator in the appendix.
Relation To Broader Scientific Literature: Prior scientific literature mostly focuses on either security or functionality, but not both.
This paper tries to close that gap by providing a set of realistic backend development tasks that also have security implications.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: I think the evaluation pipeline and the data curation pipeline could be useful for future benchmark development.
Other Comments Or Suggestions: N/A
Questions For Authors: Do you have any suggestions about improving model's ability on BaxBench, and their ability to generate secure code in general?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and overall positive review, and address their questions below.
**Q1: Do the scenarios provide sufficient semantic diversity?**
Yes. The programs have to handle files, databases, access controls, OS commands, and external binaries (e.g., compilers, png, and pdf tools). The large spread of the models’ performances signifies this diversity, while the complexity is evidenced by the low overall correctness across all models. However, the ultimate goal of BaxBench is to measure the LLMs’ performance on security-critical backend tasks, with other functionality aspects (e.g., algorithmic complexity) already covered by existing benchmarks. Therefore, we do not claim that it covers *all* possible backend coding tasks, but it does cover most security-relevant tasks for web application backends. For a discussion on security coverage we refer to the reply to Q2 of reviewer WvVR.
Further, we believe that in comparison to current standard functional correctness and especially security benchmarks, BaxBench’s semantic diversity and volume is highly competitive:
- SWE-Bench and SWT-Bench [1,2] (agent-functionality) are based on only 12 Python repositories.
- SWE-Lancer [3] (functionality) is based on a single TypeScript repository.
- SafeCoder [4] (function-level, security-only) provides 42 security-only evaluation cases across 6 programming languages (avg. 7 per language).
- SVEN [5] (function-level, security-only) has 24 security-only evaluation cases across 2 languages (avg. 12 per language).
- CWEval [6] (function-level, security + correctness) contains 119 tasks across 5 languages, translating examples from 25 core tasks.
We thank the reviewer for their insightful question, and will add this discussion in the next revision.
**Q2: Can you evaluate coding agents?**
Upon the reviewer’s request, we tested the most advanced open-source general coding agent, OpenHands (OH) powered by GPT-4o and Claude 3.5 Sonnet.
We use our testing environments for the agent, excluding the Python-Django and Compiler environments as these are incompatible with the OH base image. We compare the agent performance against the base models on the remaining 351 tasks:
|Model|sec_pass@1|pass@1|+Security Oracle Prompt $\rightarrow$|sec_pass@1|pass@1|
|-|:-:|:-:|-|:-:|:-:|
|Claude 3.5 Sonnet|31.4|52.4||34.3|39.7|
|Claude 3.5 Sonnet + OH|31.6|59.3||38.2|44.2|
|GPT-4o|21.0|43.7||27.6|36.2|
|GPT-4o + OH|16.6|38.1||23.2|33.5|
Surprisingly, we see that the agent only provides an improvement on Claude 3.5, with GPT-4o (August’24 version) performing worse in the agent—the weaker model is overwhelmed by the agentic framework, making more functional mistakes. However, also on Claude, the improvement is not as drastic as one may expect (max 6.9%). Even the agent struggles strongly with BaxBench tasks, especially when it comes to security. We hypothesize that this is partially due to the limitations of the underlying LLMs, and the nature and complexity of the task at hand, differing from typical agent benchmarks focused on working in large pre-existing repositories (here, the agent needs to build from scratch, use different frameworks, write its own tests, etc.). This shows once again that fundamental development towards secure backend coding is needed.
We thank the reviewer for the interesting suggestion which will make a valuable addition to the next revision of the paper.
*Note:* We did not test on agentic IDEs such as Cursor and Windsurf as they are not suited for large-scale experiments. Further, we did not test Claude Code due to cost budget limitations. We will release the code in BaxBench for independent evaluations of Claude Code.
**Q3: How could models be improved on BaxBench?**
Please see the answer to Q1 of reviewer p9gW for a general discussion on the errors the models make on BaxBench. To improve on these errors, we believe incorporating code security during model training is crucial. For example, during post-training, a low amount of high-quality data could be utilized to steer the base model towards secure code, similarly to [4,7]. We will discuss these learnings and future directions in the next revision of our paper.
**References**
[1] Jimenez et al., SWE-bench: Can Language Models Resolve Real-World GitHub Issues?. ICLR 2024.
[2] Mündler et al., SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents. NeurIPS 2024.
[3] Miserendino et al., SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?. arXiv 2025.
[4] He et al., Instruction Tuning for Secure Code Generation. ICML 2024.
[5] He & Vechev. Large language models for code: Security hardening and adversarial testing. CCS 2024.
[6] Peng et al., CWEval: Outcome-driven Evaluation on Functionality and Security of LLM Code Generation. LLM4Code@ICSE’25.
[7] Xu et al., ProSec: Fortifying Code LLMs with Proactive Security Alignment. arXiv 2025. | Summary: This paper introduces a benchmark for assessing large language models' abilities to generate application backend code. The benchmark comprises 28 distinct scenarios across 14 popular backend frameworks that specify application requirements, API specifications, environment instructions, and database needs. Evaluation occurs through both functional testing and security vulnerability assessment. Each scenario is equipped with a set of security exploits targeting targeting around 3.3 CWEs. The author extensively evaluates the performance of many popular LLMs across both closed and open models.
Claims And Evidence: The authors claim that current leading models struggle with both correctness and security in backend code generation. Their evidence shows that often nearly half of functionally correct solutions contain security vulnerabilities. They demonstrate that security-focused prompting helps reduce vulnerabilities but doesn't eliminate them completely.
Methods And Evaluation Criteria: The methodology involves presenting LLMs with detailed application specifications and evaluating the generated code through automated functional tests. For security assessment, they develop scenario-specific exploit code targeting common weakness enumerations (CWEs). This evaluation approach provides tests for both functionality and security.
Theoretical Claims: N/A: The paper proposes a benchmark for LLM code generation on backend application scenario.
Experimental Designs Or Analyses: The experimental design covers 14 popular backend frameworks, providing broad coverage of real-world development environments. They designed specific exploits to test the security vulnerability which can test specific CWEs.
Supplementary Material: The appendix provide more details about the benchmark dataset and the specific scenarios.
Relation To Broader Scientific Literature: This work contributes to the growing body of research on LLM code generation capabilities and limitations, with a particular focus on the critical area of backend security that has been underexplored in previous benchmarks. This is especially important as many LLM users today may deploy LLM-generated backend application code into the wild.
Essential References Not Discussed: The related work are discussed.
Other Strengths And Weaknesses: Strengths: The benchmark addresses a practical and important use case as more developers rely on LLMs for backend code generation. The security focus is particularly valuable given the high stakes of vulnerabilities in production systems if the LLM users use them in real world.
Weaknesses: The evaluation could be expanded to include LLM agents, which are increasingly used for development tasks and are already widespread in tools like Windsurf or Claude Code. This would provide a more complete picture of how current AI tools perform in real-world development scenarios. Additionally, the number of security exploits per scenario appears limited, which may not comprehensively capture the full range of potential vulnerabilities that could exist in real-world applications.
Other Comments Or Suggestions: Consider extending this work to evaluate how LLM agents perform on these tasks, as they represent an increasingly common development approach. Please also consider extending the CWEs tested per scenario.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and constructive review and address their questions below.
**Q1: Can you evaluate coding agents?**
Upon the reviewer’s request, we tested the most advanced open-source general coding agent, OpenHands (OH) powered by GPT-4o and Claude 3.5 Sonnet.
We use our testing environments for the agent, however, we found that our Python-Django and Compiler environments are incompatible with the OH sandbox base image. Therefore, we exclude this framework and scenario. We compare the agent performance against the base models on the remaining 351 tasks:
|Model|sec_pass@1|pass@1|+Security Oracle Prompt $\rightarrow$|sec_pass@1|pass@1|
|-|:-:|:-:|-|:-:|:-:|
|Claude 3.5 Sonnet|31.4|52.4||34.3|39.7|
|Claude 3.5 Sonnet + OH|31.6|59.3||38.2|44.2|
|GPT-4o|21.0|43.7||27.6|36.2|
|GPT-4o + OH|16.6|38.1||23.2|33.5|
Surprisingly, we see that the agent only provides an improvement on Claude 3.5, with GPT-4o (August’24 version) performing worse in the agent—the weaker model is overwhelmed by the agentic framework, making more functional mistakes. However, also on Claude, the improvement is not as drastic as one may expect (max 6.9%). Even the agent struggles strongly with BaxBench tasks, especially when it comes to security. We hypothesize that this is partially due to the limitations of the underlying LLMs, and the nature and complexity of the task at hand, differing from typical agent benchmarks focused on working in large pre-existing repositories (here, the agent needs to build from scratch, use different frameworks, write its own tests, etc.). This shows once again that fundamental development towards secure backend coding is needed.
We thank the reviewer for the interesting suggestion which will make a valuable addition to the next revision of the paper.
*Note:* We did not test on agentic IDEs such as Cursor and Windsurf as they are not suited for large-scale experiments. Further, we did not test Claude Code due to cost budget limitations. We will release the code in BaxBench for independent evaluations of Claude Code.
**Q2: Please comment on the CWE coverage per scenario.**
To address the reviewer’s question, we conduct a deeper investigation into CWE coverage.
First, we compare the CWEs that we test for with the CWEs reported by the SOTA SAST tool, Snyk Code, on OpenAI o1’s correct solutions. This analysis allows us to assess the comprehensiveness of our exploit attempts (i.e., the CWE coverage) per scenario. We find that for 24 of the 28 scenarios, we test for all CWEs that Snyk reports and further ones beyond. For the remaining 4 scenarios, Snyk and our tests overlap on all but one CWE. Looking at these cases, we see that this difference stems from Snyk testing for CWE-400 (uncontrolled resource consumption) where we instead test for CWE-703 (improper error handling), raised by crashing the server, which includes crashes due to resource overconsumption. Note also that Snyk raises CWE-400 for rate-limit issues, which are usually handled at the architectural level [2] and which we do not consider relevant at the application level.
Beyond scenario-level exploit coverage, we also investigate how many insecure programs we actually catch compared to Snyk. For this, we examine the Snyk reports on the 237 correct programs generated by OpenAI o1 and compare them to our exploit reports. We find that Snyk misses 64 (27%) vulnerabilities (i.e., where we could execute real exploits), while marking 25 (10.55%) programs vulnerable that were not marked by our exploits. Manually analyzing these 25 programs, we find that 16 (6.75%) are false positives and 2 concern rate limits, which are architecture-level concerns, amounting to 7 (2.95%) correct additional flags across 3 CWEs. Note that among these 7, we actually test for each of the raised CWEs; merely, our exploit attempts do not succeed. We will extend our exploit vectors accordingly.
Our analysis shows two things: (i) our exploit attempts are targeting a comprehensive set of CWEs per scenario; and (ii) our exploits are far more reliable than static analysis for actually detecting insecure code (Snyk 6.75% false positives and a whopping 27% false negatives).
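As a sanity check, the percentages quoted above follow directly from the raw counts (a quick script, assuming the 237-program denominator throughout):

```python
# Reproducing the rates from the raw counts reported in the rebuttal
# (237 functionally correct o1 programs).
total = 237
snyk_missed = 64          # exploited by BaxBench, not flagged by Snyk
snyk_extra = 25           # flagged by Snyk, not exploited by BaxBench
false_pos = 16            # of the 25, confirmed false positives
rate_limit = 2            # architecture-level concerns, excluded
true_extra = snyk_extra - false_pos - rate_limit  # remaining valid flags

assert true_extra == 7
print(f"{snyk_missed / total:.0%}")   # 27%
print(f"{snyk_extra / total:.2%}")    # 10.55%
print(f"{false_pos / total:.2%}")     # 6.75%
print(f"{true_extra / total:.2%}")    # 2.95%
```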
Overall, we believe the above is strong evidence showing that BaxBench’s security coverage is both extensive and largely comprehensive. On a benchmark level, as also detailed in Table 4 in the appendix, we cover the most likely and critical security issues, based on the popular MITRE Top 25 and OWASP Top 10. Therefore, our benchmark is suited for supporting our key message of the paper—current models’ code is dangerously insecure.
**References**
[1] Yang et al., SecCodePLT: A Unified Platform for Evaluating the Security of Code GenAI. arXiv 2024.
[2] Serbout et al., API Rate Limit Adoption -- A pattern collection, EuroPLoP 23. | Summary: This paper introduces BAXBENCH, a benchmark to evaluate LLM-based generation of correct and secure backend applications. Functionality of generated code is validated through testing, while security is evaluated through end-to-end exploits. The authors evaluated 10 LLMs on BAXBENCH and found that even the best model achieves only 60% on code correctness. Exploits were successful on more than half of the correct programs.
Claims And Evidence: While the proposed benchmark shows that various LLMs have varying levels of success, thus achieving its key function as a benchmark, I wonder how useful the benchmark is by itself, since it does not help attribute success to particular factors (e.g., framework, complexity of task). For example, given the observation from Fig. 4 that "The model struggles more with less popular programming languages and multifile frameworks", how would one use the results from the benchmark to guide improvements to a model? Should the model be trained on particular frameworks? On particular tasks?
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are suitable for evaluating the generation of correct and secure backend code.
Theoretical Claims: -
Experimental Designs Or Analyses: The experimental setup is well-defined, and the results are presented in a clear and detailed manner. The statistical analyses used, such as pass@k and sec_pass@k, are popular metrics in the field.
There are few issues to consider:
* Complexity of scenarios: The scenarios in BAXBENCH might not cover all possible types of backend applications or security vulnerabilities. It would be good to go into more depth about the selection process and how representative these scenarios are. Additionally, measuring scenario complexity by the length of the scenario specification can be misleading.
* Evaluation of security: It is possible that some vulnerabilities may be missed, when testing against a fixed set of exploits. Static code analysis could complement this approach.
Supplementary Material: Appendices A-D
Relation To Broader Scientific Literature: This benchmark may prove useful to future whole-app generation efforts.
Essential References Not Discussed: -
Other Strengths And Weaknesses: * Is there any guidance on how "resilient" this benchmark may be to memorization in future LLMs?
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful review and overall positive assessment. We address their questions below.
**Q1: How can models be improved on BaxBench?**
To understand functionality challenges in BaxBench, we manually investigate 20 incorrect programs generated by OpenAI o1, and find that the model often fails on trivial, boiler-plate tasks, such as adhering to the requirements set in the API specification, adhering to formats and response codes, handling files, setting CLI flags, or producing compilable and executable code. The simplicity of these errors is surprising given the success of LLMs on algorithmic benchmarks. We believe this is due to the focus on algorithmic coding performance in model development and the prioritization of the most popular languages.
Further, models often produce vulnerable code. Meanwhile, when prompted with potential vulnerabilities, the model's vulnerability rates decrease, at the cost of functional correctness. Reasoning models’ correctness decreases much less. This crucial observation highlights the capacity of reasoning models to generate correct, secure code.
In terms of concrete improvements, we believe the gathered insights can be used in the post-training phase, where a low amount of high-quality data could be utilized to steer the base model towards secure code, similarly to [2,3]. The lack of such considerations has already led to commercially deployed exploitable applications [1].
**Q2: Can you provide more details on the coverage and complexity of BaxBench scenarios in terms of backend tasks and security?**
BaxBench contains highly diverse scenarios, requiring correct handling of files, databases, access controls, OS commands, and external binaries (e.g., png and pdf tools). Regarding complexity, we do not aim to measure algorithmic difficulty, already covered by other benchmarks (e.g., HumanEval), but rather application logic, e.g., registration and login, handling server communication, or storing of user messages in databases. The large spread of models’ correctness over scenarios highlights the variety in our tasks.
In terms of security coverage, we cover the most impactful security vulnerabilities, collected in the MITRE TOP 25 and OWASP Top 10 lists. We also achieve high security coverage on a scenario level. We verified this by comparing to the industry-leading SAST tool Snyk-Code on a model different from what we used for development. More details in Q3.
**Q3: Could static analysis complement BaxBench’s exploits?**
To address the reviewer’s question, we conduct an experiment comparing our exploits to the industry leading SOTA SAST tool Snyk-Code on the 237 correct programs produced by OpenAI’s o1. We find that Snyk misses 64 vulnerable programs exploited by our tests, i.e., has at least 27% false negative rate, while marking 25 (10.55%) programs vulnerable that we did not exploit. Through manual analysis we find that 16 (6.75%) of these are false positives and 2 concern rate limits which are often handled outside of the application, amounting to 7 (2.95%) correct additional flags across 3 CWEs. Note that even for these 3 CWEs, we already make exploit attempts, merely, our attack inputs do not succeed.
Due to the high amount of false negatives and false positives, we consider SAST tools unsuitable for benchmarking. However, we consider them useful for guiding exploit design and will add test cases for the 7 correctly discovered exploits.
In addition, SAST tools limit reproducible benchmarking, as the best tools keep constantly changing, are not open-source, and introduce a dependency on external tool providers.
Our manual exploits cover the most important threat vectors, and are clearly sufficient to show that current models’ code is strongly subpar—supporting our key message. Moreover, they guarantee the absence of false positives.
**Q4: How resilient is this benchmark against contamination in future LLMs?**
Once a benchmark is public, we believe the only guaranteed way to avoid contamination is to continuously update the benchmark. As stated in the paper, we plan to continuously add new scenarios and frameworks to BaxBench.
Note that future models are less likely to be contaminated by BaxBench: (i) our tasks are not based on existing code (unlike, e.g. SWE-bench [4]), (ii) accidental contamination is difficult as we do not release golden solutions, publishing only the execution framework and the prompts; and (iii) malicious contamination requires careful curation of such golden solutions.
We thank the reviewer for the thoughtful question, and will add a discussion in the next revision.
**References**
[1] https://x.com/tedx_ai/status/1901640901505827148
[2] He et al., Instruction Tuning for Secure Code Generation. ICML 2024.
[3] Xu et al., ProSec: Fortifying Code LLMs with Proactive Security Alignment. arXiv 2025.
[4] Jimenez et al., SWE-bench: Can Language Models Resolve Real-World GitHub Issues?. ICLR 2024. | null | null | null | null | null | null |
Ensemble Learned Bloom Filters: Two Oracles are Better than One | Accept (poster) | Summary: This paper introduces Ensemble Learned Bloom Filters (ELBF), an approach to improving the performance of Learned Bloom Filters (LBF) by leveraging multiple learning oracles of smaller size instead of a single large oracle. The authors formulate the ELBF design as a combinatorial optimization problem: given a pool of oracles and a total space budget, the goal is to select a subset of oracles and determine the sizes of their backup Bloom filters to minimize the overall false positive rate (FPR). The paper draws structural analogies between this problem and the Knapsack problem, developing a Knapsack-based approximate algorithm with proven (ε,δ)-optimality. The authors also propose an extended design, ELBF++, for scenarios with correlated oracles sharing a common backup filter.
Claims And Evidence: The claims made in the submission are generally supported by evidence:
1. The claim that ELBFs outperform standard LBFs under the same space budget is convincingly demonstrated through theoretical analysis and empirical results across multiple datasets.
2. The (ε,δ)-optimality claim for their Knapsack-based algorithm is supported by formal proofs in Theorem 3.2 and Appendix A.
3. The claim about ELBF++ being more effective for correlated oracles is supported by the theoretical analysis in Section 4 and corresponding experimental results.
Methods And Evaluation Criteria: I am not familiar with this specific area, so I am unable to evaluate whether the evaluation criteria are appropriate.
Theoretical Claims: As mentioned above, I am unable to evaluate the correctness of any proofs for the theoretical claims. If possible, I would suggest seeking the expert opinion of other reviewers who are familiar with this specific area.
Experimental Designs Or Analyses: As mentioned above.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper effectively positions itself within the broader literature on:
1. Bloom filters and their variants - Building on the original work by Bloom (1970) and subsequent extensions.
2. Learned Bloom filters - Acknowledging the foundational work by Kraska et al. (2018) and Mitzenmacher (2018).
3. Ensemble learning - Drawing connections to the broader machine learning literature on ensemble methods.
Essential References Not Discussed: 1. Recent work on adaptive Bloom filters that dynamically adjust their parameters
2. Connections to other probabilistic data structures like Count-Min Sketch and their learned variants, which would provide broader context.
Other Strengths And Weaknesses: Strengths:
1. The formulation of ELBF design as a combinatorial optimization problem is elegant and insightful.
2. The theoretical analysis is rigorous, with formal optimality guarantees.
3. The connection to the Knapsack problem provides a solid foundation for the algorithm design.
Weaknesses:
1. The experimental evaluation could include more diverse application domains beyond the specific data analysis tasks presented.
2. The discussion of practical implementation considerations, such as training the multiple oracles efficiently, could be expanded.
3. The hyperparameter sensitivity analysis could be more comprehensive, particularly regarding how to set optimal values in practice.
Other Comments Or Suggestions: I am not familiar with this specific area, and I will adjust my rating based on the feedback from other reviewers. If other reviewers possess more expertise in this field and provide more technically thorough reviews, their perspectives should take precedence over mine.
Questions For Authors: 1. How does the performance of ELBF scale with the number of available oracles? Is there a point of diminishing returns, and if so, how can practitioners determine the optimal number of oracles to use? This would help understand the practical limits of the ensemble approach.
2. The paper focuses on minimizing FPR under a fixed space budget. Have you explored other optimization objectives, such as minimizing space usage under a fixed FPR constraint? This alternative formulation might be more relevant for certain applications.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank you for your comments. They are mostly insightful and make us step backwards to rethink several design issues of our algorithm. Please find our response below.
**Answer to Q1.**
- **Performance of ELBF w.r.t. number of oracles $n_o$:** Theoretically (worst-case complexity bound), in the case of independent oracles ELBF scales as $O(n_o^2)$; in the case of correlated oracles ELBF++ scales as $O(n_o^3)$. Empirically, we observe that the running time scales close to linearly in $n_o$. In our opinion, this does not impose a significant burden, given that the number of available oracles is usually rather limited.
- **Number of oracles to use:** There are two parameters/variables here.
* If the number of available oracles is fixed, the number of oracles to use as well as which oracles to use are both computed by our algorithms. In fact, when we are provided a pool of oracles, our algorithm computes the set of oracles to use, which implicitly computes the number of oracles to use.
* If your comment regards the number of oracles available, our response is that it depends on the quality of the available oracles. We provide the following intuitive analysis. We can define a notion of dominance such that, for a pair of oracles $i$ and $j$, $i$ dominates $j$ if the size of $i$ is not larger than the size of $j$, the false positive rate of $i$ is not larger than that of $j$, and the false negative rate of $i$ is not larger than that of $j$. Given a pool of oracles, we can first perform pre-processing by removing all the oracles dominated by any other oracle in the pool. This pre-processing leaves us with only those oracles that are Pareto-optimal with respect to each other, and it is the number of such Pareto oracles that has real impact on the performance of our algorithm. As the parameters of oracles can vary from one to another, it is generally difficult to say how many Pareto oracles in the pool are "sufficient". To provide a reasonable quantitative response to your question, we proceed as follows: we randomly generate oracles with practically reasonable parameters (in terms of size, false positive rate, and false negative rate) and run our algorithm; we observe that in most cases no more than $20$ Pareto oracles are enough, and beyond this limit we do not obtain significant performance gains.
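The dominance-based pre-processing described above can be sketched as follows (a minimal sketch; the `(size, fpr, fnr)` tuple layout and the function name are illustrative, not the paper's API):

```python
def pareto_oracles(oracles):
    """Keep only the Pareto-optimal oracles in a pool.

    Each oracle is a (size, fpr, fnr) tuple. Oracle a dominates b when
    a is no worse than b on all three coordinates and the two differ;
    dominated oracles can never help the downstream selection, so they
    are discarded before running the ensemble-design algorithm.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    return [o for o in oracles
            if not any(dominates(other, o) for other in oracles)]
```

For example, an oracle that is both larger and less accurate than another is dropped, while oracles that trade size against error rates are all retained.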
**Answer to Q2.**
This is a very pertinent comment that is typically posed regarding multi-objective optimization. We have thought about this formulation in our research; it is symmetric to ours in that it interchanges the objective and the constraint. We make two clarifications.
- The main reason for analyzing the formulation that optimizes FPR under a space constraint is that most related work, against which we compare our algorithm, analyzed the same formulation. This makes the comparison fair and straightforward.
- The second formulation can be addressed by our algorithm. An intuitive, not necessarily optimal, approach is to gradually increase the space (we discretize the space) and invoke our algorithm to compute the minimal FPR corresponding to the space. Once the FPR reaches the target FPR constraint, we output the corresponding space and the result. This adapted algorithm solves the new formulation of the problem, subject to small error caused by discretization. | Summary: This paper studies Learned Bloom Filters (LBF), which enhance traditional Bloom Filters with a learned model (oracle) as a pre-filter. A key challenge in single-oracle LBFs is that the oracle’s size can become a bottleneck when the overall space budget is limited. To address this, the authors propose an ensemble approach that leverages multiple smaller learning oracles and optimizes the associated backup filters. They design and optimize ensemble LBFs for both independent and correlated oracles, demonstrating empirical performance improvements across three practical data analysis tasks. The main technical challenge of the paper is solving the combinatorial optimization problems required for LBF design. For independent oracles, they develop an approximate solution, while for correlated oracles, they propose a greedy heuristic.
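To make the knapsack analogy in the summary concrete, the textbook 0/1 knapsack DP looks as follows (purely illustrative: the integer costs and values stand in for oracle space budgets and FPR-related gains, and this sketch omits the size discretization and the $(\epsilon,\delta)$-optimality analysis the paper actually develops):

```python
def knapsack_select(items, budget):
    """Classic 0/1 knapsack dynamic program.

    items: list of (space_cost, value) pairs with integer costs.
    budget: total integer space budget.
    Returns (best_total_value, chosen_item_indices).
    """
    best = [0.0] * (budget + 1)           # best value achievable at each budget
    choice = [[] for _ in range(budget + 1)]
    for idx, (cost, value) in enumerate(items):
        # Iterate budgets in reverse so each item is used at most once.
        for b in range(budget, cost - 1, -1):
            if best[b - cost] + value > best[b]:
                best[b] = best[b - cost] + value
                choice[b] = choice[b - cost] + [idx]
    return best[budget], choice[budget]
```

The DP runs in $O(n \cdot B)$ time for $n$ candidates and budget $B$, which is why discretizing the continuous filter sizes is the key step in carrying the analogy over to ELBF design.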
Claims And Evidence: The theorems are mathematically proved, and the performance of the proposed algorithms is empirically examined against state-of-the-art baselines.
Methods And Evaluation Criteria: The proposed algorithms are empirically compared to baselines in terms of memory usage, false positive rate, and time overhead, which are standard evaluation metrics for Bloom filters.
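As background for why the false positive rate is the central metric here, recall the standard composition for a single-oracle LBF (Kraska et al., 2018; Mitzenmacher, 2018): a negative query becomes a false positive either when the oracle misfires, or when the oracle correctly rejects it but the backup Bloom filter misfires. Assuming the backup filter's errors are independent of the oracle's, the overall FPR is

$$ F \;=\; p_o + (1 - p_o)\, p_b, $$

where $p_o$ is the oracle's false positive rate and $p_b$ that of the backup filter; the ELBF setting extends this kind of composition to multiple smaller oracles under a shared space budget.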
Theoretical Claims: I did not verify the correctness of the proofs, but the proposed algorithms for the optimization problems are reasonable.
Experimental Designs Or Analyses: The experimental setup, baselines, performance measures, and datasets appear reasonable. Moreover, the experimental results align with the theoretical expectations.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The paper empirically compares the proposed Ensemble Learned Bloom Filters (ELBFs) with state-of-the-art baselines, including the canonical Bloom Filter (BF) without learning (Bloom, 1970) and several learned Bloom filters: LBF (Kraska et al., 2018), Ada-BF (Dai & Shrivastava, 2020), PLBF (Vaidya et al., 2021), and Fast PLBF++ (Sato & Matsui, 2023). The authors demonstrate improvements over these baselines in almost all experiments.
Essential References Not Discussed: I am not aware of any related works that are essential to understanding the key contributions of the paper but are not currently cited.
Other Strengths And Weaknesses: The idea of combining smaller oracles to overcome memory limitations is intuitive yet interesting and appears effective in practice. This approach offers a novel way to balance model size and accuracy in learned Bloom filters, which could inspire future research in space-efficient data structures.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for the positive feedback. We envision at least two future research directions related to this work: (1) developing theoretically proven algorithms for the correlated case, either exact or approximation algorithms, ideally with low complexity; (2) extending our idea of orchestrating multiple oracles to the design of other data structures to improve compactness. | Summary: This problem examines whether combining multiple learned oracles can generate a system with a lower false positive rate. In the first case, where each learned oracle is paired with a separate filter, the authors provide theoretical analysis to formulate the problem as a knapsack problem and use dynamic programming to select a configuration minimizing the final false positive rate. In the second case, where a few oracles correspond to one filter, the authors design a greedy algorithm to select the configuration.
update after rebuttal:
I noticed that both ELBF and ELBF++ are evaluated in section 6, so I have removed my previous comment on weaknesses. My score remains unchanged.
Claims And Evidence: The authors provide theoretical analysis and experiments to support their claims.
Methods And Evaluation Criteria: The experimental part is done by comparing false positive rate under limited memory budget, which makes sense to me.
Theoretical Claims: I have checked the theoretical proof in section 3, which looks good.
Experimental Designs Or Analyses: Yes, the experiment setup in section 6 looks good.
Supplementary Material: no
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: I am only familiar with one previous work Mitzenmacher 2018, which is listed in the paper, so the discussion of related work looks good to me.
Other Strengths And Weaknesses: Strength:
1. The idea of ensembling learned bloom filters seems to be new to me and the idea is supported by both theoretical and empirical evidence.
2. The theoretical analysis which uses lagrange multiplier to transform the configuration selection into a knapsack problem seems very nice and clean.
3. The proposed method is compared with many other baselines in a very straightforward experiment setup.
Weakness:
1. It seems that there is no theoretical guarantee for the correlated oracle case
Other Comments Or Suggestions: no
Questions For Authors: I wonder why the authors don't perform experiments to test their algorithm in section 4, as I understand that it may be too hard to derive theoretical guarantees for their greedy algorithm.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for the positive feedback. Our greedy algorithm developed in Section 4 is tested and evaluated in the primary experiments of Section 6 (Experiments), where it outperforms the baselines under various memory constraints. More in-depth experiments are presented in the Appendix due to the page limit for the main text.
Perceptual-GS: Scene-adaptive Perceptual Densification for Gaussian Splatting | Accept (poster) | Summary: This paper focuses on improving the perceptual quality of 3DGS by 1) extracting edges to represent perceptual sensitivity and embedding it into the primitives for supervision; 2) introducing additional densification strategy based on the sensitivity. Experiments show that the proposed method achieves SOTA performance on multiple standard datasets while retaining efficiency.
Claims And Evidence: The claims are supported in the evaluations. However, the transparency and interpretability can be improved.
Methods And Evaluation Criteria: The method and evaluation are technically suitable.
Theoretical Claims: No theoretical claims needing proofs are involved in the paper.
Experimental Designs Or Analyses: Exhaustive experiments and ablation studies were designed to verify the effect of the contributions. However, the reported results mainly focus on rendering quality, which limits interpretability. E.g., as in most 3DGS-related works, it's necessary to report more results on depth, primitive distribution, and even surface normals if convenient, to enhance the transparency and interpretability of the paper. For this paper, the rendered sensitivity map could also be visualized.
Supplementary Material: I have read all the supplementary material.
Relation To Broader Scientific Literature: None.
Essential References Not Discussed: Though the most related works about the specific track were involved, more 3DGS-related works with edge prior can be included for discussion like:
[1] Xiang, Haodong, et al. "Gaussianroom: Improving 3d gaussian splatting with sdf guidance and monocular cues for indoor scene reconstruction." arXiv preprint arXiv:2405.19671 (2024).
[2] Lin, Xin, et al. "HQGS: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes." The Thirteenth International Conference on Learning Representations.
Other Strengths And Weaknesses: Strengths:
1. The proposed approaches of improving perceptual quality stem from a simple edge extraction, of which the conciseness may bring invaluable new insights to the community.
2. Detailed experiments and ablations are designed.
3. Experiment results show the method can achieve SOTA performance while keeping high efficiency. It can also adapt to large-scale scenes like BungeeNeRF.
Weaknesses:
1. The qualitative results somewhat lack interpretability. As in most 3DGS-related works, it's expected to report more qualitative results on depth, primitive distribution, and even surface normals if convincing, to enhance the transparency and interpretability of the paper. For this paper, the rendered sensitivity map could also be visualized.
2. As summarized in Table 8, many additional hyperparameters are introduced. This raises a concern about the robustness for different scenes, despite some ablation studies conducted in Table 9.
Other Comments Or Suggestions: For suggestion, it's better to provide a demo video for such a 3D vision task to show the performance more intuitively and comprehensively.
Questions For Authors: See the weaknesses. I may raise the rating if the concerns are well addressed.
## **Update after rebuttal**
The most recent reply addresses my concerns. I'll keep my rating to be positive.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful response to our paper. We provide additional visualizations at https://akfwb.github.io/Perceptual-GS-Rebuttal/ and address specific points below:
**Q1. Interpretable Visualizations**
Thank you for your suggestion. Since our method is designed to optimize rendering quality rather than geometric structure, we do not include depth or normal maps in the original paper. However, based on your valuable feedback, we have provided additional visualizations in the supplementary link, including depth maps, primitive distributions, and rendered perceptual sensitivity maps. These visualizations will also be included in our revision.
**Q2. Discussion about Works with Edge Prior**
Although our method uses the Sobel operator, its goal is to identify perceptually sensitive regions, rather than extracting prominent edges. Therefore, we do not discuss 3DGS methods that rely on edge priors in the paper. Based on your suggestion, we will discuss and cite these works in the revision.
**Q3. Concerns about the Thresholds and Hyperparameters**
Our method does involve the use of certain thresholds and hyperparameters, which is a **common practice** among 3DGS-based approaches. Parameters such as loss weights and the interval of specific operations are essential for the proper execution of the method. However, unlike some prior works that require scene-specific manual tuning, our method adopts a **uniform set of hyperparameters across all experiments**, regardless of the dataset or the baseline it is combined with. This demonstrates the **generalizability and robustness** of our approach.
In addition, some thresholds, such as the high-sensitivity scene threshold and the threshold for scenes with sparse initial point clouds, are determined based on **statistical analysis**. Due to space limitations, these implementation details are not included in the main paper, but they are available at the provided link. Based on your suggestion, we will include these details in the revision.
Ablation studies and corresponding explanations, included in the Appendix, further demonstrate that the performance of our model is **not highly sensitive** to the specific values of these hyperparameters, indicating a reasonable degree of robustness. In the following tables, the selected values of each hyperparameter are highlighted in bold.
|$\lambda_S$|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|**0.1**|28.01|0.839|0.172|2.69M|
|0.3|27.82|0.835|0.181|2.10M|
|0.5|27.48|0.823|0.196|1.92M|
|$\tau^{\omega}_h$|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|10|28.05|0.841|0.166|3.61M|
|15|28.00|0.840|0.169|3.09M|
|**25**|28.01|0.839|0.172|2.69M|
|$\tau^{\omega}_m$|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|**10**|28.01|0.839|0.172|2.69M|
|15|27.98|0.838|0.173|2.65M|
|25|27.97|0.838|0.174|2.63M|
|$Iter_h$|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|**1000**|28.01|0.839|0.172|2.69M|
|1500|27.95|0.838|0.174|2.57M|
|2000|27.93|0.837|0.175|2.52M|
|$Iter_m$|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|1000|27.92|0.839|0.172|2.70M|
|**1500**|28.01|0.839|0.172|2.69M|
|2000|27.98|0.839|0.173|2.66M|
**Q4. Demo Video for Better Visualization**
Thank you for your suggestion. A demo video is available at the provided link for better visualization.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. However, the provided geometry results raise some additional concerns about the quality of the reconstruction. As shown in the depth maps, there is a lot of small noise near the camera, which looks very much like small noisy primitives left over after densification. There are also brush-like traces in the depth maps in textureless regions, which indicate incorrect geometry. I'm concerned about the robustness of these regions when the view changes. The bicycle video shows a relatively simple scene that many methods can reconstruct well, which is less persuasive evidence of performance. Though the paper aims at rendering quality, geometry is the core problem of the task, and geometry results reflect the real quality more faithfully than static RGB renderings or quantitative scores alone. I'm curious whether these concerns can be well addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt feedback. We address the remaining concerns as follows:
**Further Explanation of Geometric Reconstruction**
We first address the issue regarding minor noise and brush-like traces observed in the depth maps. In Figure 6 at the provided link, the visualizations of depth maps are generated following the approach **used in Mini-Splatting**: for each pixel, only the Gaussian primitive **with the highest weight** is considered. This approach is chosen to illustrate how **the density of Gaussian primitives** varies across regions with different perceptual sensitivities. Under this strategy, the depth value of each pixel is determined by the most contributing primitive, causing the depth map to **appear discontinuous**, as the depth varies abruptly between neighboring pixels dominated by different primitives, and can lead to visual artifacts such as minor noise and brush-like traces in the rendered depth maps. While this method helps to visualize primitive distribution, we acknowledge that **it may have caused confusion** about the geometric reconstruction quality.
To provide a more accurate assessment of our geometric reconstruction performance, we have re-rendered the depth maps using **the standard alpha-blending** method and compared our results against Pixel-GS. The updated results, which better reflect the geometric fidelity of our approach, are now available **in Figure 7 at the same provided link**, and **do not** exhibit the noise or brush-like traces present in the depth maps generated with the Mini-Splatting method. Due to the absence of explicit depth supervision, our method may underperform in depth rendering compared to approaches that incorporate depth constraints. However, compared to **other state-of-the-art methods** of the same type **without depth supervision**, our approach demonstrates **more accurate depth reconstruction** in certain regions.
From a methodology perspective, optimizing 3DGS from the **image view** and from the **geometry view** are two common yet distinct directions. Perceptual-GS belongs to the former, where the distribution of primitives is optimized based on the perceptual characteristics of the 2D image space. The core motivation is to guide densification using **human visual sensitivity** to different image regions. Other works in this category include Pixel-GS and Mini-Splatting, which consider the 2D projected size of primitives.
On the other hand, geometry-centric approaches often incorporate **additional geometric cues** during training, such as depth maps or normal maps. Examples include GSDF [r1] and GaussianRoom [r2]. We agree that geometric accuracy plays a crucial role in novel view synthesis tasks, and we consider this an important direction for our future research.
[r1] Yu, Mulin, et al. "GSDF: 3DGS Meets SDF for Improved Rendering and Reconstruction." In Proceedings of the Annual Conference on Neural Information Processing Systems, 2024.
[r2] Xiang, Haodong, et al. "Gaussianroom: Improving 3D Gaussian Splatting with SDF Guidance and Monocular Cues for Indoor Scene Reconstruction." arXiv preprint arXiv:2405.19671 (2024).
**More Demo Videos**
The bicycle scene from the MipNeRF 360 dataset is a highly representative example that has been widely used in prior works [r1], [r2] to demonstrate better performance. To better illustrate the effectiveness of our method, we include comparison with the original 3DGS on this scene. Additionally, based on your suggestion, we include a demo video of **the flowers scene**, which contains more complex textures, for further visualization. Our method significantly reduces blurriness and produces results that are more consistent with **human visual perception**.
[r1] Niedermayr, Simon, Josef Stumpfegger, and Rüdiger Westermann. "Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[r2] Mallick, Saswat Subhajyoti, et al. "Taming 3DGS: High-quality Radiance Fields with Limited Resources." SIGGRAPH Asia 2024 Conference Papers. 2024. | Summary: This paper proposes a 3D Gaussian Splatting (3DGS) optimization method that leverages additional perceptual sensitivity. By incorporating visual sensitivity, specifically edge response, the approach enables more fine-grained optimization of Gaussian representations. The perceptual sensitivity-adaptive densification strategy includes handling perceptually poor regions and scene-adaptive depth reinitialization. The experiments follow the existing pipeline.
Claims And Evidence: This paper argues that incorporating perceptual sensitivity into 3DGS can lead to more effective optimization, enabling higher-quality rendering with a limited number of Gaussians. The approach is motivated by the observation that the adaptive density control in conventional 3DGS struggles to perform densification effectively in regions with complex geometry. To address this issue, perceptual sensitivity is explicitly optimized and used to guide the densification process. An ablation study demonstrates that removing the proposed perceptual densification results in a performance drop of approximately 0.3 dB in PSNR.
Methods And Evaluation Criteria: This paper introduce four major modules.
1. **Perceptual Sensitivity Extraction**.
A binary map is generated using edge detection and smoothing. Thresholding is performed with a Sobel filter to extract perceptual sensitivity.
2. **Dual-Branch Rendering**.
A sensitivity parameter is introduced for each Gaussian. In simple regions, the number of Gaussians is constrained to improve efficiency. To account for multi-view consistency, the sensitivity term is learned, and during rendering, the sensitivity map is estimated through binary classification.
3. **Perceptual Sensitivity-Guided Densification**.
Densification is adapted based on perceptual sensitivity. The learned sensitivity values are used to determine gradient thresholding levels, ensuring more refined optimization.
4. **Scene-Adaptive Depth Reinitialization**.
Gaussian distributions are adjusted to improve rendering quality. Large Gaussians and those with high sensitivity undergo depth reinitialization using a mini-splatting approach.
The method is evaluated using rendering metrics such as PSNR, SSIM, and LPIPS, as well as the quality-efficiency balance (QEB) metric to assess trade-offs between rendering quality and computational efficiency.
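The Sobel-based extraction in module 1 can be sketched as follows (a minimal pure-Python sketch; the 3x3 kernels are the standard Sobel operator, but the threshold value is illustrative, and the smoothing and HVS-inspired enhancements the paper applies on top are omitted):

```python
def sobel_sensitivity(img, threshold=0.1):
    """Binary sensitivity map from thresholded Sobel gradient magnitude.

    img: 2D list of grayscale floats in [0, 1].
    threshold: illustrative cutoff; the paper's binarization is tuned.
    """
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])

    def px(i, j):  # clamp-to-edge padding at the borders
        return img[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]

    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            gx = sum(kx[a][b] * px(i + a - 1, j + b - 1)
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * px(i + a - 1, j + b - 1)
                     for a in range(3) for b in range(3))
            out[i][j] = 1 if (gx * gx + gy * gy) ** 0.5 > threshold else 0
    return out
```

On a vertical step edge, only the columns adjacent to the discontinuity exceed the threshold, mirroring how high-sensitivity regions concentrate around edges while flat regions stay unmarked.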
Theoretical Claims: No theoretical claims are made.
Experimental Designs Or Analyses: Experiments are conducted on the Mip-NeRF 360, Deep Blending, Tanks & Temples, and BungeeNeRF datasets. The proposed method is compared with approaches that optimize 3D Gaussian Splatting under limited computational resources.
Supplementary Material: Scene breakdown results are provided along with additional explanations.
Relation To Broader Scientific Literature: This paper introduces a method that adjusts gradient thresholding based on perceptual sensitivity to modify the optimization of Gaussians. It is related to approaches for optimizing Gaussians under limited computational resources and, more broadly, to methods for compressing the number of Gaussians.
Essential References Not Discussed: The opacity decline resembles the cloning function in Eq. 9 of 3DGS-MCMC. A comparison and discussion on this similarity are necessary.
[1] Kheradmand S., et al., "3D Gaussian Splatting as Markov Chain Monte Carlo", NeurIPS, 2024.
Other Strengths And Weaknesses: There are concerns about novelty since parts of the proposed method overlap with existing approaches. The perceptual sensitivity extraction modifies the score function measure from Taming-3DGS and applies it to thresholding, while depth reinitialization follows the approach used in Mini-Splatting, making the method appear more like an engineered combination of existing techniques.
QEB does not seem to be an intuitive metric. While it aims to represent an overall trade-off, it produces unfavorable conclusions for models like EAGLES [2] and Efficient GS [3], where the number of Gaussians is reduced significantly relative to the degradation in LPIPS. These models, characterized by a very low number of Gaussians and high FPS, yield a QEB score as low as 0.01, which fails to effectively capture the overall performance trade-off.
[2] Girish S., et al., "EAGLES: Efficient Accelerated 3D Gaussians with Lightweight EncodingS", ECCV, 2024.
[3] Lee, J. C., et al., "Compact 3D Gaussian Representation for Radiance Field", CVPR, 2024.
This paper introduces an adaptive densification strategy based on perceptual complexity, but it lacks a detailed analysis of the associated trade-offs. It is unclear how the distribution of Gaussians differs between texture-rich and the other areas, or how performance loss in regions where splitting does not occur effectively compares to the performance gain in perceptually complex areas.
Other Comments Or Suggestions: No additional comments.
Questions For Authors: Perceptual sensitivity ultimately relies on the Sobel filter, raising questions about the actual performance gain in highly complex regions such as grass. Additionally, in Figure 2, the entire image appears masked, making it unclear how the proposed method behaves differently from standard 3DGS in such cases.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful response to our paper. We provide additional visualizations at https://akfwb.github.io/Perceptual-GS-Rebuttal/ and address specific points below:
**Q1. Discussion of 3DGS-MCMC**
Eq. 9 in 3DGS-MCMC aims to **maintain** the opacity of a spatial region before and after cloning. However, as noted in our Appendix, cloned Gaussians are often poorly optimized. To avoid redundancy and mitigate negative effects during training, we apply **opacity decline** instead of preserving the original opacity as in 3DGS-MCMC. We will cite and discuss 3DGS-MCMC in the revision as suggested.
**Q2. Claims about the Novelty**
To the best of our knowledge, Perceptual-GS is the first approach to incorporate **explicit modeling of human visual perception** into 3D reconstruction, in addition to modeling color and geometry. While it shares certain similarities with prior works such as Taming-3DGS and Mini-Splatting, our method is fundamentally centered around **perceptual sensitivity**, leading to significantly different designs in both motivation and implementation.
In Taming-3DGS, complex edges are extracted using a Laplacian filter, and edge complexity is determined by aggregating pixel response values over the projected areas of Gaussian primitives from multiple views. In contrast, our method derives a perceptually representative **sensitivity map**, **learns to** project 2D perceptual cues onto 3D primitives, and uses them to guide densification. Ablation studies (w/o PE) further demonstrate that using a binarized perceptual sensitivity map is more effective in identifying regions that require densification compared to traditional edge detection methods.
|Method|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|3DGS|27.71|0.826|0.202|3.14M|
|w/o PE|27.74|0.825|0.204|2.09M|
|Ours|28.01|0.839|0.172|2.69M|
In Mini-Splatting, depth reinitialization is uniformly applied across all scenes. However, we observe that this strategy may have drawbacks in scenes that are already well reconstructed. To address this, we adaptively decide whether to apply it **based on the sensitivity learning process**. Notably, ablation results (w/o SDR) demonstrate that our method can still achieve significant improvements in reconstruction quality even without adaptive depth reinitialization.
|Method|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|3DGS|27.71|0.826|0.202|3.14M|
|w/o SDR|27.93|0.832|0.176|2.68M|
|Ours|28.01|0.839|0.172|2.69M|
**Q3. The Effectiveness of QEB**
Your concern is valid, as the QEB metric used in this paper primarily emphasizes efficiency. However, since both our method and the compared baselines aim to **improve the performance of 3DGS**, and given that the gains in #G and FPS are not as significant as those achieved by purely lightweight approaches, we believe this metric remains reasonable in this context. In addition to QEB, we also report #G, FPS, and LPIPS in the paper to provide a more comprehensive evaluation that accounts for both rendering efficiency and perceptual quality.
**Q4. Differences Across Regions with Varying Perceptual Sensitivity**
To verify that our approach does not compromise the quality of these regions, we apply the sensitivity maps to mask out sensitive areas in the final renderings on the MipNeRF 360 dataset and evaluate reconstruction quality on the remaining non-sensitive regions. The results confirm that our method **introduces no degradation** in these areas. Additional visualizations are available in the provided link.
| Method | PSNR | SSIM | LPIPS |
|----------|--------|--------|---------|
| 3DGS | 40.18 | 0.990 | 0.014 |
| Ours | 40.72 | 0.991 | 0.014 |
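For concreteness, the masked evaluation described above could be sketched as follows; this is a minimal illustration (not the authors' published code), assuming images normalized to [0, 1] and a binary sensitivity map where non-sensitive pixels are 0:

```python
import numpy as np

# Hypothetical sketch: score only pixels the sensitivity map marks as
# non-sensitive (map == 0), so quality in low-sensitivity regions can be
# checked in isolation.  The exact masking procedure is an assumption.
def masked_psnr(render, gt, sensitivity_map):
    keep = sensitivity_map == 0
    mse = np.mean((render[keep] - gt[keep]) ** 2)
    return float("inf") if mse == 0 else float(-10.0 * np.log10(mse))
```

If all errors fall inside the masked-out sensitive region, the score on the remaining pixels is unaffected, which is the property the table above is checking.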
Based on your suggestion, we will include the experiment in our revision and provide additional visualizations to better illustrate how the perceptual sensitivity map guides the distribution of Gaussian primitives in different regions, making the underlying mechanism more interpretable.
**Q5. The Effect of Sensitivity Map**
Directly relying on the absolute response values of gradient maps derived from the Sobel operator can be misleading. Therefore, in our method, we enhance these maps by **simulating multiple characteristics of the human visual system (HVS)**, leading to significant quality improvements, as demonstrated by the results of the w/o PE ablation study presented in Q2.
The sensitivity map reflects how the HVS perceives different regions. As shown in Fig. 3, smooth sky and complex grass areas exhibit distinct values, leading to different densification strategies. In Fig. 2, most pixel values are 1, indicating a high density of perceptually sensitive regions. Our method distributes more primitives in these areas, including those that **the original 3DGS fails to densify**, resulting in clear differences. More visual examples can be found at the provided link.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. The provided results have been helpful in understanding the paper. However, I have several follow-up questions.
The authors claim that they explicitly measure perceptual sensitivity, yet this paper does not propose a novel method for directly measuring perceptual sensitivity itself. Rather, it computes scores based on edge responses from images, subsequently optimizing the Gaussians according to these scores. While explicitly identifying the Gaussians to be optimized has some novelty, claiming that this process integrates perceptual sensitivity seems somewhat of an overstatement. Consequently, this model seems closer to an engineering approach aimed at lowering the threshold around edge regions rather than addressing general perceptual sensitivity. Therefore, a concern remains as to whether this aspect provides significant novelty compared to existing works.
From my understanding, SDR reinitializes Gaussians that are large in scale and have high sensitivity. However, depth reinitialization is introduced to address biased or uneven Gaussian distributions. The proposed method selectively refines Gaussians in high sensitivity regions, but low sensitivity regions do not necessarily imply uneven distributions. Thus, the justification for sensitivity being a necessary criterion for depth reinitialization appears insufficient.
In the case of QEB, it is not a generally applicable metric and is used only for very limited comparisons. This metric seems suitable solely for measuring changes within models sharing the same baseline. Thus, I do not consider it meaningful, as it cannot reliably represent the general performance of models.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt feedback. We address the remaining concerns as follows:
**Further Claims about The Novelty**
We sincerely thank the reviewer for recognizing the novelty of our perceptual modeling strategy in selecting Gaussians for densification. To address your concerns, we further elaborate on the innovation of our method and its differences from existing approaches.
The perceptual sensitivity map forms the foundation of our method. While it is derived from the Sobel operator, we enhance it through perception-oriented enhancement to **simulate the JND thresholding nature** of the HVS, and apply perception-oriented smoothing to **mimic findings from eye-tracking studies**. Due to space constraints, please refer to our response to reviewer kxHE's Q3. for more details.
In image processing, numerous works have incorporated human perceptual properties by leveraging gradient maps to reflect spatially varying visual sensitivity. Building on this long-standing line of research in 2D perceptual modeling, we are the first to explicitly model human perceptual sensitivity in the context of 3D reconstruction. We would like to emphasize that **our novelty builds upon the well-studied 2D perceptual modeling works rather than overstated**.
Existing works such as Taming-3DGS project each Gaussian onto multiple 2D views and aggregate pixel responses from edge detection within the covered areas, transferring 2D edge information to 3D primitives. However, **as discussed in Section 3.4**, this approach does not enforce consistent pixel responses across different views. For example, if a Gaussian projects to pixels with values of 1 in one view and 0 in another, it fails to capture consistent 2D information. In contrast, our **learning-based** method enforces multi-view consistency by adaptively adjusting Gaussian parameters during training, resulting in more accurate projection of 2D sensitivity into 3D space and more effective guidance for densification. **Rather than simply lowering edge detection thresholds**, our method incorporates multiple human perceptual characteristics. As shown in the following table, directly applying edge detection (w/o PE) yields no improvement, further validating the effectiveness of our approach.
|Method|PSNR|SSIM|LPIPS|#G|
|-|-|-|-|-|
|3DGS|27.71|0.826|0.202|3.14M|
|w/o PE|27.74|0.825|0.204|2.09M|
|Ours|28.01|0.839|0.172|2.69M|
In summary, our method is **well-motivated** and **not a simple engineering approach**. It innovatively introduces **human visual perception** into 3D reconstruction. Extensive results demonstrate its effectiveness: with only **72% of the parameters**, our method **outperforms** the original 3DGS in **perceptual quality** and achieves **a 30% speed-up** in large-scale scenes, while some SOTA methods encounter OOM or reconstruction failure. Moreover, our method is **generalizable** and can be applied to various 3DGS-based pipelines. More visualizations can be found at https://akfwb.github.io/Perceptual-GS-Rebuttal/.
**Explanations of the Scene-adaptive Depth Reinitialization (SDR)**
We would like to clarify that SDR **does not** reinitialize large high-sensitive primitives. Instead, it is adaptively applied **across different scenes** based on the characteristics of their initial point clouds.
Specifically, **as noted in Section 3.6**, we apply depth reinitialization (DR) to scenes with overly sparse initial point clouds. Such sparsity leads to oversized Gaussians with **inaccurate spatial positions**, often covering regions of **mixed sensitivity**. These Gaussians are more likely to be densified, but new primitives placed in the same incorrect areas preserve the error and limit performance. In such cases, DR facilitates the redistribution of densified primitives toward more accurate regions. In contrast, for scenes with high-quality initial point clouds, applying DR may disrupt well-learned information and degrade performance due to its sampling strategy.
To identify scenes with overly sparse initial point clouds, we measure the proportion $\gamma$ of **large medium-sensitive Gaussians** after warm-up. A high ratio indicates **poor spatial priors** from the initial point cloud, making the corresponding Gaussians in such scenes more prone to densification, which can hinder reconstruction quality. Thus, applying DR in such cases is both necessary and well justified.
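As a rough illustration of this decision rule (not the authors' actual implementation), the $\gamma$-based test might look like the following, where `scale_q`, `sens_band`, and `gamma_thresh` are all hypothetical thresholds:

```python
import numpy as np

# Hypothetical sketch of the scene-adaptive decision: measure the
# proportion gamma of large, medium-sensitivity Gaussians after warm-up
# and apply depth reinitialization (DR) only when gamma is high.
# scale_q, sens_band, and gamma_thresh are illustrative stand-ins.
def should_reinitialize(scales, sensitivities, scale_q=0.9,
                        sens_band=(0.3, 0.7), gamma_thresh=0.1):
    large = scales >= np.quantile(scales, scale_q)        # oversized primitives
    medium = ((sensitivities >= sens_band[0])
              & (sensitivities <= sens_band[1]))          # mixed-sensitivity
    gamma = float(np.mean(large & medium))
    return gamma >= gamma_thresh, gamma
```

A scene whose warm-up Gaussians are mostly small, or mostly clearly high/low sensitivity, would keep its initialization; only scenes with many large, ambiguous primitives would trigger DR.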
Due to space constraints, our explanation of this strategy is relatively brief, which may have caused confusion. Based on your suggestion, we will include a more detailed explanation of the motivation and implementation of SDR in our revision.
**Explanation of the QEB Metric**
In the paper, we provide standard evaluation metrics for both quality and efficiency (LPIPS, #G, and FPS), which are widely adopted in prior works and sufficiently demonstrate the improvements achieved by our method. Based on your suggestion, we will remove the QEB metric in the revision. | Summary: The paper introduces Perceptual-GS, a method to improve 3D Gaussian Splatting (3DGS) for novel view synthesis by integrating a perceptual-sensitivity mechanism during training. Concretely, the authors compute gradient-based sensitivity maps to model human perception of local structures, then employ a dual-branch rendering pipeline—one for RGB and one for “sensitivity”—to guide Gaussian densification. This aims to focus resources where the human eye would notice errors, thus achieving higher quality with fewer primitives. Further refinements (like scene-adaptive depth re-initialization) bolster performance in large-scale scenes. Experiments on Mip-NeRF 360, Tanks & Temples, Deep Blending, and BungeeNeRF show improvements in both quality (PSNR/SSIM/LPIPS) and efficiency (#Gaussians, FPS) over baselines.
Claims And Evidence: Claim: The approach outperforms prior 3DGS variants in both perceptual quality and efficiency.
Evidence: The authors provide side-by-side comparisons showing modest gains in PSNR/SSIM and reduced LPIPS, often with fewer splats. They also include ablations that remove modules (e.g., perceptual densification) to demonstrate each part’s impact.
Concern: While the reported improvements are consistent, they remain incremental over existing 3DGS-based approaches. Gains in absolute image quality are often relatively small, underscoring the primarily engineering nature of the work. Note that some state-of-the-art methods, e.g., Mip-Splatting and Scaffold-GS, are not included for comparison.
Methods And Evaluation Criteria: The method is thoroughly tied to 3DGS: it modifies the distribution of Gaussians via a perceptual branch.
Evaluation focuses on standard reconstruction metrics (PSNR, SSIM, LPIPS) plus model size (number of Gaussians) and render speed.
Overall, the methodology is competent for a practical 3D rendering system. However, it offers no major departure in algorithmic or theoretical frameworks beyond applying known perceptual cues to 3DGS densification.
Theoretical Claims: The paper provides no formal theory—it relies on empirical heuristics for densification. No new mathematical results; the approach is closer to a system-level enhancement than a theoretical innovation.
Experimental Designs Or Analyses: Experiments are comprehensive and well-documented across multiple datasets, including large-scale scenes where some baselines fail. The improvements generally validate the approach but reinforce that this is an incremental engineering improvement. As mentioned above, however, some state-of-the-art methods, e.g., Mip-Splatting and Scaffold-GS, are not included for comparison.
Supplementary Material: Unfortunately, there is no Supplementary Material provided by the author.
Relation To Broader Scientific Literature: The paper is heavily rooted in 3DGS developments, citing many concurrent methods that tackle densification.
While it references fundamental perceptual concepts (e.g. gradient magnitude, SSIM), it does not connect deeply with prior research on user studies, advanced HVS modeling, or more general ML frameworks.
The method’s incremental nature aligns it more with specialized vision/graphics venues.
Essential References Not Discussed: Some older works on adaptive sampling or foveated rendering in computer graphics, as well as NeRF variants that use perceptual losses, might be relevant. Without them, the novelty claim is overstated—adaptive, perceptual-driven resource allocation is well-explored historically.
Other Strengths And Weaknesses: **Strengths**:
1. The paper offers a clear quality-efficiency trade-off improvement.
2. The paper has relatively thorough experiments, solid ablations, and stable results in large scenes.
3. Potentially easy to integrate with other 3DGS-based pipelines.
**Weaknesses**:
1. Limited novelty: mostly extends known densification heuristics with gradient-based sensitivity.
2. Engineering-heavy with multiple thresholds and hyperparameters and does not propose a general ML contribution.
3. Some state-of-the-art methods, e.g., Mip-Splatting and Scaffold-GS, are not included for comparison.
Other Comments Or Suggestions: The paper may find a better fit at a vision/graphics venue, where incremental improvements in rendering quality/efficiency are more broadly appreciated.
Questions For Authors: Please refers to the Strengths And Weaknesses part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful response to our paper. We provide additional visualizations at https://akfwb.github.io/Perceptual-GS-Rebuttal/ and address specific points below:
**Q1. Concerns about Engineering-driven Incremental Improvements**
To the best of our knowledge, our work is the first to introduce **explicit human perceptual modeling** into the field of 3D reconstruction, aiming to generate perceptually aligned 3D scenes. The proposed method is simple yet effective, with a clear motivation. Compared with existing approaches, it shows less blurriness and fewer artifacts, significantly improving perceptual quality. More visualizations can be found in our provided link. Besides, our method uses **less than 60%** of the parameters while **surpassing** the reconstruction quality of other quality-focused approaches, resulting in a significant improvement in overall efficiency. Moreover, our method is **general** and can be seamlessly integrated with various 3DGS-based approaches—such as Pixel-GS and CoR-GS—to further enhance reconstruction quality even under sparse-view settings. The quantitative results are as follows:
|Method|PSNR|SSIM|LPIPS|
|-|-|-|-|
|Pixel-GS|27.85|0.834|0.176|
|w/ Ours|28.01|0.841|0.167|
|Method|PSNR|SSIM|LPIPS|
|-|-|-|-|
|CoR-GS|22.26|0.664|0.341|
|w/ Ours|22.42|0.681|0.281|
**Q2. Concerns about Novelty**
While previous works on adaptive sampling, foveated rendering and NeRF variants with perceptual losses are indeed related to the general concept of human perception, Perceptual-GS is the first to **explicitly model the human visual perception** within the 3D reconstruction process. Our method jointly incorporates geometric, color, and perceptual sensitivity models to represent the scene, leading to reconstructions that better align with visual perception. Following your suggestion, we will cite and include a more detailed discussion of these related works in the revision to better highlight our novelty.
**Q3. Connections to Prior Perception-Related Works**
The concept of visual perception has been extensively studied in the field of image processing. However, in the domain of 3D reconstruction, although some prior works have explored perceptual concepts, they often rely on perceptual loss **without explicitly modeling** the properties of the HVS. Others employ foveated rendering to improve efficiency, but fail to achieve an overall improvement in quality, as shown by *Lin et al., 2024* and *Franke et al., 2024*, cited in the Introduction of our paper. In contrast, our approach introduces an **explicit perceptual representation** and models multiple key characteristics of the HVS:
* The HVS is **highly sensitive to regions with rich local structures** [r1]. To capture this characteristic, we employ the Sobel operator to extract gradients, which serve as an effective representation of HVS sensitivity across different regions of the image.
* Due to **the thresholding nature** of human perception characterized by the Just Noticeable Difference [r2], we enhance the gradient magnitude maps through Perception-oriented Enhancement, effectively suppressing the misleading influence of absolute gradient values.
* **Eye-tracking studies** have shown that adjacent gaze points are often merged into a single fixation, typically representing a region with remarkable difference relative to its surroundings [r3]. To simulate this, we apply Perception-oriented Smoothing to generate a final binary perceptual sensitivity map that is both aligned with human perception and easy to learn.
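The three steps above (Sobel gradients, JND-style enhancement, fixation-style smoothing and binarization) can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the `jnd`, `smooth_radius`, and `threshold` parameters are hypothetical placeholders, and the box filter stands in for whatever smoothing the method actually uses:

```python
import numpy as np

# Hypothetical sketch of the sensitivity-map pipeline; all parameter
# values are illustrative assumptions, not the paper's.
def sensitivity_map(img, jnd=0.05, smooth_radius=1, threshold=0.5):
    h, w = img.shape
    # Step 1: Sobel gradient magnitude as a proxy for local structure
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    g = np.hypot(gx, gy)
    # Step 2: perception-oriented enhancement -- saturate responses above
    # a JND-like threshold so absolute gradient magnitudes stop dominating
    g = np.clip(g / jnd, 0.0, 1.0)
    # Step 3: perception-oriented smoothing -- merge nearby responses
    # (box filter as a stand-in for fixation-style merging), then binarize
    k = 2 * smooth_radius + 1
    padg = np.pad(g, smooth_radius, mode="edge")
    sm = np.zeros_like(g)
    for di in range(k):
        for dj in range(k):
            sm += padg[di:di + h, dj:dj + w]
    sm /= k * k
    return (sm >= threshold).astype(np.uint8)
```

On a step edge, such a map fires on and immediately around the edge while flat regions stay zero, which matches the intent of marking locally structured regions as sensitive.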
Based on your suggestion, we will emphasize the connection between the proposed method and key properties of the HVS in our revision.
[r1] Xue, Wufeng, et al. "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index." IEEE Transactions on Image Processing 23.2 (2013): 684-695.
[r2] Lubin, Jeffrey. "A Human Vision System Model for Objective Picture Quality Measurements." 1997 International Broadcasting Convention IBS 97. IET, 1997.
[r3] Gu, Ke, et al. "Saliency-guided Quality Assessment of Screen Content Images." IEEE Transactions on Multimedia 18.6 (2016): 1098-1110.
**Q4. Comparison with More SOTAs**
As our method targets a different problem, we do not compare it with efficiency-focused Scaffold-GS or quality-focused Mip-Splatting in the paper. Based on your suggestion, we now include comparisons with them. Since Scaffold-GS does not explicitly store all primitives, its #G metric is omitted.
|Method|PSNR|SSIM|LPIPS|FPS|#G|
|-|-|-|-|-|-|
|Scaffold-GS|27.99|0.825|0.207|476|-|
|Mip-Splatting|27.79|0.827|0.203|125|3.97M|
|Ours|28.01|0.839|0.172|166|2.69M|
**Q5. Concerns about the Thresholds and Hyperparameters**
Due to space constraints, we refer the reviewer to our response to reviewer aPkn's Q3., which addresses the same issue in detail.
**Q6. Issues about the Supplementary Material**
We have provided detailed Appendix after the References.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response. First of all, I must say sorry to the authors about neglecting the provided supplementary materials. Based on the rebuttal, which address most of my concerns, especially for the experiment parts, and the feedback from other reviewers, I am happy to keep my rating to be positive.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful comments and appreciation of our work. We will do our best to improve the final version of our paper based on your valuable suggestions. | Summary: Perceptual-GS addresses a core limitation of 3D Gaussian Splatting (3DGS) for novel view synthesis by adaptively distributing Gaussian primitives based on human perceptual sensitivity. Traditional 3DGS methods suffer from either insufficient coverage in visually important areas or over-densification in simpler regions. Perceptual-GS tackles this by first modeling perceptual sensitivity from multi-view images, emphasizing scene details that the human eye is most sensitive to. It employs a dual-branch rendering framework, fusing both RGB and sensitivity signals to adaptively refine and distribute Gaussians, thereby boosting reconstruction quality while constraining the total number of primitives. Additionally, a scene-adaptive depth reinitialization mechanism further improves quality in regions with sparse initial geometry.
Claims And Evidence: Perceptual-GS claims it can be effectively integrated with other 3DGS-based methods, yet all of the experiments presented—including those in the supplementary materials—only combine the proposed perceptual module with 3DGS and Pixel-GS. It would be helpful to show additional experiments incorporating it with other GS methods to demonstrate its generalization, especially sparse-view GS methods, where the Gaussian distribution problem tends to be more pronounced.
While the paper provides a thorough quantitative evaluation, the limited visualizations shown appear to indicate only small improvements. Also as the paper claimed that the proposed method prioritizing the densification of Gaussian primitives in high-sensitivity regions to human perception and constraining their generation in low-sensitivity areas, it remains unclear whether this approach might compromise clarity or reduce quality in other parts of the scene. Therefore, it would be very helpful to include additional visualizations—either in the main paper or the supplementary material—to more clearly illustrate the advantages and potential trade-offs.
It is unclear whether the perceptual sensitivity map should look like the sensitivity GT in Fig. 2 or the perception-oriented smoothing results in Fig. 3, but neither seems very helpful for the grass and ground regions shown in Fig. 5.
Methods And Evaluation Criteria: The proposed method makes sense for the problem.
Theoretical Claims: The paper contains no theoretical claims beyond some description of 3D Gaussian Splatting and perceptual sensitivity, which is correct.
Experimental Designs Or Analyses: Please refer to Claims And Evidence.
Supplementary Material: Yes, the supplementary material gives more experimental details and some visualizations.
Relation To Broader Scientific Literature: Unknown
Essential References Not Discussed: No
Other Strengths And Weaknesses: The cropped region obscures the full image too much.
Other Comments Or Suggestions: No
Questions For Authors: Please refer to Claims And Evidence.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful response to our paper. We provide additional visualizations at https://akfwb.github.io/Perceptual-GS-Rebuttal/ and address specific points below:
**Q1. Concerns about the Generalizability**
Thank you for the valuable suggestion. To further demonstrate the generalizability of our approach, especially for sparse-view 3D Gaussian Splatting methods, we simply integrate our perceptual sensitivity-guided densification strategy with CoR-GS [r1] to validate its adaptability under sparse-view settings. Since the paper only provides code without the initial point cloud used for training, we followed the provided instructions to construct a training dataset containing only 24 views. We then trained both the CoR-GS and the version enhanced with our method under the same settings.
|Method|PSNR|SSIM|LPIPS|
|-|-|-|-|
|CoR-GS|22.26|0.664|0.341|
|w/ Ours|22.42|0.681|0.281|
As shown in the table, our method achieves notable improvements even under sparse-view settings, particularly in the perceptual metric LPIPS. These results further validate the versatility and effectiveness of our approach. Qualitative comparisons are available in the link we provided.
[r1] Zhang, Jiawei, et al. "CoR-GS: Sparse-view 3D Gaussian Splatting via Co-regularization." European Conference on Computer Vision, 2024.
**Q2. Limited Visualization and Small Improvements**
Due to space limitations in the main text, we have included additional visual results in the Appendix. Compared to other related methods, our approach achieves better reconstruction quality, particularly in challenging regions, **effectively reducing issues such as excessive blur and visual artifacts**. More qualitative comparisons are available in the provided link. Following your suggestion, we will incorporate more visual results in our revision.
**Q3. Concerns about the Compromised Quality in Non-sensitive Areas**
Since our method primarily targets perceptually sensitive regions, we do not include comparisons on non-sensitive areas in the experiments. To verify that our approach does not compromise the quality of these regions, we apply the sensitivity maps to mask out sensitive areas in the final renderings on the MipNeRF 360 dataset and evaluate reconstruction quality on the remaining non-sensitive regions. The results confirm that our method **introduces no degradation** in these areas. Additional visualizations are available in the provided link.
| Method | PSNR | SSIM | LPIPS |
|----------|--------|--------|---------|
| 3DGS | 40.18 | 0.990 | 0.014 |
| Ours | 40.72 | 0.991 | 0.014 |
Based on your suggestion, we will include the experiment in our revision.
**Q4. Visualization of the Sensitivity Map**
Figures 2 and 3 both visualize the perceptual sensitivity maps. Fig. 2 presents the full image, while Fig. 3 displays a cropped region to better highlight the variations in sensitivity across different areas. Compared to the baseline, our method distributes more Gaussian primitives in perceptually complex regions such as textured grass areas guided by the sensitivity map, resulting in images with higher visual quality as perceived by the human eye. This also helps to reduce noticeable blur and artifacts in the rendered results. Additional sensitivity map visualizations are available in the link we provided. Based on your suggestion, we will include additional visualizations of the sensitivity maps for further illustration in our revision.
**Q5. The Cropped Region Obscures the Full Image Too Much**
Thank you for your suggestion. We will revise the cropped region in the visualization accordingly in our revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for authors' response, they solve my questions, I change my overall recommendation to 4: Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful comments and appreciation of our work. We will do our best to improve the final version of our paper based on your valuable suggestions. | null | null | null | null | null | null |
Learn Beneficial Noise as Graph Augmentation | Accept (poster) | Summary: The paper proposes a graph contrastive learning method called Positive-incentive Noise driven Graph Data Augmentation (PiNGDA), which makes the model learn to generate perturbations that benefit the training. Comprehensive experiments are conducted to evaluate the performance of the method.
Claims And Evidence: Yes.
The claims of the effectiveness and efficiency of the method are supported by experiments.
The theoretical claim that "the standard GCL with pre-defined augmentations is equivalent to estimate the beneficial noise via the point estimation" is proved with mathematical derivation based on information theory.
Methods And Evaluation Criteria: Yes.
The performance of the proposed PiNGDA, as a GCL method, is evaluated in several aspects, including node classification, graph classification, node classification on heterogenous graphs, efficiency, visualization, and ablation study.
Theoretical Claims: Yes.
An auxiliary variable is introduced to prove the theoretical claim (Eq. (13)). The proof is correct and rigorous.
Experimental Designs Or Analyses: All the experimental designs and analyses are checked. Comprehensive experiments are conducted to evaluate the proposed method's performance.
However, the detailed hyperparameters of the training are not listed.
Moreover, as mentioned in the caption of Figure 2, the model can be trained with mini-batches to decrease the memory burden, which is not discussed in Table 1, where some methods report OOM on some datasets.
Supplementary Material: All parts of the supplementary material are reviewed. The supplementary material offers details of the theoretical derivation, implementation of the algorithm, motivation, and experiments.
Relation To Broader Scientific Literature: The paper proposes a novel method that learns to generate augmented views, other than the adversarial method (AD-GCL). With a similar computational and time burden to AD-GCL, the proposed method achieves better performance than AD-GCL on most node classification and graph classification tasks, including higher accuracy and lower standard deviations, indicating its effectiveness and stability.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: S1: According to the results of experiments, the effectiveness and stability of the method are validated comprehensively.
S2: According to the visualization, the proposed method tends to remove inter-class edges while maintaining intra-class edges, which is a common strategy applied in supervised learning.
W1&W2: As mentioned in 'Experimental Designs or Analyses', the detailed hyperparameters of the training are not listed. Moreover, as mentioned in Figure 2, the model can be trained with mini-batches to decrease the memory burden, which is not discussed in Table 1, where some methods report OOM on some datasets.
Other Comments Or Suggestions: No.
Questions For Authors: Major questions: Please see weaknesses.
Here are some tiny questions:
1. JOAO is mentioned as 'learnable' in Table 1. However, to the best of my knowledge, JOAO randomly selects augmentation methods, which are all randomly applied. The projection head of each augmentation method differs.
2. The further discussion of 2.3 is placed in Appendix C after the detailed discussion of 3.3 and 4.2, which is kind of disordered.
3. The performance of the method on Wiki-CS in Table 2 is different from that in Table 1, why? The difference also appears between Table 7 and Table 3.
4. The results of the ablation study are kind of hard to read. Maybe the average value of columns and rows can be added to better show the trend of difference between w/o Aug., Random, and Learnable.
5. Can the attribute noise also be visualized?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Response to Reviewer iJqw**
We greatly thank you for the detailed and valuable comments. Please find our responses to the comments as follows:
>**W1&W2:** As mentioned in 'Experimental Designs or Analyses', the detailed hyperparameters of the training are not listed. Moreover, as mentioned in Figure 2, the model can be trained with mini-batches to decrease the memory burden, which is not discussed in Table 1, where some methods report OOM on some datasets.
**Reply:** Thank you for your valuable feedback. We will add detailed hyperparameter settings to improve clarity and reproducibility. Regarding batch size and memory efficiency, all **GCL methods were trained using the same batch size** to ensure a **fair comparison**. For smaller datasets we do not apply batch processing; for large datasets we set the batch size to 256. Our experiments are conducted on an NVIDIA 4090 GPU (24 GB memory) for most datasets and on an NVIDIA A100 GPU (40 GB memory) for OGB-arxiv. However, we recognize that this was not explicitly stated in Table 1. We will update Table 1 with a clarification that all methods follow the same batch size setting and discuss the impact of batch size adjustments on memory consumption. We appreciate your insightful comments and will revise the paper accordingly.
>**Q1:** JOAO is mentioned as 'learnable' in Table 1. However, to the best of my knowledge, JOAO randomly selects augmentation methods, which are all randomly applied. The projection head of each augmentation method differs.
**Reply:** Thank you for your insightful comment. While **JOAO does not learn augmentations directly**, it does **adaptively select** augmentation strategies based on a similarity-based policy. Referring to it as "learnable" in Table 1 may not be the most accurate description. Instead, "adaptable" better captures its mechanism. We will revise it in the paper and provide a clearer explanation of this distinction.
>**Q2:** The further discussion of 2.3 is placed in Appendix C after the detailed discussion of 3.3 and 4.2, which is kind of disordered.
**Reply:** We appreciate your feedback regarding the structure of the paper. We will reorganize the content to ensure a more intuitive and coherent presentation. Thank you for your valuable suggestion!
>**Q3:** The performance of the method on Wiki-CS in Table 2 is different from that in Table 1, why? The difference also appears between Table 7 and Table 3.
**Reply:** Thank you for your question. The differences in performance arise because Table 1 and Table 3 report the best results, while in the ablation study, we re-run the experiments to ensure a **fair comparison** across different settings. The results in the ablation study reflect newly obtained experimental outcomes under controlled conditions.
>**Q4:** The results of the ablation study are kind of hard to read. Maybe the average value of columns and rows can be added to better show the trend of difference between w/o Aug., Random, and Learnable.
**Reply:** Thank you for your helpful suggestions. We will update Table 2 to add the average values of rows and columns, which better shows the trend across w/o Aug., Random, and Learnable, and to highlight the best results for improved clarity.
>**Q5:** Can the attribute noise also be visualized?
**Reply:** Thank you for your insightful suggestion. Similar to our response to Reviewer PRCX, visualizing **attribute noise** is challenging because **node representations are high-dimensional** and lack the structured spatial relationships found in visual data. Even if represented as a **matrix or heatmap**, the absence of spatial correlations between adjacent values could lead to **misleading interpretations**. | Summary: This paper proposes a framework named Positive-incentive Noise driven Graph Data Augmentation (PiNGDA). It theoretically analyzes the drawbacks of the existing data augmentation in GCL and leverages a π-noise generator to learn beneficial noise as the augmentations for GCL. Meanwhile, they also design a differentiable algorithm to efficiently generate the noise. From the experimental results, PiNGDA achieves the highest performance compared to the current baselines.
Claims And Evidence: Yes. The authors design theoretical analyses of the proposed model and extensive experiments validate performance.
Methods And Evaluation Criteria: Yes. The proposed PiNGDA learns quantifiably beneficial graph augmentations rather than randomly dropping nodes.
Theoretical Claims: Yes. I have checked the theoretical proofs in Section 3 and Appendix A. They are correct.
Experimental Designs Or Analyses: Yes. I have checked the experimental details in Section 5 and Appendix D. The results can validate performance.
Supplementary Material: Yes. I have checked all supplementary materials.
Relation To Broader Scientific Literature: In this paper, the authors theoretically analyze the drawbacks of the existing data augmentations and propose a novel PiNGDA to differentiable learn the beneficial graph augmentations.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ## Strengths:
1) The authors propose an interesting framework to directly learn beneficial noise augmentations. After all, noise is generally regarded as a harmful signal.
2) The theoretical analysis is logical, and the paper is well-organized to follow.
## Weaknesses:
1) The relationships between the proposed model and the learnable methods are not discussed in Section 1.
2) The graph contrastive learning task should be introduced in more detail in Section 1.
3) In contributions, the authors should clarify the merits of the proposed model from the Pi-Noise perspective.
4) The connections and differences between PINGDA and some current GCL models based on learnable strategies should be discussed in detail.
5) The authors could add an ablation study, such as extending the proposed noise augmentations to more current GCL models.
6) Apart from the Memory-Times experiments, Memory-Performance or FLOPs-Performance should be introduced to validate efficiency.
7) There are some typos. For example, in line 19, “. Where” should be “, where”. In Figure 3, it lacks explanations about the dashed line between two nodes.
Other Comments Or Suggestions: No.
Questions For Authors: I hope the authors can state the generalization of the proposed model. For example, does it improve the performance on other new GCL models or graph tasks? (Please see Weakness 6)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Response to Reviewer J7B2**
We greatly thank you for the detailed and valuable comments. Please find our responses to the comments as follows:
>**W1&2&3&4:** The relationships between the proposed model and the learnable methods are not discussed in Section 1. The graph contrastive learning task should be introduced in more detail in Section 1. The connections and differences between PINGDA and some current GCL models based on learnable strategies should be discussed in details. In contributions, the authors should clarify the merits of the proposed model from the Pi-Noise perspective.
**Reply:** Thank you for your valuable suggestions. Due to space limitations, we did not provide an in-depth discussion of these aspects in Section 1. In the revised version, we will introduce the graph contrastive learning task in more detail, discuss the relationships between the proposed model and learnable augmentation methods, and clarify the merits of our model from the $\pi$-noise perspective in the contributions. For the specific differences with AD-GCL, please refer to the response to **Reviewer PRCX**.
>**W5:** The authors could add the ablation study such as extending the proposed noise augmentations to more current GCL models.
**Reply:** We have included the requested ablation study by extending our proposed noise augmentations to more GCL models. Specifically, we selected both a classical GCL model (GRACE [1]) and a more recent approach (Sp²GCL [2]) to demonstrate the effectiveness of our method. The results are summarized in the table below:
| | Cora | CiteSeer | PubMed | Amazon-Photo | Coauthor-Phy |
| ------------- | ------------ | ------------ | ------------ | ------------ | ------------ |
| GRACE | 83.43 ± 0.32 | 70.93 ± 0.21 | 85.90 ± 0.24 | 93.13 ± 0.17 | 95.74 ± 0.06 |
| GRACE+PiNGDA | 84.28 ± 0.24 | 71.47 ± 0.20 | 86.79 ± 0.27 | 93.19 ± 0.18 | 95.81 ± 0.05 |
| Sp²GCL | 82.45 ± 0.35 | 65.54 ± 0.51 | 84.26 ± 0.29 | 93.05 ± 0.23 | 95.73 ± 0.04 |
| Sp²GCL+PiNGDA | 83.89 ± 0.48 | 67.21 ± 0.63 | 84.73 ± 0.23 | 93.11 ± 0.14 | 95.74 ± 0.04 |
These results highlight the impact of our augmentation approach across different GCL models.
[1]Zhu, Yanqiao, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. "Deep graph contrastive representation learning." *arXiv preprint arXiv:2006.04131* (2020).
[2]Bo, Deyu, Yuan Fang, Yang Liu, and Chuan Shi. "Graph contrastive learning with stable and scalable spectral encoding." *Advances in Neural Information Processing Systems* 36 (2023): 45516-45532.
>**W6:** Apart from the Memory-Times experiments, Memory-Performance or FLOPs-Performance should be introduced to validate efficiency.
**Reply:** The detailed tables are shown below. The results show how our method compares to existing baselines in terms of both computational cost and accuracy. A negative ΔFLOPs or ΔMem. value indicates that **our method reduces** computational cost or memory usage relative to that baseline, while a positive ΔAcc% reflects our accuracy improvement over it.
| Methods | Cora ΔFLOPs (G) | ΔAcc% | PubMed ΔFLOPs (G) | ΔAcc% | WikiCS ΔFLOPs (G) | ΔAcc% | Amazon-Photo ΔFLOPs (G) | ΔAcc% |
|-|-|-|-|-|-|-|-|-|
| DGI| -3.27| +3.60 | -4.93| +2.61 | -0.50| +3.44 | -3.83| +4.81 |
| GMI| -1.99| +3.17 | -5.06| +3.83 | -1.77| +2.60 | -2.93| +3.51 |
| GCA | -0.71| +1.23 | -5.18| +0.31 | -3.04| +1.00 | -2.01| +0.11 |
| BGRL| +0.33| +2.92 | +2.46| +2.03 | +1.45| +0.85 | +0.95| +0.11 |
| GREET| -4.79| +7.18| -16.59| +1.48 | +0.72| +2.69 | -16.60| -0.34 |
| GRACEIS | +0.53| +1.99 | +3.91| +3.30 | +2.33| +3.45 | +1.50| +2.22 |
| Methods | Cora ΔMem. | ΔAcc(%) | PubMed ΔMem. | ΔAcc(%) | WikiCS ΔMem. | ΔAcc(%) | Amazon-Photo ΔMem. | ΔAcc(%) |
|-|-|-|-|-|-|-|-|-|
| DGI| -63.30%| +3.61%| -29.50%| +2.61%| -15.90%| +3.44%| -51.70%| +4.81% |
| GMI| -66.50%| +3.17%| -54.40%| +3.83% | -36.70%| +2.60% | -60.80%| +3.50% |
| GCA| -17.60%| +1.23%| -36.90%| +0.31% | -51.20%| +0.99%| -58.60%| +0.11%|
| GREET| -24.20%| +7.19% | -61.10%| +1.48% | -53.40%| +2.70% | -55.50%| -0.34% |
| AD-GCL| +25.1%| +2.83% | +12.6%| +3.69% | +9.3%| +1.86% | -14.60%| +1.49% |
Our results demonstrate that while some methods, such as GRACEIS and AD-GCL, achieve efficiency gains by significantly reducing FLOPs and memory usage, our approach **strikes a balance between efficiency and performance**.
>**W7:** There are some typos. For example, in line 19, “. Where” should be “, where”. In Figure 3, it lacks explanations about the dashed line between two nodes.
**Reply:** Thank you for your careful review. We will correct the typos. Regarding the dashed lines in Figure 3, they represent connections between nodes of different classes. We apologize for the unclear representation and will revise the figure or improve the explanation to ensure better clarity. We appreciate your valuable feedback and will make the necessary improvements. | Summary: This paper proposes a graph data augmentation method based on beneficial noise. The noise generator learns the optimal perturbation of graph structure and node features to solve the problem of insufficient stability of traditional data augmentation strategies in graph contrastive learning. Experimental verification shows that PiNGDA outperforms baseline methods in tasks.
Claims And Evidence: The paper presents experimental results to support its main claims. The authors verified the method's effectiveness on node classification and graph classification datasets. Additionally, an ablation study on the key noise generation module was conducted, enhancing understanding of that module's contribution. Thus, the conclusions are credible and reasonably well supported.
Methods And Evaluation Criteria: This paper presents a new augmentation method for graph contrastive learning. The proposed approach effectively tackles the recognized issues by integrating learnable augmentations into the contrastive learning framework. In contrast to traditional augmentation techniques like node and edge dropping, this method enriches representation learning through the introduction of beneficial noise.
The chosen evaluation criteria, which consist of standard benchmarks for node classification and graph classification, are appropriate for assessing the effectiveness and generalization ability of the proposed method. The employment of multiple evaluation metrics guarantees a comprehensive analysis of the model's performance.
Theoretical Claims: The main theoretical assertions proposed in the paper have been proved mathematically. The author lists the assumptions and derives the conclusion. The derivation process is presented clearly.
Experimental Designs Or Analyses: The experimental settings are reasonable. The inclusion of ablation studies further strengthens the analysis by isolating the contributions of different components.
Supplementary Material: I reviewed the supplementary materials, focusing on the theoretical part. There appear to be no issues.
Relation To Broader Scientific Literature: Previous graph contrastive augmentation methods included edge dropping and node dropping. Later GCL methods were developed based on these by introducing learnable dropping techniques. This paper further expands on these approaches. It incorporates beneficial noise into contrastive learning, integrates these techniques into a unified framework, and proposes a new method.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The main conclusion of the theoretical part is convincing.
2. The experiments are somehow abundant, covering both node classification and graph classification tasks.
3. The inclusion of ablation studies provides evidence for the effectiveness of different components of the method.
Weaknesses:
1. The theoretical analysis in Section 3.3 is interesting, but the definition of task entropy is unclear.
2. The paper does not detail the hyperparameters used in the experiments, such as the learning rate and batch size, which affects the reproducibility of the research.
Other Comments Or Suggestions: Please refer to the strengths and weaknesses.
Questions For Authors: 1. The theoretical analysis in Section 3.3 is intriguing. However, I am unclear about the definition and role of task entropy. Could you provide a more detailed explanation or an example to clarify its significance?
2. While the experimental setup is well-documented, the hyperparameter settings are not explicitly detailed. Could you provide more information on these aspects, either in the main text or in supplementary materials?
3. The computational efficiency in Figure 2 does not show much advantage for PiNGDA. Can you explain why this is the case and provide an analysis of the complexity?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Response to Reviewer smN2**
We greatly thank you for the detailed and valuable comments. Please find our responses to the comments as follows:
>**W1&Q1:** The theoretical analysis in Section 3.3 is interesting, but the definition and role of task entropy are unclear. Could you provide a more detailed explanation or an example to clarify their significance?
**Reply:** In our framework, **task entropy measures the difficulty of a task**. The key idea is that adding well-chosen noise can reduce the task entropy, i.e., simplify the task, making it easier for the model to learn meaningful representations. This aligns with our theoretical analysis, where learned beneficial noise leads to better performance. We hope this explanation answers your question, and we are happy to clarify further if any concern remains.
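As a brief formal sketch (notation ours, following the positive-incentive noise framework): noise $\varepsilon$ qualifies as $\pi$-noise for a task $\mathcal{T}$ exactly when the mutual information between the task and the noise is positive,

$$
I(\mathcal{T}; \varepsilon) = H(\mathcal{T}) - H(\mathcal{T} \mid \varepsilon) > 0,
$$

i.e., when conditioning on the noise strictly lowers the task entropy $H(\mathcal{T})$ and thus the task difficulty.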
>**W2&Q2:** The paper does not detail the hyperparameters used in the experiment, such as learning rate batch size, etc., which will affect the reproducibility of the research. While the experimental setup is well-documented, the hyperparameter settings are not explicitly detailed. Could you provide more information on these aspects, either in the main text or in supplementary materials?
**Reply:** Thank you for your helpful suggestions. The hyperparameters were selected within the following ranges. For smaller datasets we do not apply batch processing; for large datasets we set the batch size to 256. The number of epochs varies from 500 to 2000 depending on the dataset. The learning rate is set between 0.0005 and 0.01, while the weight decay is kept constant at 0.0001 across all datasets. The feature dropout rates range from 0.0 to 0.3, and the edge dropout rates vary between 0.1 and 0.4. The temperature is typically set to 0.3 for most datasets, with an exception for Coauthor-Phy, where it is set to 0.5. This is a brief summary; the detailed settings will be added later.
>**Q3:** The computational efficiency in Figure 2 does not show much advantage of PiNGDA. Can you explain why this is the case and provide analyze of the complexity?
**Reply:** The computational efficiency of PiNGDA may not appear significantly advantageous in Figure 2 because our method adds noise to all data points, which introduces additional computation. In the two noise generation modules, since each edge and each node feature is processed by an MLP, the computational complexity of the noise modules is $\mathcal{O}(|\mathcal{E}| \cdot d) + \mathcal{O}(N \cdot d)$, where $N$ is the number of nodes, $|\mathcal{E}|$ is the number of edges in the graph, and $d$ is the dimension of the node features. However, this process is crucial for improving the robustness and generalization of the model. To provide a more comprehensive comparison, we have included detailed memory-performance analyses in the tables; please refer to the response to **Reviewer J7B2**.
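As an illustrative sketch of that count (the helper name is hypothetical, not code from the paper), the dominant per-forward cost of the two modules scales as one MLP evaluation per edge plus one per node feature vector:

```python
def noise_generator_cost(num_nodes, num_edges, feat_dim):
    """Dominant operation count of the two noise modules:
    one MLP pass per edge plus one per node feature vector,
    i.e. O(|E| * d) + O(N * d)."""
    return num_edges * feat_dim + num_nodes * feat_dim
```

The count grows only linearly with the graph size, which keeps the extra overhead modest.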
Claims And Evidence: The authors claim that learning beneficial noise through a π-noise framework can provide a more reliable and stable augmentation strategy. This claim is supported by theoretical analysis and extensive experiments across multiple datasets.
Methods And Evaluation Criteria: The proposed PiNGDA consists of a π-noise generator and a contrastive learning module. The noise generator produces topological and attribute noise using a Gaussian auxiliary variable and reparameterization tricks to ensure differentiability. The model is evaluated using node classification accuracy and graph classification accuracy across several datasets.
Theoretical Claims: The authors theoretically analyze the relationship between π-noise and the training loss in GCL. They show that predefined augmentations in existing GCL models can be considered as point estimations of π-noise, which may not always be reliable. By learning π-noise directly, PiNGDA provides a more robust approach to graph augmentation.
Experimental Designs Or Analyses: Yes, I have checked the soundness of the experimental designs (including the compared methods and experimental setups) and analyses.
Supplementary Material: Yes, I reviewed the supplementary material, which includes the proofs for the theorem.
Relation To Broader Scientific Literature: 1) The authors provide a novel perspective on GCL by analyzing it through the lens of the π-Noise framework. They elucidate why traditional random noise augmentation methods lead to unstable performance, which offers a fresh direction for future research in GCL.
2) The proposed model demonstrates extensive applicability, achieving superior performance in both node classification and graph classification tasks.
Essential References Not Discussed: 1) Spectral Feature Augmentation for Graph Contrastive Learning and Beyond, in AAAI 23
2) Unified Graph Augmentations for Generalized Contrastive Learning on Graphs, in NeurIPS 24
3) Graph Adversarial Self-Supervised Learning, in NeurIPS 21
Other Strengths And Weaknesses: **Strengths**
The ablation studies provide valuable insights into the impact of different augmentation strategies.
**Weaknesses**
1) The π-noise framework itself is not entirely novel, as it is based on existing research. The innovation lies in its application to GCL.
2) The paper lacks detailed descriptions of some experimental settings, such as hardware environment and encoder choices, which may hinder reproducibility.
3) Compared to AD-GCL, PiNGDA does not show advantages in terms of time and space efficiency.
Other Comments Or Suggestions: 1) The organization of the paper could be improved. For example, separating the discussion of existing research in Section 3 would enhance readability.
2) Minor grammatical issues, such as the lowercase "where" in line 19, should be corrected.
3) The authors should provide more details on the experiments to ensure reproducibility.
Questions For Authors: How does PiNGDA compare to other baselines that also focus on adaptive or learnable graph augmentations, such as those mentioned in the "Essential References Not Discussed" section?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Response to Reviewer LdjD**
We greatly thank you for the detailed and valuable comments. Please find our responses to the comments as follows:
>**W1:** Innovation of application of $\pi$-noise to GCL.
**Reply:** Although $\pi$-noise has been explored in other fields, its adaptation to graph data and integration within the GCL framework offers a unique perspective. The core challenge of $\pi$-noise, namely **how to define task entropy**, remains highly difficult, especially for **graph data**. This is not merely incremental work but a fundamental challenge, and it forms the core contribution of our research. Our work achieves **unified augmentations on both graph topology and node features within a theoretical framework**, which prior GCL methods have not been able to accomplish. This novel application allows us to leverage noise in a way that improves the quality of graph representations.
>**W2:** Lacks detailed experimental settings.
**Reply:** Thank you for your suggestion. Our experiments are conducted on an NVIDIA 4090 GPU (24 GB memory) for most datasets and on an NVIDIA A100 GPU (40 GB memory) for OGB-arxiv. For our proposed method, we employ a two-layer GCN network with PReLU activation, where the hidden layer dimension is set to 512 and the final embedding dimension is 256. Additionally, we utilize a projection head consisting of a 256-dimensional fully connected layer with ReLU activation, followed by a 256-dimensional linear layer. The edge noise generator uses an MLP to process node features and then applies Gumbel-Softmax sampling. The feature noise generator uses two MLPs to estimate the mean and variance of the feature noise and then applies the reparameterization trick. We appreciate your feedback and will ensure these details are clearly presented in the revised version.
>**W3:** Compared to AD-GCL, PiNGDA does not show advantages in terms of time and space efficiency.
**Reply:** Our method indeed introduces some additional computational overhead because we have two augmentation modules—one for graph structure and one for node features. This results in a slight increase in memory and time consumption. However, as shown in the table, this increase is not substantial, and more importantly, our method achieves significantly better performance, demonstrating its effectiveness.
|Methods | Cora Mem.(M)|Time(s)|Acc(%) | PubMed Mem.(M)|Time(s)|Acc(%)|WikiCS Mem.(M)|Time(s)|Acc(%)|Amazon-Photo Mem.(M)|Time(s)|Acc(%)|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|AD-GCL(aug on graph only) |986|0.03|83.88|13632|0.22|84.23|9836|0.20|80.57|4768|0.14|91.92|
|Ours(aug on graph & fea)|1070|0.04|86.25|15108|0.44|87.34|9704|0.21|82.07|4072|0.13|93.29|
|Δ|84|0.01|+2.8%|1476|0.22|+3.7%|-132|0.01|+1.9%| -696|-0.01|+1.5%|
>**C1&2&3:**
**Reply:** Thank you for your helpful suggestions. We will restructure Section 3 to improve readability by separating the discussion of existing research. Additionally, we will correct minor grammatical issues. To enhance reproducibility, we will also provide more details on the experimental settings. We appreciate your valuable feedback and will make the necessary revisions.
>**Q:** How does PiNGDA compare to other baselines that also focus on adaptive or learnable graph augmentations, such as those mentioned in the "Essential References Not Discussed" section?
**Reply:**
| Method| CORA|CiteSeer| PubMed | Amazon-Photo| Coauthor-Phy|
|-|-|-|-|-|-|
| **GOUDA** | 82.25 ± 0.54| 70.25 ± 1.24| 85.82 ± 0.71| 89.61 ± 1.17| 94.54 ± 0.59 |
| **GASSL** | 81.82 ± 0.79| 69.51 ± 0.84| 84.91 ± 0.57| 92.14 ± 0.23| 94.93 ± 0.21|
| **Ours** | **86.25 ± 0.25** | **72.44 ± 0.14** | **87.34 ± 0.08** | **93.29 ± 0.17** | **95.81 ± 0.06** |
Since the official code for Paper 1 was not available, we replicated the results from the supplementary materials submitted with Papers 2 and 3 from OpenReview. The performance difference between the reported results and our replication could be due to the absence of specific hyperparameters in the papers. We base our results on our own replication.
Also, as suggested by **Review J7B2**, we also added PiNGDA to different GCL backbones and achieved good results.
|| Cora| CiteSeer|PubMed| Amazon-Photo | Coauthor-Phy |
|-|-|-|-|-|-|
| **GRACE**| 83.43 ± 0.32 | 70.93 ± 0.21 | 85.90 ± 0.24 | 93.13 ± 0.17 | 95.74 ± 0.06 |
| **GRACE+PiNGDA** | **84.28 ± 0.24**| **71.47 ± 0.20**| **86.79 ± 0.27** | **93.19 ± 0.18**| **95.81 ± 0.05**|
| **Sp²GCL**| 82.45 ± 0.35 | 65.54 ± 0.51 | 84.26 ± 0.29 | 93.05 ± 0.23 | 95.73 ± 0.04 |
| **Sp²GCL+PiNGDA** | **83.89 ± 0.48**|**67.21 ± 0.63**| **84.73 ± 0.23** | **93.11 ± 0.14**| **95.74 ± 0.04**|
Our method consistently outperforms existing approaches across all datasets, achieving significant improvements in accuracy. This demonstrates the effectiveness of our adaptive graph augmentation strategy, which better captures the underlying structure of the data and enhances model generalization.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I am satisfied that most of my concerns have been addressed, and I hope to see these contents included in the final paper. If so, I will increase my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable suggestions! We sincerely appreciate your guidance and will incorporate all feedback into the revised manuscript. | Summary: This paper proposes a graph contrastive learning (GCL) methods, namely PINGDA, with a novel learnable graph augmentation. The learnable augmentation follows a new information theory framework, namely positive-incentive noise. The authors propose to view all augmentations as “noise” and thus design a new algorithm to add beneficial noise to both graph topology and node attributes. The experiments are run on several graph tasks, including node classification and graph classification.
Claims And Evidence: Yes, the theoretical claims are well verified by the experiments. In most cases, the proposed PINGDA achieves the best results. In some cases, the method also achieves sub-optimal results. I notice that the stds are usually smaller than other GCL methods, which seems to prove that the proposed idea of using noise makes sense.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. The experiments contain both node classification and graph classification. The datasets also contain both heterogeneous and homogeneous graphs.
Theoretical Claims: I checked most of the mathematical derivations. It seems that the formulations are roughly correct.
Experimental Designs Or Analyses: I checked the experimental design and analyses. The results can well support the claims and conclusions.
Supplementary Material: I have reviewed the Appendix.
Relation To Broader Scientific Literature: The paper provides a new perspective for stable graph augmentation, the theory is well grounded by information bottleneck. It is also an interesting advantage compared with the existing GCL methods, such as GCA and JOAO.
Essential References Not Discussed: none
Other Strengths And Weaknesses: Strengths:
1. The paper proposes an interesting and new perspective for graph augmentation in GCL. The idea of viewing graph augmentation as noise seems new as far as I know. It can therefore unify the augmentation on graph topology and node attributes, which is an advantage compared with the existing GCL models with learnable augmentations.
2. PINGDA also focuses on how to augment node attributes while most existing GCL models focus on how to modify the graph and only utilize the simple perturbations on attribute augmentations.
3. The experimental results seem promising. The learnable augmentations seem to well support the theoretical analysis. The experiments contain node classification and graph classification. The datasets consist of both homogeneous graphs and heterogeneous graphs. I noticed that the standard deviations are smaller in most cases, which validates the effectiveness of the learnable augmentation.
Weaknesses:
1. The introduction of positive-incentive noise and the discussions with the existing papers of noise are not enough, especially in the main paper.
2. Figure 1 can be further improved. It is hard to distinguish how to simultaneously generate augmentations on both graph topology and node features. Meanwhile, $\mathcal{N}(0, \mathcal{I})$ in the figure is inconsistent with the notations appearing in the main paper.
3. In Table 2, the best results should be further highlighted. The current version is not intuitive.
4. The caption of Table 4 is quite confusing. The authors fail to clarify that the datasets are heterogeneous graphs in the caption.
Other Comments Or Suggestions: See above.
Questions For Authors: 1. Why do the authors assume that the topological noise is a Bernoulli distribution?
2. I cannot completely understand the meaning of Figure 3. How are the nodes visualized? t-SNE? Why does it only contain dozens of nodes? As we all know, there are thousands of nodes in a graph. The authors should clarify this in the rebuttal and the main paper.
3. What’s the additional computational and space complexity? Will it lead to a significant burden on both computation and memory?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Response to Reviewer 65ie**
We greatly thank you for the detailed and valuable comments. Please find our responses to the comments as follows:
>**W1:** The introduction of positive-incentive noise and the discussions with the existing papers of noise are not enough, especially in the main paper.
**Reply:** Thank you for your valuable suggestion. Due to space limitations, we only provide a simple and brief discussion in the main paper. We will consider expanding this discussion in the supplementary material or refining the main text to provide a clearer comparison.
>**W2:** Figure 1 can be further improved. It is hard to distinguish how to simultaneously generate augmentations on both graph topology and node features. Meanwhile, $\mathcal{N}(0,I)$ in the figure is inconsistent with the notations appearing in the main paper.
**Reply:** Thank you for your valuable feedback. We will refine the figure to enhance clarity. As for the notation inconsistency, we apologize for any confusion. In **Appendix [B]**, we explicitly describe the reparameterization trick used in our method, where the covariance matrix is defined as $\Sigma = \text{diag}(\sigma^2)$, ensuring a diagonal structure. Here, an auxiliary variable $\epsilon \sim \mathcal{N}(0, I)$ is used to draw samples as $\mu + \sigma \odot \epsilon$, aligning with the notation in the figure.
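A minimal plain-Python sketch of that sampling step (names are hypothetical; the paper's generator produces $\mu$ and $\sigma$ with MLPs):

```python
import random

def reparameterize(mu, sigma, rng=random):
    """Draw z ~ N(mu, diag(sigma^2)) as z_i = mu_i + sigma_i * eps_i with
    eps_i ~ N(0, 1), so the sample stays differentiable in mu and sigma."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]
```

With $\sigma = 0$ the sample collapses to the mean, and all randomness is isolated in the auxiliary $\mathcal{N}(0, I)$ variable shown in the figure.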
>**W3&4:** In Table 2, the best results should be further highlighted. The current version is not intuitive. The caption of Table 4 is quite confusing. The authors fail to clarify that the datasets are heterogeneous graphs in the caption.
**Reply:** Thank you for your helpful suggestions. We will update Table 2 to better highlight the best results for improved clarity and we will revise the caption of Table 4 to clearly specify that the datasets are heterogeneous graphs.
>**Q1:** Why do the authors assume that the topological noise is a Bernoulli distribution?
**Reply:** Thank you for your question. We assume that the **topological noise follows a Bernoulli distribution** because the simplest way to model edge perturbation is to learn whether each edge should be kept or removed, which naturally aligns with a **0/1 binary decision process**. This makes the Bernoulli distribution a **straightforward and effective choice** for modeling edge retention or deletion in graph structures.
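A minimal sketch of the relaxation this enables (a binary-concrete / Gumbel-Sigmoid draw with hypothetical names, not the paper's exact generator): perturb the edge logit with Logistic noise and squash it with a temperature, so the 0/1 keep/drop decision becomes differentiable:

```python
import math
import random

def stable_sigmoid(x):
    # Overflow-safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def gumbel_sigmoid(logit, tau=0.5, rng=random):
    """Relaxed Bernoulli(sigmoid(logit)) sample: as tau -> 0 the output
    approaches a hard 0/1 edge-retention decision."""
    u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # avoid log(0)
    noise = math.log(u) - math.log1p(-u)  # Logistic(0, 1) sample
    return stable_sigmoid((logit + noise) / tau)
```

Lowering `tau` sharpens the samples toward hard edge masks while keeping gradients with respect to the logit.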
>**Q2:** I cannot completely understand the meaning of Figure 3. How are the nodes visualized? t-SNE? Why does it only contain dozens of nodes? As we all know, there are thousands of nodes in a graph. The authors should clarify this in the rebuttal and the main paper.
**Reply:** Sorry for the lack of clarity in Figure 3. To improve visualization, the nodes shown in the figure are **a subset selected from thousands of nodes** for a case study. This selection helps make the visualization more interpretable. The node distribution in the figure is based on their **connectivity relationships**, rather than a dimensionality reduction method like t-SNE. We will clarify this in the main paper. Thank you for your valuable feedback.
>**Q3:** What’s the additional computational and space complexity? Will it lead to a significant burden on both computation and memory?
**Reply:** Thank you for your insightful question. The **computational and space complexity** of noise learning depends on the design of the noise network. In the two noise generation modules, since each edge and each node feature is processed by an MLP, the computational complexity of the noise modules is $\mathcal{O}(|\mathcal{E}| \cdot d) + \mathcal{O}(N \cdot d)$, where $N$ is the number of nodes, $|\mathcal{E}|$ is the number of edges in the graph, and $d$ is the dimension of the node features. To minimize the burden, we have carefully **simplified the network structure**, for example by reducing the number of hidden layers. The memory overhead, as shown in Figure 2, is not significant. While our method introduces some additional computational cost, it provides clear performance improvements, making the trade-off worthwhile. The detailed table can be found in the response to **Reviewer J7B2**. We appreciate your feedback and will clarify this further in the paper.
Claims And Evidence: This work asserts that its method addresses the stability issues associated with traditional methods. According to experimental findings, the proposed method excels in various tasks, including node classification and graph classification, demonstrating its efficacy. Additionally, ablation studies provide insight into the individual contributions of each module.
Methods And Evaluation Criteria: This paper introduces a novel graph contrastive learning data augmentation technique along with an innovative framework aimed at resolving issues with existing approaches. By employing a learnable augmentation strategy, the framework enhances the model's capabilities while maintaining the integrity of graph structure information. To ensure fairness and comparability in experimental results, the evaluation criteria adhere to traditional GCL methods, utilizing standard contrastive learning loss functions and widely accepted evaluation metrics.
Theoretical Claims: In Section 3 of this paper, the key theoretical assertions are meticulously validated through rigorous mathematical proofs. The authors clearly articulate their underlying assumptions, derive conclusions logically, and maintain coherence in the entire derivation sequence.
Experimental Designs Or Analyses: The experimental design of this paper is well-conceived, encompassing various benchmark datasets to fortify the reliability of the experimental outcomes. It employs widely-acknowledged baseline methods for comparative analysis. Furthermore, ablation studies are included to dissect the individual impacts of different modules.
Supplementary Material: Yes. These contents supplement some of the derivations and other hyperparameter analysis experiments to ensure the stability and applicability of the method.
Relation To Broader Scientific Literature: The paper proposes method about graph contrastive learning. Prior studies, such as DGI and GraphCL have demonstrated the effectiveness of contrastive objectives in learning graph representations, mainly using augmentations such as node/edge dropping. Newer methods such as GCA have introduced adaptable augmentation strategies. This paper extends these ideas by incorporating learnable noise into the contrastive framework, thereby enhancing representation robustness while maintaining theoretical consistency.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1) The paper is clearly expressed, allowing readers to easily understand the research content. In addition, the experimental section provides sufficient comparative analysis. The paper conducts experiments on multiple datasets and compares the performance of different methods; such an experimental design helps verify the robustness and applicability of the method.
2) The paper also provides a detailed analysis in the ablation experiment section to further verify the effectiveness of the method. It analyzes the impact of different components in the method on the final performance through detailed ablation experiments. This can help understand which parts contribute the most to the final result.
3) The authors visualize the changes in edge weights to make the abstract “noise” expression more intuitive. This helps us understand how the model adjusts the graph structure and optimizes information propagation, improving the interpretability of the method.
Weaknesses:
1) The depth of the discussion of the experimental results can be further improved, for example the explanation of certain experimental phenomena and the possible differences between datasets. For instance, in Table 2, the learnable methods seem not to perform as well as on the other two datasets.
2) The mathematical derivation in the method section can be more detailed to enhance readability and understanding. For example, the random variable alpha is a bit confusing. Why is this variable necessary? And why do you assume the probability distribution showed in equation 4? It will be very helpful if they are explained clearly.
Other Comments Or Suggestions: See the weaknesses.
Questions For Authors: 1) How is your method different from AD-GCL[1]? AD-GCL uses an adversarial method to enhance contrastive learning of graphs. It also uses a learnable method to drop edges, which is quite similar to your method. Please clearly explain the key differences in the design of the two methods.
2) There are two main components proposed by PiNGDA, however the visualization part only shows the edge noise, how does the node attribute noise look like? The authors should provide more detailed explanation on the part.
[1]Suresh, S., Li, P., Hao, C., and Neville, J. Adversarial graph augmentation to improve graph contrastive learning, 2021
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Response to Reviewer PRCX**
We greatly thank you for the detailed and valuable comments. Please find our responses to the comments as follows:
>**W1:** The depth of the discussion of the experimental results can be further improved, such as the explanation of certain experimental phenomena and the possible differences between different datasets. For example, in the Table 2, the learnable methods seem not to perform as well as the other two datasets.
**Reply:** Regarding the observation that learnable augmentation performs worse than random augmentation on the WikiCS dataset, we believe this may be due to the following reasons. Compared to other datasets, WikiCS has a relatively **higher average number of edges**. The learnable augmentation method may not effectively enhance model performance when modifying the edge structure. Moreover, the **node features** in WikiCS have relatively **low dimensionality**, which indicates that edges may be more important than features in WikiCS. As a result, modifying features without adjusting edges might lead to suboptimal performance. However, if the learnable edge augmentation is combined with learnable feature augmentation, it may yield better results, as can be seen in the last column of **Table 2**. We will include a more detailed analysis in the paper to further investigate the differences across datasets. Thank you again for your insightful feedback!
>**W2:** The mathematical derivation in the method section can be more detailed to enhance readability and understanding. For example, the random variable alpha is a bit confusing. Why is this variable necessary? And why do you assume the probability distribution showed in equation 4? It will be very helpful if they are explained clearly.
**Reply:** Thank you for your comments. In our paper, the random variable $\alpha$ is introduced to indirectly **measure the difficulty of the task**, closely tied to the concept of **task entropy**. We adjust the variance of the auxiliary Gaussian distribution, which directly influences the calculation of task entropy.
>**Q1:** How is your method different from AD-GCL[1]? AD-GCL uses an adversarial method to enhance contrastive learning of graphs. It also uses a learnable method to drop edges which is quite similar as your method. Please clearly explain the key differences in the design of the two methods.
**Reply:** Our method differs from AD-GCL primarily in its augmentation strategy and focus. In terms of motivation, AD-GCL applies **adversarial graph augmentation** that tries to maximize the loss so as to minimize the information retained from the original data, whereas our approach lets the model learn an augmentation that simplifies representation learning. In terms of augmentation, AD-GCL mainly targets the **graph structure**, whereas our method considers **both graph structure and node attributes**, leading to a more comprehensive augmentation. This advantage is reflected in our superior results in node classification, as shown in **Table 1**.
>**Q2:** There are two main components proposed by PiNGDA, however the visualization part only shows the edge noise, how does the node attribute noise look like? The authors should provide more detailed explanation on the part.
**Reply:** Thank you for your insightful suggestion. From a contrastive learning perspective, methods like SimCLR and BYOL commonly use **Gaussian noise** as an augmentation, making it a natural choice for adaptation to **non-vision data (tabular data)**. This is also an advantage of our method which serves as a **unified framework** for both **graph** and **tabular node features**. Regarding visualization, node attributes are typically **high-dimensional**, making them difficult to represent intuitively. Even if converted into a matrix or heatmap, unlike in vision tasks, there is **no inherent spatial relationship between adjacent values**, which could lead to misleading interpretations. We appreciate your valuable feedback and will add further explanations to clarify this point. | null | null |
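As a minimal illustration of the SimCLR/BYOL-style Gaussian perturbation mentioned above (our own sketch; PiNGDA's noise is learned rather than fixed, so this only conveys the baseline idea):

```python
import random

def gaussian_feature_noise(features, sigma=0.1, seed=0):
    """Add i.i.d. Gaussian noise to each node-feature entry -- a fixed
    SimCLR/BYOL-style perturbation for non-vision (tabular) node features.
    Illustrative only; a learnable noise module would predict sigma (or the
    noise itself) per node instead of using one global value."""
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, sigma) for x in row] for row in features]

X = [[1.0, 0.0, 0.5], [0.2, 0.8, 0.1]]  # 2 nodes, 3-dimensional features
X_aug = gaussian_feature_noise(X, sigma=0.1)
```

The two augmented "views" for contrastive learning would be obtained by calling this with different seeds.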
How to Synthesize Text Data without Model Collapse? | Accept (poster) | Summary: This paper introduces a novel approach to generating semi-synthetic text data to address the issue of model collapse when trained with synthetic data. The method is supported by a solid theoretical framework under a simplified linear model setting. Extensive experiments validate the effectiveness of the approach, showing improvements in the stability of the training.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I didn't check the proofs closely
Experimental Designs Or Analyses: I checked the experiments- they look sound to me.
Supplementary Material: No I did not.
Relation To Broader Scientific Literature: According to my knowledge, the key contribution, token editing method, is novel.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The proposed token editing method to avoid model collapse is interesting. It is simple and useful. The experiments demonstrate the effectiveness of the method.
However, the proposed method generates semi-synthetic data rather than pure synthetic data. In this sense, it does not address model collapse when training with pure synthetic data, which is the more important problem, since the motivation for using synthetic data is the situation where we have run out of human data. This semi-synthetic data still requires human data and is more of a data augmentation method than a data synthesis method.
Other Comments Or Suggestions: The title is "How to Synthesize Text Data without Model Collapse?", but the solution authors provided is to use semi-synthetic data instead, which seems deviated from what the title suggests.
Questions For Authors: If the issue of synthetic data is the coverage and lack of long tails, could we generate tons of data using LLM and only preserve those long tail ones? Will this method be better than token editing because token editing still require human data?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for your positive feedback and insightful comments. Below, we give detailed responses to your questions.
> [Q1] : The solution authors provided is to use semi-synthetic data instead, which seems deviated from what the title suggests.
We will revise the wording and provide further clarification to avoid misunderstandings. An example is provided below:
Based on prior works, model collapse is related to multiple factors, including self-generated processes, data quality, and others. Our proposed method is inspired by the statistical analysis of synthetic data in Sec. 3. Specifically, we focus on the model collapse phenomenon caused by data quality, so we try to improve data quality and thereby indirectly prevent model collapse. In other words, rather than addressing model collapse directly, we prevent it indirectly by improving data quality.
These statements will be included in the Introduction and Related work sections to better clarify our method.
> [Q2] : If the issue of synthetic data is the coverage and lack of long tails, could we generate tons of data using LLM and only preserve those long tail ones? Will this method be better than token editing because token editing still require human data?
` 1. Could we generate tons of data using LLM and only preserve those long tail ones? `
For now, generating long-tail samples is difficult for language models. The reason lies in the sampling strategy of LLMs [1]. Current LLMs adopt top-p, top-k, or other sampling strategies for better performance. However, these strategies produce a truncated output distribution. As data synthesis is scaled up, this drawback eventually causes a scaling-law cut-off on synthetic data. Human corpus data, on the other hand, follows a Zipf distribution, $p_i \propto i^{-\beta}, \quad i = 1, 2, \dots$ [2]. The truncated output distribution makes it nearly impossible for LLMs to sample long-tail examples. In other words, it is currently difficult to elicit long-tail samples from LLMs that are as diverse as human data.
On the other hand, if we force the language model to generate long-tail samples, these may contain both noisy and high-information samples, which are like two sides of a coin, both distributed in the long tail of the data [3]. This necessitates further filtering of the high-information samples. Unfortunately, such samples are challenging to automatically identify in practice and may require extensive human annotation [4].
[1] Dohmatob E, Feng Y, Yang P, et al. A tale of tails: Model collapse as a change of scaling laws[J]. arXiv preprint arXiv:2402.07043, 2024.
[2] Zipf G K. The psycho-biology of language: An introduction to dynamic philology[M]. Routledge, 2013.
[3] Swayamdipta S, Schwartz R, Lourie N, et al. Dataset cartography: Mapping and diagnosing datasets with training dynamics[J]. arXiv preprint arXiv:2009.10795, 2020.
[4] Lin Z, Gou Z, Gong Y, et al. Rho-1: Not all tokens are what you need[J]. arXiv preprint arXiv:2404.07965, 2024.
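The cut-off effect described above can be illustrated with a small sketch (our own illustration, not code from the cited works): a Zipf-distributed vocabulary loses all of its tail mass under top-k truncation.

```python
# Illustrative sketch: a Zipf-distributed vocabulary and the effect of
# top-k truncation on its long tail (parameters are assumptions).

def zipf_pmf(vocab_size, beta=1.1):
    """Zipf weights p_i proportional to i^(-beta), normalized to a pmf."""
    weights = [i ** (-beta) for i in range(1, vocab_size + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def top_k_truncate(pmf, k):
    """Keep only the k most probable tokens and renormalize -- the kind of
    cut-off that top-k sampling imposes on an LLM's output distribution."""
    kept = pmf[:k]  # pmf is already sorted by rank
    total = sum(kept)
    return [w / total for w in kept] + [0.0] * (len(pmf) - k)

pmf = zipf_pmf(10_000)
trunc = top_k_truncate(pmf, 50)
print(f"tail mass beyond rank 50: {sum(pmf[50:]):.3f} -> {sum(trunc[50:]):.3f}")
```

Even with a large exponent, a substantial fraction of the probability mass sits beyond the top 50 ranks, and truncation drives it to exactly zero, which is the long-tail loss discussed above.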
` 2. Will this method be better than token editing because token editing still require human data?`
Yes, we agree with you on this idea. If we had sufficiently powerful LLMs, we could generate a large amount of long-tail data and quickly filter out the irrelevant ones, which would significantly boost the performance of current LLMs. However, these conditions are difficult to meet at present, and much work remains to be done.
---
Rebuttal Comment 1.1:
Comment: Thanks! I will keep my score. I suggest the authors re-consider the title in future versions
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your quick and constructive feedback! Based on your suggestion, we have revised the title. Please consider the following updated title:
- **To Edit and Not to Synthesize: Combating Model Collapse with Semi-Synthetic Data**
We remain open to making further revisions based on your valuable feedback. Looking forward to your continued comments. | Summary: The paper is twofold: The first part of the paper focuses on the effects of mixing real and synthetic data and what the authors call non-iterative model collapse. The second part of the paper proposes ToEdit, a method to adjust synthetic text data by resampling those tokens that have high probability to be generated. They argue that this method can prevent the negative effects of synthetic text data.
### update after rebuttal ###
After reading the other reviews and corresponding rebuttals, I think the authors addressed most of my (and the other reviewers) concerns. Therefore, I think this paper can be accepted and I will raise my score. I still suggest that the authors spend some more space on explaining their method to make it easier for the reader.
Claims And Evidence: The claims of the paper are supported by sound experiments and theoretical analysis.
Methods And Evaluation Criteria: The methods to validate their ToEdit approach seem fitting and convincing.
Theoretical Claims: The authors provide theoretical evidence for their ToEdit method. However, I did not check the extend proof in the Appendix for correctness.
Experimental Designs Or Analyses: The experimental design seems fitting. Nevertheless, I have some further questions (see below).
Supplementary Material: I did read through most of the Appendix (except the extended proof) and it answered a lot of questions I had when reading the main text (especially section G). I would suggest that the authors at least mention those sections at the appropriate time in the main text to make it obvious for the reader where to find those answers.
Relation To Broader Scientific Literature: The first part about non-iterative model collapse is nothing else than a single iteration of model collapse like it happens in the real world. The authors claim that non-iterative model collapse is different as it is not data generated by the same model as in the related work. I would argue that this is just an experimental choice for tractability in other papers.
The main contribution to literature is the ToEdit approach to help mitigate model collapse for textual data.
Essential References Not Discussed: There are several other studies on model collapse both theoretical and empirical. I suggest that the authors extend their related work section to give proper credit.
For example, but not limited to:
Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A. I., Babaei, H., LeJeune, D., ... & Baraniuk, R. Self-Consuming Generative Models Go MAD. In The Twelfth International Conference on Learning Representations. (2024)
Bertrand, Q., Bose, J., Duplessis, A., Jiralerspong, M., & Gidel, G. On the Stability of Iterative Retraining of Generative Models on their own Data. In The Twelfth International Conference on Learning Representations. (2024)
Briesch, M., Sobania, D., & Rothlauf, F. (2023). Large language models suffer from their own output: An analysis of the self-consuming training loop. arXiv preprint arXiv:2311.16822.
Martínez, G., Watson, L., Reviriego, P., Hernández, J. A., Juarez, M., & Sarkar, R. (2023, August). Towards understanding the interplay of generative artificial intelligence and the internet. In International Workshop on Epistemic Uncertainty in Artificial Intelligence (pp. 59-73). Cham: Springer Nature Switzerland.
Other Strengths And Weaknesses: The ToEdit method is not trivial to understand on the first read of the paper. Maybe the authors could use some more space to better illustrate their method (as it is the core contribution) so the reader can more easily understand the edit operations (a small example could help here).
Other Comments Or Suggestions: none
Questions For Authors: - Q1: Does ToEdit change the properties of synthetic data in a desired way as specified in the first part of the paper? Could the authors provide similar statistics as in section 3 for the edited data?
- Q2: Does ToEdit help in an iterative process as well? If I understand it correctly the theoretical proof says so but there is no experimental evidence for that in the paper?
- Q3: Can ToEdit help with already strongly collapsed data or is a minimum quality of the data necessary?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are grateful for your enthusiastic feedback. Below, we give detailed responses to your questions.
> [Q1] : References Not Discussed
We will include a discussion in the revised version as follows:
[1] demonstrates that without enough fresh real images, future generative models will gradually decline. [2] develops a rigorous framework to demonstrate the importance of real data in maintaining the stability of iterative training. [3] illustrates that real data in the iterative training process can slow the decline of LLMs, but cannot fully prevent it. [4] shows that the quality and diversity of generated images degrade over time.
[1] Self-Consuming Generative Models Go MAD.
[2] On the Stability of Iterative Retraining of Generative Models on their own Data.
[3] Large language models suffer from their own output: An analysis of the self-consuming training loop.
[4] Towards Understanding the Interplay of Generative Artificial Intelligence and the Internet.
> [Q2] : Maybe a small example to illustrate their method.
We add an input-output example below. The complete workflow is also provided in Lines.
Case 1: code reasoning sample in Magicoder-Evol-Instruct-110K:
| **Before** (source) | **After** (edited) | Change |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ---------------------- |
| Construct a function using PHP language that applies lexical analysis on a provided text string to analyze the individual, non-repeated words elements present. | Construct a function using PHP language that applies lexical analysis on a provided text string to quantify unique words. | "analyze" → "quantify" |
| Test with provided string, `$str = 'Greetings, Planet Earth!'`. | Test with provided string, `$str = 'Greetings, Planet Earth!'`. | No changes. |
| Implements `wordCount` to remove punctuation, convert text to lowercase, split into words, and count unique words. | Implements `wordCount` to remove punctuation, convert text to lowercase, split into words, and calculate unique words. | "count" → "calculate" |
| Returns `{'greetings': 1, 'planet': 1, 'earth': 1}`. | Returns `{'greetings': 1, 'planet': 1, 'earth': 1}`. | No changes. |
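The edits above replace individual high-probability tokens while leaving the rest of the human text untouched. As a toy sketch of that token-level editing rule (our own hypothetical reconstruction, not the authors' implementation; the vocabulary, probabilities, and threshold are illustrative):

```python
import random

def toedit_sketch(tokens, probs, p=0.99, vocab=("alpha", "beta", "gamma"), seed=0):
    """Toy token-level editing: resample tokens whose model probability
    exceeds the threshold p. `probs[i]` is the model's probability of
    tokens[i]; in the real method these would come from a pretrained LM."""
    rng = random.Random(seed)
    edited = []
    for tok, prob in zip(tokens, probs):
        if prob > p:
            # "easy" token: replace it with a resampled alternative
            edited.append(rng.choice([w for w in vocab if w != tok]))
        else:
            # low-probability (long-tail) token: keep the human text as-is
            edited.append(tok)
    return edited

tokens = ["the", "cat", "sat", "quietly"]
probs = [0.999, 0.5, 0.995, 0.01]
edited = toedit_sketch(tokens, probs, p=0.99)
```

Here only the two tokens with probability above 0.99 are resampled, so the long-tail tokens of the human text survive the edit.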
> [Q3] : Does ToEdit change properties of data in a desired way ?
Yes, we present the statistical analysis of edited data below. **As shown in Supplemented Tables 1 and 2, ToEdit meets our expectations by preserving the original long-tail distribution.** Additionally, Table 14 (Line 1100) illustrates that tokens above the threshold gradually decrease as the iterations progress.
1. KL Divergence Between Distributions (gen_0, gen_1, gen_2)
| Distribution Comparison | gen_0 | gen_1 | gen_2 |
| ----------------------- | ------- | ------- | ------- |
| **gen_0** | 0 | 5.56e-6 | 1.34e-5 |
| **gen_1** | 5.56e-6 | 0 | 9.61e-6 |
| **gen_2** | 1.34e-5 | 9.61e-6 | 0 |
2. Sample Distribution Across PPL Intervals (gen_0, gen_1, gen_2)
| PPL Interval | gen_0 | gen_1 | gen_2 |
| :------------- | ----: | ----: | ----: |
| 6.04~25.43 | 49087 | 49101 | 49132 |
| 25.43~44.83 | 42548 | 42532 | 42510 |
| 44.83~64.23 | 5147 | 5149 | 5148 |
| 64.23~83.62 | 1993 | 1993 | 1984 |
| 83.62~103.02 | 762 | 763 | 763 |
| 103.02~122.42 | 339 | 338 | 340 |
| 122.42~141.81 | 98 | 98 | 95 |
| 141.81~161.21 | 18 | 18 | 19 |
| 161.21~180.6 | 4 | 4 | 4 |
| 180.6~200.0 | 4 | 4 | 4 |
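For concreteness, a minimal sketch of how a KL divergence between such binned distributions could be computed from raw bin counts (a hypothetical reconstruction; the authors' exact procedure may differ, so the values need not match the table above):

```python
import math

def kl_from_counts(counts_p, counts_q, eps=1e-9):
    """KL(P || Q) between two binned distributions given raw bin counts,
    with a small epsilon to guard against empty bins."""
    total_p, total_q = sum(counts_p), sum(counts_q)
    kl = 0.0
    for cp, cq in zip(counts_p, counts_q):
        p = cp / total_p + eps
        q = cq / total_q + eps
        kl += p * math.log(p / q)
    return kl

# Bin counts taken from the PPL-interval table above
gen_0 = [49087, 42548, 5147, 1993, 762, 339, 98, 18, 4, 4]
gen_1 = [49101, 42532, 5149, 1993, 763, 338, 98, 18, 4, 4]
print(f"KL(gen_0 || gen_1) = {kl_from_counts(gen_0, gen_1):.2e}")
```

The near-zero result reflects the same conclusion as the tables: the edited generations barely shift the overall distribution.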
> [Q4] : Does ToEdit help in an iterative process?
Yes, the following supplementary table shows the effectiveness of ToEdit in an iterative process, with slight performance improvements across generations.
3. Performance in an iterative process on Instruction tuning data.
| | PIQA | BoolQ | HS | SIQA | WG | Avg |
| ----- | ----- | ----- | ----- | ----- | ----- | --------- |
| Gen 0 | 79.87 | 81.28 | 59.72 | 49.69 | 74.51 | **69.01** |
| Gen 1 | 80.25 | 81.16 | 59.74 | 50.56 | 74.59 | **69.26** |
| Gen 2 | 80.14 | 82.69 | 59.82 | 50.51 | 73.80 | **69.39** |
> [Q5] : Can ToEdit help with already strongly collapsed data or is a minimum quality of the data necessary?
This is a very interesting and insightful question. The ToEdit algorithm was initially designed to preserve the long-tail distribution during the data generation process, thereby avoiding model collapse. For already collapsed data, the variance is typically very small, and enhancing diversity is crucial. We can also adjust the threshold $p$ to introduce more randomness in data. Through this operation, we can inject randomness into the collapsed data. However, this is a theoretical scenario, and as we know, data situations are highly complex. In practice, there will be many more challenges to address. | Summary: This paper investigates the issue of model collapse. Model collapse happens when training models on synthetic data cause performance degradations or, in some scenarios, complete model breakdown. The authors discuss the negative correlation between the proportion of synthetic data and model performance, even without iterative training.
To understand this decline, the authors perform statistical analyses revealing that synthetic data suffers from distribution narrowing and over-concentration of n-gram features. As a solution, instead of focusing on synthetic data generation, they propose token-level editing on human-produced data to generate semi-synthetic data. They utilize a pretrained LLM to find the 'easy' data points, then edit them for better final training performance.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I've checked the theoretical results of the main papers and appendix, and they seem correct.
Experimental Designs Or Analyses: Yes, this is explained further in the weakness and question sections.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: The underlying problem, model collapse, is critical since real data is becoming more scarce and models need high-quality synthetic data in their training. Some of the paper's findings regarding the characteristics of synthetic data are especially important in guiding better synthetic data generation methods. However, the proposed method does not do much to solve model collapse itself. Instead, the authors focus on improving and enhancing the quality of real-data training via editing.
Essential References Not Discussed: Papers showing the model collapse phenomena:
[1] Bertrand, Quentin, et al. "On the stability of iterative retraining of generative models on their own data." ICLR 2024.
[2] Ferbach, Damien, et al. "Self-consuming generative models with curated data provably optimize human preferences." NeurIPS 2024.
[3] Kazdan, Joshua, et al. "Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating World." ICML 2024.
Since this paper shows a method to alleviate model collapse, it should also mention some of the known methods that have successfully generated useful synthetic data. [some of them are listed below]
[1] Wang, Yizhong, et al. "Self-instruct: Aligning language models with self-generated instructions." ACL 2023.
[2] Ulmer, Dennis, et al. "Bootstrapping llm-based task-oriented dialogue agents via self-talk." arXiv preprint arXiv:2401.05033 (2024).
[3] Gulcehre, Caglar, et al. "Reinforced self-training (rest) for language modeling." arXiv preprint arXiv:2308.08998 (2023).
[4] Singh, Avi, et al. "Beyond human data: Scaling self-training for problem-solving with language models." TMLR 2024
etc.
Other Strengths And Weaknesses: Strength:
1- The statistical findings about the differences between synthetic and real data are interesting.
2- The problem description and observations are well-written
3- The paper presents the importance of the problem in various settings and experiments.
Weakness:
1- The first finding is already established information, and multiple prior works have observed it before.
2- The proposed method is not a direct solution for model collapse but focuses on improving the quality of real data.
3- Some of the improvements in Tables 2 and 3 are minor. The authors should elaborate further on the results, as they are currently show that the proposed method does not help in several tasks.
4- Some of the experiment settings are missing (for example, the training parameters for pretraining and fine-tuning).
5- The information about details of ToEdit experiments is missing. How many iterations are required to generate the semi-synthetic data? What are the computational costs?
Other Comments Or Suggestions: Please check out the weakness, question, and citation sections.
Questions For Authors: 1- How does the editing mechanism ensure the quality of the semi-synthetic data?
2- How are algorithm meters selected?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate your valuable feedback. In the following, we will address your concerns accordingly.
> [Q1] : References Not Discussed
We will include a discussion on all the provided references as follows:
[1] develops a rigorous framework to demonstrate the importance of real data in maintaining the stability of iterative training. [2] theoretically demonstrates that the impact of data curation can be formalized as an implicit preference optimization mechanism. [3] reveals the detailed training dynamics of model collapse under three different training workflows. Of course, there are also some remarkable studies that successfully used synthetic data. [4] proposes the Self-Instruct data generation framework, enhancing instruction-following capabilities. [5] employs the self-talk method to generate high-quality data. ReST [6] uses a policy model to generate datasets and then employs offline RL to fine-tune LLMs on generated datasets. [7] demonstrates that self-training with binary feedback filtering can reduce reliance on real data.
[1] On the stability of iterative retraining of generative models on their own data.
[2] Self-consuming generative models with curated data provably optimize human preferences.
[3] Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating World.
[4] Self-instruct: Aligning language models with self-generated instructions.
[5] Bootstrapping llm-based task-oriented dialogue agents via self-talk.
[6] Reinforced self-training (ReST) for language modeling.
[7] Beyond human data: Scaling self-training for problem-solving with language models.
> [Q2] : The first finding is already established information.
We agree with you that the potential for synthetic data to cause model collapse has been demonstrated in prior notable works [8]. However, our contribution of statistical experiments provides a more fine-grained analysis of textual data features, specifically examining the underlying reasons behind the failure of synthetic data, e.g., overconcentration of n-grams.
[8] Position: Model Collapse Does Not Mean What You Think
> [Q3] : The authors focus on improving the quality of the real-data training via editing.
Yes, we agree with you on this point. Based on insights from prior works [1], model collapse is associated with multiple factors, including self-generated processes, data quality, and others. Our proposed method is inspired by the statistical findings on synthetic data (as stated in Sec. 3). Therefore, we attempt to prevent model collapse by improving data quality.
Furthermore, we will clarify in the paper that we are not directly addressing model collapse, but rather indirectly preventing it by improving data quality. Additionally, in our experiments, the data we edited is not entirely real data. We also conduct experiments with synthetic data, such as (1) the datasets used in continual pretraining (e.g., Biomed, Finance), and (2) OSS-Instruct-75K and Evol-Instruct-110K, which also contain samples synthesized by ChatGPT.
Please refer to Appendix G, Q7 for more discussion.
[9] Instruction Pre-Training: Language Models are Supervised Multitask Learners (Cheng et al., EMNLP 2024)
> [Q4] : The authors should elaborate further on the results.
As shown in Table 3, our method improves performance across both OLMo-1B and Llama-3-8B on the tasks where it succeeds. In Biomedicine, OLMo-1B's average score improves from 38.83 to 40.89, and Llama-3-8B's from 56.04 to 56.48. Similar improvements are seen in the Finance and Math domains. Additionally, Table 2 shows the effectiveness of our approach in general pre-training, with OLMo-1B's average performance rising from 32.75 to 33.11. In Table 4 (SFT), ToEdit enhances FLAN v2 from 70.18 to 70.65 and boosts task performance on Natural Instructions. However, on more challenging tasks, such as Math, the improvements are limited or negligible. This indicates that data modification alone has a limited impact on harder reasoning tasks.
Please refer to Appendix C for detailed discussion.
> [Q5] :
>
> 1. Some of the experiment settings are missing.
> 2. The information about the details of the ToEdit experiments is missing.
Please refer to:
1. Appendix F (Line 412) for detailed experimental settings, including pre-training and fine-tuning.
2. Section 5.1 (Line 381) for the ToEdit experiment settings and cost.
Additionally, we add iterative experiments on instruction tuning data in Rebuttal to Reviewer Qb5k.
> [Q6] : How does the editing mechanism ensure the quality of the semi-synthetic data?
We conduct preliminary ablation experiments on the threshold $p$ to control the editing process and ensure the quality of the output data. As shown in Table 5, we choose the best parameters for the main experiments.
> [Q7] : How are algorithm metrics selected?
We select our evaluation metrics based on established practices. Please refer to Appendix F.2. | Summary: The issue that is addressed by this paper, using synthetic data during pretraining, is a very important and timely one. Going forward, pretraining will use a higher, and eventually dominant, proportion of synthetic data. The main findings are in 3.2, the three failure modes of Cosmopedia, when evaluated using perplexity as the main metric. These failure modes teach us how to inspect synthetic data to find out what's wrong, and inspire ways to adjust the synthetic data generation pipeline. For this reviewer, the token-level editing method is interesting, but not as important as the identification of the failure modes. Better to address the problem at the source, not after the synthetic data are generated.
Claims And Evidence: I think the problem with "synthetic data" is with a particular synthetic data (Cosmopedia) only. This paper can lead to a false impression that all synthetic data suffer from the same issues.
Methods And Evaluation Criteria: Yes they make sense
Theoretical Claims: I apologize that I did not have time to check the equations. The ideas are sound and practice-able, so I didn't check.
Experimental Designs Or Analyses: They are sound
Supplementary Material: No, I did not
Relation To Broader Scientific Literature: No comment
Essential References Not Discussed: I don't know of any
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: No questions
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your critical feedback and valuable suggestions. Below, we will strive to address your concerns and refine our paper accordingly.
> [Q1] : I think the problem with "synthetic data" is with a particular synthetic data (Cosmopedia) only. This paper can lead to a false impression that all synthetic data suffer from the same issues.
We would like to clarify that our analysis is not meant to imply that all synthetic datasets suffer from the same issues. The analysis and theoretical validation are grounded in previous research on model collapse [1,2,3,4], a potential risk associated with training using synthetic datasets. Specifically, we use Cosmopedia as a representative example to highlight these concerns.
To avoid any potential misunderstanding, we will include the following explanation and clarification in the revision:
1. Cosmopedia is currently the largest open-source synthetic dataset, and it comes with detailed statistical information. We use Cosmopedia as a representative example to illustrate failure modes when using synthetic data. We agree that the issues identified may vary depending on the specific dataset or generation pipeline used. However, the results obtained on Cosmopedia should still be reasonably representative.
2. While our analysis primarily focuses on Cosmopedia, we do not intend to imply that all synthetic datasets suffer from identical problems. Rather, we aim to highlight general principles and cautionary lessons about synthetic data generation and evaluation.
3. The tremendous success of synthetic data does not conflict with its potential issues, such as model collapse. We provide several examples of the success of synthetic data in the Introduction and Related Work. Lines 33-36 and 799-812 list numerous high-quality synthetic datasets, such as UltraChat, UltraMedical, and so on. Furthermore, the famous Phi-1/2 series models are trained largely on synthetic data.
[1] AI models collapse when trained on recursively generated data. *Nature* 631, 755–759 (2024).
[2] A tale of tails: model collapse as a change of scaling laws. ICML'24
[3] Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating, arXiv, 2024
[4] Position: Model Collapse Does Not Mean What You Think. 2025. | null | null | null | null | null | null |
Safely Learning Optimal Auctions: A Testable Learning Framework for Mechanism Design | Accept (poster) | Summary: The authors study a variant of Rubinfeld and Vasilyan's *testable learning* in mechanism design, and give a concrete tester-learner for basic auction settings.
Many classical results in mechanism design, e.g. Myerson's Optimal Mechanism for auctions, require the underlying distribution over valuations be *regular*, a specific technical condition requiring $v-\frac{1-F(v)}{f(v)}$ be non-decreasing. The authors attempt to develop a "tester-learner" for this problem, meaning that given a bounded number of samples from the distribution, one should either
1) detect the distribution is irregular and output **FAIL**, or
2) output a (near)-optimal mechanism
allowing the user to safely distinguish whether it is possible to use the algorithm on certain data from an unknown distribution. Unfortunately, for revenue maximization, the authors show the above guarantee for testing regularity is impossible. Namely, they observe that it is easy to construct statistically indistinguishable distributions whose optimal revenues are far apart.
Motivated by this impossibility, the authors introduce a relaxed version of testable learning for mechanism design which, given an unknown distribution D', only requires the learner to compete with the best regular distribution near D' in Kolmogorov-Smirnov distance. They give a tester-learner in this context for revenue maximization over independent bidders as well as for the Bulow-Klemperer Theorem and anonymous price auctions (i.e., that adding a bidder can recover the optimal revenue, and that a fixed reserve price across bidders can achieve revenue within a constant factor of optimal).
At a technical level, the authors' algorithm works by constructing a regular distribution from the convex envelope of a quantile-shifted empirical estimate. They reject if this distribution is far from the original, and otherwise argue that, since it is stochastically dominated by the original distribution, applying Myerson's optimal mechanism to this shifted distribution gives the desired guarantee.
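To make the envelope idea concrete, here is a hypothetical toy sketch (not the authors' actual algorithm, and omitting their quantile shift): it relies on the classical fact that a single-bidder distribution is regular exactly when its revenue curve in quantile space is concave, so we compare the empirical revenue curve against its least concave majorant and reject when the relative gap exceeds a tolerance. All function names and the normalization are illustrative assumptions.

```python
import numpy as np

def revenue_curve(values):
    """Empirical revenue curve in quantile space: at quantile q = (i+1)/n,
    posting the i-th largest sample as a price sells with probability ~q."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    q = np.arange(1, len(v) + 1) / len(v)
    return q, q * v

def concave_envelope(q, r):
    """Least concave majorant of the points (0,0), (q_i, r_i) via an upper-hull scan."""
    hull = [(0.0, 0.0)]  # the revenue curve starts at the origin
    for x3, y3 in zip(q, r):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point if slopes fail to decrease (it lies on/below the chord)
            if (y2 - y1) * (x3 - x2) <= (y3 - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x3, y3))
    hx, hy = zip(*hull)
    return np.interp(q, hx, hy)

def regularity_test(values, alpha):
    """Accept iff the empirical revenue curve is within alpha (relative sup gap)
    of its concave envelope; otherwise the sample looks far from regular."""
    q, r = revenue_curve(values)
    gap = np.max(concave_envelope(q, r) - r) / max(r.max(), 1e-12)
    return gap <= alpha, float(gap)
```

On a sample whose revenue curve is already concave the gap is essentially zero and the test accepts, while on a strongly bimodal sample (e.g., half the values 1 and half 2) the envelope sits well above the curve and the test rejects.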
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I checked the lower bound proofs which seem correct, and don't see any issues with the other claims.
Experimental Designs Or Analyses: N/A
Supplementary Material: I skimmed the supplementary materials which seem to give reasonable formalizations of the sketched arguments in the main body.
Relation To Broader Scientific Literature: The key contribution of this paper is to give the first variants of several classic results in mechanism design with non-trivial guarantees for irregular distributions. Prior results rely strongly on regularity, while the authors show that at least for irregular distributions with some nearby regular distribution, it is either possible to detect irregularity or to output a mechanism as good as on the nearby distribution.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Removing the need for strong distributional assumptions is always a welcome direction in any type of learning, mechanism design included, and the tester-learner framework is an interesting approach to this problem. The authors give extensions in this sense to several classical theorems, potentially broadening their use case in theory and practice.
The main weakness of the work is that the relaxed version of completeness seems too weak to accomplish what the authors set out to do. Since the optimality guarantee is with respect to nearby regular distributions (and not the original distribution itself) and there are irregular distributions with *no* nearby such distributions, this means there are underlying distributions on which the full "tester-learner" fails, i.e. it may output yes and an arbitrarily bad mechanism. The whole point of Rubinfeld-Vasilyan's idea is to avoid this type of scenario, and ensure one can always trust the output of the tester-learner pair. Presumably there may also be settings where the optimum value of nearby regular distributions is simply much lower, which is better than the prior issue, but still seems somewhat questionable as a "success case" of the learner. I would suggest the authors spend a bit more space justifying why their relaxation and tester-learners are useful in this context (beyond just the fact that one cannot use the standard completeness guarantee, which is not, in my opinion, in itself a sufficient motivation for the proposed relaxed version).
EDIT: The authors have partially addressed my concerns regarding the completeness of their tester-learner, though the issue of nearby regular distributions with significantly worse value remains. I have increased my score accordingly.
Other Comments Or Suggestions: Def 13: It reads like D is known now since it is not drawn from some class as in the previous def, but presumably this is not what is meant?
Typos: “It’s goal”, “minium”
Questions For Authors: See above: can you provide some clarification why the tester-learner framework remains useful in a setting where there are still distributions over which one cannot trust its output? This seems antithetical to the main point of tester-learners as introduced in Rubinfeld-Vasilyan (and indeed as the authors present as a way to *know* when one can safely apply a mechanism!)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for carefully reading our paper. We appreciate the constructive feedback and comments.
We would like to address your concern that our relaxed version of the completeness guarantee is too weak for what we wish to show. We agree that the completeness condition as mistakenly written in Definition 14 is too weak. We should have instead written: “Suppose the tester T is presented with $h(\alpha,\delta,n)$ i.i.d. samples from a distribution $D$. $T$ should output yes with probability at least $1-\delta$ if and only if there exists a regular distribution that is $\alpha$-close to $D$.” This stronger completeness guarantee avoids the scenario where the tester outputs yes on a distribution that has no close regular distribution, leading to a bad learning outcome. Importantly, in the proof of Theorem 6, we show that if there is a close regular distribution to the input distribution then the tester will find it and output yes.
The other direction is also simple to show. Namely, the Tester operates by explicitly constructing a regular distribution and testing whether this distribution is close to the input distribution. If the regular distribution that the Tester constructs is far away from the input distribution, then we can be sure that there is no close regular distribution. More specifically, in the Testing algorithm we first construct a (likely irregular) surrogate distribution $\hat{E}$ that is close to the input distribution and whose link function upper-bounds the link function of any close regular distribution. We then take the convex envelope of this link function, which yields a new link function that is the greatest convex lower bound. Since convex link functions (Lemma 1) correspond to regular distributions, this procedure has located a regular distribution. If this regular distribution is not close to the input distribution, then by properties of the convex envelope and link function, we can be sure that there does not exist a close regular distribution. This update to the completeness condition is straightforward and does not affect any of our theoretical results. We are committed to adding more details to clarify this in the next revision of the manuscript. | Summary:
The paper proposes a framework for testably learning revenue-optimal auctions. In this setting, an auction designer has access to m samples (bids) and aims to design a DSIC and IR auction that maximizes revenue. Unlike prior work, which typically assumes conditions like regularity or MHR without testing their validity, the authors propose a framework to actively test for regularity. The goal is for the test to output YES with high probability when the distribution is regular, and for the learned auction to achieve high revenue (competitive with the optimal revenue of all close regular distributions) when it does so.
While the paper addresses a relevant problem, the technical novelty and significance of the work are unclear, and the presentation requires improvement to meet the standards for ICML.
Detailed comments:
1. The presentation needs improvement to enhance clarity and readability. Several terms are used without proper definition or are introduced too late. Examples include:
- Stochastic dominance in Lemma 1 is mentioned before being defined.
- The term "m-regularized tester" in Theorem 3 is not clearly defined.
- In Definition 9, it is not initially clear that m refers to the number of samples.
- The notation m is confusing, as it initially suggests samples from the product distribution rather than individual distributions.
These issues are individually minor but collectively hinder the readability of the paper.
2. The technical novelty of the work is not well-explained. The authors do not adequately differentiate their contributions and proof techniques from prior work.
- For example, Algorithm 2 appears to be largely similar to Guo et al. (2019), while Algorithm 1 only adds a testing condition.
- Clearly and explicitly stating the novel aspects of the work would strengthen the paper.
- The purpose and significance of Section 6 are unclear and feel somewhat like an afterthought.
Overall, while the paper addresses an interesting problem, it needs better presentation and a clearer articulation of its technical contributions to be suitable for publication.
Claims And Evidence: See above.
Methods And Evaluation Criteria: See above.
Theoretical Claims: See above.
Experimental Designs Or Analyses: See above.
Supplementary Material: See above.
Relation To Broader Scientific Literature: See above.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for carefully reading our paper and providing helpful feedback and comments. We would first like to address your presentation concerns. We are committed to improving the preliminaries by ensuring that the definitions are introduced in the proper order. We will also improve the “m notation” to clarify when the samples are drawn from a product distribution versus an individual distribution. We believe these changes you suggest, among others, can be completed promptly.
We would also like to address the significance of our work in comparison to [1,2].
**Comparison with [1,2].** Both [1,2] only provide results for regular or bounded distributions. We demonstrate that without these assumptions it is necessary to relax the conventional learning benchmark found in the existing literature. Consequently, our learning benchmark is also different than [1,2].
**Technical contribution.** It is indeed the case that many of the techniques that we use are based on proof techniques from [1,2], but (1) this is a coincidence, the problem that we are exploring is important on its own, and it happens that the techniques from [1, 2] are useful, and more importantly, (2) we cannot use the techniques from [1,2] in a black-box way. Instead, we need to carefully combine these techniques using some novel ideas. We believe that the fact that the techniques are based on [1,2] should not subtract value from our paper because the problem that we introduce and solve is well-motivated and important.
Additionally, both our tester and learner differ from [1,2]. In particular, for our tester, we introduce a new conditional check on the difference between the empirical quantiles and the quantiles of a close regular distribution. If this difference is too large, it indicates that there does not exist a regular distribution within the $\alpha$ Kolmogorov-Smirnov ball around our target distribution and thus we cannot expect our learner’s guarantee to hold. Our learner also differs from [1,2] in that the algorithm must further adjust the quantiles of the close regular distribution it finds to ensure that it is stochastically dominated by our target distribution.
Finally, the primary purpose of Section 6 (A Tester for the Bulow-Klemperer Theorem) is to demonstrate that our Tester algorithm has utility outside of just being directly paired with a learning algorithm; the tester can be used to verify when important theorems in auction theory that rely on regularity can be expected to apply to a dataset. In particular, we demonstrate that we can test when the fundamental Bulow-Klemperer theorem can be applied to a dataset. This theorem says that when a distribution is regular and the bidders are i.i.d., there exists a simple method to generate as much revenue as the optimal mechanism. Namely, the auctioneer should just recruit another bidder and run the Vickrey Auction.
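The Bulow-Klemperer statement above can be illustrated with a small Monte Carlo check (an illustration only, not code from the paper): for i.i.d. Exponential(1) values, the distribution is regular and its monopoly price is 1 (the virtual value is $v - 1$), so a second-price auction with one extra bidder and no reserve should earn at least the optimal single-bidder revenue. The variable names and sample sizes here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two i.i.d. Exponential(1) bidders; this distribution is regular and its
# monopoly (Myerson) reserve price is 1, since the virtual value is v - 1.
v1 = rng.exponential(1.0, n)
v2 = rng.exponential(1.0, n)

# Optimal one-bidder mechanism: post the monopoly price 1.
opt_one_bidder = float(np.mean(v1 >= 1.0) * 1.0)   # ~ e^{-1} ≈ 0.368

# Vickrey (second-price) auction with one extra bidder and no reserve:
# revenue is the lower of the two values.
vickrey_two = float(np.mean(np.minimum(v1, v2)))   # ~ 0.5

# Bulow-Klemperer: recruiting one more bidder beats the optimal mechanism.
assert vickrey_two >= opt_one_bidder
```

Here the two-bidder Vickrey revenue (about 0.5) comfortably exceeds the optimal one-bidder revenue (about $e^{-1}$), matching the theorem's prediction for this regular distribution.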
[1] Chenghao Guo, Zhiyi Huang, and Xinzhi Zhang. "Settling the sample complexity of single-parameter revenue maximization." Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (pp. 662-673) (2019).
[2] Wenshuo Guo, Michael Jordan, and Emmanouil Zampetakis. "Robust learning of optimal auctions." Advances in Neural Information Processing Systems 34 (2021). | Summary: This paper considers auctions with possibly regular or near regular distributions, and considers a two step process of a) testing for regularity, and b) designing an approximately optimal auction if the distribution tests as near regular. Regularity is a key distributional assumption in auction theory: it states that the revenue from setting a price for a single agent is convex, so e.g., you could use local search to learn an optimal reserve. And, setting that optimal reserve in a Vickrey auction gives you the optimal auction.
Prior work has considered similar questions, how to test distributions, how many samples are needed to learn almost the optimal auction, how much error can be present in draws from a regular distribution, and bounds for samples needed to learn near optimal auctions in possibly irregular settings (without testing).
This work focuses on empirical draws from a distribution that could be close to regular (but not all the way regular). They first show that direct distribution testing is insufficient: there are irregular distributions where the optimal auction cannot be learned for any fixed number of samples. They put forward a tester, that results in theoretical guarantees that are relative to the distance between the empirical distribution and regular distributions, so as the distance to regularity increases, the guarantees weaken.
They use first-order stochastic dominance to relate the near-regular distribution to all close-by regular distributions. Using this, they show that their approach can be used in three key results: optimal auctions, Bulow and Klemperer's approximation result (add a bidder is better than adding a reserve), and to monopoly pricing.
Claims And Evidence: Yes. The claims of the paper are supported by theoretical proofs and lower bounds.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I reviewed briefly the proof for the Bulow-Klemperer result and Theorem 3 and both appear to be correct.
Experimental Designs Or Analyses: N/A
Supplementary Material: I review some of the proofs in the appendix.
Relation To Broader Scientific Literature: This work builds on prior work in learning approximately optimal auctions and testing distributions. It makes a contribution by showing that testing for regularity and then operating on that is insufficient, and new results need to be leveraged.
Essential References Not Discussed: This work mentions Roughgarden and Schrijvers 2016, but should add a little more discussion and differentiation since that approach does give guarantees for irregular distributions.
Other Strengths And Weaknesses: In general, the paper is very well written and the three results gives it a breadth of applicability and illustrates well that this technique is broadly applicable within bayesian mechanism design.
The only missing piece (which needs to be there) is the use and discussion of the correct benchmark for irregular settings. With this, it is an accept or strong accept.
When the distribution is irregular, the optimal auction is not an optimal reserve by itself, but includes ironing of the virtual values and allocation rule: the lower-value bidders are given a preferred allocation to add competition for the higher-value bidders (Myerson '81). Now, I am almost 100% certain that, when the distribution is nearly regular, the revenue from that ironed auction should be close enough that the theoretical results here should still hold against the optimal auctions for irregular distributions that are $\alpha$-close to the empirical distribution, since the ironing will be very small. Please add this to the results if my intuition is correct, or discuss why nearby irregular auctions cannot be included in the benchmark.
Other Comments Or Suggestions: n/a
Questions For Authors: Do the bounds for the regularized learners also apply to irregular distributions that are \alpha close to D, if they are also \alpha close to a regular distribution?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for reading our paper and appreciating our results. We would first like to address the importance of our work in the context of Roughgarden and Schrijvers (2016) and, more generally, Guo et al. (2019) [2]. Both of these papers provide sample complexity results for learning optimal auctions over bounded, irregular distributions, where the learning benchmark is the irregular distribution’s optimal mechanism. These results do not apply to unbounded, irregular distributions. In contrast, our framework can handle this case; we can provide non-trivial revenue guarantees that hold for unbounded, irregular distributions. We will emphasize this important distinction in the next version of our manuscript. Next, we would like to address your question about whether the bounds for the regularized learners should also apply to nearby irregular distributions. These bounds should not apply to nearby irregular distributions. Even if the ironing region is small, nearby irregular distributions may exhibit radically different optimal revenues.
[1] Tim Roughgarden and Okke Schrijvers. “Ironing in the Dark.” Proceedings of the 2016 ACM Conference on Economics and Computation (pp. 1-18) (2016).
[2] Chenghao Guo, Zhiyi Huang, and Xinzhi Zhang. "Settling the sample complexity of single-parameter revenue maximization." Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (pp. 662-673) (2019). | null | null | null | null | null | null | null | null |
Context is Key: A Benchmark for Forecasting with Essential Textual Information | Accept (poster) | Summary: This paper introduces the "Context is Key" benchmark to evaluate the capability of models in leveraging textual information for time series forecasting. By designing 71 tasks and evaluating them with both human and large language model (LLM) annotators, the study confirms the significant role of contextual information in enhancing prediction accuracy, especially during relevant time periods. The authors propose a novel evaluation metric combining CRPS and twCRPS to more accurately assess model performance. Experimental results show that providing contextual information significantly improves forecasting quality, offering valuable insights for future developments in multimodal forecasting and automated predictive systems.
Claims And Evidence: This paper provides evidence supporting the significant role of contextual information in time series forecasting by designing 71 tasks and evaluating them with both human and large language model (LLM) annotators. It introduces a novel evaluation metric (CRPS and twCRPS) to more accurately measure prediction performance. Experimental results show that providing contextual information improves forecasting quality, demonstrating the effectiveness of this approach. Overall, the main findings of the paper are clear and convincing, highlighting the critical importance of context in forecasting tasks.
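For readers unfamiliar with these scores, here is a minimal sample-based sketch of CRPS, $\mathrm{CRPS}(F, y) \approx \mathbb{E}|X - y| - \tfrac{1}{2}\mathbb{E}|X - X'|$, together with a simple weighted average across timesteps that emphasizes context-relevant steps. This is illustrative only; the benchmark's exact RCRPS definition is given in the paper and differs in its normalization and handling of context, and the function names here are assumptions.

```python
import numpy as np

def crps_ensemble(samples, y):
    """Sample-based CRPS estimate for a scalar target y: E|X - y| - 0.5 E|X - X'|."""
    x = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(x - y))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return float(term1 - term2)

def weighted_crps(sample_paths, targets, weights):
    """Weighted average of per-timestep CRPS over a forecast horizon.

    sample_paths: array of shape (num_samples, horizon);
    weights: emphasis per timestep (e.g., larger on context-relevant steps),
    normalized here to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    per_step = np.array([crps_ensemble(sample_paths[:, t], targets[t])
                         for t in range(len(targets))])
    return float(np.dot(w, per_step))
```

As a sanity check, a degenerate ensemble concentrated at a single value c has CRPS equal to |c - y|, and a perfect point forecast scores zero.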
Methods And Evaluation Criteria: This paper collects a large number of datasets and designs reasonable evaluation metrics, filling a gap in the field.
Theoretical Claims: I have reviewed the correctness of the theoretical proofs in this paper.
Experimental Designs Or Analyses: The experiments in this paper are relatively comprehensive. However, I have reservations about the training approach for UniTime and Time-LLM. Such a stringent comparison setup fails to fully leverage the strength of the baseline models and may lead to an overestimation of the role of event text. Furthermore, if the authors could include some multimodal time series forecasting baselines, such as dataset-specific models (e.g. TimeMMD) and foundational models (e.g. ChatTime), it would make the work even more complete.
Supplementary Material: I have reviewed the validity of the code in the supplementary materials.
Relation To Broader Scientific Literature: This paper further advances the development of multimodal time series forecasting.
Essential References Not Discussed: I believe the related work section of this paper is sufficient.
Other Strengths And Weaknesses: Refer to the above comments.
Other Comments Or Suggestions: Refer to the above comments.
Questions For Authors: Refer to the above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful response. We appreciate your recognition of our work as addressing a gap in the field, offering valuable insights for future developments, and providing clear and convincing findings. We address your concerns below and are happy to clarify any further points.
---
## ***Training Approach of Time-LLM and UniTime***
Thank you for your questions regarding the training approaches for Time-LLM and UniTime. We included Time-LLM and UniTime in our evaluation due to their respective contributions to multimodal forecasting. These models require training to adapt them for context-aided forecasting; however, CiK is an evaluation benchmark and has no training set. Therefore, we rely on the original papers’ training recipes to train their models, which adapt them for specific time series datasets with paired templates of context text. We discuss these limitations of the models in Appendix D.3 (“Why Do Time-LLM and UniTime Not Benefit (More) From Context?”). To make this more clear, we propose to:
- \[**Expanded Results Discussion**\] Add to the main text an abridged version of the discussion in Appendix D.3, which describes the limitations we faced when evaluating Time-LLM and UniTime.
Finally, if you can suggest a more appropriate training recipe, we are happy to rerun the models and report the results in the camera-ready version.
---
## ***Stringent comparison setups for Time-LLM and UniTime, and potential overestimation of the importance of event text***
Thank you for pointing this out. We acknowledge the reviewer’s claim that the comparison setups for Time-LLM and UniTime are stringent, and may fail to fully leverage the strength of the baseline models. We attribute this to the training approaches of the respective models (described above), which adapt the baseline models’ capabilities to process contextual text in highly specialized templates, and for specific time series domains. CiK, however, is intended to evaluate context-aided forecasting across several time series domains and types of textual context, which is a different, much broader setup than those used in the papers of Time-LLM and UniTime. The training dataset of the models would have to be much more diverse to enable these models to generalize well enough to perform well on CiK, which contains linguistic variations and requires many types of reasoning. To make this more clear, we propose to:
- \[**Addition to Limitations**\] Add to limitations the impact of the lack of training datasets when evaluating dataset-specific approaches, such as UniTime and Time-LLM.
- \[**Addition to Future Work**\] Add to future work the need for training datasets for context-aided forecasting.
---
## ***Additional baselines***
\[**TimeMMD**\] We contacted the authors of [Time-MMD](https://arxiv.org/abs/2406.08627) for access to checkpoints of models from their paper, but have yet to hear back. Unfortunately, we found that their paper does not contain the hyperparameters used to train these models and thus we could not recover their checkpoints ourselves. We will however add the results of these models to the paper if we receive the checkpoints or manage to reproduce their original results.
\[**ChatTime**\] Thank you for these suggestions. We evaluate the publicly available checkpoints ([Base](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Base), [Chat](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Chat)) of [ChatTime](https://arxiv.org/abs/2412.11376) on CiK. Here are the aggregate results:
| Model | Average RCRPS (± std) |
|---|---|
|**With Context**||
| ChatTime-Base | 0.735 ± 0.002 |
| ChatTime-Chat | 0.747 ± 0.005 |
|**Without Context**||
| ChatTime-Base | 0.725 ± 0.002 |
| ChatTime-Chat | 0.781 ± 0.015 |
Compared to other models evaluated on CiK (shown in Table 1), these results (with and without context) are not competitive and are, in fact, much worse than those of statistical models that cannot process context. ChatTime-Base shows a very small improvement with context, while ChatTime-Chat degrades with context. This is likely because ChatTime’s training dataset is limited to 3 time series datasets with highly specific contextual text templates. Similar to Time-LLM and UniTime, the training approach likely fails to generalize outside its training domains, and fails to leverage the strength of the baseline models. We will add these results to the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's patient response and detailed experiments, which largely addressed my doubts. Considering the limitations of the baseline training approach, I will maintain my score. Additionally, to my knowledge, similar to TimeLLM, TimeMMD also requires separate training for each dataset. Perhaps you could include it as a baseline following the training approach of TimeLLM and UniTime. | Summary: The paper introduces Context is Key (CiK), which is a benchmark aiming to evaluate forecasting models’ ability to integrate both numerical time-series data and essential textual context. Unlike traditional forecasting benchmarks that rely solely on numerical data, CiK explicitly requires models to process and leverage natural language context to improve prediction accuracy. CiK consists of 71 manually designed forecasting tasks spanning seven real-world domains (climatology, economics, energy, mechanics, public safety, transportation, and retail). The authors evaluate statistical models, time-series foundation models, and large language model-based forecasters. Notably, the authors introduced a new forecasting metric, RCRPS, to specifically evaluate forecasts when context is relevant.
Claims And Evidence: Firstly, the authors explicitly designed various tasks where textual context is necessary to make accurate forecasts. And with human and LLM-based evaluations, it further confirmed that with contextual guidance, the forecast quality is improved. The experiments are comprehensive and detailed.
Methods And Evaluation Criteria: The experiments in this paper are exceptionally well-designed and detailed, effectively filling a critical gap in the literature by providing time series data paired with contextual information—a resource that has been largely missing until now. Additionally, the authors introduce a new evaluation metric, RCRPS, which prioritizes context-sensitive forecast accuracy. This ensures that models are assessed based on their ability to effectively integrate and utilize textual context, making the evaluation more meaningful and relevant to real-world forecasting scenarios.
Theoretical Claims: The paper does not include formal theoretical proofs to support its claims about context-aided forecasting. Instead, it relies primarily on empirical evidence to demonstrate that textual context improves forecasting accuracy. While RCRPS is intuitively appealing, the paper does not formally prove that it is an optimal metric for context-aware forecasting.
Experimental Designs Or Analyses: The paper’s experimental designs are well-structured and comprehensive, which is one of its key strengths. The paper evaluates a wide range of models, including Statistical Models (ARIMA, ETS, Exponential Smoothing), Time-Series Foundation Models (Lag-Llama, Chronos, Moirai, TimeGEN), LLMs (GPT-4o, Llama-3, Mixtral-8x7B, Qwen-2.5), Multimodal Models (UniTime, Time-LLM). This allows for a fair and meaningful comparison between traditional, modern, and context-aware forecasting methods. Additionally, the authors conducted detailed ablation studies. For example, Figure 5 shows that removing textual context significantly reduces forecasting accuracy. The authors also designed a new evaluation metric specifically for context-aware forecasting, and included various example tasks, which are helpful for further implementation.
Supplementary Material: Yes, I reviewed the supplementary material provided in the paper, particularly focusing on the benchmark details, dataset construction, evaluation metric (RCRPS), and additional experimental results. No issue found and it is comprehensive.
Relation To Broader Scientific Literature: This paper is one of the early contributions in the emerging field of integrating contextual information with time-series forecasting, primarily due to the lack of paired datasets in this domain. It effectively fills a gap by introducing a benchmark specifically designed to test models’ ability to incorporate textual context into forecasting. Additionally, the lack of appropriate evaluation metrics has been a significant challenge in this area. The authors address this by proposing Region of Interest CRPS, which has the potential to enhance the evaluation of context-aware forecasting models.
Essential References Not Discussed: Essential references are discussed but several suggested references are listed in Suggestion section for time series reasoning.
Other Strengths And Weaknesses: The paper is among the first to systematically evaluate how textual context improves time-series forecasting. The CiK benchmark fills a major gap in the literature by providing a dataset where context is essential. The proposed evaluation metric (RCRPS) is specifically designed for context-aware forecasting. The findings have practical implications for industries like finance, energy, and public safety, where human forecasters already use external knowledge to refine numerical predictions. The paper is well-structured and clearly written, with detailed explanations of the benchmark design, evaluation methods, and experimental setup. The supplementary material provides extensive details on dataset construction, evaluation metrics, and failure cases, which ensures reproducibility.
The weakness lies in the lack of theoretical justification. If a theoretical framework could be designed (for example using Bayesian reasoning or causal inference), this would strengthen the argument. RCRPS is well-motivated but not formally compared against other scores (e.g., CRPSS, Brier Score) in terms of statistical properties.
Other Comments Or Suggestions: In addition to citing Merrill et al.’s work on the challenges large language models (LLMs) face in zero-shot time-series reasoning, several other studies have explored this area:
1. “Towards Time-Series Reasoning with LLMs”: they propose a novel multi-modal approach that integrates a lightweight time-series encoder with an LLM.
2. “Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution”: they introduce a program-aided inference agent that leverages LLMs’ reasoning capabilities to decompose complex time-series tasks into structured execution pipelines.
3. “Implicit Reasoning in Deep Time Series Forecasting”: This study delves into how deep learning models implicitly perform reasoning during time-series forecasting, shedding light on the internal mechanisms that contribute to their predictive capabilities.
4. “XForecast: Evaluating Natural Language Explanations for Time Series Forecasting”: The authors of this paper focus on the generation and evaluation of natural language explanations accompanying time-series forecasts, aiming to enhance the interpretability and transparency of predictive models.
5. “Position: Empowering Time Series Reasoning with Multimodal LLMs”: This work advocates for the integration of multimodal data—such as textual descriptions, visual data, and audio signals—with LLMs to bolster time-series reasoning.
Questions For Authors: 1. Did you experiment with any alignment strategies to help LLMs better process time-series data?
2. Could the lower performance of certain LLM-based forecasting approaches be attributed to the fact that LLMs struggle to natively process time-series sequences?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful response. We are grateful that you highlighted the thoroughness of our experiments, the value of the RCRPS metric, and the quality of the writing. We are pleased that you see the CiK benchmark as filling a critical gap in the literature, with real-world implications across multiple industries that rely on additional non-numerical information for forecasting.
Below, we present point by point clarifications to your concerns. We are happy to clarify the points further.
---
## ***Formal theoretical proofs to support claims about context-aided forecasting***
### *RCRPS*
We thank the reviewer for highlighting the added value of a formal justification of the RCRPS. We emphasize that we give further details on the RCRPS metric and why it is a good choice of metric for context-aided forecasting in Appendix E. We propose the following:
\[**Addition to Main Text from Appendix E**\] We will include key elements of Appendix E in the main text, including the desiderata for the RCRPS and statistical properties such as properness.
\[**Additional Appendix Section: Statistical Properties**\] We will include in the Appendix a detailed comparison of the statistical properties of different metrics, including the CRPS, CRPSS, twCRPS, Brier Score, and the proposed RCRPS.
### *Formal characterization of the context-aided forecasting setting*
Thank you for your suggestion regarding a formal characterization of the problem setting of context-aided forecasting. We emphasize that the problem setting is defined formally in Section 2, according to which, we are interested in models where, in expectation, forecasts that leverage context perform better. Critically, all tasks in the benchmark are designed accordingly.
**We seek clarification on this.** We are happy to include aspects of formalization beyond what is discussed in the problem setting. Please let us know what you had in mind.
---
## ***Related Work***
\[**Addition to Related Work**\] Thank you for providing these valuable references; we will make sure to reference them all in the revised paper.
---
## ***Clarifications to Questions***
\[**Time series alignment strategies for LLMs**\] Thank you for your question. We hope that the analyses we presented are of use:
- \[**Sensitivity to input format and post-training**\] We find that the forecasting performance of the LLMs is sensitive to the input-output format, as shown by differences between the LLMP and DP methods which each prompt the model differently. We also find that the post-training strategy plays a role, as evidenced by the gap in performance between models under the DP and LLMP methods. Sec 5.3 (“Comparing LLMP and DIRECT PROMPT”) contains more details on both these results. Both these factors could play important roles in the ability of LLMs to process and forecast time series.
- \[**Addition to Future Work**\] Future work that explores better input/output formatting strategies would benefit the community. As also suggested by reviewer M7ph, the gap between DP and LLMP suggests that prompting strategies might elicit different capabilities too. Further, fine-tuning LLMs on time series data may further adapt them to process time series better. We leave all these directions to future work.
\[**Impact of the ability of LLMs to natively process time series**\] Thank you for this insightful question.
- **\[Comparison of performance of LLMs to other models in the no-context setup\]** We compare the performance of LLMs to that of quantitative models that can natively process time series (such as foundation models and statistical models) in a no-context setup where all models consume the same input (only the numerical historical data). These results are presented in App C.1. We find that many of the LLMs (with both DP and LLMP) are competitive and sometimes better than the quantitative models. With that said, as all LLMs consider time series as text, we agree that the lower performance of some LLMs could be due to their inability to “natively” process time series, among other reasons (such as poor forecasting ability). Improving the ability of LLMs to process time series is an active area of research, and we believe CiK can be of high value in this front.
- \[**Extended discussion of catastrophic failures**\] While successful examples of LLMs generating good forecasts hints at the capability to process time series data as text, the catastrophic failures (Sec 5.4) may suggest that LLMs are not extremely reliable when processing time series, and more research is needed in understanding when they can fail catastrophically. | Summary: This paper introduces "Context is Key" (CiK), a benchmark for time-series forecasting models that incorporate textual context alongside numerical data. The authors design 71 tasks across seven domains where textual information is essential for accurate forecasting, propose a Region of Interest CRPS (RCRPS) evaluation metric, and evaluate statistical models, time series foundation models, and LLM-based forecasters. Their "Direct Prompt" method performs better than other tested approaches, showing the value of textual context while highlighting current limitations.
Claims And Evidence: The paper's main claims are backed by empirical results:
- The authors demonstrate context necessity through their task design, with 95% of instances confirmed by evaluators as benefiting from contextual information.
- Figure 6 shows quantified performance gains from textual context, with larger models seeing significant improvements (67.1% for Llama-3.1-405B-Inst), with statistical significance confirmed in Appendix C.6.
- The Direct Prompt method outperforms other approaches in comprehensive evaluations (Table 1), though no single method excels across all context types.
However, I have concerns about real-world applicability due to the data transformations used to mitigate memorization. The authors apply several techniques described in Appendix A.1, including prioritizing live data sources, using derived time series, and adding noise or shifting timestamps. While these modifications help prevent LLMs from leveraging memorized data during evaluation, they create an artificial evaluation setting that may not reflect performance on unmodified real-world data. For instance, shifting timestamps could disrupt natural seasonal patterns that models might otherwise leverage. The paper claims these transformations are used "sparingly," but provides no quantitative analysis of how extensively they were applied or how they might affect model behavior. This creates a disconnect between benchmark performance and what practitioners might experience in deployment scenarios. The main text should explicitly acknowledge this limitation rather than burying it in an appendix, ideally with ablation studies showing performance differences between original and transformed data where possible.
Methods And Evaluation Criteria: The benchmark draws from diverse data sources and includes various context types (intemporal, future, historical, covariate, causal). The authors address memorization issues by applying transformations to the data, though this potentially affects realism.
The RCRPS metric improves on standard metrics by accounting for context-sensitive regions and constraint satisfaction, making it more suitable for evaluating contextual forecasting.
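As a rough illustration of the idea behind such a metric, a region-weighted CRPS can be estimated from forecast samples as below. This is a hypothetical sketch, not the paper's exact RCRPS definition: the function names, the `alpha`/`beta` weights, the uniform reweighting of the region of interest, and the constraint-violation penalty term are all assumptions.

```python
import numpy as np

def crps_samples(samples, y):
    """Sample-based CRPS estimate for one scalar observation y.
    samples: 1-D array of forecast draws."""
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

def rcrps_sketch(samples, target, roi_mask, violations=0.0, alpha=0.5, beta=10.0):
    """Hypothetical region-of-interest CRPS: upweight timesteps inside the
    region of interest and add a penalty for constraint violations.
    samples: (n_draws, horizon); target: (horizon,); roi_mask: bool (horizon,)."""
    horizon = len(target)
    per_step = np.array([crps_samples(samples[:, t], target[t])
                         for t in range(horizon)])
    n_roi = int(roi_mask.sum())
    if 0 < n_roi < horizon:
        # alpha of the total weight goes to the ROI, the rest elsewhere
        w = np.where(roi_mask, alpha / n_roi, (1 - alpha) / (horizon - n_roi))
    else:
        w = np.full(horizon, 1.0 / horizon)
    return float(w @ per_step + beta * violations)
```

A perfect deterministic forecast with no violations scores 0, and the `beta` term makes constraint violations dominate when present.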
The experiment design uses multiple instances per task and multiple forecasts per instance, with a weighting scheme to handle task similarity clusters. This helps provide statistical reliability.
The model comparison covers different architectural approaches and scales, allowing for meaningful performance comparisons across the forecasting methodology spectrum.
Theoretical Claims: The paper is mostly empirical. The RCRPS formulation extending CRPS is mathematically sound. The context-aided forecasting problem formalized in probabilistic terms in Section 2 seems correct.
Experimental Designs Or Analyses: The task creation process addresses data contamination through several methods. The sample size (5 instances × 25 forecasts per task) seems adequate for statistical analysis.
Table 1 shows comprehensive evaluation results with significance testing in the appendix. The failure analysis in §5.4 reveals important model limitations, like GPT-4o struggling with scientific notation.
The decision to clip RCRPS scores for major failures at 5 prevents outliers from dominating results but also hides how badly models can fail. The authors acknowledge this trade-off and include win-rate analyses that aren't affected by clipping.
Figure 7's parameter-efficiency analysis provides practical deployment insights beyond raw performance numbers.
Supplementary Material: Yes, I focused on Appendix A (details on data sources, task creation methodology).
Relation To Broader Scientific Literature: The paper positions itself between numerical approaches (time series foundation models) and language-integrated methods. It acknowledges prior benchmarks (Merrill et al., 2024; Zhang et al., 2023; Liu et al., 2024a) while explaining CiK's focus on essential contextual information.
The Direct Prompt approach builds on prior prompt-based forecasting methods (Requeima et al., 2024; Gruver et al., 2024), though the paper could better discuss architectural innovations in multimodal forecasting.
Essential References Not Discussed: None that I'm aware of.
Other Strengths And Weaknesses: Strengths:
- Manual creation and validation of tasks ensures quality
- Context type taxonomy provides structure for analyzing text-forecast relationships
- Model failure and inference cost analysis offers practical insights
Weaknesses:
- Missing analysis of which linguistic patterns most impact forecast quality
- Limited exploration of computational costs for real-time applications
- One-way focus (text informing forecasts) misses potential bidirectional interactions
- Limited insight into how models could better use contextual information
Other Comments Or Suggestions: The paper could benefit from a more systematic analysis of the failure patterns across context types and model architectures. Developing clearer guidelines for matching specific context types to the most appropriate models based on your findings could also be useful. Have you considered extensions to domain-specific contexts like healthcare or finance? The specialized terminology may present additional challenges for contextual integration.
Questions For Authors: 1. What specific aspects of instruction tuning hurt LLMP performance while not affecting Direct Prompt? Does this suggest fundamental differences in how these methods use model capabilities?
2. Besides scientific notation issues, have you found other systematic failure patterns that could guide model selection?
3. Have you explored compression techniques that might preserve contextual integration while reducing computational costs?
4. How might the benchmark extend to structured data formats (databases, tables) common in operational forecasting?
5. Since no method excels across all context types, would you recommend ensemble approaches based on context detection, or do you think unified models could overcome these limitations?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough feedback. We appreciate the recognition of our contributions, including the comprehensive empirical evaluations, task diversity and quality, the soundness of the proposed metric, and the thoroughness in handling data contaminations. Below, we clarify several points raised in the review.
---
## Transformations to prevent data contamination
We acknowledge your concerns. We present the time series domains and the exact transformations used in App A.1. We include a summary table [here](https://raw.githubusercontent.com/anon-forecast/benchmark_report_dev/refs/heads/main/rebuttal_images/data_transformation_table.png).
We’d like to note that a significant portion of our tasks did not require any transformation, and wherever required, mitigation strategies were chosen extremely carefully and applied intelligently (the exact details of which are in App A.1). The tasks where we shifted the date by 24 hours are tasks with solar irradiance data, where the date shift would have minimal impact as the seasonal effects change smoothly over the year. For the tasks to which we added a very small amount of Gaussian noise, we visually inspected that the noise barely changed the data.
We’d also like to emphasize that the code uses transformations as an option, and the tasks can be obtained without the transformation too. Nevertheless, we will explicitly point to these transformations and their potential impact in the main text, as suggested. For the camera-ready version, we will also analyze the performance of all models to the sensitivity to the transformations.
---
## Failure pattern analysis
- [**Specific linguistic patterns and systematic failures**] We thank the reviewer for this point. The analysis of specific linguistic patterns could yield interesting further insights. More analyses are also required to understand systematic failures specific to models (such as for DP - GPT-4o with scientific notation). We propose to add a note on these points in the future work.
- [**Additional analyses**] We would like to point out that we provide the number of catastrophic failures per model (in App. C.5.). In the camera-ready version, we will add a view of this table broken down by context type and domain.
---
## Other comments
Thank you for bringing these aspects to our attention.
- [**Computational costs**] Apart from the Pareto analysis which was aimed at this direction (Fig 7), we currently also provide the average inference time of all models per task (App C.4). We propose to further complement this analysis with the per-timestep average inference time, enabling estimation of real-world performance. We are open to any other suggestions.
- [**Bidirectional interactions**] We focus on the forecasting task in CiK, but other tasks that use numerical data to predict text are very relevant as well. We will add this point to future work.
- [**How models can better use contextual information**] We believe encouraging models to “think” or reason about the context explicitly can be useful here. We also believe fine tuning LLMs on datasets with text and time series would allow them to generalize better to tasks in CiK. We add these directions in our future work section.
- [**Domain-specific contexts**] Extending CiK to build domain-specific tasks and benchmarks is definitely a valuable direction of future work that will present additional changes in context integration. We will add this point to future work.
---
## Clarifications to other questions
- [**Impact of instruction tuning**] We agree that the differences in the performance of LLMP and DP may point to fundamental differences in how the methods use contextual information. We hypothesize that, while the instruction-tuned models respond better to DP’s direct instructions, the iterative prediction strategy used by LLMP may rely on a base (non instruction-tuned) model’s proper calibration for forecasting, which might be more calibrated (as instruction-tuning might affect calibration as per [the GPT-4 technical report](https://arxiv.org/abs/2303.08774) (Fig 8)). We agree that this requires further investigation and leave it to future work.
- [**Compression**] We have not tested this but agree that it is a very interesting idea, especially given recent works on LLM-based lossless compression such as [FineZip](https://arxiv.org/abs/2409.17141). We will add this point to future work.
- [**Structured context**] While we restrain this work to study textual context, the codebase is extensible and natively supports adding more modalities. We agree this is a very interesting direction of future work, that we have already noted in the future work section.
- [**Context-source specific models**] We believe both approaches, i.e. unified models for context-aided forecasting, and ensemble approaches such as expert-based methods and [learning to defer](https://arxiv.org/abs/1711.06664) merit investigation. We will add this point to future work. | null | null | null | null | null | null | null | null |
Radio: Rate–Distortion Optimization for Large Language Model Compression | Accept (poster) | Summary: The authors propose to utilize rate-distortion theory to guide the allocation when quantizing LLM. Through extensive tests on benchmark datasets and pre-trained models, it is shown that the proposed approach can provide improvement on the state of the art when quantizing LLMs for 3 or 4 bits per parameter on average.
## update after rebuttal
Thanks for the response.
For item (2) above, I feel the insistence on using a less efficient algorithm is not helping. Cover and Thomas did not provide an explicit algorithm but simply stated that it can be viewed as a dual variable. The algorithm I outlined earlier is well known, is straightforward to implement, and is essentially O(n log n) computation to find the exact solution. The current algorithm is essentially a first-order method, and we know that a first-order method converges slowly and only to an approximate solution. The additional "max" is not really an issue using standard convex optimization derivation, which is another dual variable. I do not believe this issue has been resolved in the rebuttal.
For (3) above, the authors were distracted and forgot to reply to my original question. The authors omitted the second-order term, but this is not well-justified because at the convergent solution, the first-order term would be almost zero, and it is not clear why the second-order term is indeed small relative to the first-order term.
Also, for (3), I believe the authors are saying their approach is essentially very similar to the existing "Hessian-aware" approach but with an optimized rate allocation. I am not sure if the two allocations would differ significantly if the latter was applied on the finer grain level, and a more detailed comparison would help to clarify this.
Overall, the rebuttal did not change my view, because 1) the gain from the existing methods is relatively small, and I'm not particularly enthused; 2) the technique used is not significantly novel; and 3) there are still a few issues not fully resolved yet, as discussed above. I would keep the score unchanged, but will also not oppose strongly if the AC decides to recommend acceptance.
Claims And Evidence: The authors claim three contributions.
1. A rate-distortion theoretic framework for quantizing LLMs
2. A stochastic ascent algorithm for solving the optimization problem.
3. Experimental results that show the proposed approach can offer improvements over the state of the art.
While I have some reservations about the first two claims, the last claim is reasonably well-supported.
For the first claim, the theoretical development under the R-D theory requires several approximations, and it is unclear if the key factor that enables the performance is actually the R-D theory formulation, or a better estimation of the statistics, or the grouping strategy, or the non-linear scaling (companding).
For the second claim, the problem actually has a straightforward "closed-form" solution. Under the simplification that the authors made, this reduces to the classical rate distortion theory on Gaussian sources, which would give the so-called water-pouring solution (see Cover Thomas' textbook). Here the problem also has an upper limit on the rate allocation, but this is also quite simple to address. The water-pouring solution can be solved in several ways, but it is well known that the overall complexity is linear in the number of groups (i.e., n), without a need for iteration over V. Given the known "closed-form" solution, I would not view the second claim as important or well-supported.
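For concreteness, the capped water-pouring allocation described here can be sketched as follows. This is a minimal illustration, not the paper's Algorithm 1: it assumes the Gaussian reverse water-filling form R_i = clip(½ log₂(σ_i²/θ), 0, R_max) and finds the water level θ by bisection (the reviewer's sort-based exact method would avoid the iteration).

```python
import numpy as np

def waterfill_rates(variances, total_rate, r_max=16.0, iters=200):
    """Reverse water-filling for Gaussian sources with a per-group rate cap:
    R_i = clip(0.5 * log2(var_i / theta), 0, r_max), with the water level
    theta chosen by bisection so that sum(R_i) meets the bit budget.
    Assumes strictly positive variances."""
    v = np.asarray(variances, dtype=float)

    def rates(theta):
        return np.clip(0.5 * np.log2(v / theta), 0.0, r_max)

    lo = v.min() * 2.0 ** (-2.0 * r_max)  # at this level every group saturates at r_max
    hi = v.max()                          # at this level every group gets 0 bits
    for _ in range(iters):
        mid = np.sqrt(lo * hi)            # bisect the water level in log scale
        if rates(mid).sum() > total_rate:
            lo = mid                      # over budget: raise the water level
        else:
            hi = mid
    return rates(np.sqrt(lo * hi))
```

With variances [4, 1] and a budget of 1 bit total, the level settles at θ = 1 and the allocation is [1, 0] bits, matching the textbook solution.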
Methods And Evaluation Criteria: The authors provide evaluations on benchmarks and well-known pretrained LLMs, which are well-accepted in the literature.
I have a concern on the general approach of using a calibration dataset to learn the optimal quantization rate allocation. This approach requires access to a trustworthy high-quality calibration dataset. Firstly, it seems unreasonable to assume such a dataset is available, as it needs to follow the same distribution as the training set, but the training dataset is usually unknown (e.g., the training set for LLama2). Secondly, the effect of the calibration dataset size should be carefully studied: a small dataset will not capture the function accurately, but too large a dataset will induce significant computational loss. In the extreme case where the training set is the same as the calibration set, the training itself should be redesigned to take into account the quantization, instead of as a post-training procedure.
Theoretical Claims: One of the concerns mentioned above is on the theoretical derivation of the rate distortion expression. The authors made several approximations, of which I am not convinced. One approximation is to throw away the second order term and only keep the first order term. When the training has converged, the first order derivative will be almost zero, and for this reason, it appears questionable to throw away the higher order terms in favor of the first order approximation. Some existing work emphasizes such an issue (e.g., Kim 2024), and uses the Hessian information but throws away the first order term instead. Another approximation is in (13), where the authors argue that the quantization error is zero mean. While I intuitively agree this could be the case, the approximation error should be carefully analyzed, under proper assumptions.
Experimental Designs Or Analyses: The experiments seem well designed for the purpose of verifying the performance, similar to other existing works. I would ask the authors to provide 1) a comparison on computation times of the quantization optimization with other methods such as OWQ, QuIP, AWQ and 2) a discussion on any computational advantage during inference, because some existing methods seem to quantize in other factorized forms and show improved inference computation advantages.
Supplementary Material: Yes, the appendix only.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: I would like to see the performance comparison includes Kim 2024.
Other Strengths And Weaknesses: The area of LLM model compression is quite crowded at this point, and previous works seem to have obtained most of the potential compression gain already. Though the current work seems to provide some minor additional improvements, I'm somewhat less enthusiastic on the incremental improvement, unless the authors can point out the major novelty in the methodology or potential other computational advantages. It also seems that many existing works have already recognized the importance of allocating different bit depths for different groups of parameters depending on their sensitivity, e.g., OWQ and SqueezeLLM, and the eventual allocation strategies seem very similar to the proposed approach. It would help considerably if the authors can discuss the similarity and difference in more depth.
Other Comments Or Suggestions: N/A
Questions For Authors: See comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you, especially for initiating a discussion around water-filling.
____
*1. For the first claim, …, unclear if the key factor that enables the performance is actually the R-D theory formulation, or a better estimation of the stats ...*
The manuscript already provides the breakdown of performance gains in Tables 3 and 2b–c. In Table 3, we start with the RTN scheme, add in the mixed precision depths, and finally companding, to derive the proposed method. In Table 2b, we show that estimating the statistics (gradient variances) is not very sensitive to the number of tokens per sequence used. Our grouping strategy is similar to GPTQ’s, and we fix the group size at 512. Table 2c shows that changing group size has little effect on the quantized model accuracy.
____
*2. For the second claim, the problem actually has a straightforward "closed-form" solution .. the so-called water-pouring solution (see Cover Thomas' textbook) ... without a need for iteration over V. Given the known "closed-form" solution, I would not view the second claim as important or well-supported.*
In fact, Algorithm 1 (lines 14–16) already implements the water-filling algorithm. Your description of water-filling as a "closed-form" solution is not fully accurate. While the optimal rates are given in closed form for a given water level and source variances, the appropriate water level (equivalently, dual variable) V still must be found numerically, in order to satisfy the total rate constraint. Thus, even under the water-filling framework, solving for V remains a necessary iterative component.
____
*3. I have a concern on the general approach of using a calibration dataset to learn the optimal quantization rate allocation. This approach requires access to a trustworthy high-quality calibration dataset ....*
In post-training quantization, the original training dataset is typically not assumed to be available. Prior work (e.g., Kim et al., 2024) has shown that a small calibration set — even ~100 random sentences — is sufficient for calibrating model quantization.
____
*4. One of the concerns mentioned above is on the theoretical derivation of the rate distortion expression. The authors made several approximations, which I am not convinced. One approximation is to throw away the second order term and only keep the first order term. When the training has converged ...*
It appears you are misreading (5). We are not discarding second-order terms of the distortion function. Rather, we linearize the model function f inside the squared norm, which is equivalent to approximating the Hessian of the distortion by the outer product of the Jacobian of f. This is a standard approach in optimization; see Appendix B and also (Nocedal & Wright, 2009, p. 250).
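To make the approximation concrete, the following toy numeric check (with a hypothetical residual function `f`, chosen only for illustration) compares the true Hessian of a least-squares loss 0.5*||f(w)||^2 with the Gauss–Newton term J^T J; near a good fit the dropped term is proportional to the residuals and is therefore tiny.

```python
import numpy as np

# Toy residual function (hypothetical, for illustration only).
def f(w):
    return np.array([w[0] * w[1] - 1.0, w[0] - 1.0])

def jac(w):  # Jacobian of f
    return np.array([[w[1], w[0]], [1.0, 0.0]])

def hess_loss(w, eps=1e-5):
    """Finite-difference Hessian of the loss 0.5 * ||f(w)||^2."""
    def grad(u):
        return jac(u).T @ f(u)
    H = np.zeros((2, 2))
    for i in range(2):
        e = np.zeros(2); e[i] = eps
        H[:, i] = (grad(w + e) - grad(w - e)) / (2 * eps)
    return H

w = np.array([1.0 + 1e-3, 1.0 - 1e-3])  # near the optimum: residuals ~ 0
H_true = hess_loss(w)
H_gn = jac(w).T @ jac(w)                # Gauss-Newton: keep only J^T J
# the dropped term sum_i f_i(w) * Hess(f_i)(w) is O(residual), hence tiny here
print(np.abs(H_true - H_gn).max())
```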
____
*5. Another approximation is in (13), where the author argues that the quantization error is zero mean ...*
Quantization errors are not zero mean in practice. In fact, our manuscript (p. 4, last paragraph) explicitly states this. To address this, we compute the mean quantization error (bias) and update the model’s bias vectors accordingly after each quantization step. This correction is implemented in Algorithm 1, line 18.
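A minimal sketch of this bias-correction idea (not the paper's exact procedure; `x_mean` is a hypothetical calibration statistic for the mean input activation):

```python
import numpy as np

def quantize_with_bias_correction(W, b, x_mean, n_bits=3):
    """Quantize W uniformly, then fold the resulting mean output error
    into the layer bias so that the layer output is unbiased at the
    mean input activation x_mean."""
    step = (W.max() - W.min()) / (2 ** n_bits - 1)
    Wq = np.round((W - W.min()) / step) * step + W.min()
    b_corrected = b + (W - Wq) @ x_mean   # absorb the mean quantization error
    return Wq, b_corrected
```

At x = x_mean the corrected layer reproduces the unquantized output exactly, since Wq @ x_mean + b_corrected equals W @ x_mean + b by construction.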
____
*6. The experiments seem well designed for the purpose of verifying the performance ... provide 1) a comparison on computation times of the quantization optimization with other methods such as OWQ, GuIP, AWQ and 2) a discussion on any computational advantage during inference.*
Please refer to our response to DSG1.1 and DSG1.7, which include quantization runtimes and inference-time speedups across several model sizes.
____
*7. The area of LLM model compression is quite crowded at this point … the authors can point out the major novelty …*
We agree. Methods such as SqueezeLLM and OWQ can allocate 16 bits to more sensitive weights and 3 bits to others — but they do so heuristically. This naturally raises the question: why only 3 and 16 bits? Why not assign bit depths proportionally to sensitivity (e.g., from 0 up to 16 bits)? And what is the correct sensitivity metric for rate allocation?
These questions are precisely what our rate–distortion framework is designed to address. It provides a principled method for allocating bit depths and selecting sensitivity metrics (gradient variance), as well as a numerical procedure for optimization. Without this foundation, bit allocation becomes a combinatorial search problem, rather than a tractable convex optimization.
____
*8. I would like to see the performance comparison includes Kim 2024.*
Thank you. They are now included in Tables 1 and 5, as well as Table 6 in our response to DSG1.1.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal response, which helped to clarify some issues.
1. "breakdown of performance gains in Tables 3 and 2b–c". This ablation study is not completely clear to me. Particularly the stepsize optimization is introduced before the depth optimization. It seems the order should be reversed, or both orders of ablation should be considered.
2. "Your description of water-filling as a "closed-form" solution is not fully accurate. While the optimal rates are given in closed form for a given water level and source variances, the appropriate water level (equivalently, dual variable) V still must be found numerically, in order to satisfy the total rate constraint. " This seems to be due to a misunderstanding by the authors of how the water-filling solution can be computed. Since the R-D water-filling solution is a piecewise-defined function, only the (R,D) values (and corresponding water levels) at the water-level thresholds need to be computed to determine which segment the target rate actually falls into. Then the explicit formula can be used for that segment. Based on this understanding, a simple algorithm can be designed without computing all the threshold values, by combining with a bisection approach.
4. "It appears you are misreading (5). " I am referring to (12) in the appendix where it was stated "the second term of is approximately zero". In fact, if the authors are indeed approximating the Hessian, what is the difference from the previous approach based on "Hessian-aware"?
8. Can you include the additional results in Table 1 and 5 and discuss whether your method provides any improvement?
---
Reply to Comment 1.1.1:
Comment: *1. breakdown of performance gains in Table 3 ... not clear to me.*
The RTN scheme with MMSE step sizes is a well-established technique (see Lee et al., 2024, p. 13358, last para) and serves as the quantization backbone for Radio after it allocates bit depths. For this reason, we present RTN + MMSE immediately after MinMax RTN in Table 3—to isolate the improvement from MMSE step size optimization within the RTN framework, before introducing the additional gains from our bit-depth allocation strategy. Since Radio uses the RTN + MMSE step sizes as the backbone, this order of ablation more accurately reflects the logical progression of our method and enables a clearer decomposition of individual contributions.
_____
*2. … misunderstanding on how water-filling solution can be computed ...*
The water-filling procedure in Algorithm 1 (lines 11–14) follows the standard formulation in Cover & Thomas (2nd ed., Sec 10.3.2) and Boyd & Vandenberghe (Sec 5.1.6). These references treat water-filling as a convex optimization problem and solve for the optimal dual variable numerically. This approach is numerically robust, easy to implement, and consistent with standard treatment in the optimization literature.
We do acknowledge that one could alternatively exploit the piecewise structure of the rate-water level function, by computing water-level thresholds and selecting the corresponding segment. However, under an additional maximum rate constraint, the water level generally cannot be computed in closed form and still requires iterative evaluation. As such, the potential advantage of the piecewise approach becomes more limited in our setting.
_____
*3. I am referring to (12) ... what is the difference from the previous approach based on "Hessian-aware"?*
Thank you for clarifying. Yes, the second term of the Hessian in (12) is approximated to zero—this is a standard Gauss–Newton approximation in nonlinear least squares problems (Nocedal & Wright, eq 10.24–10.25).
While our method, like HAWQ, leverages second-order statistics to guide quantization, the approaches differ fundamentally. HAWQ employs a custom notion of sensitivity (e.g., the largest eigenvalue of a block Hessian) to greedily assign bit depths at the layer level. However, such heuristics do not scale well to fine-grained quantization (e.g., per-channel or sub-channel needed for LLM compression), due to the combinatorial explosion of possible bit combinations to explore.
By contrast, our approach is grounded in a rate–distortion framework that formulates bit allocation as a convex optimization problem. This enables bit-depth assignment at any level of granularity—including per-channel—in a tractable and principled manner. The manuscript already discusses this distinction in the last paragraph of Section 2 (just before Section 3, p. 2). In this sense, our method generalizes prior heuristics and offers both scalability and theoretical grounding.
_____
*4. Can you include the additional OmniQuant (Shao et al., 2024) and SqueezeLLM (Kim et al., 2024) results and discuss ...*
Sure. Please see below for the results (due to the character limit, we only show 3-bit results; 4-bit results follow a similar trend). Note that calibration examples are sourced from the C4 dataset, so perplexity on C4 reflects how well a quantized model fits the calibration distribution. In contrast, WikiText2 perplexity reflects the model’s generalization ability to a different generation task and data distribution.
Across all model sizes and families, Radio consistently outperforms both OmniQuant and SqueezeLLM on WikiText2, indicating stronger generalization. On C4, while performance is competitive across most settings, some perplexity degradation is observed with Meta LLaMA 2 models, which we attribute to Radio prioritizing generalization over potential overfitting to the calibration set (e.g., with the k-means quantization in SqueezeLLM).
| Wikitext2 PPL | 125M | 350M | 1.3B | 2.7B | 6.7B | 13B | 30B | 66B | L2-7B | L2-13B | L2-70B |
|-|-|-|-|-|-|-|-|-|-|-|-|
| 3-bit RTN | 1.3e3 | 64.57 | 119.47 | 298.00 | 23.54 | 46.04 | 18.80 | 6.1e3 | 6.66 | 5.52 | 3.98 |
| OmniQuant/128 | 32.25 | – | 15.71 | 13.18 | 11.27 | 10.47 | 9.79 | 9.53 | 6.03 | 5.28 | 3.78 |
| SqueezeLLM | – | – | 16.30 | 13.85 | 11.70 | 11.76 | 10.17 | – | 6.18 | 5.36 | 3.77 |
| Radio | **30.71** | **25.94** | **14.83** | **12.42** | **11.07** | **10.28** | **9.56** | **9.24** | 6.04 | **5.25** | **3.72** |
| C4 PPL | 125M | 350M | 1.3B | 2.7B | 6.7B | 13B | 30B | 66B | L2-7B | L2-13B | L2-70B |
|-|-|-|-|-|-|-|-|-|-|-|-|
| 3-bit RTN | 839.97 | 55.96 | 4.2e3 | 1.1e4 | 4.4e3 | 3.2e3 | 1.1e3 | 3.5e3 | 521.22 | 14.01 | 11.06 |
| OmniQuant/128 | 31.30 | – | 17.46 | 15.33 | 13.28 | 12.50 | 11.73 | 11.22 | 7.75 | 6.98 | 5.85 |
| SqueezeLLM | – | – | 17.19 | 15.62 | 13.41 | 13.55 | 11.85 | – | 7.72 | 6.97 | 5.73 |
| Radio | **30.05** | **26.20** | **16.88** | **14.91** | **13.14** | **12.35** | **11.62** | **11.19** | 8.04 | 7.22 | 5.99 |

Summary: This paper proposed a rate-distortion optimization framework, Radio, for compressing large language models (LLMs) via quantization. Specifically, the paper formulates quantization as a distortion-minimization problem and solves it by assigning bit depths to weight groups.
Claims And Evidence: Most claims are supported by empirical results across diverse models and tasks.
Methods And Evaluation Criteria: The evaluation metrics (perplexity, QA accuracy) are appropriate.
Theoretical Claims: The derivations in Appendices B and C assume high bit depths and Gaussian/Laplace weight distributions.
Experimental Designs Or Analyses: This paper employs two key techniques:
- PCA
- Dual optimization
However, I wonder whether the time cost increases exponentially as the model size grows.
Additionally, for the proxy, can you explain why you chose gradient deviation? There are many proxies, such as the Hessian, SNIP, etc.
Can you show the superiority of gradient deviation?
Regarding granularity, instead of "per row" or "per column", I use "per channel" or "per token" here.
- Generally, most methods apply per-channel quantization, as per-channel is memory efficient and exploits memory locality, while per-token quantization would access memory across physical locations.
For quantization, please add more experiments on SOTA LLMs like Qwen2.5, as there is evidence showing that models with sufficient training exhibit distinct characteristics (you can refer to the paper arXiv:2411.04330).
Supplementary Material: I have checked the supplementary B for the Eq 5 derivation.
Relation To Broader Scientific Literature: Radio builds on rate-distortion theory from signal compression and connects to prior LLM quantization works (GPTQ, AWQ). It addresses a gap by formalizing quantization as a resource allocation problem, contrasting with Hessian-based methods like OBS.
Essential References Not Discussed: Please include more papers: SmoothQuant, OmniQuant, QuaRoT, ZeroQuant.
Other Strengths And Weaknesses: The rate-distortion framework is a creative application of classical theory to modern LLM challenges.
Limited validation of theoretical assumptions; omission of deployment metrics (latency, hardware); insufficient discussion of calibration data sensitivity.
Other Comments Or Suggestions: This paper did not follow the ICML paper instructions. Perhaps the format is out of date, as there are no line numbers in this paper.
Questions For Authors: - **Q1**: How does Radio’s quantization time scale with model size (e.g., OPT-175B), and how does it compare to GPTQ’s runtime?
- **Q2**: The derivation assumes uncorrelated quantization errors. How does this impact performance at ultra-low bit depths (e.g., 2 bits), and are there plans to address correlated errors?
- **Q3**: Why are hardware efficiency metrics (e.g., latency, memory bandwidth) absent? Including these would strengthen the case for deployment on resource-limited devices.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thank you, especially for checking the derivation of (5)!
____
*1. However, I wonder whether the time cost increases exponentially as the model size grows. How does Radio’s quantization time scale with model size (e.g., OPT-175B), and how does it compare to GPTQ’s runtime?*
Thank you for raising this. The revised manuscript now includes Table 6 below, which shows that our runtime scales roughly linearly with model size — consistent with the complexity of other methods reported in the same table.
**Table 6: Quantization running times.** We quantize the Meta Llama 2 family of LLMs to ~3 bits per weight on average and measure the running time of the proposed method. We also include the running times of GPTQ, QuIP, OWQ, AWQ, and SqueezeLLM.
| Method | Llama2-7B | Llama2-13B | Llama2-70B |
|--------------|---------------|---------------| -------------- |
| GPTQ/256 (Frantar et al., 2022) | 10m | 18m | 90m |
| QuIP (Chee et al., 2024) | 36m | 80m | 9h |
| OmniQuant/128 (Shao et al., 2024) | 66m | 132m | 15h |
| SqueezeLLM (Kim et al., 2024) | 11m | 23m | 92m |
| Radio (Ours) | 47m | 97m | 11h |
-----------------------------------------------
____
*2. Additionally, for the proxy, can you explain why you choose gradient variance? There are lots of proxies like Hessian, SNIP etc. Can you show me the superiority of gradient deviation?*
This is a great question. In (5) and Appendix B, we show that gradient variance arises naturally from the rate–distortion optimization formulation. Under this framework, it is the theoretically correct sensitivity measure to incorporate. In contrast, heuristics like SNIP are not grounded in the rate–distortion theory that underpins our formulation.
____
*3. About the granularity, most of method apply per channel quantization as per-channel is memory efficient which employ the locality of memory, while per-token would access memory across physical location.*
Agreed. Our method already uses per-channel quantization, not per-token quantization. Moreover, our current focus is on weight quantization, where per-token quantization (relevant for activations) does not apply.
____
*4. For quantization, please add more experiments on SOTA LLMs like Qwen2.5 ...*
While Qwen and DeepSeek may soon overtake Llama 2/3 in adoption, they were unpublished at the time of our ICML submission, and even now, most SOTA quantization methods (e.g., OWQ, QuIP, SqueezeLLM) do not yet support them. For reproducibility and fair comparison, we based our experiments on OPT and Llama 2 (11 models in total), which are widely used baselines in recent quantization works.
____
*5. Please include more papers: SmoothQuant (Xiao et al., 2023), OmniQuant (Shao et al., 2024), QuaRoT (Ashkboos et al., 2024), ZeroQuant (Yao et al., 2023).*
Thank you for the suggestions. The revised manuscript now discusses all these works. Specifically, Section 2, paragraph 3 now reads:
“...by re-scaling or retaining original weight values (Lin et al., 2024; Xiao et al., 2023), low-rank decomposition of quantization error matrices (Shao et al., 2024), and decorrelation of weight matrices prior to quantization (Ashkboos et al., 2024).”
For OmniQuant, we also include its 3-bit and 4-bit results on OPT and Llama 2 models in Tables 1 and 5.
____
*6. The derivation assumes uncorrelated quantization errors. How does this impact performance at ultra-low bit depths (e.g., 2 bits), and are there plans to address correlated errors?*
Indeed, at very low bit depths, correlation in weight quantization errors becomes more pronounced. One way to decorrelate quantization errors is to decorrelate the weights themselves prior to rate allocation and quantization (Gersho and Gray, 1991). We now include this discussion in Sec 5 para 2: "It is known that scalar quantization of correlated sources can be improved by decorrelating the sources prior to quantization (see Gersho and Gray, 1991). This will be explored in future work.”
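A minimal sketch of this future-work direction (illustrative only; the uniform quantizer below is a placeholder for any scalar quantizer, and PCA is one of several possible decorrelating transforms):

```python
import numpy as np

def decorrelate_then_quantize(X, n_bits=3):
    """Rotate correlated rows into a decorrelated (PCA) basis,
    scalar-quantize there, and rotate back."""
    C = np.cov(X)                    # row-wise covariance
    _, U = np.linalg.eigh(C)         # orthonormal eigenbasis
    Y = U.T @ X                      # decorrelated coordinates
    step = (Y.max() - Y.min()) / (2 ** n_bits - 1)
    Yq = np.round(Y / step) * step   # scalar quantization in the new basis
    return U @ Yq                    # back to the original coordinates
```

Because the rotation is orthonormal, the reconstruction error in the original coordinates equals the scalar quantization error in the decorrelated basis.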
____
*7. Why are hardware efficiency metrics (e.g., latency, memory bandwidth) absent? ...*
The revised manuscript now includes Table 7, shown below, detailing the acceleration achieved by 3-bit-average weight quantization for different models / embedding dimensionalities. Sec 4 (last para) now reads: “Timing results. ... Table 7 lists the acceleration achieved by our custom quantized matrix-vector multiply kernel, with acceleration ranging between 1.4 and 3.3 depending on the embedding dimensionality.”
**Table 7: Acceleration due to quantized mat-vector multiplies relative to multiplication in FP16.** A 3-bit weight matrix of dimension N×M multiplies a vector of length M to produce a vector of length N (denoted M→N).
| Model (Embedding) | E→ E| E→ 4E | 4E→ E | Overall |
|--|--|--|--|--|
| OPT-1.3B (E=1024)| 0.9|2.1|2.7|1.4|
| OPT-6.7B (E=4096)| 2.4|3.1|3.1|2.8|
| OPT-30B (E=7168)| 3.2 |3.2|3.1|3.2|
| OPT-175B (E=12288)|3.2|3.2|3.8|3.3|
------------------------------------------
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed rebuttal. Most of my concerns are well addressed. I have raised my score.
Summary: The paper
* formulates a rate-distortion theoretic framework for quantization of LLMs
* designs an algorithm to solve the optimization (for model compression) resulting from that framework
* runs the model compression method on various models to show the aspects
## update after rebuttal
My assessment won't change. Overall the paper looks OK to me, and the authors do elaborate on the point I picked.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The paper
* defines the bit depth optimization problem in eq.3
* makes a relaxation and derives the first-order condition eq.4
* gets an approximation of the partial derivative in eq.5, then combines that with eq.4 to get the update rule eq.6
* the formula for the stochastic case is in eq.7
* eq.8 shows an asymptotically optimal choice of sigmoid function under the assumption of weights being Laplace-distributed
* eq.9 gives the theoretical gain from weight grouping with the claim of being non-negative
The math looks good.
Experimental Designs Or Analyses: * `WikiText2 perplexity` looks fair since the only moving part is the compression method while the authors' codebases are used for all the other methods to compare
* `Effect of hyperparameters on quantized model accuracy` shows the trend
* `Perplexity across optimization iterations` (Figure 4) is not very well explained
* `Ablations and pruning effects of quantization` looks good
* `2.x-bit quantization and downstream tasks` `C4 perplexity (validation)` look fair
Supplementary Material: B & C
Relation To Broader Scientific Literature: * bit depth determination has been a topic in LLM quantization
Essential References Not Discussed: Not as I'm aware of.
Other Strengths And Weaknesses: The rate-distortion framework borrows idea from classic information theory which makes the method principled.
Other Comments Or Suggestions: 0
Questions For Authors: 0
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you, especially for checking through the individual equations.
____
*1. Perplexity across optimization iterations (Figure 4) is not very well explained.*
Thank you for pointing this out. Figure 4 shows how quantized model accuracy (in terms of perplexity) improves as more gradient variances are accumulated and weights are re-quantized accordingly. We observe that performance saturates after about 30 accumulation iterations, beyond which additional iterations yield diminishing returns. This is valuable, as accumulating gradient variances via backpropagation can be computationally expensive — thus, the ability to early-stop after ~30 iterations is a practical benefit. | Summary: The paper focuses on the problem of compressing Large Language Models (LLMs) for efficient deployment on resource-limited devices. It proposes Channel-Wise Mixed-Precision Quantization (CMPQ), a novel mixed-precision quantization method. CMPQ allocates quantization precision in a channel-wise pattern based on activation distributions, using non-uniform quantization and incorporating two outlier extraction techniques. Experiments on different sizes of LLMs show that CMPQ outperforms existing post-training quantization methods, especially in 2-bit quantization tasks, and can achieve significant performance gains with a modest increase in memory usage.
Claims And Evidence: Yes. CMPQ can adaptively quantize LLMs to any specified bit-width and outperform baselines.
Methods And Evaluation Criteria: Yes. The performance of quantized LLMs is evaluated based on perplexity across language generation tasks (WikiText-2 and C4).
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes. Experiments are conducted on OPT (OPT-2.7B, OPT-6.7B) and LLaMA2 (LLaMA2-7B, LLaMA2-13B) models, and extended to other OPT models up to 30B parameters and LLaMA3-8B. The evaluation is based on perplexity on WikiText-2 and C4 datasets, and additional experiments are done on the MMLU benchmark.
Supplementary Material: Yes. The supplementary material includes additional experimental results on more LLMs, further demonstrating the effectiveness of CMPQ. It also provides a detailed analysis of the impact of sparsity levels, non-uniform quantization, and data efficiency for the calibration set. The latency and peak memory usage of CMPQ are also measured and compared with SqueezeLLM in the supplementary material.
Relation To Broader Scientific Literature: The paper is well-grounded in the existing literature on model quantization, especially post-training quantization and mixed-precision quantization. It reviews and builds upon previous works such as GPTQ, AWQ, QuIP, and LLM-MQ, identifying their limitations and proposing CMPQ as an improvement.
Essential References Not Discussed: The paper is well-grounded in the existing literature on model quantization, especially post-training quantization and mixed-precision quantization.
Other Strengths And Weaknesses: Strengths:
CMPQ shows superior performance in integer-bit quantization tasks and can achieve significant performance improvements with a small increase in storage overhead, especially in low-bit quantization.
It can adapt to any bit-width constraint, including fractional bit-widths, which is a significant advantage over many existing methods.
Low Memory Requirements: The method only requires forward propagation, resulting in moderate memory requirements during quantization, and has minimal reliance on the calibration set, reducing the risk of overfitting.
The paper conducts extensive experiments, including comparisons with multiple baselines, ablation studies, and evaluations of robustness to the calibration set, providing strong evidence for the effectiveness of CMPQ.
Weaknesses:
The introduction of multiple components such as channel-wise quantization, non-uniform quantization, and two types of outlier protection may make the method relatively complex to implement and understand compared to some simpler quantization methods.
Although the paper compares CMPQ with several relevant baselines, there may be other emerging quantization techniques that could be included in the comparison to further validate the superiority of CMPQ.
Other Comments Or Suggestions: The authors could further explore the generalization of CMPQ to other types of language models or even other types of neural networks beyond LLMs to expand the applicability of the method.
Questions For Authors: Have you considered combining CMPQ with other model compression techniques, such as pruning, to further improve the efficiency of LLMs?
Could you provide more insights into the selection of the specific percentages for outlier protection and how sensitive the performance of CMPQ is to these values?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you! First, we kindly clarify that our work is entitled “Radio”, not “CMPQ”. CMPQ appears to refer to a prior work by different authors.
____
*1. The introduction of multiple components such as channel-wise quantization, non-uniform quantization, and two types of outlier protection may make the method relatively complex to implement and understand...*
We respectfully disagree. As shown in Algorithm 1 and detailed in Appendix A, our method is straightforward to implement. Notably, our non-uniform quantization is achieved via a simple companding followed by uniform quantization — a lightweight alternative to more complex approaches like Lloyd-Max quantization. Furthermore, Radio does not involve any explicit outlier protection mechanism; perhaps this was conflated with another work.
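For intuition, companding-based non-uniform quantization can be sketched as follows. The mu-law curve here is a generic stand-in (the paper derives its own companding function in eq. 8), so the specific warp and its parameters are illustrative assumptions:

```python
import numpy as np

def compand_quantize(w, n_bits=3, mu=255.0):
    """Non-uniform quantization via companding: warp values with a mu-law
    style compressor, quantize uniformly, then unwarp."""
    s = np.max(np.abs(w)) + 1e-12
    x = w / s                                                     # to [-1, 1]
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)      # compress
    levels = 2 ** n_bits - 1
    yq = np.round((y + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0  # uniform grid
    xq = np.sign(yq) * np.expm1(np.abs(yq) * np.log1p(mu)) / mu   # expand
    return xq * s
```

The compressor allocates finer effective step sizes near zero (where most weights lie) and coarser steps in the tails, yet the quantizer itself remains a plain uniform grid, which is what keeps the scheme lightweight compared with Lloyd-Max.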
____
*2. Although the paper compares [Radio] with several relevant baselines, there may be other emerging quantization techniques that could be included in the comparison to further validate the superiority of [Radio].*
Thank you! As also requested by other reviewers, we add the quantization results of SqueezeLLM (Kim et al., 2024) and OmniQuant (Shao et al., 2024) in Tables 1, 5 as well as their timing results. Please see our response to DSG1.5 and i5t4.8.
____
*3. The authors could further explore the generalization of [Radio] to other types of language models ...*
Thank you! Low-bit quantization of Qwen and DeepSeek is currently planned. Their quantized model accuracy and performance will be shared directly on our project webpage (url to be revealed after double-blind review). Please also see our response to DSG1.4
____
*4. Have you considered combining [Radio] with other model compression techniques, such as pruning, to further improve the efficiency of LLMs?*
Yes — interestingly, Radio implicitly induces mild pruning through zero-bit quantization, as shown in Table 3b. Additionally, our method can be combined with weight matrix decorrelation before bit allocation and quantization. This is now discussed in Sec. 5, para. 2, where we mention that decorrelating correlated sources can improve scalar quantization (cf. Gersho and Gray, 1991). We plan to explore rate–distortion optimal quantization of decorrelated weights in future work. Please also see our response to DSG1.6
____
"5. Could you provide more insights into the selection of the specific percentages for outlier protection and how sensitive the performance of [Radio] is to these values?"
[Radio] does not use any explicit thresholding or masking for outlier protection. Instead, our companding function (Eq. 8 and Appendix C) continuously adjusts the quantization resolution across the distribution. This adaptively mitigates quantization error at the center while preserving dynamic range in the tails (outliers) — see Fig. 2 for a visualization. | null | null | null | null | null | null |
LASER: Attention with Exponential Transformation | Accept (poster)
Summary: This paper addresses the problem of vanishing gradients in standard dot-product attention in transformers. The authors show mathematically how this problem arises in the Jacobian of the attention function during backpropagation. Next, a new technique called LASER (Logarithm of Summed Exponentials of Representations) is introduced. As the name suggests, this technique is, at its core, like the log-sum-exp function, which is a new way of formulating/approximating the weighted average in attn(.) using the log of summed exponentials. The benefit of this technique is the avoidance of vanishing gradients, as shown mathematically in this paper, and consequently the better training of transformer-based models.
Claims And Evidence: Yes, claims are clear with convincing evidence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I did not check the proofs.
Experimental Designs Or Analyses: Yes. The experimental designs are sound and valid.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This paper is a key contribution to the broad literature of attention mechanisms in transformers.
Essential References Not Discussed: The paper does a good job with citing the essential references.
Other Strengths And Weaknesses: Strengths:
This paper address a key issue in transformer models with a simple yet novel and innovative idea. The paper is very well written and the flow of ideas are easy to follow. The paper is also mathematically grounded with experiments spanning across multiple applications.
Weakness:
The performance improvements shown are minor.
Other Comments Or Suggestions: 1. In section 3.2, the definition of $\tilde{A} = KQ^T$ is different from definition in 3.1
2. Is $v_1 - v_2 >> 0$ an obvious choice? Is it always true?
3. Equation (8) should be $log(exp(v_1 + log(a_{11})\textbf{)}+exp(v_2+log(a_{12}))\textbf{)}$ (the text is missing two closing brackets)
4. In algorithm 1, step 4: what is $exp(.)$ ?
5. In section 4.2, authors say that LASER shows more improvement in autoregressive LM compared to BERT. Is there some explanation/hypothesis to this in light of how LASER is formulated?
Questions For Authors: My main concern about the paper is: how significant are the improvements shown? In all the different tasks, the improvements seem minor. Did the authors perform statistical significance testing?
Furthermore, how does LASER scale to even larger transformer models? I understand that it might not be possible to conduct that experiment but the authors should comment on how LASER can be useful for larger models given how minor the improvements are.
Related to the above, does LASER introduce any computational overhead compared to vanilla attention? If yes, how is it worth using LASER given the performance and computational overhead tradeoff?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We appreciate the reviewer’s comments and suggestions.
### Addressing Other Comments or Suggestions
**Comment 2:**
Unfortunately, this is a typo — it is supposed to be `exp(v_1) - exp(v_2) >> 0`. While `v_1 - v_2` might not be significantly different, the difference between `exp(v_1)` and `exp(v_2)` can be much larger due to the exponential function, as dictated by the value parameter \( W_V \).
**Comments 1 & 3:**
Thank you for pointing these out. We will correct them in our revision.
**Comment 4:**
Thanks for pointing out this typo — there should be no `exp(.)` in that expression. We appreciate the feedback and will fix this in the revision.
**Comment 5:**
We conjecture that, since half the attention scores are zeros in decoder-only models, this could lead to less dependence on W_K and W_Q parameters, and less gradient flow through W_K and W_Q compared to BERT models.
---
### Questions for Authors
> **Reviewer:** My main concern about the paper is: how significant are the improvements shown? In all the different tasks, the improvements seem minor. Did the authors perform statistical significance testing?
In Table 7 (Appendix A.4), we note that the **standard deviation is low** compared to the difference between LASER and Standard attention, indicating that the observed improvements are statistically significant.
> **Reviewer:** Furthermore, how does LASER scale to even larger transformer models? I understand that it might not be possible to conduct that experiment but the authors should comment on how LASER can be useful for larger models given how minor the improvements are.
Please refer to our response to **Reviewer tFgw**, where we trained a **7.7B model** and demonstrated **average improvements of 1.44%**, with more pronounced gains of up to **6%** on individual downstream tasks.
Additionally, please see our response to **Reviewer 9sxs**, where we fine-tuned the **2.2B model** on the **SuperGLUE dataset** [1] for 10 epochs and found a **1.65% improvement in decoding accuracy** (as opposed to eval/ranking accuracy in Table 2).
> **Reviewer:** Related to the above, does LASER introduce any computational overhead compared to vanilla attention? If yes, how is it worth using LASER given the performance and computational overhead tradeoff?
As noted in **Lines 326–329** under *Performance Analysis*:
- The **2.2B model with standard attention** takes **27.22 hours** on TPU v5 to reach a minimum test loss of 2.327.
- In contrast, **LASER takes only 24.88 hours**, a **relative walltime improvement of 9.4%**.
---
[1] Wang, Alex, et al. *"SuperGLUE: A stickier benchmark for general-purpose language understanding systems."* Advances in Neural Information Processing Systems 32 (2019).
---
Rebuttal Comment 1.1:
Comment: I am happy with the comments and keep my score at 4 (accept).
---
Reply to Comment 1.1.1:
Comment: Thank you for the response. | Summary: This paper studies the problem of gradient saturation in the softmax of the attention architecture. To address it, the paper proposes LASER, a log-sum-exp structure that replaces the original dot-product attention formulation using the Log-Weighted-Sum-Exp trick. Extensive experiments on different benchmarks and models, with in-depth analysis, support the development of this novel attention mechanism.
Claims And Evidence: All the claims are correct with experimental / theoretical analysis as evidence.
Methods And Evaluation Criteria: The method is compared with various baseline choices, applied to different models on different tasks, and is universally better than baselines on average. However, one possible concern is that the improvements are marginal. Can the authors justify the use of LASER from other perspectives, e.g., faster convergence during training, stabilizing training, etc.? While the authors provide some results on training stability in the supplementary materials, it would be appreciated if they could provide some analysis of why LASER is more stable than the original attention mechanism.
Theoretical Claims: All the theoretical claims appear to be correct.
Experimental Designs Or Analyses: The experimental designs and analyses are thorough, especially for the part where different optimizers are ablated.
Supplementary Material: The supplementary materials provide additional information of the paper, and help with the understanding of the paper. I'm curious about why LASER is stabler than vanilla attention mechanism at training time, since the author mentioned that LASER introduces larger gradient norm, intuitively this may cause fluctuation in training. Can the author provide some analyses on this from perspectives like gradient distribution?
Relation To Broader Scientific Literature: The key contribution of this paper, a new attention mechanism that aims to solve the gradient saturation problem in attention, has great potential for a great impact in attention-based models, which are dominant in the current era.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The proposed LASER method indeed theoretically solves the gradient saturation problem.
2. The loss curves shown empirically demonstrate the effectiveness of LASER.
Weaknesses:
1. The quantitative experimental results on downstream tasks demonstrate marginal improvements compared to the vanilla attention mechanism.
Other Comments Or Suggestions: N/A
Questions For Authors: I'm open to discussion for the above-mentioned issues, and I think the justification of LASER in terms of some analyses on the better stability could make the paper more thorough.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the reviewer’s helpful observations.
> I'm curious about why LASER is stabler than vanilla attention mechanism at training time, since the author mentioned that LASER introduces larger gradient norm, intuitively this may cause fluctuation in training. Can the author provide some analyses on this from perspectives like gradient distribution?
To understand this, we draw an analogy to residual connections. Residual networks add skip connections to improve gradient backpropagation; these connections increase the gradient norm compared to a vanilla network, yet stability improves because more parameters are activated. Analogously, we conjecture that fixing the gradient backpropagation through the query and key within the softmax improves the learning of W_K and W_Q.
> I'm open to discussion for the above-mentioned issues, and I think the justification of LASER in terms of some analyses on the better stability could make the paper more thorough.
We will formally prove the above in our final revision, using the aforementioned intuition.
> The quantitative experimental results on downstream tasks demonstrate marginal improvements compared to the vanilla attention mechanism.
In our response to Reviewer tFgw, we trained a 7.7B model and observed **average improvements of 1.44%**, with more pronounced gains of up to **6%** on individual downstream tasks.
Additionally, we fine-tuned the 2.2B model on the SuperGLUE dataset [1] for 10 epochs and found a **1.65% improvement in decoding accuracy** (as opposed to eval/ranking accuracy reported in Table 2) on the SuperGLUE benchmarks listed below:
| Task | LASER | Standard |
|-------|--------|----------|
| Copa | 57.00 | 58.00 |
| Wic | 56.64 | 53.92 |
| WSC | 40.38 | 36.54 |
| RTE | 22.02 | 20.94 |
| **Avg** | **44.01** | **42.35** |
We believe these new fine-tuning results offer a more comprehensive view of how LASER improves over standard attention, beyond what is seen in 1-shot downstream evaluations.
[1] Wang, Alex, et al. *"SuperGLUE: A stickier benchmark for general-purpose language understanding systems."* Advances in Neural Information Processing Systems 32 (2019).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the point-by-point response to my questions. I appreciate their commitment to proving, in the final revision, the intuition behind LASER's stability advantage over vanilla attention. Upon reading the rebuttals the authors provided to other reviewers, I'm willing to increase my score to 4 (Accept).
---
Reply to Comment 1.1.1:
Comment: Thank you for the reconsideration of the score. | Summary: The papers studies the gradients during backpropagation of the standard softmax dot product attention within transformers. The authors' key insight is that these gradients can be extremely small, leading to poor gradient signal propagation, which in turn leads to sub-optimal learning. They then suggest a modification of attention to mitigate this issue by suggesting that attention should be carried into in an exponential value space, involving applying an exponential function to the values matrix along with a log function to the softmax probabilites.
**Update After rebuttal:**
I thank the authors for their rebuttal. I have re-read their paper several times including the appendix. I have also read the other reviews along with their rebuttals to the other reviewers and to my review. However, I am still not convinced that this work has enough novelty and significance to be published in a conference at the level of ICML. Therefore, I will be voting to reject the paper.
A real concern I have with the paper is that they have shown a lot of empirical results yet offer no real theoretical understanding of why their method works in the way they have reported. Furthermore, the paper has many statements that are well known to researchers who publish in this area. For example, the whole of Sections 3.1 and 3.2 is well known, and yet they dedicate more than a page to it. Lemma 3.1 is a very simple lemma which they claim to extend to arbitrary sequence length $N$, yet the extension is so simple that it cannot be considered a contribution. Furthermore, the theory of the paper simply consists of computing gradients and showing that certain quantities from the computation yield a saturation term (possibly of low order). This is by no means novel or significant.
**Final Comments on Rebuttal:**
1. **We would like to highlight a subtle distinction in our approach: we apply the logarithm after the multiplication..:**
Thank you for clearing that up. I am still not convinced that this is novel. Just because an idea has not been explored does not make it novel.
2. **Please refer to our response to Reviewer tFgw, where we scaled to 7.7B parameter models and found an average improvement of 1.44%...:**
Your empirical results are fine, but there is a major issue: there is no real explanation of why you are able to get these improvements.
3. **With better gradient signal propagation, which we theoretically demonstrate in our manuscript, we conjecture that gradients may be better conditioned with LASER than with Standard attention. However, we have not yet explored the effect of this on the convergence rate of the optimizer.**
I do not agree with this entirely. You actually do not prove that your method gives better signal propagation. You simply compute gradients and show how you can then apply your method to avoid the saturation that can take place. There is no proof that what you do leads to better signal propagation, and the relationship to the empirical results is thus still lacking. You should really prove that your method leads to better-conditioned gradients. That would then clearly show your method is capping off any overflow and leading to better training.
4. **We observed similar spikes in gradient norms for both LASER and Standard attention, but only a few such spikes translated into actual loss spikes. We address this in our stability analysis (Appendix A.5), where we note that Standard attention suffers from more loss spikes than LASER.**
Thank you. I noticed massive spikes in your analysis (Appendix A.5) even with normal attention, and you have not made clear why that is happening. The fix could simply be that a better learning rate scheduler needs to be chosen, yet no analysis is done to form an actual conclusion.
5. **Thank you for pointing this out. We will explicitly mention the following limitations of LASER in our revision...**
The quadratic complexity holds even for normal attention, so really you have not added anything with that point. I was asking about explicit limitations of LASER that are not present in usual attention. The fact that there were no limitations stated and analysed in the original paper is quite concerning, especially since, when reading the paper, I noticed several limitations, as stated above.
Claims And Evidence: Yes the claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are fine.
Theoretical Claims: The theoretical claims both in the paper and the appendix were checked and as best as I can see are correct.
Experimental Designs Or Analyses: The experimental designs are fine.
Supplementary Material: I reviewed all parts of the supplementary material.
Relation To Broader Scientific Literature: The key contributions in the paper are related to transformer training and the well known issues of gradient propagation in neural networks. this is a well studied and important topic and the authors give a new viewpoint of this.
Essential References Not Discussed: References are fine.
Other Strengths And Weaknesses: **Clarity:** I admire the authors' efforts in trying to understand gradient propagation within the attention layer of a transformer. Their approach of mapping everything to exponential space and working there is a nice, although somewhat simple, insight. The paper is written clearly and is simple to read.
However, I do have the following issues:
**Novelty:** I don't feel there is enough novelty in this paper to be published at ICML. The authors simply show that the attention gradients saturate during backpropagation, and then show that by applying an exponential to the values and a logarithm to the softmax probabilities one can produce a gradient that does not saturate. The approach is simply to look at the saturated gradient part and realise that a certain transformation can mitigate the effect of the saturation. Furthermore, the mathematical approach is simply computing a derivative using the chain rule. It would have been much better if the authors had actually shown that this saturation causes serious problems with the convergence of the optimizer, or, for that matter, shown that by getting rid of saturation using their LASER method a different optimizer such as SGD can be used to train a transformer, as it is well known that SGD performs extremely badly compared to Adam on many transformers. My feeling is this will lead to nothing, but the authors are welcome to prove me wrong.
**Significance:** This is related to the novelty. I don't believe the paper will be significant. While their insight is nice, my feeling is that that is all it is: a nice insight that leads to somewhat better performance via a simple adjustment obtained by carrying out some simple chain-rule calculations. I feel the authors should have really shown how LASER can be impactful and significant by demonstrating something practitioners in the field did not expect with transformers, such as better training with different optimizers like SGD.
Other Comments Or Suggestions: I do not have any other comments.
Questions For Authors: 1. Would the authors be able to say something about whether theoretically LASER leads to a better convergence rate for the optimizer?
2. Furthermore, I don't really see a big difference between LASER and standard attention in the training curves. For example, in figure 2 (top curve) it seems LASER converges faster but only slightly. Since your work is that LASER should have better back propagated gradients shouldn't this lead to a much better and faster train loss curve?
3. In figure 3 I notice an issue with the gradient norm. Between step 60000 and 80000 I see that LASER has a sudden jump in gradient magnitude. This is usually an effect of gradient becoming too large which is not good for training. Did this happen in other experiments? This could be a limitation of the work.
4. What are the limitations of LASER? I noticed you did not really discuss any limitations of your method.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful feedback.
> **Reviewer:** The authors simply show that gradients saturate in the attention gradients during backpropagation and then show that by applying an exponential on the values and a logarithm on the softmax probabilities one can produce a gradient that does not saturate.
We would like to highlight a subtle distinction in our approach: we apply the **logarithm *after*** the multiplication of softmax probabilities with the values. To the best of our knowledge, this idea has **not been explored** in existing literature, and we believe it offers a **novel attention formulation**.
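To make the distinction concrete, here is a minimal NumPy sketch of this formulation (the function name and tensor shapes are ours for illustration, not the authors' implementation): the logarithm is taken after the product of the softmax probabilities with the exponentiated values, and a max-shift keeps `exp(v)` from overflowing.

```python
import numpy as np

def laser_attention(q, k, v):
    """Illustrative sketch: attention in exponential value space,
    output = log(softmax(q k^T / sqrt(d)) @ exp(v)),
    with the log applied AFTER the probability-value product."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    probs = np.exp(scores - scores.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    # Max-shift per value dimension so exp never overflows; the shift
    # is added back outside the log (log-weighted-sum-exp idea).
    m = v.max(axis=0, keepdims=True)
    return np.log(probs @ np.exp(v - m)) + m
```

Because softmax probabilities are strictly positive, the weighted sum is bounded below by the probability assigned to the maximizing key, so the logarithm stays finite even for large value magnitudes.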
---
> **Reviewer:** Significance: This is related to the novelty. I don't believe the paper will be significant.
Please refer to our response to **Reviewer tFgw**, where we scaled to **7.7B parameter models** and found an **average improvement of 1.44%** in downstream evaluations, with **more pronounced improvements up to 6%** in individual tasks. Additionally, we show that this insight **generalizes to other tasks** such as:
- **ViT: +1.2%**
- **BERT**
- **Speech Transformer**
Also, in our response to **Reviewer 9sxs**, we fine-tuned the **2.2B model on the SuperGLUE dataset** [1] for 10 epochs and observed a **1.65% improvement in decoding accuracy** (in contrast to the eval/ranking accuracy reported in Table 2) across the SuperGLUE benchmarks.
---
> **Reviewer:** Would the authors be able to say something about whether theoretically LASER leads to a better convergence rate for the optimizer?
With better gradient signal propagation — which we theoretically demonstrate in our manuscript — we **conjecture** that gradients may be **better conditioned** with LASER than with Standard attention. However, we have **not yet explored** the effect of this on the convergence rate of the optimizer. We will consider including this analysis in our final revision.
---
> **Reviewer:** In Figure 3 I notice an issue with the gradient norm. Between step 60000 and 80000 I see that LASER has a sudden jump in gradient magnitude. This is usually an effect of gradient becoming too large which is not good for training. Did this happen in other experiments? This could be a limitation of the work.
We observed similar spikes in **gradient norms for both LASER and Standard attention**, but only **a few such spikes translated into actual loss spikes**. We address this in our **stability analysis (Appendix A.5)**, where we note that **Standard attention suffers from more loss spikes than LASER**.
---
> **Reviewer:** What are the limitations of LASER? I noticed you did not really discuss any limitations of your method.
Thank you for pointing this out. We will explicitly mention the following **limitations of LASER** in our revision:
- LASER attention has **quadratic computational complexity**, which can be significant for long sequence lengths.
- A **comprehensive failure case study** for LASER (and similarly for Standard attention) has not been conducted. This remains **future work**.
---
[1] Wang, Alex, et al. *"SuperGLUE: A stickier benchmark for general-purpose language understanding systems."* Advances in Neural Information Processing Systems 32 (2019). | Summary: This paper introduces LASER (LogArithm of Summed Exponentials of Representations), a novel attention mechanism for Transformers. The researchers found that in the standard attention mechanism, the gradients backpropagated through the softmax operation can be small, which may lead to inefficient learning of parameters. LASER addresses this by applying attention in the exponential value space. It conducts attention on exp(V) and uses a Log - Weighted - Sum - Exp trick to avoid numerical overflow. Experiments on various Transformer models, including autoregressive LLMs, Vision Transformer, and Conformer, show that LASER can improve performance. For example, in autoregressive language modeling on the C4 dataset, it can achieve up to a 1.74% relative improvement in test loss, and it also shows better results in downstream tasks with an average accuracy improvement of about 1%.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: no
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths
1. LASER really shows its worth in experiments. It can significantly reduce the test loss in autoregressive language modeling, like up to 1.74% in some cases. And it also improves the accuracy in downstream tasks, which means it can make the model perform better in practical applications. For instance, in the Vision Transformer on Imagenet, it has a 1.15% absolute improvement in error rate, which is a great result.
2. It's really convenient that LASER can be implemented with just small modifications to the existing attention implementations. It doesn't need to change the underlying attention function, which makes it easy for researchers and engineers to apply in their work. This simplicity also means it can be quickly integrated into different models.
3. The paper provides a solid theoretical analysis. By studying the gradient backpropagation in the attention mechanism, it clearly points out the problem of small gradients in the standard attention and explains how LASER can solve this problem. This theoretical support makes the proposed method more reliable and convincing.
4. LASER has been tested on a variety of models across different modalities, such as text, speech, and vision. It shows consistent improvements in all these models, which indicates that it is a versatile method and not limited to a specific type of model or task. This broad applicability makes it more valuable in different research and application scenarios.
Weaknesses
1. Although LASER shows good results in many experiments, it may not be suitable for all situations. The paper mainly focuses on models with a certain scale of parameters, and it's not clear how well it will perform in models with extremely small or large numbers of parameters. Also, it might face challenges in some specific tasks where the data characteristics are very different from those in the experiments.
2. The Log-Weighted-Sum-Exp trick is crucial for LASER to avoid numerical overflow. But this also means that the performance of LASER is highly dependent on this trick. If there are some issues with this trick in different hardware or software environments, it may affect the performance of LASER. And the paper doesn't fully explore the potential problems that might occur when using this trick.
3. The paper mainly compares LASER with the standard attention mechanism and some simple modifications of it. There are many advanced attention mechanisms proposed recently, and the paper doesn't compare LASER with them. So, it's hard to say how LASER stands out among the most advanced techniques in the field. This lack of comparison limits the understanding of LASER's superiority and competitiveness.
Other Comments Or Suggestions: no
Questions For Authors: see weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comprehensive comments.
> The paper mainly focuses on models with a certain scale of parameters, and it's not clear how well it will perform in models with extremely small or large numbers of parameters.
We trained a model with **7.7B parameters, scaling up our largest model (2.2B) by 3.5x**. We trained the model for 44B tokens and evaluated it on downstream tasks:
| Dataset | Laser Acc | Standard Acc |
|-------------|----------------|--------------------|
| arc_e | 52.48 | 52.69 |
| boolq | 62.45 | 56.48 |
| cb | 44.64 | 42.86 |
| hellaswag | 57.16 | 56.02 |
| multirc | 56.00 | 55.59 |
| openbookqa | 45.40 | 43.60 |
| racem | 44.64 | 44.15 |
| rte | 53.43 | 49.82 |
| storycloze | 71.78 | 71.51 |
| wic | 47.34 | 47.34 |
| winogrande | 58.41 | 57.77 |
| **average** | **53.97** | **52.53** |
Compared to the 2.2B model, which gave a gain of 1% on average, the 7.7B model shows about a **1.44% average gain over standard attention**, with **more pronounced differences on boolq (+6%), cb (+1.78%), openbookqa (+1.8%), and rte (+3.61%)**. In the 7.7B parameter model, we scaled along all dimensions: model_dimension=3440, hidden_dimension=11584, num_heads=16, dims_per_head=720.
We conducted a power-law fit (Figure 4) on the test loss of models ranging from 234M to 2.2B parameters and conjecture that **smaller models might show smaller differences in loss value with LASER**. However, this observation is **for decoder language models on C4**. In contrast, in our experiments with **ViT-S/16 (22M parameters), LASER gives a 1.2% improvement** in classification error.
> Also, it might face challenges in some specific tasks where the data characteristics are very different from those in the experiments.
While we showed gains on standard downstream tasks used to evaluate the performance of large language models, identifying tasks where LASER (and, similarly, softmax attention) performs poorly is interesting future work we plan to pursue.
> The Log-Weighted-Sum-Exp trick is crucial for LASER to avoid numerical overflow. But this also means that the performance of LASER is highly dependent on this trick.
**We cached 4800 query, key, and value matrices of size (1024, 8, 256) (sequence length 1024, 8 heads, head size 256) during training**, and computed the following numerical reconstruction relative errors of the attention output in bfloat16: **vanilla LASER (0.0018, 0.0002), LASER + log-weighted-sum-exp trick (0.0017, 0.0001), standard attention (0.0016, 0.0001)**. This experiment was conducted on a machine with 4 **TPUv5 chips**. We conducted the same experiment on **16 A100s** and found the following errors: **vanilla LASER (0.002, 0.0003), LASER + log-weighted-sum-exp trick (0.0019, 0.0002), standard attention (0.0018, 0.0002)**. While on average we found the log-weighted-sum-exp trick to help on both TPUv5 and A100s, **we note that this trick prevents overflows, which is crucial for stable training**. A similar trick, famously known as **log-sum-exp [1], is used to prevent overflows due to the exp(.) function in softmax**, and it is adopted by both PyTorch and JAX in their softmax implementations.
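The same max-shift stabilization underlies the standard log-sum-exp computation mentioned above; a minimal sketch (our illustration, not any library's source):

```python
import numpy as np

def logsumexp(x):
    # Stable log(sum(exp(x))): subtracting max(x) keeps every exp in [0, 1],
    # so no overflow, and the shift is added back outside the log.
    m = x.max()
    return m + np.log(np.exp(x - m).sum())
```

A naive `np.log(np.exp(x).sum())` overflows to infinity once entries of `x` exceed roughly 709 in float64; the shifted version does not.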
> There are many advanced attention mechanisms proposed recently, and the paper doesn't compare LASER with them.
**Diff Transformer [1] is a recent state-of-the-art attention mechanism and concurrent work that outperforms standard attention**; it uses the difference of two softmax attention matrices, multiplied with the value matrix, as the output. We note that both matrices still face backpropagation challenges due to the use of softmax. To address this, we apply the LASER formulation. We trained a 2.2B Diff Transformer with model dim 2048, hidden dim 8192, 8 attention heads, head size 512, and 32 layers. The model was trained on 24 billion tokens.
| Dataset | Diff+LASER | DiffTransformer |
|-------------|-------------|-----------------|
| arc_e | 49.2845 | 49.2003 |
| cb | 42.8571 | 41.0714 |
| hellaswag | 51.9319 | 51.5834 |
| multirc | 55.1361 | 52.8259 |
| openbookqa | 44.2000 | 43.0000 |
| racem | 42.4095 | 40.7382 |
| rte | 52.3466 | 50.9025 |
| storycloze | 71.0315 | 71.2988 |
| wic | 50.0000 | 49.5298 |
| winogrande | 56.0379 | 55.8011 |
| **Average** | **51.5235** | **50.5951** |
On average, **we observe an improvement of ~1%, similar to the improvement in the 2.2B transformer model in Section 4.1**.
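The difference-of-softmaxes formulation described above can be sketched as follows (an illustrative NumPy sketch; `lam` stands in for the Diff Transformer's learnable scalar and is fixed here for illustration):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def diff_attention(q1, k1, q2, k2, v, lam=0.8):
    """Sketch of the Diff Transformer idea: the output is the DIFFERENCE
    of two softmax attention maps applied to the values."""
    a1 = softmax(q1 @ k1.T / np.sqrt(q1.shape[-1]))
    a2 = softmax(q2 @ k2.T / np.sqrt(q2.shape[-1]))
    return (a1 - lam * a2) @ v
```

With `lam=0` this reduces to a single standard softmax attention map, which makes clear that the LASER reformulation could in principle be applied to each of the two softmax maps separately.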
[1] Ye, Tianzhu, et al. "Differential transformer." (2024). | null | null | null | null | null | null |
On the Local Complexity of Linear Regions in Deep ReLU Networks | Accept (poster) | Summary: This paper suggests a new measure for analyzing feedforward neural networks based on the density of points where the model’s gradient is discontinuous (“kinks”) near training samples, which they term Local Complexity (LC). It builds upon prior findings that neural networks tend to form fewer linear regions, especially in later training stages, which relates to the generalization question in over-parameterized networks. To establish their measure’s relevance, the authors connect it to: (a) the dimensions of the features manifold, as previous results suggest that networks that generalize better tend to form low-dimensional representations; (b) robustness to gradient-based adversarial attacks, showing that LC is at least as indicative as the TV of the model over the data distribution; and (c) the “representation cost”, which is the smallest possible norm of the model’s parameters needed to exactly compute some function. These connections are proved theoretically as upper and lower bounds for the LC, and empirically for toy examples.
Claims And Evidence: Strengths
1. It offers an intuitive and mathematically rigorous measure for analyzing models and can be used to formalize previous results. Most, if not all, previous results referenced by this paper are only tested on very small networks (1D or 2D inputs) that can be analyzed analytically; this paper offers a method to generalize their results to more complex settings.
2. The math is well-developed, extensive and easy to follow.
3. Simple and intuitive examples.
4. The authors provide experimental results to highlight the limitations of some connections.
Weaknesses
1. The LC term was previously introduced in a paper Humayun, A. I., Balestriero, R., and Baraniuk, R. Deep networks always grok and here is why. In High-dimensional Learning Dynamics 2024: The Emergence of Structure and Reasoning, (https://openreview.net/pdf?id=NpufNsg1FP). This raises questions about the novelty of the concept, even though the purposes of the two papers appear to be different. There are quite similar ideas, definitions and illustrations.
2. It is not clear that the paper offers a new insight into model generalization or otherwise. The dynamics of linear regions in the input space and their connections to the training procedure and generalization are referenced, and no new insight is given; no results demonstrate that a drop in LC correlates with a drop in the generalization error. While LC is an interesting measure, it's not clear that it currently provides a useful novel perspective.
3. It is very lacking in results. The only somewhat practical result is its connection to the adversarial robustness, which also needs to be tested on more complex settings than MNIST. While it seems that prior work in this area also lacks large-scale experiments, this paper could have used LC to demonstrate analytical results in more complex settings.
4. The paper shows that the connection to the feature’s manifold dimension only empirically holds in toy examples and breaks on MNIST, but the counterexample is in the appendix, and although referenced, is not discussed in detail. This contradicts parts of their insights and conclusion and feels "hidden".
5. There is not much discussion on the tightness of the bounds.
6. In the case of Total Variation (TV), both Figure 3 in the main paper and Figure 9 in the Appendix were expected to provide clear evidence that TV is bounded by Local Complexity (LC). However, the figures suggest that TV does not always remain strictly below LC, raising questions about the tightness of the theoretical bound and whether additional factors influence TV's behavior in late-stage training.
Methods And Evaluation Criteria: See above
Theoretical Claims: The theory appears good, relating LC to TV. Its relevance and importance are less clear, as is the novelty with respect to previous papers, as stated above.
Experimental Designs Or Analyses: Not sufficient. Very simple networks and datasets.
Supplementary Material: Reads well, theory and proofs are discussed broadly in sufficient manner.
Relation To Broader Scientific Literature: Relation to generalization properties at later stages of training, which is an important problem in DNNs.
Essential References Not Discussed: Relations to the Humayun et al. (2024) paper should be discussed thoroughly, along with the additional contributions made here.
Other Strengths And Weaknesses: see above
Other Comments Or Suggestions: • Contains multiple incorrect references (e.g., “Figure 5” on page 7 should reference Figure 3). Additionally, the Impact Statement is missing a period.
• The MNIST example was not fully clear. It did not specify how many or which classes were used for training, only stating that a very small subset of the data was taken.
• In Figure 2, why is there a spike rising after 1e6 iterations? Does this always happen (as it seems from Figure 7)?
• In Figure 2, the bounding connection between LC and LR was not completely clear when the drop occurred in each. Adding a grid might help improve readability.
• Lines 158, 307 – Missing capital letters, affecting consistency.
• Left paragraph, Lines 77-78 – Sentence unclear.
• Right paragraph, Line 137 – "that" should be "than".
• Line 290 – Unclear sentence: "an 4", "to estimate the learn a map" needs correction.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and detailed comments which have helped us improve our paper. We have addressed your thorough list of minor errata for the camera-ready version. Below we address each major point raised.
> The LC term was previously introduced in a paper ... This raises questions about the novelty of the concept, even though the purposes of the two papers appear to be different.
Response: We appreciate this point and wish to clarify the novelty of our contribution. The referenced prior work introduced the LC term predominantly in an empirical context, highlighting an intriguing phenomenon without providing any theoretical analysis or claims. In contrast, our work explicitly develops a theoretical foundation for the Local Complexity. Specifically, we derive rigorous theoretical explanations and demonstrate connections between LC and other established theoretical aspects of deep learning. Therefore, while the previous paper contributed an interesting empirical observation, our paper significantly extends the theory of Local Complexity in neural networks.
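As an illustration of the quantity under discussion, the following hedged sketch estimates local complexity at a point by counting pre-activation sign changes ("kinks") along short random segments through it. Here `preacts` is an assumed helper exposing all ReLU pre-activations of a network; this is our illustrative estimator, not the paper's exact definition.

```python
import numpy as np

def local_complexity(preacts, x, n_dirs=64, radius=0.1, n_steps=50):
    """Average number of kink boundaries crossed along short random
    segments through x.  `preacts(x)` maps an input to the concatenated
    pre-activation vector of every ReLU unit; each sign flip along a
    segment marks a crossed linear-region boundary."""
    rng = np.random.default_rng(0)
    kinks = 0
    for _ in range(n_dirs):
        d = rng.standard_normal(x.shape)
        d *= radius / np.linalg.norm(d)
        ts = np.linspace(-1.0, 1.0, n_steps)
        signs = np.stack([np.sign(preacts(x + t * d)) for t in ts])
        # each step where any unit flips sign marks a crossed kink boundary
        kinks += int((np.diff(signs, axis=0) != 0).any(axis=1).sum())
    return kinks / n_dirs
```

For a single "unit" whose pre-activation is the input itself, a point sitting on the kink yields one crossing per direction, while a point far from it yields zero, matching the intuition that LC measures the local density of nonlinearities.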
> ... It's not clear that it currently provides a useful novel perspective
Response: We respectfully emphasize that our theoretical framework does indeed provide a novel and valuable perspective, as it unifies different areas of study in deep learning theory under the common theme of the local complexity. We are able to derive quantitative insights into important phenomena, including adversarial robustness, grokking, and neural collapse. To the best of our knowledge, this has not been explored or established in previous literature.
> It is very lacking in results / ... needs to be tested on more complex settings than MNIST.
Response: We acknowledge this feedback and note that readers seeking extensive empirical results may find Humayun et al. particularly relevant. Our paper, however, focuses primarily on the rigorous development of a theoretical framework that lays a significant foundation for understanding and analyzing related empirical phenomena. We provided experiments to better illustrate and motivate our theoretical contributions, which is the focus of our paper. Nevertheless, we will be glad to add more extensive experiments on the local complexity on CIFAR-10, CIFAR-100 and Imagenette for the camera-ready version.
> There is not much discussion on the tightness of the bounds. ... raising questions about the tightness of the theoretical bound and whether additional factors influence TV's behavior in late-stage training.
Response: We thank the reviewer for raising this important point. For an extensive discussion of the tightness of our theoretical bounds and the conditions under which they hold, we direct the reviewer to Appendix B.4. In that appendix, we clarify the relationship between total variation (TV) and local complexity (LC), particularly addressing situations where TV increases while LC does not. We demonstrate that such discrepancies are driven by increases in the Lipschitz constants of the networks, which are explicitly accounted for in our bound, thus clarifying the precise circumstances affecting the tightness of our theoretical predictions.
> Relations to Humayun et al 2024 paper should be well discussed and the additional contributions here.
Response: We agree that our paper is closely related to the work of Humayun et al. (2024). Indeed, our work aims to provide a theoretical angle on how to understand the empirical observations from their work. We are adding more discussion of the similarities and differences for the camera-ready version.
Please let us know if any aspects of our work would benefit from further clarification.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal response. I agree the theoretical contribution is of significance and hope more clarifications regarding prior art and additional experiments are provided, as promised. I raise my rating. | Summary: The authors introduce two novel metrics, Local Complexity (LC) and Local Rank (LR), to analyze the structure of linear regions in ReLU networks. These metrics provide insights into the relationship between network complexity, feature representations, adversarial robustness, and representation cost. Theoretical bounds are established for LC and LR, and empirical experiments validate these findings, offering a new framework for studying the geometric properties of neural networks.
Claims And Evidence: All theorems/propositions/corollaries are supported with convincing proofs detailed in the supplementary materials. Some of them are summarized below:
Claim: lower Local Complexity corresponds to learned low-dimensional representations.
Evidence: theoretical bound (Theorem 5, proved in A.5) + empirical study (Figure 2).
Claim: Local Complexity is an upper bound on total variation and relates to adversarial robustness.
Evidence: theoretical bound (Theorem 7, proved in A.8) + empirical study (Figure 3).
Claim: Training dynamics drive networks toward lower local complexity solutions.
Evidence: theoretical bounds (Proposition 9, proved in A.11, Corollary 10) + empirical study (Figure 13).
Methods And Evaluation Criteria: Yes:
- The metrics LC and LR are mathematically well-defined.
- The Stanford Bunny experiment is a good toy problem for illustrating linear regions in an interpretable 2D setting.
- MNIST is appropriate for evaluating adversarial robustness in real-world settings.
However, the choice of noise variance σ affects LC estimation. A sensitivity analysis on different noise levels would improve confidence in the robustness of the results. For example, what happens in Figure 5 for other values of σ?
Do the effects remain qualitatively the same in the two setups? It is also unclear how this value is chosen: why is the middle panel of Figure 4 better than its left or right counterpart?
Theoretical Claims: The proofs seem sound and well-structured, but I did not check their validity closely.
Experimental Designs Or Analyses: The experiments seem valid, although it would be great to provide access to a GitHub repository to reproduce the results.
Supplementary Material: I reviewed mostly Sections B and C. There is a linking/naming problem in Section B.3:
1. “on Figure 5” references Section 5 but should reference Figure 3?
2. “as in Figure B.1” → “as in Section B.1”
3. “In Figure 5” → “In Figure 3”
There is also the same problem in line 40 of the paper: “We replicate similar experiments in Figure 5.” Other than that, the supplementary materials provide useful information.
Relation To Broader Scientific Literature: The study builds upon prior work on the expressivity and complexity of ReLU networks (Montufar et al., 2014, Pascanu et al., 2014, Serra et al., 2018…). While these previous papers were focused on counting/bounding the number of linear regions, this work is the first to introduce a notion of density of linear regions over the input space.
The discussion on grokking aligns with recent work on the training dynamics of these networks and the link with linear regions (Humayun et al., 2024, Liu et al., 2022). The paper empirically confirms findings from Humayun et al., 2024 that grokking corresponds to a reduction in the number of linear regions. The link between weight decay and reduced LC aligns with prior work on representation cost (Jacot, 2023).
Prior works (Croce et al., 2019) showed that larger linear regions correlate with increased robustness. This paper formalizes that idea by proving a theoretical upper bound between local complexity and total variation, showing that networks with lower LC are more robust.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: Strengths :
- The paper is well-written and clear about its assumptions, and every claim is proved.
- The contribution seems to be relevant to the field and so could impact future research.
- The idea of using the density of linear regions instead of their number is original.
Weaknesses :
- The method for estimating LC via bias perturbations could be better motivated, particularly regarding sensitivity to noise variance.
- The paper would benefit from a discussion of how LC compares to standard complexity metrics like VC dimension.
Other Comments Or Suggestions: List of typos:
- Line 78: “the a network” → “of the network”
- Line 155: “is bears” → “bears”
- Line 409: “properities” → “properties”
- Line 1790: “intialization” → “initialization”
Questions For Authors: It is known that constraining the Lipschitz constant K of a neural network enhances its robustness. Following your results on robustness, do you think LC and K could be correlated, and if so, in what ways?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and detailed comments which have helped us improve our paper, and their positive review. We have addressed your minor errata for the camera-ready version.
> The method for estimating LC via bias perturbations could be better motivated, particularly regarding sensitivity to noise variance.
Response: Our theory centers around taking the expectation of a random level set of a function, building on an analysis of the threshold at which a neuron transitions from inactive to active. The biases directly control this threshold, which is why we focus on bias perturbations; this is illustrated in the proof of Lemma 12. Perturbing the biases controls the level sets directly, which makes the resulting formulas more tractable. A more in-depth discussion can be found in Section 3.1.
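A minimal numerical sketch of this idea (our illustration, not necessarily the paper's exact estimator): under Gaussian bias noise, a neuron's activation state flips exactly when the noise pushes its threshold past the pre-activation, so the average flip fraction serves as a proxy for how densely linear-region boundaries sit near the data. The architecture, weights, and σ below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network (hypothetical weights and scale).
d_in, width, n_pts, sigma = 2, 64, 1000, 0.1
W = rng.normal(size=(width, d_in))
b = rng.normal(size=width)
X = rng.normal(size=(n_pts, d_in))

z = X @ W.T + b          # pre-activations, shape (n_pts, width)
base_pattern = z > 0     # unperturbed activation pattern

# Monte Carlo over bias perturbations eps ~ N(0, sigma^2): a neuron's
# activation flips at x iff the perturbed pre-activation changes sign.
n_samples = 200
flip_fraction = 0.0
for _ in range(n_samples):
    eps = rng.normal(scale=sigma, size=width)
    flip_fraction += np.mean((z + eps > 0) != base_pattern)
lc_proxy = flip_fraction / n_samples  # avg fraction of neurons near a boundary
```

In this sketch, increasing `sigma` increases `lc_proxy`, mirroring the σ-sensitivity raised in the review.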
> The paper would benefit from a discussion of how LC compares to standard complexity metrics like VC dimension.
Response: We agree that relating LC to traditional complexity measures such as VC dimension is an intriguing direction. However, establishing a quantitative connection between LC, which captures the local behavior and density of linear regions in the network, and global complexity measures of a family of functions such as the VC dimension is challenging. We believe that this presents an interesting avenue for future work, and we plan to include a mention to this in the discussion section for the camera-ready version.
> It is known that constraining the Lipschitz constant $K$ of a neural network enhances its robustness. Following your results on robustness, do you think LC and $K$ could be correlated, and if so, in what ways?
Response: Our bounds relating the total variation (TV) of the network over the input space and the local complexity (LC) can shed some light on the relationship between the Lipschitz constant $K$ and LC. For instance, a simple bound shows that
$$TV = \mathbb{E}_x\left[\, \| \nabla f(x) \| \,\right] \leq K.$$
This suggests that controlling the Lipschitz constant may indirectly influence the local complexity, thereby impacting robustness.
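This bound is easy to sanity-check numerically. The sketch below (an illustrative architecture with random weights, not taken from the paper) estimates $TV$ by Monte Carlo for a one-hidden-layer ReLU network and compares it against the product of spectral norms, a standard Lipschitz upper bound $K$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h = 3, 32
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
b1 = rng.normal(size=h)
W2 = rng.normal(size=(1, h)) / np.sqrt(h)

# Lipschitz upper bound K: product of spectral norms of the weight matrices.
K = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

# Monte Carlo estimate of TV = E_x ||grad f(x)|| for f(x) = W2 relu(W1 x + b1):
# on each linear region the gradient is (mask * W2) @ W1.
X = rng.normal(size=(5000, d))
masks = (X @ W1.T + b1 > 0).astype(float)  # active-neuron indicators per point
grads = (masks * W2) @ W1                  # row i holds the gradient at x_i
tv = np.mean(np.linalg.norm(grads, axis=1))
# tv <= K holds for every sample, since the 0/1 masks can only shrink norms.
```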
Please let us know if any aspects of our work would benefit from further clarification.
---
Rebuttal Comment 1.1:
Comment: ** Sorry the comment was not for this article**
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. However, the content of your comment seems to be unrelated to our previous discussion. Could you please confirm if this was intended for our paper, or if there might have been a mixup? | Summary: The paper studies ReLU networks and proposes local complexity that estimates the average density of gradient discontinuities under an input distribution and a parameter set. Three theorems are provided. First, it is shown that local complexity can be estimated by gradients at each neuron. Second, it is established that the local rank can be bounded in terms of local complexity where the local rank is defined as the rank of the average rank of the Jacobian. Third, local complexity can be used to bound total variation of the ReLU network if the Lipschitz constant is given under several assumptions. Since network with a lower total variation can have adversarial robustness, a lower local complexity may be an indicator of robustness. Finally, it is shown that the local complexity can be bounded by the representational cost that can be connected to weight decay, implying that local complexity might be minimized during training.
Claims And Evidence: The claims in the paper are supported by proofs.
Methods And Evaluation Criteria: This is a theoretical paper providing insights to ReLU networks. Several results are provided to establish the notion of local complexity and how it can be used to understand robustness.
Theoretical Claims: The reviewer did not verify the correctness of proof.
Experimental Designs Or Analyses: The notion of local complexity is a natural way to study the density of linear regions in a ReLU network. The analysis seems fairly reasonable.
Supplementary Material: The reviewer did not verify the proof in the supplementary material.
Relation To Broader Scientific Literature: The results shed light on ReLU networks and the findings are of interest to the broader community of machine learning.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
1. The notion of linear regions in ReLU network is an important concept and local complexity provides a way to estimate the density of the linear regions.
2. Links to local rank and total variation are interesting and seem to have implications for the training dynamics of ReLU networks.
Weaknesses:
1. It is not clear to the reader why the local rank is defined through the Jacobian. What is its relation to the rank of the weight matrix? In addition, what are the implications of having lower ranks? Would this be directly connected to the robustness mentioned in Section 5?
2. It seems the bound in Theorem 7 is hard to compute as it depends on the maximum Lipschitz constant across different layers of the network. Does it imply that one needs to estimate the Lipschitz constants of all of the subnetworks to estimate the total variation? How does the maximum Lipschitz constant relate to the maximum norm of the gradient $C_{grad}$? Some comments on how to estimate the bound would be useful to enhance the clarity of the theorem.
3. Proposition 9 illustrates the relation between local complexity and depth. However, it is only valid at the global optimum of the problem in (15) which minimizes the representational cost. Are there more general bounds that describe the local complexity in terms of depth? What was the main difficulty in the analysis?
4. Although the local complexity can be connected to total variation and thus develop some understandings of robustness, it is not known how to use it to design a more robust model or motivate architecture design.
Other Comments Or Suggestions: 1. Line 199: Is $\sigma$ the standard deviation? Should it be $N(0,\sigma^2 I)$? In Theorem 2, should it be $N(\beta_i,\sigma^2)$?
2. Line 290: to estimate the learn a map between
Questions For Authors: 1. Could you highlight the difference between the definition of local complexity and the one in (Hanin & Rolnick, 2019b)?
2. How does $\sigma$ affect the local complexity in Theorem 2? This parameter seems to be hidden by $\rho_{b_i}$.
3. Are the constants in Theorem 5 and 7 the same ones defined in Corollary 3?
4. Reverse engineering a ReLU network may require the knowledge of all of the discontinuities in Eq. (2). If the local complexity is known, would it be possible to shed light on the hardness of reverse engineering?
5. If weight noise is also used, would this change the theoretical results? For example, would it still be possible to estimate the total variation?
6. For a given ReLU network, we can create a corresponding ResNet by adding skip connections. Would this ResNet have lower local complexity?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and detailed comments. We have addressed minor errata for the camera-ready version.
> It is not clear to the reader why the local rank is defined through the Jacobian. What is its relation to the rank of the weight matrix? ... Would this directly connect to the robustness mentioned in Section 5?
The rank of the Jacobian at a point quantifies the local dimension of the feature manifold, that is, the number of directions in which the input may be locally transformed. For a ReLU network, the Jacobian is given by
$$Jf(x) = W_1 D_1 W_2 D_2 \cdots W_L D_L,$$
where each $D_i$ is a diagonal matrix with entries in $\{0,1\}$ indicating which neurons are active. This implies that the rank of the Jacobian is bounded above by the smallest rank among the weight matrices. One might expect that networks mapping data to a low-dimensional manifold could exhibit simpler, and potentially more robust, decision boundaries.
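As a quick numerical check of this bound (toy dimensions and random weights of our choosing, not from the paper), one can build the Jacobian from the masked-product form and verify that its rank never exceeds the smallest weight-matrix rank:

```python
import numpy as np

rng = np.random.default_rng(2)
dims = [5, 8, 3, 6]  # input, hidden widths, output; the width-3 bottleneck caps the rank
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
bs = [rng.normal(size=dims[i + 1]) for i in range(3)]

x = rng.normal(size=dims[0])
J = np.eye(dims[0])
h = x
for i, (W, b) in enumerate(zip(Ws, bs)):
    z = W @ h + b
    last = i == len(Ws) - 1          # linear output layer, no ReLU
    D = np.eye(len(z)) if last else np.diag((z > 0).astype(float))
    J = D @ W @ J                    # chain rule: each ReLU contributes a 0/1 mask
    h = z if last else np.maximum(z, 0)

min_weight_rank = min(np.linalg.matrix_rank(W) for W in Ws)
# rank(J) <= min_weight_rank: the rank of a product is at most the rank of
# any factor, and each diagonal mask D can only zero out rows.
```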
> Some comments on how to estimate the bound would be useful to enhance the clarity of the theorem.
We estimate the bound on local complexity via the operator norms of the weight matrices. In particular, we compute an upper bound via
$$\max_l \| W_l \cdots W_1 \|,$$
which is tractable to compute and provides a useful approximation. Additional details are provided in Appendix Section B.4. We also show that the term including $C_\text{grad}$ is typically $0$.
> Are there more general bounds that describe the local complexity in terms of depth?
We also derive a bound that explicitly incorporates network depth via the notion of representation cost (see Proposition 8) which is valid for any parameters. We can substitute the representation cost for the $L^2$ norm of the weights as well.
> ... it is not known how to use it to design a more robust model or motivate architecture design.
While our current results do not provide a recipe for designing inherently more robust models, our findings suggest that controlling the rank of the weight matrices could be a promising direction for enhancing robustness and guiding architectural choices. We also demonstrate a connection between weight decay and local complexity, which may guide future work.
> Could you highlight the difference between the definition of local complexity and the one in (Hanin & Rolnick, 2019b)?
Hanin & Rolnick investigate how the depth and number of neurons affect the expected number of linear regions for a generic distribution of parameters. In contrast, in our definitions and results we are concerned with how the linear regions vary for a specific choice of parameters, and we are concerned with the distribution of linear regions over the data distribution, which are aspects that are not discussed in the work of Hanin & Rolnick. We also explore properties like Local Rank and Total Variation, which are not addressed in their work.
> How does $\sigma$ affect the local complexity in Theorem 2? This parameter seems to be hidden by $\rho_{b_i}$.
The choice of $\sigma$ will affect the local complexity through the probability density function of the normal distribution, $\rho_{b_i}$, as well as through the magnitude of noise that is added to the biases, as we take expectation over this in the computation of LC. We show in Figure 4 the effect of increasing this.
> Are the constants in Theorem 5 and 7 the same ones defined in Corollary 3?
Yes, these are the same constants.
> If the local complexity is known, would it be possible to shed light on the hardness of reverse engineering?
This is an interesting point for further work. To our knowledge, our current framework does not directly address this, though one would expect that a low local complexity makes this easier.
> If the weight noise is also used, would this change the theoretical results? For example, would it be possible to still be able to estimate the total variation?
It is possible to extend the definitions to include perturbations of the weights, though this may introduce changes in the direction of the boundaries, which are naturally more complicated. In Figure 5 in the appendix, we illustrate that adding perturbations to the weight matrices does not qualitatively change our measure of the local complexity. The computation of the total variation would also be similar.
> For a given ReLU network, we can create a corresponding ResNet by adding skip connections. Would this ResNet have lower local complexity?
Consider a simple one-layer example where
$$f_{\text{res}}(x) = f_\theta(x) + x.$$
Its Jacobian is
$$Jf_{\text{res}}(x) = Jf_\theta(x) + I.$$
Here, the discontinuities in the Jacobian $Jf_{\text{res}}(x)$ occur only where $Jf_\theta(x)$ is discontinuous. So, the local complexity of the ResNet does not differ markedly from that of the corresponding plain ReLU network.
Please let us know if further clarification is needed.
---
Rebuttal Comment 1.1:
Comment: The reviewer would like to thank the authors for their detailed responses. The rating will be kept unchanged. | null | null | null | null | null | null | null | null |
Fraud-Proof Revenue Division on Subscription Platforms | Accept (poster) | Summary: The paper "Fraud-Proof Revenue Division on Subscription Platforms" addresses the problem of revenue distribution on subscription-based platforms, particularly in the context of music streaming services. The authors formalize three types of manipulation-resistance axioms—fraud-proofness, bribery-proofness, and Sybil-proofness—and evaluate existing revenue division mechanisms against these axioms. They find that the widely used GlobalProp mechanism fails to prevent fraud and makes fraud detection computationally intractable. The authors propose a novel mechanism, ScaleDUserProp, which satisfies all three manipulation-resistance axioms and is shown to be a fairer alternative through both theoretical analysis and empirical evaluation on real-world and synthetic datasets. The paper also introduces fairness axioms such as engagement monotonicity and Pigou-Dalton consistency, and demonstrates that ScaleDUserProp performs well on these metrics compared to existing rules like UserProp and UserEQ.
Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. The authors provide rigorous theoretical proofs for their proposed axioms and mechanisms, and they back up their claims with empirical experiments on both synthetic and real-world datasets. The theoretical results are well-articulated, and the empirical results are presented with sufficient detail, including the use of the Music Listening Histories Dataset, which adds credibility to their findings.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors use a combination of theoretical analysis and empirical evaluation to validate their claims. The synthetic data generation process is well-described, and the use of real-world data from the Music Listening Histories Dataset adds robustness to their findings. The evaluation metrics, such as pay-per-stream (PPS) and maximum envy (ME), are well-chosen to measure fairness and manipulation resistance.
Theoretical Claims: The theoretical claims in the paper appear to be correct. The authors provide detailed proofs for their key results, including the manipulation-resistance properties of the proposed mechanisms (e.g., Theorems 3.1, 3.8, 4.3) and the fairness axioms (e.g., Theorems 3.3, 4.4). The proofs are well-structured and logically sound, and the authors provide additional proofs in the appendix for omitted results.
Experimental Designs Or Analyses: The experimental designs and analyses are sound. The authors conduct experiments on both synthetic and real-world datasets, and they provide a clear explanation of their methodology. The results are presented in a way that allows for easy comparison between different mechanisms, and the authors discuss the implications of their findings in detail. The use of synthetic data helps to control for certain variables, while the real-world data provides a realistic evaluation of the mechanisms.
Supplementary Material: The supplementary material includes additional proofs and discussions that support the main claims of the paper. The authors provide detailed proofs for some of the theorems that are omitted in the main text, which adds to the rigor of the paper. The supplementary material also includes a discussion on how the proposed model generalizes portioning rules, which is a useful addition for readers interested in the broader context of the work.
Relation To Broader Scientific Literature: The paper is well-situated within the broader scientific literature on revenue division and manipulation resistance in subscription platforms. The authors reference prior work on revenue-sharing mechanisms, such as GlobalProp and UserProp, and they build on these ideas to propose a new mechanism that addresses the limitations of existing approaches. The paper also connects to the literature on fairness axioms and cooperative game theory, particularly in the context of the Shapley value and Pigou-Dalton consistency.
Essential References Not Discussed: The paper does a good job of citing relevant prior work. I am not sure if they cited all key works since I am not familiar with this topic.
Other Strengths And Weaknesses: Strengths:
* The paper addresses a timely and important problem in the context of subscription-based platforms, particularly in the music streaming industry.
* The proposed ScaleDUserProp mechanism is novel and offers a compelling solution to the problem of manipulation resistance.
* The paper provides a thorough theoretical analysis, supported by empirical evidence, which adds to its credibility.
* The authors do a good job of connecting their work to the broader literature on fairness and revenue division.
Weaknesses:
* While the paper is well-written, some parts of the theoretical analysis could be more accessible to readers who are not familiar with the technical details of revenue division mechanisms.
* The paper could benefit from a more detailed discussion of the limitations of the proposed mechanism, particularly in terms of scalability and computational complexity.
Other Comments Or Suggestions: None
Questions For Authors: 1. Comparison with Other Mechanisms: The paper compares ScaleDUserProp with GlobalProp, UserProp, and UserEQ. Are there other mechanisms in the literature that the authors considered but did not include in their analysis? If so, why were they excluded?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review. Please find our response below.
---
> Comparison with Other Mechanisms: The paper compares ScaledUserProp with GlobalProp, UserProp, and UserEQ. Are there other mechanisms in the literature that the authors considered but did not include in their analysis? If so, why were they excluded?
To the best of our knowledge, these are the only rules that have been proposed and studied in the literature to date.
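For concreteness, here is a minimal sketch of the two main baselines as they are usually described in this literature: GlobalProp ("pro-rata") pools all subscription fees and splits them by global stream share, while UserProp ("user-centric") splits each user's own fee by that user's stream share. The data and the unit fee are illustrative, and this is our reading of the rules rather than the paper's exact formalization:

```python
from collections import defaultdict

# streams[user][artist] = play count; each user pays a subscription fee of 1.
streams = {"u1": {"A": 99, "B": 1}, "u2": {"B": 10}}
pool = len(streams)  # total revenue collected

# GlobalProp (pro-rata): pool all fees, split by global stream share.
totals = defaultdict(float)
for plays in streams.values():
    for artist, n in plays.items():
        totals[artist] += n
grand_total = sum(totals.values())
global_prop = {a: pool * n / grand_total for a, n in totals.items()}

# UserProp (user-centric): split each user's fee by that user's stream share.
user_prop = defaultdict(float)
for plays in streams.values():
    s = sum(plays.values())
    for artist, n in plays.items():
        user_prop[artist] += n / s
```

Here u1's heavy streaming lets artist A capture most of u2's fee under GlobalProp (A receives 1.8 of the 2 units) but not under UserProp (A receives 0.99), which is exactly the leverage that streaming fraud exploits.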
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I'll keep my score. | Summary: The authors propose a mechanism-design framework to counter manipulation in subscription-based streaming platforms. They formally define three types of fraudulent behaviors and illustrate how the commonly used “global proportion” revenue rule is highly vulnerable. To address this, they introduce some new axioms and prove that no single existing mechanism can satisfy them all. Then, they propose ScaledUserProp, which is a revenue-sharing mechanism designed to resist manipulation while incentivizing engagement. Experiments on some real-world streaming data demonstrate that ScaledUserProp reduces fraudulent activity compared to current industry practices.
Claims And Evidence: Overall, I found this paper to be well written, both in their setup and methods. This does indeed seem to be an interesting and well motivated economics problem, and the authors seem to make a good case for their ScaledUserProp mechanism.
My main issue: I am not a computational economist, and did not expect to be reviewing economics papers for ICML. ICML is potentially the wrong venue for this work, and I worry that this may lead to a lower reviewing bar than at, say, Economics and Computation.
After a quick skim of the references in this paper, I found no publications at ICML / Neurips / ICLR. The closest: a paper published at KDD, and another at SODA. Most of the cited papers, on the other hand, were published at economics conferences / journals.
That said (and despite my lack of background), I did my best to review the paper for the benefit of the authors. However, I feel my lack of a sense of prior work has prevented me from asking pertinent and non-trivial questions. I’d like the AC to take this into account when using my review.
Methods And Evaluation Criteria: (discussed below)
Theoretical Claims: I did not check all of the theoretical claims, as they were extensive and significantly outside of my area. However, I did try to understand some of the claims, and follow some proofs. Overall, the claims I spent time with were reasonably stated. The proofs were mostly believable, but may have omitted some important details (hard for me to tell, again outside my area, so not sure what the norms are). Below, I offer some questions / comments.
**Bribery proofness:** So, the definition states that for any two instances where the engagement profiles differ for exactly $k$ users, $$\left|\phi_{\mathcal{I}}(\hat{C}) - \phi_{\mathcal{I}'}(\hat{C})\right| \leq 1$$ This bound of “1” is independent of $k$. This surprised me: in other words, even if many users’ weights are changed (i.e., $k > 1$), the change in the allocated revenue for any subset $\hat{C}$ is still bounded by 1. This is quite strong? My naive take is that this mixes the single-user notion (as in click-fraud-proofness) with a multi-user scenario. Are we trying to allow only a unit change *regardless* of the number of bribed users?
**Clarity in Sybil-Proofness and Strong Sybil-Proofness:** For Definition 2.9 of Sybil-proofness, there’s a statement that “no artist benefits from splitting or merging,” and then in the formal definition it is given that the condition is “for any two instances $\mathcal{I} = (N,C,\mathbf{w})$ and $\mathcal{I}' = (N,C,\mathbf{w}')$ with $\mathbf{w}_i = \mathbf{w}'_i$ for each user that is not a Sybil identity.” I found the terms “splitting” and “merging” a bit too informal, which made it hard for me to understand the Sybil-proofness property; can you formalize these? Did I miss something?
**Question on the a “Pigou-Dalton Consistency” statement:** So at some point, you state that ScaledUserProp satisfies no free-ridership, engagement monotonicity, and Pigou-Dalton consistency. Then, later in "Proof of Theorem 4.4" and in Table 1, you explicitly state/show that ScaledUserProp **fails** Pigou-Dalton consistency for every $\alpha \in (0,1]$. So which one is it? Am I missing something?
**Proof of Theorem 3.2** I tried to follow this proof more closely. There were a few steps that seemed a little unclear to me. When you say “by linearity” and then make an assumption on the sum of $w_{ic}$, and then say $f(0,T,N) = 0$, I’m not entirely sure I see how this works out.
Experimental Designs Or Analyses: I did take a close look at the provided notebooks / “real world experiments.” This is probably the weakest portion of the paper. The plots are hard to evaluate: the behavior on the synthetic and real data seems analogous, although there are clear differences when interpolating over alpha, and the tradeoffs are simply reported for these two scenarios rather than explored with the empirical rigor typical of ICML experiments. More extensive and better-interpreted empirical results (asking questions through ablations, different ways of looking at the data, etc.) that walk the reader through the use case and the impact on artists would help for a conference like ICML.
Supplementary Material: I reviewed a few proofs more closely in the appendix supplement, and opened the code notebooks to verify that they seemed reasonably implemented (they did).
Relation To Broader Scientific Literature: Hard for me to say - I am not familiar with this area.
Essential References Not Discussed: Hard for me to say - I am not familiar with this area.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: My main question (beyond some of the clarifying questions above): can you convince me that there’s an audience for this work at ICML? Can you give me an example of a similar strain of work that's been presented at the conference in say, the past 3 years? I don't see the point in this paper being presented at the conference, in place of some other worthy paper, if it has no audience. This led me to set my score as "Weak Reject," despite the paper's clarity of presentation and interesting problem, but I am open to being shown that I am wrong!
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review. Please find our responses below.
---
> Main question: can you convince me that there’s an audience for this work at ICML? ... This led me to set my score as "Weak Reject," despite the paper's clarity of presentation and interesting problem, but I am open to being shown that I am wrong!
While this specific line of work on subscription platforms is relatively new (motivated by economics), our work is part of a broader literature on using computational/algorithmic methods to tackle incentive challenges in online economic systems/platforms (these works aim to contribute theoretical foundations that complement more applied ML work). This is a central concern in the EconCS community, which has a growing presence at ICML.
Some recent ICML papers on strategic manipulation/incentive compatibility on online systems/platforms:
- Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict? ICML’24
- How Bad is Top-K Recommendation under Competing Content Creators? ICML’23
- Performative Recommendation: Diversifying Content via Strategic Incentives. ICML’23
The type of “adversarial fraud” we study in subscription platforms also has a close analogue in recommendation systems (which in itself is actively studied in venues like ICML and NeurIPS), where similar challenges are known as poisoning attacks.
Several other examples of work at ICML looking at strategic manipulation/incentive compatibility:
- Online mechanism design for information acquisition. ICML’23
- Fairness Interventions as (Dis)Incentives for Strategic Manipulation. ICML’22
- Making Paper Reviewing Robust to Bid Manipulation Attacks. ICML’21
- Incentivizing Compliance with Algorithmic Instruments. ICML’21
- Strategyproof Mean Estimation from Multiple-Choice Questions. ICML’20
Thus, we believe our paper fits naturally within this growing line of research at ICML and would garner interest from the community.
Moreover, ICML'24 has hosted multiple workshops that reflect this interest. For instance:
- “Agentic Markets” workshop focuses on the intersection of market/incentive design and agentic AI, and derives insights from “economics, mechanism design, game theory”.
- “Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact” workshop includes a topic on “Strategic behavior and its impact on algorithmic decision-making”.
- “Next Generation of AI Safety” includes “Agentic AI” as a theme, where “adversary exploitation” of deployed AI systems is a concern.
- “Models of Human Feedback for AI Alignment” workshop specifically calls for perspectives from “economics”
These are all close thematic matches. Perhaps one additional relevant reference (will add this citation) is the paper “Computational Copyright: Towards A Royalty Model for Music Generative AI” at the ICML'24 Workshop on Generative AI and Law, but they focus on royalty models for AI-generated music.
We hope that these examples of thematically similar papers and workshops from recent years have persuaded you that there is an audience at ICML for our work; and if so, that you would be able to reconsider your evaluation of our work.
---
> ... Are we trying to allow only a unit change regardless of the number of bribed users?
Nope. If $k$ users are bribed, then we allow at most $k$ units of change as defined in Def 2.6. When we compare bribery-proofness (BP) to click-fraud proofness, we focus on the case of “a single user altering their engagement”. Then, by BP, the difference for any subset of artists is 1. We can make this wording clearer. As a side note, by Prop 2.7 it suffices to only consider single-user BP.
---
> Proof of Thm 3.2
We consider linearity as used in linear algebra, and thus exclude affine functions (will clarify this). Formally, $f\left(\sum_{i \in N} w_{ic},\sum_{i \in N} \sum_{j \in C} w_{ij},N\right) = \sum_{i \in N} w_{ic} \times g\left(\sum_{i \in N} \sum_{j \in C} w_{ij},N\right)$. Clearly, if $\sum_{i \in N} w_{ic} = 0$ then for all args $T, N$, $f(0, T, N) = 0$.
---
> ... More extensive/well interpreted empirical results (that ask questions through ablations, different ways of looking at the data, etc.), which help the reader through the use case and impacts on the artists, ...
Our main contributions lie in introducing a novel approach to the problem, along with a set of axioms—particularly focused on manipulation-resistance—and supporting theoretical results. We agree that more extensive experiments would provide valuable insights. However, given the space/scope constraints of a conference paper, we prioritized highlighting what we believe are the most novel and fundamental aspects of the problem. But we appreciate the suggestion and will consider this for future work, thanks!
---
We have noted other comments and will address them accordingly (adding an example to better understand the def of (Strong) Sybil-proofness, and fixing the typo in the statement for Pigou-Dalton Consistency).
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's careful response to my review. I have also read through responses from the other reviewers.
Thank you for providing me a laundry list of prior work. I took a look at the first few papers; they focus on optimization / algorithmic approaches to solving some objective. They each cite extensive prior work from the ML community, in particular work published at ICML / Neurips / ICLR.
Let me say this: I believe that ICML should accept EC work. However, the point I was trying (and perhaps failed) to make was that ICML reviewers have a particular expertise and knowledge-base; namely, problems around optimization and learning algorithms.
If a paper on EC contains an interesting application of those algorithmic tools, or if that EC work leads to the expansion of those tools, then it seems like a natural fit for ICML.
However, if that's not the case, and it's difficult to find prior work published at ICML / NeurIPS / ICLR that's relevant to cite (as it appears to be in your case), then maybe you're submitting to the wrong venue.
I had hoped that other reviewers would be more expert on the topic, but as I suspected, three of us had little familiarity with the literature, and only one had some familiarity. I read the review by the one reviewer with more familiarity. They did a good job, but I was not convinced by their questions + your responses that the paper received adequate scrutiny in the review process.
**So, I have decided to maintain my score, to highlight for the AC this issue. That said, there's every chance that this paper is worthy of acceptance.**
Hopefully the AC is more expert in the field, can read the paper, and make the correct decision. I am genuinely sorry to not feel able to raise my score; your work is well thought out and put together, and your responses to the reviews suggest you will make necessary adjustments if accepted. You clearly have a chance (as the other reviews were more positive than me), so it'll be up to the AC -- best of luck. | Summary: This paper examines fraud-proof mechanisms in subscription platforms. Specifically, the authors define a set of axioms covering fundamental properties, protection against strategic manipulation, and fairness in revenue division mechanisms. They analyze commonly adopted mechanisms and verify which axioms they satisfy. Based on these observations, the authors propose a new mechanism, ScaledUserProp, designed to enhance fairness in revenue distribution by considering the maximum envy metric. Experiments on both real-world and synthetic datasets demonstrate the effectiveness of the proposed mechanism.
---
I keep my score unchanged after the rebuttal.
Claims And Evidence: **(Pro)** The theoretical results are generally sound.
**(Con)** In the axioms of fraud-proofness, the final constraint is given as $\phi_{I'}(\hat{C}) - \phi_{I}(\hat{C}) \le \hat{n}$. Why is the right-hand side not generalized to a more flexible form, such as $\hat{n} \cdot A$, where $A$ is a constant? The current formulation seems restrictive to the specific case of $\hat{n}$. Would the theoretical results still hold under this more general setting? A similar concern applies to the bribery-proofness condition.
Methods And Evaluation Criteria: The evaluation metrics and methods are generally sound.
Theoretical Claims: **(Pro)** I did not verify the theoretical claims in full detail, but they appear to be generally sound.
**(Con)** Some claims require further clarification. In Definition 2.4, does fraud-proofness hold for all $\hat{N} \subseteq N$ and $\hat{C} \subseteq C$? I noticed that in Definition 2.6, the condition "for all $\hat{C} \subseteq C$" is explicitly stated. Additionally, in Definition 2.12, should there be a further requirement that $\delta < w\_{ij} - w'\_{ij}$? This would prevent cases where $w'\_{ij} = w\_{ij} - \delta$ is so small that $|w'\_{ij} - w'\_{i'j}|$ becomes larger than $|w\_{ij} - w\_{i'j}|$.
Experimental Designs Or Analyses: **(Con)** In the experiments, the authors analyze the top and bottom few agents based on their pay-per-stream values. However, in Section 4, the theoretical analysis focuses on only the top and bottom single agents. Could the authors clarify why these two evaluation methods differ?
Supplementary Material: I briefly checked the proofs.
Relation To Broader Scientific Literature: **(Pro)** The paper's key contribution is the comprehensive analysis of existing revenue division mechanisms across a broad range of axioms and the introduction of a new mechanism to improve fairness.
Essential References Not Discussed: No essential references appear to be missing.
Other Strengths And Weaknesses: No additional strengths or weaknesses were identified.
Other Comments Or Suggestions: The paper is generally well-written. However, the axioms are quite complex and require significant effort to understand. It would be helpful if the authors included more illustrative examples to clarify the axioms.
Questions For Authors: See the concerns listed above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review. Please find our responses below.
---
> “In the axioms of fraud-proofness, the final constraint is given as $\phi_{I'}(\hat{C}) - \phi_{I}(\hat{C}) \leq \hat{n}$. Why is the right-hand side not generalized to a more flexible form, such as $\hat{n} \cdot A$, where $A$ is a constant? The current formulation seems restrictive to the specific case of $\hat{n}$. Would the theoretical results still hold under this more general setting? A similar concern applies to the bribery-proofness condition.”
The $\hat{n}$ term in both the fraud-proofness and bribery-proofness axioms is a result of the assumption where each user's subscription cost is set to 1, but this is without loss of generality. This assumption implies that creating $\hat{n}$ fake users incurs a total cost of $\hat{n}$. However, the formulation is indeed flexible: if we assume that the cost of creating a user is instead a constant $A$, the right-hand side can be generalized to $\hat{n} \cdot A$ without affecting any theoretical result. Note that our model can also be generalized to consider users with variable cost (refer to our response to Reviewer Qvr6).
---
> “Some claims require further clarification. In Definition 2.4, does fraud-proofness hold for all $\hat{N} \subseteq N$ and $\hat{C} \subseteq C$? I noticed that in Definition 2.6, the condition "for all $\hat{C} \subseteq C$" is explicitly stated. Additionally, in Definition 2.12, should there be a further requirement that $\delta<w_{ij}-w'_{ij}$? ...”
For Definition 2.4: Yes that’s right, we will add $\hat{C} \subseteq C$ (the part on $\hat{N} \subseteq N$ is already handled by the universal quantification over instances), thanks!
For Definition 2.12, we noticed a typo in condition (ii), specifically in the second inequality: it should read $w_{i'j}' \leq w_{ij}'$ instead of $w_{i'j} \leq w_{ij}$. This correction ensures that the new engagement profile is “at least as balanced” as before for candidate $j$, aligning with the intended interpretation you mentioned. The proofs remain valid after correcting this typo. Thanks for pointing this out!
---
> “In the experiments, the authors analyze the top and bottom few agents based on their pay-per-stream values. However, in Section 4, the theoretical analysis focuses on only the top and bottom single agents. Could the authors clarify why these two evaluation methods differ?”
Our theoretical results on the top and bottom single agents easily extend to top-$k$ and bottom-$k$ agents as well, making it consistent with that used for the experiments. For all our counterexamples, we can simply duplicate each agent $k$ times. We will clarify this point in the revised version.
In our experiments, we chose to report metrics for the top and bottom few agents rather than just the single best and worst, as we believe this provides a more robust assessment—mitigating the impact of potential outliers that may disproportionately affect the extremes.
---
We have noted your other comments/suggestions and will take them into account in the revision. Thanks!
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. After considering the other reviews, I have decided to keep my score unchanged. | Summary: The paper explores fraud-proof revenue division on subscription platforms like Spotify and Apple Music, where users pay a fixed fee for unlimited access, and creators are compensated based on engagement. Current revenue-sharing rules, like GLOBALPROP (proportional to total streams), are vulnerable to manipulation through bots and click farms, making fraud detection complex and computationally hard.
The authors propose three new fraud-resistance axioms—fraud-proofness (preventing profit from fake users), bribery-proofness (preventing profit from bribed users), and Sybil-proofness (preventing profit from splitting/merging identities). Existing rules like USERPROP and USEREQ offer some protection but fail on fairness or manipulation resistance. The paper introduces SCALEDUSERPROP, a new mechanism that adjusts user contributions based on engagement intensity, making it fraud-proof, bribery-proof, and Sybil-proof.
Claims And Evidence: **Supported Claims:**
1- The paper shows that GLOBALPROP fails the fraud-proofness and bribery-proofness axioms.
It also shows that detecting fraudulent activity under GLOBALPROP is computationally NP-hard.
2- USERPROP and USEREQ improve on GLOBALPROP by satisfying fraud-proofness and bribery-proofness.
3- The paper shows that SCALEDUSERPROP satisfies fraud-proofness, bribery-proofness, Sybil-proofness, and fairness axioms (engagement monotonicity and Pigou-Dalton consistency).
**Potentially Problematic Claims:**
The claim that **"SCALEDUSERPROP is the fairest alternative"** is not fully substantiated. While the empirical results support that SCALEDUSERPROP reduces disparity, fairness is inherently subjective and context-dependent. Reducing disparity is not necessarily a positive outcome, as content providers may differ in quality, and the platform may want to incentivize and attract higher-quality artists.
The paper does not provide a **concrete definition of fairness** beyond the Pigou-Dalton consistency and engagement monotonicity axioms. A more thorough discussion of trade-offs—such as how fairness is balanced against platform revenue, user satisfaction, and content quality—would strengthen this claim. Additionally, the paper does not address how the proposed fairness notion might impact the platform’s long-term ecosystem, including its ability to attract and retain high-quality content providers.
**"Detecting fraud under GLOBALPROP is computationally intractable."**
The paper proves that finding the set of artists who benefit the most from fraud is NP-hard.
However, it does not explore whether heuristic or approximate methods could still be effective in practice. Acknowledging this would make the claim more balanced.
**"USERPROP and USEREQ are fraud-proof and bribery-proof."**
The proofs assume that the cost of creating fake users is normalized to 1 unit per user — but in practice, this cost could vary depending on the platform's structure. A sensitivity analysis or discussion of how varying costs would affect these conclusions would strengthen this claim.
Methods And Evaluation Criteria: The paper presents formal proofs that SCALEDUSERPROP satisfies the proposed axioms.
The computational complexity results (e.g., NP-hardness of fraud detection under GLOBALPROP) are well-supported by theoretical analysis.
The evaluation uses both real-world data (Music Listening Histories Dataset) and synthetic data to test the proposed mechanisms.
Theoretical Claims: See above
Experimental Designs Or Analyses: See above
Supplementary Material: I did skim them; did not check them line by line
Relation To Broader Scientific Literature: This paper contributes to the broader literature on robustness to strategic manipulations in digital platforms. Similar issues have been studied in the context of online advertising markets. Notable examples include
"Dynamic Incentive-Aware Learning: Robust Pricing in Contextual Auctions" (http://papers.neurips.cc/paper/9169-dynamic-incentive-aware-learning-robust-pricing-in-contextual-auctions.pdf) and
"Dynamic Reserve Prices for Repeated Auctions: Learning from Bids" (https://link.springer.com/chapter/10.1007/978-3-319-13129-0_17).
Additionally, there are empirical studies on fake reviews, such as "An Empirical Investigation of Online Review Manipulation" (https://www.aeaweb.org/articles?id=10.1257/aer.104.8.2421) and "The Market for Fake Reviews" (https://pubsonline.informs.org/doi/10.1287/mksc.2022.1353).
Also, the paper contributes to the literature on fairness in recommendation systems. Example include
Interpolating Item and User Fairness in Multi-Sided Recommendations, (https://nips.cc/virtual/2024/poster/93355)
User-item fairness tradeoffs in recommendations (https://openreview.net/pdf?id=ZOZjMs3JTs)
Essential References Not Discussed: The paper "Learning Product Rankings Robust to Fake Users" (https://dl.acm.org/doi/10.1145/3465456.3467580) is highly relevant to this study. It focuses on designing learning algorithms for ranking products on digital platforms while being resilient to fake clicks. It would be beneficial for the authors to broaden the discussion on manipulation and mitigation strategies beyond the subscription-based model, considering a more general framework that could apply to various types of digital platforms.
Other Strengths And Weaknesses: The exposition could be improved in several places, as parts of the paper appear rushed or underdeveloped.
I strongly recommend that the authors improve the positioning of this work within the broader literature. Adding a dedicated related work section, even if placed in the appendix, would help clarify how this work builds on and differs from existing research.
Other Comments Or Suggestions: 1- In Definition 2.4, please clarify what $\hat n$ and $\hat C$ are, and explain better in words what this definition aims to say. Overall, the definitions would benefit from more explicit discussion.
For example, in Definition 2.4 (**Fraud-proofness**), here is how I interpret it. A rule $\phi$ is fraud-proof if an attacker cannot create fake users and profit from them. Here, $N$ = Set of real users, $\tilde{N} \subseteq N$ = Set of fake users. $C$ = Set of real artists, $\tilde{C} \subseteq C$ = Set of fake artists. $w_i$ = Engagement profile of user $i$.
Fraud-proofness holds if the extra profit from fake users does not exceed the cost of creating them: $\phi_{I'}(\tilde{C}) - \phi_I(\tilde{C}) \leq |\tilde{N}|$, where $|\tilde{N}|$ is the number of fake users (cost is assumed to be 1 unit per user).
For **Bribery-Proofness**, here is my understanding: Bribery-proofness means that an attacker cannot profit by bribing real users to increase engagement with specific artists.
Consider any two instances: $I = (N, C, w)$ = initial setup, and $I' = (N, C, w')$ = setup after changing the engagement of exactly $k$ users. The rule is bribery-proof if: $\phi_{I'}(\tilde{C}) - \phi_I(\tilde{C}) \leq k$
where:
$\phi_{I'}(\tilde{C})$ = Revenue to attacker’s artists after bribing;
$\phi_I(\tilde{C})$ = Revenue to attacker’s artists before bribing; and
$k$ = Number of bribed users (cost assumed to be 1 unit per user)
2- Please recall the definition of $\alpha$ in Theorem 2.8
3- Also, try to better explain GLOBALPROP and USERPROP in words. Here is my understanding: GLOBALPROP (Global Proportional Rule) works as follows: (i) Revenue is pooled into a global pot. (ii) Each artist’s payment is proportional to their share of the total engagement across the entire platform. USERPROP (User Proportional Rule) works as follows:
(i) Each user’s subscription fee is treated as an individual pool. (ii) Each artist’s payment is based on their share of that specific user’s engagement. (iii) Revenue is distributed based on what individual users actually listen to, rather than global totals.
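To make the contrast concrete, here is a minimal Python sketch of my reading of the two rules (the function names, toy engagement numbers, and the assumption that each user pays 1 unit with fraction `alpha` redistributed are mine, not the paper's):

```python
# Sketch of my reading of GLOBALPROP vs. USERPROP (toy values are mine).
# w[i][j] = engagement of user i with artist j; each user pays 1 unit,
# and a fraction alpha of total revenue is redistributed to artists.

def global_prop(w, alpha=1.0):
    # One global pot; each artist's share is proportional to their
    # share of total platform-wide engagement.
    pot = alpha * len(w)
    total = sum(wij for wi in w.values() for wij in wi.values())
    artists = {j for wi in w.values() for j in wi}
    return {j: pot * sum(wi.get(j, 0.0) for wi in w.values()) / total
            for j in artists}

def user_prop(w, alpha=1.0):
    # Each user's fee is its own pot, split among the artists that
    # user engaged with, proportionally to that user's engagement.
    pay = {j: 0.0 for wi in w.values() for j in wi}
    for wi in w.values():
        s = sum(wi.values())
        for j, wij in wi.items():
            pay[j] += alpha * wij / s
    return pay

w = {"u1": {"a": 99, "b": 1},  # heavy listener
     "u2": {"b": 1}}           # light listener
print(global_prop(w))  # a gets 2*99/101 ~ 1.96: one heavy user dominates the pot
print(user_prop(w))    # a: 0.99, b: 1.01: u2's full fee stays with "b"
```

On this toy instance, USERPROP caps what the heavy listener u1 can direct to any artist at u1's own fee, which seems to be the intuition behind the fraud- and bribery-proofness bounds.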
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review. Please find our responses below.
---
> The claim that “SCALEDUSERPROP (SUP) is the fairest alternative” is not fully substantiated. ... Additionally, the paper does not address how the proposed fairness notion might impact the platform’s long-term ecosystem, including its ability to attract and retain high-quality content providers.
We note that a central focus of our paper is on manipulation-resistance, as emphasized in the title, abstract, and throughout the main text. Fairness is a desirable property, but it plays a complementary role in our work. Broader questions about how fairness interacts with platform objectives and long-term ecosystem health are important, but fall outside the scope of our paper.
That said, we are careful in our claim, stating that “on real-world data, SUP emerges as the fairest mechanism among those considered”. To substantiate this claim empirically, we consider the disparity in the value of each user’s stream (“pay per stream”)—that is, the extent to which streams from different users are valued unequally. Prior work (e.g., Dimont, 2018; Meyn et al., 2023; refs in main text) has highlighted that high disparities in per-stream payments can plausibly be viewed as unfair, particularly when artists are paid unequally for equivalent listener engagement. Such disparity under USERPROP (UP) arises even for a single artist receiving the same number of streams from different users—making it unrelated to artist quality and thus, we argue, difficult to justify from a fairness standpoint.
While rewarding “high-quality” artists is a valid platform design choice, defining “quality” is normative and beyond our scope. In this work, we evaluate fairness conditional on observed engagement, and within this scope, we argue that SUP provides a more defensible approach by “treating equal engagement more equally”. If one equates quality with engagement, our experiments show that SUP better rewards high-engagement artists compared to UP.
---
> The paper proves that finding the set of artists who benefit the most from fraud is NP-hard. However, it does not explore whether heuristic or approximate methods could still be effective in practice. Acknowledging this would make the claim more balanced.
We note that the problem we reduce from (SSBVE) cannot be approximated better than $O(n^{1/4})$ under plausible complexity conjectures. With our reduction, for a threshold $\gamma$, under the same complexity conjecture, this rules out any constant factor approximation of the size $k$ such that a set of $k$ artists have _potentially suspicious profit_ exceeding the threshold $\gamma$.
That said, heuristics may work well in practice, and certain “gap versions” (e.g., distinguishing no manipulation from large manipulation) could be polynomial-time solvable. Exploring such variants would be an interesting direction for future work.
---
> “"USERPROP (UP) and USEREQ (UEQ) are FP and BP" The proofs assume the cost of creating fake users is normalized to 1 unit per user—but in practice, this cost could vary depending on the platform's structure...”
We note that UP and UEQ could be extended with varying cost while still maintaining fraud-proofness (FP) and bribery-proofness (BP). If we let the cost a user pays be $c_i$, UP could be defined as $\phi_{I}(j) = \sum_{i \in N} \frac{w_{ij}c_i}{\sum_{j' \in C} w_{ij'}} \times \alpha$ and UEQ could be defined as $\phi_{I}(j) = \sum_{i \in N} \frac{\mathbf{1}[w_{ij}>0]\,c_i}{|\{j' \in C : w_{ij'}>0\}|} \times \alpha$. The definitions of FP and BP would need to be adapted accordingly.
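For concreteness, a direct transcription of these two generalized rules into Python (a sketch only; the function names and toy values below are illustrative, not part of the paper):

```python
# Sketch of the cost-generalized rules from above (names/values illustrative).
# w[i][j] = engagement of user i with artist j; c[i] = subscription cost of
# user i; alpha = fraction of revenue redistributed to artists.

def user_prop_var_cost(w, c, alpha=1.0):
    # phi(j) = sum_i (w_ij * c_i / sum_{j'} w_ij') * alpha
    pay = {j: 0.0 for wi in w.values() for j in wi}
    for i, wi in w.items():
        s = sum(wi.values())
        for j, wij in wi.items():
            pay[j] += alpha * wij * c[i] / s
    return pay

def user_eq_var_cost(w, c, alpha=1.0):
    # phi(j) = sum_i (1[w_ij > 0] * c_i / |{j' : w_ij' > 0}|) * alpha
    pay = {j: 0.0 for wi in w.values() for j in wi}
    for i, wi in w.items():
        supported = [j for j, wij in wi.items() if wij > 0]
        for j in supported:
            pay[j] += alpha * c[i] / len(supported)
    return pay

w = {"u1": {"a": 3, "b": 1}, "u2": {"b": 2}}
c = {"u1": 2.0, "u2": 1.0}   # u1 pays twice as much as u2
print(user_prop_var_cost(w, c))  # a: 1.5, b: 0.5 + 1.0 = 1.5
print(user_eq_var_cost(w, c))    # a: 1.0, b: 1.0 + 1.0 = 2.0
```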
As this is the first work in this direction, we chose to focus on a simpler model to introduce our approach. We agree though, that with varying costs, there is a richer space for mechanisms that we have not explored, and would make for interesting future work.
---
> “beneficial for the authors to broaden the discussion...beyond the subscription-based model, considering a more general framework that could apply to various types of digital platforms.”
Our focus is on the subscription-based model, which is already a rich and nuanced setting with many compelling research questions in its own right. That said, we agree that extending the framework to encompass a broader class of digital platforms would be a valuable and interesting direction for future work.
---
> “I strongly recommend that the authors improve the positioning of this work within the broader literature. Adding a dedicated related work section, even if placed in the appendix, would help clarify how this work builds on and differs from existing research.”
We will expand the current related work section to better situate our contributions within the broader literature on robustness to strategic manipulations in digital platforms (including online advertising markets). Thanks for the helpful references.
---
We have noted your other comments/suggestions and will take them into account in the revision. | null | null | null | null | null | null |
Minerva: A Programmable Memory Test Benchmark for Language Models | Accept (poster) | Summary: Minerva introduces a thorough evaluation set to test the memory abilities of different LLMs. The evaluation set is generated with parametric programs and covers a wide breadth of different memory skills such as information retrieval and localization, processing, content transfer, and structural awareness. Tests are also divided into atomic and composite, where atomic tests evaluate individual skills while composite tests rely on combinations of atomic skills. Different LLMs are evaluated and the results provide a more detailed picture of the different memory skills of LLMs.
Claims And Evidence: The main claim is that current benchmarks do not capture memory-usage capabilities, and the evidence is shown through the new evaluation set as well as the different results.
Methods And Evaluation Criteria: The method in this case is an evaluation set, and it is measured with several different criteria including ROUGE, exact match, and Jaccard similarity. Given the nature of the task, these metrics make sense.
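For reference, Jaccard similarity is presumably computed over token sets, along the lines of the sketch below (the whitespace tokenization is my assumption, not necessarily what the paper uses):

```python
# Standard Jaccard similarity over token sets; whitespace tokenization
# is an assumption, not necessarily the paper's exact procedure.
def jaccard(pred: str, gold: str) -> float:
    a, b = set(pred.split()), set(gold.split())
    if not a and not b:
        return 1.0  # two empty answers count as identical
    return len(a & b) / len(a | b)

print(jaccard("the cat sat", "the cat ran"))  # 0.5
```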
Theoretical Claims: N/A
Experimental Designs Or Analyses: There are many experiments across the different tasks. The main limitation is that it is unclear how much context or how many in-context examples were given to the different models, as well as which prompt variants were used. Given the sensitivity of LLMs to these factors, it is an important aspect to look at.
Supplementary Material: Yes, all of them
Relation To Broader Scientific Literature: Deigning new benchmarks to test LLM abilities is a common topic. The most related are needle in a haystack benchmarks but Minerva goes much deeper in testing the abilities of memory of LLMs.
Essential References Not Discussed: Not that I am aware of
Other Strengths And Weaknesses: Strengths:
- The topic of memory utilization is extremely important as it is crucial to many different tasks. In general, designing new benchmark is also important given the saturation of current ones.
- The test set is fairly comprehensive and shows strengths and weaknesses of different models (rather than all models performing similarly). These results are useful for the broader community.
- The test set is flexible, making it easier to add to. The benchmark is also easy to extend, which can prevent models from overfitting to it.
Weaknesses:
- As mentioned earlier, the main weakness is how the models were evaluated. Given how sensitive models are to different prompts and effectiveness of in-context learning, it would make sense to provide several examples for the different tasks as that might change the performance, especially for the more complex tasks. For the composite tasks, the examples could come from the output of the atomic tasks.
Overall, this paper presents an extendable evaluation set that examines the memory capabilities of different LLMs. Given that there is not an evaluation set like this, I think it should be accepted as it provides insight that would otherwise be difficult to obtain.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How many trials were done (if more than one)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and positive assessment of our paper. Below, we address your main concerns:
**Use of in-context examples**
We intentionally did not include in-context learning examples, as our goal is to evaluate models' inherent memory capabilities, rather than their ability to perform few-shot learning. We think that including in-context examples would conflate memory with adaptation to in-context cues, making it harder to isolate the specific skills we aim to measure.
**Prompt variation and sensitivity**
We recognize that LLMs can be sensitive to prompt formulation. However, our primary goal in this work is to establish a standardized benchmark for evaluating memory capabilities, rather than optimizing performance through prompt engineering.
To assess the impact of prompt variations, we conducted a preliminary experiment where we tested different wordings for the same atomic task. We found that, as long as no heavy prompt tuning is involved, the impact on performance remains minimal. For example, in the word presence task, we obtained identical results from multiple models for the prompt variant "Is xxx present in the context?" compared to "Given the context, determine if xxx is present in the context." This suggests that minor wording differences do not substantially alter model performance.
Given these findings, we believe that fixing the prompts to a single, simple version (as shown in Appendix A) ensures consistency and comparability across models. While prompt tuning could enhance performance for specific models, that is a separate research question beyond the scope of this work.
**Number of trials**
The results reported in the paper are based on a fixed snapshot of the benchmark, meaning we used our program to generate a predefined set of 1110 evaluation instances, which we assume corresponds to what the reviewer means by "one trial." However, we generated multiple instances for each task (see Appendix B), ensuring a more diverse representation. To maintain reproducibility and consistency, we fixed the experimental settings as described in the paper.
We also want to emphasize that our benchmark can be used to generate any number of test samples and is fully extendable to different context lengths. We will open-source our code and also provide the exact snapshot of our data used in this experiment to facilitate further experimentation with alternative configurations.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and additional details. I keep my score at 4. | Summary: Paper presents a new benchmark for evaluating LLMs' long-context problem-solving abilities. The proposed benchmark include atomic tasks (searching, recalling, editing, matching, etc) that evaluate models on tasks that go beyond those commonly explored (passkey, key-value, needle in the haystack). Experiments present a comprehensive and detailed analysis of different models' ability to use memory.
Claims And Evidence: The main claim is that the proposed approach provides a more comprehensive and nuanced evaluation of an LLM's context utilization, which I believe is supported by the experiments.
Methods And Evaluation Criteria: Presents a wide range of different and relevant atomic tasks for evaluating long-context capabilities in LLMs, including search, recall, edit, match, compare, spot the difference, stateful processing, processing data blocks. Paper also evaluates models on composite tasks that combine multiple atomic abilities, and show that there is a significant deterioration of abilities in this setting.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments evaluated a range of different models (both black-box and open-source) on the proposed benchmarks. The evaluations show that different models perform differently on different atomic tasks (i.e. A model being better of some tasks doesn't mean it will be better on another), which highlights the need to a wide range of tasks to evaluate different kind of long-context retrieval abilities.
Supplementary Material: No
Relation To Broader Scientific Literature: While a number of prior works have proposed benchmarks for context utilizations, they mainly focus on basic information retrieval or are relatively limited in scope. The proposed new benchmark presents a more comprehensive suite of atomic evaluations on different kinds of tasks.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: One of the claims of the paper is that the proposed benchmarks provide a more detailed view into the long-context capabilities of the model, which can provide better guidance for future model training and development. It would be nice to add a bit more discussion about the specific kinds of improvements the evals revealed.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive assessment of our work and thoughtful feedback.
One key takeaway from our experiments is that different models exhibit high variance across atomic memory tasks, reinforcing the need for a diverse evaluation suite. In addition, we found that a major failure pattern is that models perform well on direct retrieval (e.g., finding an exact string in a long context) but degrade significantly when required to store, update, and apply memory dynamically. We believe this might be due to the mainstream attention-based architecture, which makes point-wise access (e.g., retrieving a specific token) relatively easy but lacks explicit memory mechanisms for storing information and applying updates. Future models may benefit from architectures that explicitly model memory, such as those explored in the recent Titan paper.
Beyond memory retention, models also struggle with comparing, integrating, or transforming stored information, particularly when relevant facts are dispersed across different parts of the context rather than appearing in a single block, or when the task requires keeping track of updates to an entity’s attributes over time. These results suggest that attention alone may not be enough for effective long-context understanding. Future improvements might require structured memory systems, such as hierarchical memory representations to help models better handle complex memory tasks.
We will expand this discussion in the paper to further highlight how our evaluations can inform future model development. | Summary: This paper presents a framework for automatically generating a broad set of tests to evaluate LLMs' memory usage. Going beyond simple search, the benchmark assesses tasks like recalling, editing, matching, and tracking information across distinct data blocks. Experiments reveal that while models handle basic retrieval, they often fail more complex operations, highlighting important gaps and guiding future improvements.
Claims And Evidence: Please refer to Strengths And Weaknesses.
Methods And Evaluation Criteria: Please refer to Strengths And Weaknesses.
Theoretical Claims: Please refer to Strengths And Weaknesses.
Experimental Designs Or Analyses: Please refer to Strengths And Weaknesses.
Supplementary Material: Please refer to Strengths And Weaknesses.
Relation To Broader Scientific Literature: Please refer to Strengths And Weaknesses.
Essential References Not Discussed: Please refer to Strengths And Weaknesses.
Other Strengths And Weaknesses: **Strengths**
- The paper presents a wide range of memory-related tasks and thoroughly evaluates them.
- By incorporating visual results, the authors make it easier for readers to compare model performance and understand outcome disparities.
**Weaknesses**
- While extensive in empirical evaluation, the work does not introduce significant new theoretical concepts or frameworks for memory usage.
- The paper positions itself as going "beyond simple search," but does not clearly illustrate the threshold that separates simple retrieval from more advanced memory tasks; additional concrete examples would clarify these distinctions.
- The paper's core focus on LLM memory benchmarking may not align closely with the calls for papers in ICML.
Other Comments Or Suggestions: Please refer to Strengths And Weaknesses.
Questions For Authors: Please refer to Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s comments and the opportunity to clarify our contributions.
**Position of the paper and fitness with ICML**
The reviewer raises the concern that our paper does not introduce new theoretical concepts. The paper does not aim to be theoretical; the nature of memory analysis in LLMs is empirical. However, we believe that a rigorous and comprehensive benchmarking framework is a significant contribution. Relying on static data benchmarks for needle-in-the-haystack-style analysis is not sufficient, especially in a rapidly evolving field like LLMs. Our study provides:
1. **A well-defined taxonomy of memory-related tasks** that categorizes different types of memory demands in LLMs, providing a structured way to assess and analyze LLM memory capabilities, as described in Section 2.
2. **New insights into LLM memory behavior** across various tasks, revealing critical limitations that have not been studied before. As discussed in Section 4 line 434 - 436 (left) and 385-394 (right), previous benchmarks and tests focus mainly on information retrieval, and lack a comprehensive evaluation on memory. Many other benchmarks are also static dataset benchmarks.
3. **A programmable, extensible benchmark** that enables future research on memory efficiency, potentially informing both theoretical studies and model development. This mitigates the risk of overfitting on benchmarks, and biasing the empirical evaluations.
Historically, benchmarks have played a pivotal role in advancing ML research. For example, ImageNet reshaped computer vision, and MMLU influenced LLM evaluation. Similarly, our benchmark exposes key limitations in memory-intensive tasks, which could inspire future advancements. Moreover, evaluation is explicitly within the scope of ICML’s call for papers.
**Clarifying the distinction between simple search and other tasks**
Thanks for the suggestion and we will add additional explanations and examples to better illustrate this distinction in the paper.
By "simple search," we refer to tasks that require only the ability to locate and extract relevant information from memory without additional processing (see line 105, left). In our benchmark, this includes tasks like string search, key-value search, and batch search (see Table 1 and Appendix A). For example, given a keyword x, the model retrieves the associated value y. This "locate-and-extract" ability resembles traditional search tasks, and thus we refer to them as simple search. This type of task has dominated prior work on LLM memory utilization, such as the Needle-in-a-Haystack task (see Section 4, line 419, left).
However, real-world applications of memory demand more than just retrieval. Our benchmark explicitly evaluates tasks that require models to recall, edit, match, compare, and track state, which extend beyond this simple search (see Table 1 and line 55-67, right). For instance: A writing assistant that rephrases or edits a paragraph operates differently from a system that merely retrieves a fact embedded in a document. A financial analysis tool that verifies consistency across multiple reports needs to compare and reconcile stored information rather than just extract it. A personalized assistant managing ongoing tasks must track state changes over multiple interactions instead of retrieving isolated details. In practice, memory-intensive tasks often require a composition of these abilities. Our benchmark not only evaluates these capabilities but also provides a structured taxonomy to identify model strengths and weaknesses systematically. | Summary: This paper introduces a framework for systematically evaluating the memory utilization capabilities of language models. Expanding beyond conventional memory tests—such as passkey retrieval, key-value lookup, and needle-in-the-haystack search—the proposed framework assesses models on a broader range of atomic tasks, including searching, recalling, editing, matching, and comparing information within structured memory contexts. Additionally, the authors design composite tests to examine models' ability to maintain state across operations. By simulating real-world data structures, this benchmark provides a more interpretable and granular assessment of large language models' memory handling capabilities.
Claims And Evidence: The paper claims to introduce a novel framework for evaluating the memory utilization capabilities of large language models (LLMs). While the authors do propose an extended set of tests beyond traditional memory benchmarks, it is unclear whether the framework provides fundamentally new insights or if it merely reinforces existing knowledge about model performance. Looking at the performance results, it appears that models that perform well on standard NLP benchmarks (e.g., GPT-4o outperforming GPT-4 Turbo, which in turn outperforms GPT-4o-mini) also perform well on this benchmark. This raises the question of whether the proposed benchmark truly captures unique aspects of memory utilization or if it is simply correlated with overall model quality. Additional evidence, such as cases where model rankings differ from traditional benchmarks, would strengthen the claim of novelty.
Methods And Evaluation Criteria: The authors employ a variety of atomic and composite tests to assess models' abilities to retrieve, edit, and manipulate information within structured contexts. However, there are some concerns regarding the design of the evaluation criteria. For example, in the String Search task, the prompt asks whether a certain substring (e.g., “bbb”) is present, with expected answers being "yes" or "no." However, the evaluation metric used is exact_match, which does not seem appropriate in this context.
Additionally, the authors set the temperature to 0 and top-p to 1, ensuring deterministic outputs. While this suggests an intent to evaluate models under controlled conditions, it is unclear whether this decision might disadvantage certain models or affect specific types of tests. For instance, tasks requiring reasoning or more flexible memory retrieval might be better evaluated with slightly higher randomness. The paper should clarify the implications of this choice on model performance.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental setup is well-structured, but there are areas where further exploration could enhance the analysis:
1. The paper does not discuss whether the authors experimented with varying context lengths, which could have provided deeper insights into how memory performance scales with longer or shorter inputs. Given that real-world applications involve diverse context lengths, it would be valuable to understand whether model performance remains stable across different settings.
2. It is unclear whether the models' performance depends on how instructions are phrased in the prompts. The authors should clarify whether variations in prompt wording lead to performance shifts and, if not, why that would be the case. Understanding how models respond to different instructions is critical for assessing robustness.
3. The authors emphasize the distinction between proprietary (black-box) and open-source models (Line 184), but it is unclear why this distinction is particularly relevant in the context of memory benchmarking. Are there fundamental differences in how proprietary and open-source models handle memory? Additional clarification on this point would be useful.
Supplementary Material: Yes, all the sections.
Relation To Broader Scientific Literature: The work is relevant for the research community.
Essential References Not Discussed: The authors have discussed sufficient literature on the topic.
Other Strengths And Weaknesses: Strengths:
The paper provides a structured approach to evaluating LLMs’ memory abilities, introducing a broader set of tasks beyond traditional benchmarks.
The inclusion of composite tests that assess state retention while operating on memory is valuable for understanding sequential dependencies.
Weaknesses:
The novelty of the benchmark is questionable, given that performance trends largely align with existing NLP benchmarks.
The lack of variation in context length experiments limits insights into how well models generalize memory performance across different scales.
The deterministic setting (temperature = 0) may not be suitable for all types of memory-related tasks and could influence results in ways that are not discussed in the paper.
Other Comments Or Suggestions: Minor
Line 103-104 seems to have some structural irregularity. I don’t see how the bullet points follow from the earlier discussion. Please rewrite.
Line 74, 84 (right) - remove bolding of words
Questions For Authors: Mentioned above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback and the chance to clarify our contributions.
**Insights from the benchmark**
Our main goal is not to rank known models, but rather to introduce tests to evaluate different functionalities needed for LLM (agents). Any model can be (1) tested using our benchmark, (2) updated/retrained/fine-tuned to improve its functionalities WITHOUT the problem of being overfitted on the benchmark, and (3) have older and newer versions of the model compared fairly by rerunning the benchmark with fresh randomness.
While some of our results align with prior benchmarks, we disagree that our work merely reinforces prior knowledge. It is true that GPT-4o performed the best in our tests, as expected, but our findings provide meaningful insights beyond the simplification of “GPT-4o is better than GPT-4o-mini”. Specifically: (1) Models of similar sizes (particularly open-source ones) show notable performance variations across different types of memory tasks (See Figure 3-5) (2) Performance gaps vary by task, e.g. while models perform similarly in information fetching tasks, more challenging tasks like "spot the difference" reveal larger gaps. (3) Smaller models can outperform larger ones on certain tasks. Notably, LLaMA-3.1-8B and phi-3-small outperformed GPT-4-turbo in “patch the difference”, despite being smaller. This suggests that factors beyond sheer model size, such as architectural choices and training data, play a significant role in memory-related capabilities. Generating insights doesn’t require contradicting prior benchmarks. Instead, we aim to uncover nuanced model behavior patterns that inform development and evaluation strategies. This goes beyond using benchmarks merely for ranking models.
We also would like to clarify on the following regarding our experiment setups:
**1. Variation in context length**
We mainly fixed the context length to 4K tokens to highlight that models already struggle in many memory tests at this length. We have also shown in one of our ablation studies (Figure 6) how the number of operation steps/context length affects performance. We also have other ablation results that show a similar trend: performance was near perfect at shorter lengths, but significantly lower well before reaching what is typically considered "long context". We can add more results in the paper to clarify these findings. Additionally, since our benchmark is fully programmable and transparent, it can be extended to evaluate models at different context lengths.
**2. Evaluation criteria**
For the substring presence test, exact match means that if the model’s response (yes/no) matches with presence/absence of the substring, it receives a score of 1, otherwise 0. We believe this is a fitting metric for a binary classification task.
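As a concrete illustration of the scoring rule described above, a minimal sketch (the function name and response normalization are our own, not taken from the benchmark code):

```python
def exact_match_yes_no(model_response: str, substring_present: bool) -> int:
    """Score a binary presence answer: 1 if the normalized yes/no response
    matches the ground truth, else 0 (a sketch of the scoring described above)."""
    normalized = model_response.strip().lower().rstrip(".")
    expected = "yes" if substring_present else "no"
    return int(normalized == expected)
```

Under this rule, a verbose answer such as "The substring is present" scores 0 even when factually correct, which is why the prompts constrain the model to answer yes/no.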
**3. Variations in prompt wording**
As discussed in our response to Reviewer RVnF, we conducted a preliminary experiment to assess the impact of wording variations and found that performance differences are minimal for instruction rephrasing. This supports our decision to fix prompts to a single simple version (in Appendix A) for consistency and comparability across models. So, based on this result, we observe that the main challenge is not in “understanding” the instruction, but rather in “performing the task”. All that said, note that (1) our benchmark is not a static dataset and a simple script can be added to our benchmark to generate many variations; (2) we will add confidence intervals around the results to show also robustness to instruction variations.
**4. Deterministic setting**
We deliberately used a deterministic setting for consistency and controlled comparisons. While we understand the concern that this might disadvantage certain models, we argue that: (1) A deterministic setting provides more stable and reproducible evaluation. (2) This choice is an evaluation setting, not a limitation of the benchmark itself. While we used a deterministic setup in this study, our benchmark is fully configurable. Users can modify decoding parameters to introduce stochasticity and explore its effects on different memory tasks.
**5. Separation between proprietary and open-source models**
We separate proprietary and open-source models because these groups tend to be more comparable within themselves. For example, the open-source models we evaluated are of similar sizes, making side-by-side comparisons more meaningful. This grouping allows for clearer analysis while still enabling cross-category comparisons where relevant.
**Final remarks on experiment setups**
Our benchmark is designed to be fully programmable, extendable to different context lengths, evaluation criteria, prompting strategies, or other configurations. We will open-source our code and provide a snapshot of our data to encourage further research. However, for the sake of reproducibility, we fixed the experimental settings as described in the paper to ensure consistency in our results.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions in detail. This is helpful, and I believe they clarify the concerns that I raised to some extent. So, at this point, I will increase the score. However, as the authors themselves mentioned, a lot of things need to be added in the paper for thorough evaluation of the benchmark.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for the thoughtful feedback and for raising the score. We really appreciate your engagement with the work.
We’d like to clarify that we have run the additional experiments we mentioned (e.g., prompt variations, context length changes) and we’re ready to include them in the final version.
While we understand these additions will help clarify the paper further, we also want to emphasize that we believe the main ideas and contributions are already well-supported in the current version. Our main contribution is to introduce a comprehensive, programmable benchmark for memory evaluation, one that goes beyond simple recall or “needle-in-a-haystack” tasks. We designed tasks that isolate different memory behaviours, which we think is crucial for understanding where and why models succeed or fail.
Moreover, a key strength of our programmable benchmark is its flexibility: it allows users to generate diverse variations of the tests by adjusting different parameters. While we present results from a particular snapshot in the paper, along with ablation studies for selected variations, the framework is designed to support a wide range of experimental setups. Our goal is not to "sell" the specific results shown in the paper, but to introduce a modular set of tasks and an extensible framework for evaluating memory. Researchers can pick it up and run their own experiments with configurations that suit their needs. We believe **the core idea is new and useful** to the community. Also, the **code and data are ready to use and extend**.
To give a clearer picture of the results we’ll add in the revision:
## Context length variation
We tested how model performance changes with increasing context length, beyond the results already shown in Figure 6 (for integer/set state tracking). We see that the model performance dropped well before reaching a “long” context.
### Task: Functional updates (metric: Rouge-L)
| Model | 500 | 1k | 2k | 4k | 8k |
|---------------|------|------|------|------|------|
| gpt-4o | 1.00 | 1.00 | 0.99 | 0.93 | 0.59 |
| gpt-4o-mini | 0.69 | 0.66 | 0.42 | 0.24 | 0.10 |
| phi-3-small | 0.49 | 0.45 | 0.21 | 0.07 | 0.03 |
### Task: Count (metric: exact match)
| Model | 1k | 2k | 4k | 8k | 16k |
|---------------|------|------|------|------|------|
| gpt-4-turbo | 0.52 | 0.44 | 0.40 | 0.32 | 0.28 |
| cohere | 0.36 | 0.24 | 0.20 | 0.20 | 0.12 |
| phi-3-medium | 0.20 | 0.16 | 0.12 | 0.08 | 0.04 |
## Prompt variation
We ran small wording changes on the same task and evaluated the difference in model performance. We found that performance differences were minimal across multiple models, which suggests that the main challenge is not in “understanding” the instruction, but rather in “performing the task”. Below, we report task performance and 95% confidence intervals (CIs).
### Task: String search (word)
number of sample per variation: 50
Variation 1: Given the context, determine if xxx is present in the context.
Variation 2: Is xxx present in the context?
| Model | Variation 1 | Variation 2 | 95% CI |
|---------------|-------------|-------------|---------------|
| gpt-4o | 1.00 | 1.00 | (0.94, 1.00) |
| gpt-4o-mini | 0.98 | 0.98 | (0.94, 1.00) |
| phi-3-medium | 1.00 | 1.00 | (0.94, 1.00) |
### Task: Group association
number of samples per variation: 40
Variation 1: determine if the word "aaa" and the word "bbb" are in the same list
Variation 2: check if the words "aaa" and "bbb" belong to the same list
| Model | Variation 1 | 95% CI | Variation 2 | 95% CI |
|---------------|-------------|---------------|-------------|---------------|
| gpt-4o | 0.65 | (0.50, 0.80) | 0.63 | (0.48, 0.78) |
| cohere | 0.70 | (0.56, 0.84) | 0.75 | (0.62, 0.88) |
| phi-3-small | 0.55 | (0.40, 0.70) | 0.55 | (0.40, 0.70) | | null | null | null | null | null | null |
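The 95% CIs in the group-association table above are consistent with a standard normal-approximation (Wald) interval for a proportion; this is our assumption about how they were computed (the degenerate 1.00-accuracy rows in the string-search table evidently use a different, exact interval). A minimal sketch:

```python
import math

def normal_approx_ci(p_hat: float, n: int, z: float = 1.96) -> tuple:
    """Wald (normal-approximation) 95% CI for a proportion, clipped to [0, 1].
    Assumed method -- not taken from the rebuttal itself."""
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (max(0.0, p_hat - half), min(1.0, p_hat + half))

# Group association, variation 1, gpt-4o: accuracy 0.65 over 40 samples
lo, hi = normal_approx_ci(0.65, 40)
print(round(lo, 2), round(hi, 2))  # 0.5 0.8
```

With n = 40, this reproduces the reported (0.50, 0.80), (0.56, 0.84), and (0.40, 0.70) intervals for the 0.65, 0.70, and 0.55 accuracy rows respectively.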
Synonymous Variational Inference for Perceptual Image Compression | Accept (poster) | Summary: This paper proposes a novel framework for perceptual image compression based on synonymous variational inference (SVI). Specifically, the paper introduces a method to analyze the optimization direction of perceptual image compression using semantic information theory. A new image compression scheme called Synonymous Image Compression (SIC) is devised to encode latent synonymous representations and reconstruct images through sampling. A progressive SIC codec is developed to leverage multiple synonymous levels for variable-rate compression. Moreover, to the best of the authors' knowledge, this method is the first work that can theoretically explain the fundamental reason for the divergence measure’s existence in existing perceptual image compression schemes. However, it lacks comprehensive comparisons with essential state-of-the-art methods and does not convincingly validate the effectiveness of the proposed models through ablation studies.
Claims And Evidence: Claim: Experimental results demonstrate comparable rate-distortion-perception performance using a single neural progressive SIC image codec, thus verifying our method’s effectiveness.
Evidence: The paper shows some improvements in DISTS and LPIPS metrics, but lacks comprehensive comparisons with state-of-the-art methods like HiFiC and MS-ILLM (with GANs).
Evaluation: This claim is not fully convincing due to the limited comparison. More extensive experiments with diverse datasets and state-of-the-art methods are needed.
Methods And Evaluation Criteria: No, the proposed methods and evaluation criteria do not fully align with the problem or application at hand. While the paper introduces a novel approach to perceptual image compression from the perspective of Synonymous Variational Inference (SVI) and theoretically explores the fundamental reason for the divergence measure’s existence in existing schemes, the experimental results fail to demonstrate sufficient competitiveness in practical applications. The evaluation criteria, though relevant to perceptual quality, do not comprehensively validate the effectiveness of the proposed method compared to state-of-the-art techniques. Therefore, the proposed methods and criteria are insufficient to address the practical needs of perceptual image compression.
Theoretical Claims: Claim: The proposed Synonymous Variational Inference (SVI) method is the first to theoretically explain the fundamental reason for the divergence measure’s existence in existing perceptual image compression schemes.
Proof: The authors provide a theoretical analysis showing that the divergence measure arises naturally from the consideration of synonymous sets and semantic information theory. They introduce the concept of partial semantic KL divergence and demonstrate its relevance in the context of perceptual image compression.
Experimental Designs Or Analyses: Yes, I have reviewed the experimental designs and analyses presented in the paper. The authors aim to validate the effectiveness of their proposed Synonymous Variational Inference (SVI) method and the corresponding Synonymous Image Compression (SIC) scheme.
1. The choice of datasets and metrics is appropriate for evaluating the performance of the proposed method. These datasets are widely used in the field, ensuring that the results are comparable to other studies.
2. While the paper provides comparisons with several methods, the results for some state-of-the-art perceptual image compression methods (e.g., HiFiC and MS-ILLM with GANs) are not comprehensive. This limits the strength of the claims regarding the superiority of the proposed method.
Supplementary Material: Yes. I primarily reviewed Part D of the supplementary material, which includes the additional experimental results and corresponding visualizations.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to several areas of the broader scientific literature, particularly in the fields of image compression, semantic information theory, and variational inference. It introduces novel ideas such as synonymous sets and partial semantic KL divergence, which offer new perspectives on handling perceptual similarity in image compression.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1. Originality and Novelty: The paper introduces a novel perspective on perceptual image compression by leveraging semantic information theory and synonymous variational inference (SVI). To the best of the authors' knowledge, this method is the first work that can theoretically explain the fundamental reason for the divergence measure’s existence in existing perceptual image compression schemes.
2. Clarity and Presentation:
The paper is well-structured and clearly presents the theoretical concepts, methods, and experimental results. The use of visualizations and detailed explanations helps in understanding the complex ideas and their practical implications.
Weaknesses:
1. Performance: The paper shows some improvements in DISTS and LPIPS metrics, but lacks comprehensive comparisons with state-of-the-art methods like HiFiC and MS-ILLM (with GANs).
Other Comments Or Suggestions: 1. The paper provides comprehensive experimental results, but it would be beneficial to include more detailed ablation studies. For example, analyzing the impact of different components of the proposed method (e.g., the effect of learnable weights, the choice of architecture) could provide deeper insights into the method's effectiveness.
2. The description of the SIC scheme is thorough, but it could benefit from additional visualizations showing the progression of image quality at different synonymous levels. This would help readers better understand the practical implications of the method.
3. The paper provides several performance metrics (PSNR, LPIPS, DISTS), but it would be useful to include additional qualitative results.
Questions For Authors: 1. Could the authors provide a more detailed comparison with state-of-the-art perceptual image compression methods, particularly those using GANs or other advanced techniques like HiFiC and MS-ILLM? How does the proposed SVI method differ from these methods in terms of performance and computational efficiency?
2. Could the authors provide additional ablation studies to validate the theoretical claims made in the paper? For example, how does the performance change when different components of the SVI method (e.g., partial semantic KL divergence, synonymous level partitioning) are removed or modified?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer KhzF,
Thank you for recognizing our work, especially **the originality and novelty of our SVI theory analysis for perceptual image compression** and **the clear structure and presentation of our paper**.
Your concern centers on our unsatisfactory experimental results, which is also a concern raised by several reviewers. However, we want to emphasize that **our focus is on the SVI analysis theory, which offers a new theoretical framework for perceptual image compression**, while **the implemented SIC model serves only as a preliminary validation**. Compared to the completeness of our theory, our method is indeed relatively rough. This is because several issues identified in the theoretical analysis still need attention in method design, and these issues in turn offer significant potential for our future research.
Below are our responses to your concerns.
**[Q1] The paper shows some improvements in DISTS and LPIPS metrics, but lacks comprehensive comparisons with state-of-the-art methods like HiFiC and MS-ILLM (with GANs).**
[A1] As several reviewers have raised concerns about this issue, we conducted additional experiments, introduced a discriminator into the model structure, and fine-tuned our model using discriminative loss. We have included some of the results and comparison with HiFiC and MS-ILLM in [Figure 5] of the [[Anonymous Link]](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C) and provided visualizations in [Figure 6]. For detailed configuration, reasoning, and discussion, please check the rebuttal to **Reviewer 7ghf [A2]**.
Although our method still has some gaps in perceptual quality compared with HiFiC and MS-ILLM, it offers a clear advantage in deployment cost. Specifically, we can use a single model to achieve all bit rates, with a model size of 465MB. In contrast, an MS-ILLM model can only reconstruct the received information for one bitrate, with a single model size of 693MB. To support all bit rates (e.g., MS-ILLM with GAN, 6 points), you would need 6×693MB = 4158MB = 4.06GB, which is significantly larger than our model. Therefore, in this respect, our model has a clear advantage.
**[Q2] It would be beneficial to include more detailed ablation studies.**
[A2] We added an ablation study on the detail representation part in the provided anonymous link, comparing random sampling with forcing it to 0. Please refer to [Figure 2~3] of the above anonymous link.
We believe this ablation is important as it highlights the difference between our approach and existing methods (HiFiC and MS-ILLM). Specifically, it shows that random details provide effective information for image reconstruction, making distortion and perceptual quality not entirely dependent on the information provided by the coding sequence.
Additionally, although an ablation study on different components of SVI (i.e., partial semantic KL divergence, synonymous level partitioning) is essential, we prefer to discuss this issue based on our theoretical results:
- If partial semantic KL divergence is not utilized, the existence of the ideal synset will not be considered (i.e., only consider the original image), which makes the loss function only a rate-distortion tradeoff without perceptual term. This is the core reason why we propose using this divergence for analysis: **It provides a sufficient theoretical foundation for using distribution distance as a perceptual loss.** This can be referred to Appendix A.3 (especially Lines 949~967) in our manuscript.
- If synonymous level partitioning is not utilized, the model should be a single-point model, which only accepts a specific coding rate, similar to how HiFiC and MS-ILLM operate. In this case, the performance of each point in our method has the potential to be improved, as there is no competition between different synonymous levels, as illustrated by the tradeoff loss in Equation (12). However, the disadvantage of this approach is that it increases the cost of deployment, as a separate model needs to be deployed for each bitrate.
**[Q3] It would be useful to include additional qualitative results.**
[A3] In [figure 5] of our provided anonymous link, we added performance curves of our model (no-GAN and with GAN) for **FID** and compared the results with HiFiC and MS-ILLM (with GAN). Although the performances under FID are still unsatisfactory, we analyze the underlying causes and consider it an essential direction for future research. Please check the rebuttal to **Reviewer 7ghf [A2]** for this discussion.
**[Q4] More results about the progression of image quality at different synonymous levels are required.**
[A4] Thanks for your suggestion. We added a progression of images at different synonymous levels in [Figure 7~8] in the anonymous link, in which [Figure 7] shows gradual changes of the images by our $M=1$ model, and [Figure 8] by our $M=5$ model.
In summary, we sincerely thank you for your suggestions. These suggestions will help us improve our paper. | Summary: This paper presents a novel perspective to analyze the perceptual image compression problem, which is based on the notion of synonymy in semantic information theory, which suggests that images with perceptual similarity constitute a synonymous set. Based on this, the authors propose a synonymous variational inference method (SVI) and theoretically demonstrate that the optimization direction of perceptual image compression follows a three-pronged trade-off that encompasses bit rate, distortion, and perception, which is in line with existing related research. In addition, the paper designs a new image compression scheme, synonymous image compression (SIC), and verifies its effectiveness experimentally.
## update after rebuttal
The author answered my questions, so I upped my score. I hope the author will make the code public in the future to promote the community.
Claims And Evidence: The analysis proves theoretically that the optimization direction of perceptual image compression follows a triple tradeoff, i.e., a synonymous rate-distortion-perception tradeoff, and that this tradeoff can cover existing rate-distortion-perception schemes. Theorem 3.3 indeed gives a theoretical formulation of this triple tradeoff (Eq. 9), and Appendix A.2 provides a detailed proof procedure. Appendix A.3 also discusses compatibility with existing schemes. However, “covering” existing programs may mean that existing programs are special cases of the theory, which is mathematically supported. However, this does not necessarily mean that the design concepts of all existing programs can be fully explained or guided by the theory.
Methods And Evaluation Criteria: 1) The SVI framework proposed in the paper provides a new theoretical foundation for perceptual image compression from the perspective of semantic information theory.
2) The SIC compression scheme is a concrete realization based on this theory, and its design concept is consistent with the core idea of SVI.
3) The design of progressive SIC takes into account the needs of practical applications and allows the generation of images with different perceptual qualities at different rates.
4) The selection of commonly used benchmark datasets and an evaluation system that includes both traditional metrics (PSNR) and perceptual metrics (LPIPS, DISTS) enables a comprehensive evaluation of the performance of the proposed method and facilitates the comparison with existing techniques. The authors particularly emphasize the DISTS metrics, which are related to their proposed approach based on tautology.
Theoretical Claims: 1) Proof of Lemma 3.2: This lemma states the equivalence between minimizing the expected negative logarithmic tautological likelihood term and minimizing the weighted expected distortion term and the weighted expected KL scatter term. A detailed proof of the lemma is given in Appendix A.1.
2) Proof of Theorem 3.3: The theorem presents a formula for the minimum achievable rate of perceptual image compression, i.e., the tautology rate-distortion-perceptual tradeoff. The detailed proof of the theorem is given in Appendix A.2.
Experimental Designs Or Analyses: 1) Use of LPIPS instead of KL scatter: although the authors explain the use of LPIPS instead of KL scatter due to computational challenges, LPIPS is, after all, a depth-feature based perceptual metric that is not mathematically equivalent to the KL scatter in the theoretical derivation. This may affect the extent to which the experimental results directly validate the theory.
2) Simplicity of the synonym level division: a uniform division of the channel dimensions is used in the paper to define the different synonym levels. The effectiveness and optimality of this approach may require further exploration and experimental validation. More complex or data-driven based segmentation strategies may be able to better capture different levels of semantic information.
3) Setting of hyperparameters: the paper mentions that the hyperparameters in the loss function are configured empirically. The selection of these hyperparameters is crucial to model performance, but the lack of systematic hyperparameter search and analysis may affect the robustness and reproducibility of the results.
4) Impact of different numbers of samples (M): Fig. 5 illustrates a comparison of Progressive SIC performance using different numbers of samples M (M=1 and M=5) in the reconstructed synonym set X̂. The results show that different M values perform inconsistently at different code rates, sometimes M=1 is better and sometimes M=5 is better. The authors recognize that this may be related to the mechanism of synonym level division, number of samples, and insufficient setting of hyperparameters, which need to be further explored in future work.
5) Interpretation of the visualization results: although Figs. 11-16 show the visual effects of the reconstructed images, a more in-depth analysis of these visualization results, such as the changing patterns of image details and perceived quality under different synonym levels, and the effects of different M values on the diversity and quality of the generated images may be more helpful in understanding the performance of SIC.
Supplementary Material: Supplementary material contains data from the results of the visualization in Appendix D.2. The author also lists the required dependency installers for evaluation.py and their corresponding versions.
Relation To Broader Scientific Literature: 1) Links between synonymous variational inference (SVI) and semantic information theory.
2) The SVI connection to rate-distortion-perception (R-D-P) theory.
3) The difference and connection between Synonymous Image Compression (SIC) and Learning Image Compression (LIC).
Essential References Not Discussed: Research on “Perceptual Equivalence” or “Just Noticeable Difference (JND)” in image compression: In this paper, perceptual similarity is considered as a criterion for tautological relationships. In the field of image processing and computer vision, there has been researched on when two images are perceptually “identical” or “indistinguishable”, such as image compression methods based on the JND model. These models attempt to identify perceptually unimportant information and remove it, thereby improving compression rates while maintaining perceptual quality. Although JND models do not directly correspond to the concept of “synonym set”, they are concerned with the sensitivity of the human perceptual system to image differences, which can complement the understanding of the theoretical basis of the “perceptual similarity” criterion in this paper.
Other Strengths And Weaknesses: Pros:
1) The originality is highlighted by the innovative application of the concept of “tautology” in semantic information theory to the field of perceptual image compression. This not only provides a new theoretical perspective on the problem, but also makes the first attempt to explain, at a theoretical level, the fundamental reason for the existence of distributional distance metrics (e.g., KL dispersion or its substitutes) in perceptual image compression, which stems from the consideration of an ideal set of synonyms.
2) The paper proposes a novel variational inference method, the Synonymy Variational Inference (SVI), and derives a synonymy rate-distortion-perception ternary trade-off based on this theory. This theoretically extends and unifies the existing rate-distortion and rate-distortion-perception theories by showing that the latter can be regarded as a special case of the former under specific conditions. This theoretical unification has important academic value.
Weaknesses:
1) The experimental results, although competitive, do not show a significant advantage in terms of perceptual quality over some state-of-the-art GAN-based methods such as HiFiC and MS-ILLM. The authors also acknowledge that this may be related to the direct use of LPIPS instead of KL dispersion in the loss function and propose to explore the discriminative mechanism in the future to further improve the perceptual quality. This suggests that there is still room for improvement in the practical performance of the currently proposed SIC model.
Other Comments Or Suggestions: There were some problems with the format of the references, for example, some conference papers had “Proceedings” while others did not. In addition, there is no consistency in the case of the names of the conferences covered in the references.
Questions For Authors: The experimental results show that the advantage of the proposed method over some advanced GAN-based perceptual image compression methods (e.g., HiFiC and MS-ILLM) in terms of perceptual quality (DISTS metrics) is not significant. Considering that the SVI framework aims to provide a unified perspective on perceptual image compression, how do the authors view this performance gap? How are future plans to integrate discriminative mechanisms or adversarial training within the SVI framework to narrow this gap while still adhering to the semantic information theory based on tautology? A clear articulation of the author's understanding of this gap and promising future research directions would help to enhance the long-term potential and impact of this work.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 7ghf,
We sincerely appreciate your high regard for our work, especially your recognition of our work including:
- **New theoretical perspective on perceptual image compression**
- **Mathematically supported unified theory**, which can **extends and unifies the existing RD and RDP theories**
- **Holding important academic value**.
Your main concern is the unsatisfactory experimental results, which many reviewers have also raised. Below are our responses to the relevant questions.
**[Q1] How do the authors view the performance gap between the SIC methods and the advanced RDP methods in terms of DISTS?**
[A1] We believe the insignificant performance of our method on DISTS is due to multiple factors:
- **The choice of perceptual loss.** Your understanding of using LPIPS instead of KL divergence is accurate. With LPIPS replacing KL divergence as the perceptual loss, our scheme outperforms MS-ILLM (no-GAN) on DISTS mainly due to DISTS' resampling tolerance, which aligns with our method's random detail sampling. However, the advantage is limited. Previous work (MS-ILLM) shows that using adversarial loss significantly improves DISTS quality. Therefore, we believe that fine-tuning our model with a discriminator and adversarial loss can help bridge this gap to some extent.
- **Confronts between multiple synonymous levels**. To ensure each synonymous level in the progressive SIC model is effective, we use Equation (12) (line 319) for alternating training. This creates battles between different optimization directions within a single SIC model, inevitably impacting overall performance.
- **Suboptimal choice of sampling mechanism**. In the detail representation part, we empirically use uniform sampling, which may cause significant differences between the distribution of the reconstructed and real images.
Therefore, future research should combine theoretical analysis and method design to address these issues.
**[Q2] Future plans to integrate discriminative mechanisms or adversarial training to narrow this gap while still adhering to the semantic information theory?**
[A2] In fact, this is also a concern of many reviewers. To this end, we tried to build a discriminator model based on HiFiC’s discriminator structure and fine-tuned our $M=1$ and $M=5$ models with non-saturating loss for $2×10^5$ steps. We plot the discriminator structure in [Figure 4] and the fine-tuned performance curves in [Figure 5] (PSNR, LPIPS, DISTS, and KID) of the [[Anonymous Link]](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C/). The supplementary results show that:
- **The perceptual quality (DISTS, FID) has improved**, with more noticeable gains at higher bitrates, and DISTS gradually approaches the performance of MS-ILLM (with GAN). Besides, while DISTS and FID values change little at low rates, the visual quality improves significantly, as shown in [Figure 6] ([Table 3] corresponds to the bitrates in [Figure 6]).
- **The distortion has degraded,** which can verify the distortion-perception tradeoff as mentioned in (Blau & Michaeli, 2018).
- **The gap compared to HiFiC and MS-ILLM is still obvious,** especially on FID. This may be due to insufficient fine-tuning with multiple synonymous layers battling against each other. Besides, this also means that the issues stated in [A1] need to be solved in the future, including designing better perceptual optimization directions, broadening SVI theory to address multi-synonymous level confronts, and improving detail prediction and sampling methods to fit the true distribution.
**[Q3] Concerns about the correlation between synset and "JND"?**
[A3] This comment is crucial for completing our explanation of the synset. JND's ideas are somewhat similar to ours. However, while JND provides the minimum stimulus difference that causes a perceptible change, which may help us to determine a certain synset threshold to keep human-perceptual consistency, our concept of synset takes it a step further:
In certain cases, only ultra-low bitrates are sent to the decoder, where the reconstructed image can differ significantly from the original image—beyond the JND threshold—and still be tolerable to people in that context.
A typical example is the use of diffusion models for ultra-low bitrate image compression, where the reconstructed images exhibit significant differences from the originals, yet maintain good perceptual quality that is acceptable to humans.
**[Q4] "Covering" is mathematically supported, but not mean that all existing programs can be fully explained or guided by the theory.**
[A4] You are correct, but this also means there is room for further development of our theory. We believe the SVI theory presented in this manuscript is just the beginning, and future research will involve theoretical explanations of various existing methods.
In summary, thanks for your valuable comments and suggestions. We will update our manuscript accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. The author answered my questions. I hope the author will make the code public in the future to promote the community.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your recognition and encouragement of our work! We will make our code public in the future to promote the community. | Summary: This paper introduces a novel progressive training approach for image compression, which focuses on both improving image quality and maintaining semantic consistency during compression. By using synonymous latent representations, the model progressively decodes and recovers image details, ensuring high-quality reconstruction, especially at low bitrates. The loss function combines rate-distortion loss with a partial semantic KL-divergence loss to optimize semantic integrity. While comparable to direct LPIPS optimization in terms of perceptual quality, this method offers advantages in preserving semantic consistency and flexibility in quality recovery.
## update after rebuttal
The authors have already resolved my issues. As a result, I raise the final score.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I have carefully read the proofs for theory in the article, and they are all reasonable.
Experimental Designs Or Analyses: Please refer to weakness in ‘Other Strengths And Weaknesses’.
Supplementary Material: All.
Relation To Broader Scientific Literature: This paper builds on prior work in perceptual image compression, progressive coding, and semantic-aware latent representations. It extends perceptual compression techniques such as LPIPS-optimized methods by introducing synonymous latent representations to enhance semantic consistency across compression levels. Additionally, it relates to progressive coding approaches but differentiates itself by focusing on gradual semantic detail recovery rather than purely hierarchical bitstream refinement. However, its final training loss remains similar to existing rate-distortion-perception (RDP) methods, raising questions about its practical advantages over LPIPS-optimized compression.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strength:
1. Unlike conventional perceptual image compression methods that primarily focus on pixel-wise reconstruction, this work leverages synonymous latent representations to ensure semantic consistency between the compressed and original images. This helps retain core image features while allowing for flexibility in fine details.
2. By focusing on semantic information, the proposed approach is valuable for multi-modal compression tasks, such as image-text joint encoding, where maintaining cross-modal consistency is crucial.
Weaknesses
1. In Section 3.2, the paper derives the Partial Semantic KL Divergence, emphasizing its role in maintaining semantic consistency. However, the final training loss(equation 12) almost identical to existing Rate-Distortion-Perception (RDP) frameworks.
2. The experimental results show that the proposed method performs almost identically to direct LPIPS-optimized methods in perceptual metrics (LPIPS, DISTS). This raises concerns about whether the method provides any meaningful improvement.
3. The paper claims that synonymous latent representations improve semantic consistency, ensuring that images at different bitrates retain the same semantic information. However, there is no quantitative validation of this claim in the experiments. The experiments only report perceptual metrics (LPIPS, DISTS), which do not directly assess semantic consistency. Can the authors provide quantitative evidence that their method maintains better semantic consistency across bitrates?
Other Comments Or Suggestions: Refer to weakness in ‘Other Strengths And Weaknesses’.
Questions For Authors: Refer to weakness in ‘Other Strengths And Weaknesses’.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer VZKv,
We sincerely appreciate your recognition of our SVI analysis, especially your understanding of **semantic consistency** and the **crucial potential in multi-modal compression tasks of our SVI theory**.
Your concerns are essential for refining our paper and guiding future research. Below are our responses.
**[Q1] The paper derives the Partial Semantic KL Divergence, emphasizing its role in maintaining semantic consistency. However, the final training loss (equation 12) is almost identical to existing Rate-Distortion-Perception (RDP) frameworks.**
[A1] You may misunderstand the role of the Partial Semantic KL Divergence: It optimizes image samples in the reconstructed synset within the image space conditioned on **ensuring explicit semantic consistency in the latent space**. This will **finally result in implicit semantic consistency with the original image, aligning with the ideal synset**, as shown in Figure 1 of the manuscript.
As for the final loss function, which is almost identical to the existing RDP framework, there are two misunderstandings:
- **The object of coding rates**: Only contains the **common features** of all samples in the reconstructed synset, which is **partial** latent features between the original image and any reconstructed image. This can be intuitively illustrated as the Venn Diagrams [Figure 1] in our [[Anonymous Link]](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C/). So the rate term is actually different from the existing RDP methods.
- **Almost identical means compatibility**: In Appendix A.3 (especially Lines 935~967), we show that SVI analysis is compatible with both the RDP and RD frameworks, as both are special cases of SVI under specific conditions. This leads to our training loss being almost identical to that of existing RDP schemes. However, this is not a weakness but **an advantage**—**our theory effectively explains the theoretical foundations of the existing RDP framework**. This is also mentioned by **Reviewer 7ghf** as our **strengths**.
**[Q2] The experimental results raise concerns about whether the method provides any meaningful improvement.**
[A2] We apologize that our implemented model design is relatively rough compared to the completeness of our theoretical analysis. The model is implemented to verify the effectiveness and potential of our SVI theory. To surpass existing methods, further exploring the model design is necessary, such as **better detail prediction and sampling module** to fit the true distribution, and **better choice on perceptual optimize direction**.
To verify the potential of further perceptual optimization, we implement a discriminator to finetune our model. The results are available at the [[Anonymous Link]](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C) [Figure 5]. Please check the rebuttal to **reviewer 7ghf [A2]** for the relevant experimental configuration and results analysis.
**[Q3] Can the authors provide quantitative evidence that their method maintains better semantic consistency across bitrates?**
[A3] Your suggestion is crucial. Verifying semantic consistency is essential as the ultimate goal of our solution optimization. To verify semantic consistency between samples in the reconstructed synset and original images across bitrates, we conducted the following supplementary tests on both $M=1$ and $M=5$ models:
1. Set the synonymous level $l$ to obtain the reconstructed samples $\tilde{\boldsymbol{x}}_i^{(l)}$.
2. Sent back $\tilde{\boldsymbol{x}}_i^{(l)}$ to $g_a$ and get a new synonymous representation ${\hat{\boldsymbol{y}}'}_s^{(l)}$.
3. Compare ${\hat{\boldsymbol{y}}'}_s^{(l)}$ with the original $\hat{\boldsymbol{y}}_s^{(l)}$ from the original image $\boldsymbol{x}$ and calculate the numerical difference ratio (Labeled as "DiffRatio").
The results on Kodak are plotted in the synonymous link **[Table 1]** and **[Table 2]**. These results show good semantic consistency of our models, as the DiffRatios across every bitrate are very low (means consistency higher than 98%). However, this also means the ultimate optimization goal of SVI is not finally achieved because of the rough design of our implemented model. One of the most intuitive and objective reasons is that errors arise from the **nonlinearity** and **irreversibility** of the adopted neural network structure. This suggests that in our future work, we can explore using reversible neural network structures to address these issues.
Besides, another observation is the intersection of DiffRatios at different rates for $M=1$ and $M=5$, which mirrors the intersection trend of the performance curve in Figure 5 in our manuscript. This suggests that, in future research, we can try to address the issues in Figure 5 by resolving the problems in DiffRatio.
In summary, we sincerely thank you for your suggestions and will update the relevant content in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, especially regarding A1, which has clarified my confusion on this aspect.
As for A2, based on the results, incorporating GAN does not show a significant improvement, which does not adequately address my concern. The outcomes fail to validate the effectiveness of the theoretical method described in the paper. Simply stating that better model design leads to better results is insufficient—I believe this part of the work needs further refinement.
Regarding A3, first, I do not fully understand how the "numerical difference ratio (DiRatio)" is calculated. Is it obtained by directly subtracting the latent variables? Moreover, your table does not include comparisons with other methods. Without such comparisons, how can you prove that SVI leads to better semantic consistency? I find this experiment unconvincing.
In summary, I believe the theory presented in this article is interpretable, but the overall design and experiments are not sufficiently thorough.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer VZKv,
First of all, we would like to thank you again for **your recognition of the interpretability of our theory**. We also regret that our previous response did not fully address your concerns.
Here, we provide further explanation and clarification on the two issues you raised.
**[Q2'] Regarding the issue of unsatisfactory performance improvement and insufficient statement on better model design.**
**[A2']** Firstly, we want to clarify the performance improvement with GAN finetuned: The improvement of DISTS is relatively obvious, while the enhancement in FID remains limited. This is a noteworthy phenomenon since **it is absent in the non-sampling schemes**, as confirmed by the experimental results in the MS-ILLM paper (Muckley et al., 2023).
Unlike DISTS' resampling tolerance, FID focuses on the consistency of the distribution between the original and reconstructed image groups. This suggests that our implementation is still insufficient in optimizing the distribution of reconstructed images, **especially in our detailed sampling mechanism**.
Actually, **our SVI theoretical analysis provides an ideal detail sampling principle** that **follows a conditional distribution $p_{\hat{\boldsymbol{y}}_{\epsilon,j}|\tilde{\boldsymbol{y}}_s}$ in vector form** (stated in the equations in Lines 631~639), which guides the prediction of the samples vector $\hat{\boldsymbol{y}}_{\epsilon,j}$ conditioned on synonymous representations $\tilde{\boldsymbol{y}}_s$.
However, in our method design, we empirically adopted a **uniform distribution** for detail sampling (Lines 1136) performed on **each element**. We realize that this sampling method cannot fit the needed conditional distribution **in vector form**, which **cannot ensure reasonable contextual structure in the details of the reconstructed images, thereby affecting the distribution consistency focused by FID**. Although we are still exploring effective solutions to this problem, adopting a simple yet suboptimal sampling method is currently a necessary compromise. This will be a key breakthrough direction for our future research.
**[Q3'] Regarding the issue of "DiffRatio" and the absence of comparisons.**
**[A3']** I apologize for not clearly explaining how "DiffRatio" is calculated in my previous reply, which may have led to confusion in your understanding. Here, we define DiffRatio as
$\frac{1}{n}\sum_{k=1}^n\mathbf{1}\left( \hat{y}\prime_{s,k}^{(l)}\ne \hat{y}_{s,k}^{(l)} \right), $
in which $n$ denotes the number of elements in $\hat{\boldsymbol{y}}_s^{(l)}$, and $\mathbf{1}(\cdot)$ is the indicator function which outputs 1 when the inequality holds and 0 otherwise.
In SVI theory analysis, the ideal reconstructed synset should cover the ideal synset, ensuring that the reconstructed images share the same synonymous representation as the original image at the corresponding synonymous level $l$. Therefore, **a DiffRatio value closer to 0 indicates that the reconstructed image is nearer to the ideal synset, reflecting better semantic consistency**.
Since the optimization directions of the existing RD and RDP methods are special cases of SVI under specific conditions (in Appendix A.3., Lines 930~978), these methods can use the quantized latent feature map $\hat{\boldsymbol{y}}$ as a synonymous representation (as no detailed representation exists) and compute DiffRatio with
$\frac{1}{n}\sum_{k=1}^n\mathbf{1}\left( \hat{y}\prime_k \ne \hat{y}_k \right).$
We have added the DiffRatio calculation results for the RD method (Mean-Scale Hyperprior) and MS-ILLM (no-GAN & with GAN) in the [anonymous link](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C) (see [Tables S1–S3 and Figure S1]).
These results indicate that the RD method without perceptual optimization has a DiffRatio exceeding 10%, while the MS-ILLM method with perceptual optimization achieves a DiffRatio in the range of approximately 1%~3%, regardless of whether GAN fine-tuning is applied. This proves that the introduction of perceptual optimization improves the semantic consistency that it can establish. However, our SVI-based methods hold the DiffRatio less than 1% across most bitrate ranges and even close to 0 at high bitrate with $M=1$, which demonstrates that **our approach achieves superior semantic consistency**.
**[Statement 1]** Here, we clarify that **our main contribution is to establish a unified mathematical theory for perceptual image compression, supporting existing RDP theory, and suggesting potential future research directions**. Our current rough implementation serves as a preliminary validation, demonstrating that a single model can adapt to multiple rates while approaching the performance of existing RDP methods, which aligns with our intended verification. **Achieving further performance breakthroughs will require future research efforts.**
We hope the above responses help you clarify any confusion and better understand the contribution of our work. | Summary: This paper is about perceptual image compression, where previous works measure perceptual quality by calculating a divergence between the source distribution and the reconstructed distribution.
This paper is inspired by recent advancements in semantic information theory (Niu & Zhang, 2024), where manipulating a set
of samples with the same meaning (referred to as a synonymous set, abbreviated as "Synset") is considered the principle of semantic information processing.
Important terminology to understand the paper:
semantic information (i.e., the meaning) and syntactic information (i.e., data samples), where one meaning can be expressed in diverse syntactic forms.
This paper then proposes synonymous variational inference, which is based on a modified KL divergence. This partial semantic KL divergence is defined with respect to a syntactic distribution $q$ and a semantic distribution $p_s$. For VAE-based learned image compression, $q$ is defined as a parametric latent density conditional on the source image $x$, which is viewed as a sample from the ideal and unknown synset $X$. By minimizing this KL divergence, the output of the semantic decoder can be considered a sample from the ideal synset, thus producing a perceptually optimized reconstruction.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The datasets include CLIC2020, DIV2K, and Kodak, which is sufficient. [Q1] However, I think FID should be evaluated, as in previous perceptual image compression works.
Theoretical Claims: I checked the theorems; they are extended versions of previous derivations for rate-distortion-perception and appear correct to me.
Experimental Designs Or Analyses: [Q2] The overall design is good. However, the tradeoff between distortion and perception is not illustrated as in previous rate-distortion-perception works. Another problem is the performance: the implementation does not show the advantage of this new synonymous variational formulation compared with previous perceptual image compression methods.
Supplementary Material: I have read the supplementary material, though I do not have enough time to review the derivations.
Relation To Broader Scientific Literature: This paper might be related to perceptual image quality assessment, which is also discussed in What makes an image realistic? (ICML 2024).
Essential References Not Discussed: [Q3] There exist some previous works for semantic perceptual definition and optimization from different perspectives.
[R1] What makes an image realistic? ICML 2024. This paper also provides an analysis of the definition of perceptual distance, which I think should be discussed in related works.
[R2] The Rate-Distortion-Perception Trade-Off with Algorithmic Realism. This paper proposes a perceptual metric for individual or batched images, which is quite related to this work.
Another previous attempt to extend the RDP tradeoff for semantic compression is '[R3] Conditional perceptual quality preserving image compression', which proposes to measure the divergence conditional on semantic information and should also be discussed in related works.
In my understanding, this paper formulates semantic-oriented compression problems by introducing the Synset into the previous variational inference framework, which is a meaningful extension. I think the nature of the synset is quite similar to the conditional perception proposed in [R3], which refers to samples (synset) or sample distributions (conditional posterior in R3) with the same semantics. The correlation with these three previous works should be clarified in the manuscript.
Other Strengths And Weaknesses: This paper is based on a very recent advancement in semantic information theory (Niu & Zhang, 2024); I think it will provide important new insight for the learned image compression community.
In equation 5, symbols like $U$ and $N_{i_s}$ are not explained. Though I can infer the meaning of these symbols, it would be better to state them clearly for general readers.
Other Comments Or Suggestions: None
Questions For Authors: Please consider my concerns in the previous sections [Q1][Q2][Q3]. To sum up, I like the formulation via synonymous sets and variational inference, which brings new insight to this field. However, the implementation does not show the advantage of this new formulation.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer uGiv,
Thank you for recognizing our contributions, especially the viewpoint that **our SVI theory will provide important and new insight for learned image compression community**.
We believe that your questions and suggestions are crucial for improving our work. Below are our responses.
**[Q1] FID should be evaluated as previous perceptual image compression works.**
[A1] Your suggestion is valid. Since FID is commonly used to measure distribution similarity between image groups, it is crucial for evaluating the performance of perceptual image compression. However, previous work (MS-ILLM) shows that using an adversarial loss significantly improves FID quality. Since our current implementation uses LPIPS rather than an adversarial loss to replace the KL divergence for optimization, its FID performance is worse than HiFiC and MS-ILLM, which is why we did not include FID comparisons in the manuscript.
Since several reviewers noted our method's performance weaknesses, especially compared to GAN-based perceptual image compression, we conducted additional experiments, adding a discriminator to our method and fine-tuning the model with a non-saturating loss in the loss function. The results including FID are available at the [[Anonymous Link]](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C). Please check the rebuttal to **reviewer 7ghf [A2]** for the relevant experimental setup and results analysis.
**[Q2] The tradeoff between distortion and perception is not illustrated as previous rate-distortion-perception works. Besides, the implementation did not show the advantage of the method compared with previous perceptual image compression methods.**
[A2] For your first sub-question:
- From the perspective of our SVI theory analysis, the discussion of the tradeoff between distortion and perception is optional. If we view **Lemma 3.2** (Lines 245~258) inversely, the lemma shows that **different distortion-perception tradeoffs actually correspond to different criteria for determining ideal synonymous sets $\boldsymbol{\mathcal{X}}$**.
- From the perspective of the experimental results, indeed, our previous manuscript version did not discuss the distortion-perception tradeoff problem. In the provided anonymous link, we include the performance curves with adversarial loss optimization. As perceptual metrics like DISTS and FID improve, the distortion performance of the reconstructed image decreases. This confirms that our method aligns with previous work on the distortion-perception tradeoff.
For your second sub-question:
- Our theoretical analysis of SVI demonstrates that, under ideal conditions and with ideal model structure designs, the SIC method has the potential to outperform existing perceptual image compression methods. This is also mentioned in the rebuttal to **Reviewer 1wA9 [A1]**.
- As shown in [figure 5] of the provided anonymous link, introducing the discriminator for training effectively improves perceptual quality. We note that since the discriminator is balanced across different quality levels, the fine-tuning has not fully converged, which means there is still potential for further improvement. Nevertheless, we still see possible issues with the current design, including **the absence of better detail prediction and sampling methods** and **the choice of a better perceptual optimization direction**. These issues indicate the key directions for our future work.
**[Q3] Some previous works for semantic perceptual definition and optimization from different perspectives should be clarified.**
[A3] Thanks for providing the relevant papers. They have given deeper insights into the area of perceptual image compression and our own works. Below are some insights on the correlation with these previous works.
- The paper [R1] attempts to build a universal critic of **realism** based on *Kolmogorov complexity*. This critic makes it possible to evaluate realism without reference to the original image. In contrast, our analysis originates from semantic information theory and establishes a perceptual-oriented optimization direction with reference to the original image.
- The paper [R2] follows [R1] and proposes a new model of the rate-distortion-perception tradeoff, which defines a log-likelihood ratio critic as the realism critic without comparison to the original image, while our theory provides an expected log-likelihood ratio critic that requires such a comparison.
- The paper [R3] is indeed related to our work, and your understanding is completely correct. They explored the optimization direction of conditional posteriors with common features, but did not consider that these conditions can serve as synonymous representations for synonymous compression. Besides, they did not approach the optimization problem from the perspective of a perceptually similar set.
We sincerely appreciate the questions you raised. These questions will help us improve our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply, after reading other reviews, I think this is a good paper. Thus I raise my score.
Please include those new results and discussions in the final version to make the paper more complete.
---
Reply to Comment 1.1.1:
Comment: Thank you for your affirmation of our paper and your valuable suggestions! Your suggestions have indeed improved our work! We will incorporate these new results and discussions into the revised manuscript to strengthen it further. | Summary: This paper proposes synonymous variational inference and introduces synonymous image compression. It is based on the observation that a given image to be encoded has a set of synonymous images that share the same semantic meaning. Instead of optimizing the variational distribution at the pixel level, we optimize it at the semantic level. Building on this insight, we propose a simple modification to the existing training scheme to achieve synonymous variational inference.
Claims And Evidence: Yes. The claims are supported by experiments and proof.
Methods And Evaluation Criteria: Yes. The paper reports DISTS and BPP, and also report LPIPS and PSNR in appendix.
Theoretical Claims: I checked the outline of the proof in the main text.
Experimental Designs Or Analyses: The experiments are sound to me. Also, the author provides an ablation study on M.
One thing I am not sure of is that the proposed algorithm simply modifies the latent code into a deterministic part and a stochastic part. What if we do not use the stochastic code, but keep all other components, like the loss/network, the same? Would this reduce the performance?
In other words, my question is: is the performance gain due to the SVI proposed in this paper, or to the loss / network / other tricks?
Disclaimer: I am familiar with the Rate-Distortion-Perception trade-off in general, but I only know some old baselines in this area, like HiFiC, and I am not familiar with the recent developments in this area. Therefore, I cannot evaluate the significance of the performance gain. Also, my questions regarding the performance/experiments might be biased. If that is the case, feel free to correct me.
Supplementary Material: Not in detail.
Relation To Broader Scientific Literature: The SVI proposed in this paper is new to me. I think this result is insightful yet simple enough to employ. I believe this is a good contribution to this area.
However, I cannot confidently evaluate the significance of the performance of this approach.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper is well-structured and well-written. It is easy to follow and is a pleasure to read.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 1wA9,
Thank you for recognizing our work, especially our proposed analysis theory, i.e., **synonymous variational inference (SVI)**, as **a good contribution to the area of perceptual image compression**.
We think that your questions are valuable for improving our work. Below are our responses.
**[Q1] Is the performance gain due to the SVI proposed in this paper, or the loss / network / other tricks?**
[A1] We address your question from two perspectives: **theory and method**.
- **Theory**: According to the proof and compatibility analysis in Appendix A of our manuscript, SVI can offer theoretical potential performance advantages for two reasons:
- **Stochastic Effectiveness**. As Equation (15) (especially Lines 636~639) presents in our manuscript, each sampled random detail corresponds to a sample $\tilde{\boldsymbol{x}}_j$ in the reconstructed synset. When the ideal and reconstructed synsets fully overlap, that sample should match an image $\boldsymbol{x}_j$ in the ideal synset, which is perceptually similar to the original one. Thus, the reconstruction quality can be supported by both coded synonymous representations and random details, rather than relying solely on the code sequence as in HiFiC and MS-ILLM.
- **Encode only common features**. As Equations (31) (Lines 862~868) and (32) (Line 874) present, ideally, the coded rates of the synonymous representation reflect the rates of the common features of the reconstructed synset $\tilde{\boldsymbol{\mathcal{X}}}$, i.e., the minimized single-side semantic mutual information $I(\boldsymbol{X};\tilde{\mathring{\boldsymbol{X}}})$, rather than the minimized mutual information $I(\boldsymbol{X};\tilde{\boldsymbol{X}})$ between the original image and a specific reconstructed sample, where the latter is greater (Lines 925~926). This can be intuitively illustrated with the Venn diagrams [Figure 1] in our [[Anonymous Link]](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C/), and means that, ideally, our compression limits will be lower than those of existing methods under the same distortion and perceptual quality constraints.
- **Method**: The advantages of our proposed scheme including:
- **Multi-rate adaptability**. This relies on **Stochastic Effectiveness**, since adaptability across various rates is achieved by fixing part of the random details to the accurate representation of the source image, under the guidance of our derived loss function.
- **Performance Improvements on DISTS**. This relies on both theoretical advantages: differences in the details are more acceptable to DISTS and human perception than to other measures like LPIPS; besides, compression efficiency can be further improved by encoding only synonymous representations.
However, we admit that our proposed SIC method struggles to fully achieve SVI's theoretical advantages due to **the absence of better detail prediction and sampling methods** as well as **a better perceptual optimization direction, such as adversarial losses**. These are key directions for future research.
To verify its potential for subsequent optimization, we plot some additional experimental results to the [[Anonymous Link]](https://anonymous.4open.science/r/supplementaryResults_SVI-F92C), in which we finetune our model with a CNN-based discriminator using non-saturating loss. Please check the rebuttal to **Reviewer 7ghf [A2]** for the relevant experimental configuration and results analysis.
**[Q2] What if we do not use the stochastic code, but keep all other components, like loss / network to be the same? Will this reduce the performance?**
[A2] Your suggestion is crucial since it touches on a core question we are concerned with: whether random details positively support both distortion and perceptual quality. Therefore, we added an ablation test result in our provided anonymous link [Figure 2~3], where we force the random details $\hat{\boldsymbol{y}}_{\epsilon,j}$ to 0, preventing them from providing detail information. We obtained test results at each rate under $M=1$ and $M=5$ and compared them with the performance of random sampling. All other conditions remain unchanged.
The experimental results show that without random sampling, the performance of both models degrades significantly in distortion and perceptual quality. This indicates that random details contribute effective information to the reconstructed image's distortion and perceptual quality, verifying the **Stochastic Effectiveness** mentioned above. It also suggests that perceptual image compression performance can be supported not only by coding sequences, as in existing RDP methods. Based on this phenomenon, we conclude that, when designed properly, the SVI-based method has the potential to surpass the performance of existing RDP methods.
Again, we appreciate your affirmation of our work and the questions you raised. We will update our manuscript accordingly.
Task Generalization with Autoregressive Compositional Structure: Can Learning from $D$ Tasks Generalize to $D^T$ Tasks? | Accept (poster) | Summary: This paper demonstrates that for Boolean tasks with an AutoRegressive Compositional Structure (e.g., sparse parity), using chain-of-thought (CoT) to break down the tasks into simpler sub-problems—while predicting the outcomes of intermediate steps—significantly improves generalization in the GPT-2 model.
Claims And Evidence: **Weakness 1**
I believe the paper’s most significant contribution is demonstrating that by leveraging chain-of-thought (CoT) to train tasks with an auto-regressive compositional structure:
(1) Tasks that were previously unlearnable without CoT (in the sense that the training loss could not be reduced) can now be learned.
(2) The method also appears to have an advantage in terms of generalization (i.e., the sense that the test loss is small).
However, regarding point (1), I am not particularly surprised, because it seems the authors have used prior knowledge of the ARC structure to explicitly break the tasks down into smaller units via CoT, thereby artificially lowering the complexity and making it more learnable.
As for point (2), I am skeptical of the claim that this approach truly confers a generalization advantage. The paper’s primary example, Sparse Parity, is well-known to be essentially unlearnable (and certainly not generalizable) by Transformers. Thus, from my perspective, the experiments mainly show that CoT makes learning possible in the first place, but do not convincingly demonstrate how much of a generalization benefit is gained over a No CoT approach.
For a fair comparison of generalization performance, I believe it is necessary to examine tasks that can be learned both with and without CoT. Without such a baseline, it seems to me that the paper merely shows how reducing task complexity via CoT enables successful training, which, while potentially useful, is not particularly surprising.
Methods And Evaluation Criteria: I believe the methods and evaluation criteria proposed by the authors are reasonable.
Theoretical Claims: Under the stated assumptions, the proofs for the theoretical claims are logically coherent and employ standard techniques appropriately. While some assumptions (e.g., infinite samples, fixed TV gaps) might be limited, I think that they are acceptable for a theoretical analysis.
Experimental Designs Or Analyses: **Weakness 2**
Despite the fact that the experiments are neither time-consuming nor require extensive GPU resources, the authors did not provide results from multiple repeated trials. As a result, it is difficult to assess the consistency of the findings.
Supplementary Material: Yes, I reviewed the supplementary material in its entirety, but I did not closely examine the whole proof.
Relation To Broader Scientific Literature: This work appears to contribute to the research area of using synthetic tasks to better understand language models. The parity task is known to be difficult for Transformers to learn, but other researchers could gain insights from this paper on how to make it learnable.
Essential References Not Discussed: The authors mention that, so far, Transformers have been unable to learn sparse parity. However, while reviewing recent studies, I found a paper* claiming that sparse parity can be learned when multiple tasks are trained together.
*Task Diversity Shortens the ICL Plateau, arXiv 2025. (https://arxiv.org/abs/2410.05448)
Other Strengths And Weaknesses: **Strength**
1. Providing experimental and theoretical explanations of how Transformers can effectively learn and generalize by leveraging CoT.
2. In the sparse parity and arithmetic tasks, it is interesting that the results closely align with the theoretical findings.
3. Overall writing and structure were easy to read!
**Weakness**
Please see weakness 1 and weakness 2.
Other Comments Or Suggestions: The multilingual translation experiments are conducted in a relatively simplified setting, where a single word (e.g., "cat" or "dog") is translated through a language chain using a 2-layer, 3-head Transformer. This simplified experimental environment may not adequately capture the diverse linguistic and contextual factors present in real-world translation scenarios, making it difficult to substantiate the claim that the ARC structure naturally arises in actual language translation tasks. Therefore, there are no experimental results that sufficiently bridge the gap between the paper's contributions and real-world scenarios.
Questions For Authors: 1. How is the input prompt structured when training with CoT? I'm curious about how zero padding is applied. For example, when using 10101 -> 111 as input, do you add two zero paddings to 111, making it 11100? It would be helpful if you could provide a detailed explanation of the overall process. (I understand the setting used in the paper "Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions")
2. I’m curious whether the experimental results are permutation-invariant. There are multiple ways to apply CoT (technically, there are k! possible ways). In the current paper, the model processes the input autoregressively in order, starting from the smallest index (such as figure in page 5). What happens if a randomly permuted order is used instead of ascending order? If you have experimented with this, I’d love to know the results. Additionally, I’d be interested in hearing the authors' insights on this matter.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments. Below, we address each point in detail.
> **(1) Tasks that were previously unlearnable without CoT (2) The method also appears to have an advantage...**
We thank the reviewer for raising this question regarding whether the difference between CoT and non-CoT models is primarily an optimization issue or a generalization issue. To clarify, in our experiments, models trained without CoT are able to reduce the training loss to near-zero and achieve nearly 100% training accuracy. For clarity, we include an additional plot showing the training dynamics corresponding to Figure 1 of our main paper: [Link](https://files.catbox.moe/95rhrv.png). As shown, while the training accuracy approaches 100%, the test accuracy remains at random chance. This indicates that the key challenge is not optimization, but generalization: models trained without CoT fail to generalize to unseen compositions of subtasks, despite perfectly fitting the training data. In contrast, models trained with CoT exhibit strong generalization.
The core contribution of our work is a shift in perspective from traditional sample complexity to task complexity. We show that for tasks with compositional structure—where CoT prompting serves as an example—training on a small set of tasks enables generalization to an exponentially large number of unseen tasks. Prior work [Wen et al (2024)] shows that transformers can, in principle, learn sparse parity functions without CoT, but doing so requires exponentially more data. In our setting, which focuses on in-context learning (ICL), we observe that even when the model fits the training tasks without CoT, it fails to generalize across tasks. In contrast, CoT enables generalization to an exponentially large space of unseen compositions, in line with our theoretical predictions. We will make this distinction clearer in the revision.
> **Multirun experiment.**
We appreciate the concern. It is worth noting that as we are focusing on task generalization, each time we change the number of tasks we need to rerun the experiments from scratch. In other words, each data point in our plot corresponds to a completely new random sample of tasks, which already provides an implicit robustness check. Nevertheless, in the revised version, we further strengthen our results by including additional experiments with repeated runs (at least 5 random seeds) for selected settings—specifically for $d=15$ and $d=20$—to directly assess the stability of our results. A plot summarizing the variance across seeds for these settings is available [here](https://files.catbox.moe/acfo1o.png).
> **Essential References Not Discussed**
We thank the reviewer for highlighting Task Diversity Shortens the ICL Plateau, which shows transformers can learn sparse parity when trained on a diverse task mix (e.g., parity with linear or quadratic regression). However, it is important to note that their setting differs from ours, as they do not separate training tasks from testing tasks. Specifically, the model is trained continuously in an online fashion, where all tasks are repeatedly visited during training. In contrast, our work explicitly evaluates out-of-distribution task generalization by testing models on entirely unseen tasks. The key message of our work is that the ARC structure enables generalization to an exponentially large space of unseen compositions. Moreover, in their Figure 1, sparse parity (red line) is not learned on its own with transformers—it only becomes learnable when mixed with other tasks.
> **How is the input prompt structured when training with CoT?**
The input and overall setting are exactly the same as in the paper you mentioned, *“Understanding In-Context Learning…”*. The only difference is that, instead of providing only the final parity label, each label now includes the intermediate XOR computations (see the figure in the right column of page 5). For example, if $d = 15$, context length is 40, and $k = 3$, then the input size becomes $40 \times (15 + 3)$, compared to $40 \times (15 + 1)$ without CoT—just as in the original paper. As a result, zero-padding is not needed.
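To make the label format concrete, here is a small sketch of the CoT labels for sparse parity (the bits and index set below are illustrative, not from our actual training pipeline):

```python
def cot_labels(x, secret):
    # Running XOR over the secret coordinates: each intermediate
    # prefix parity is emitted, and the last entry is the final label.
    chain, acc = [], 0
    for i in secret:
        acc ^= x[i]
        chain.append(acc)
    return chain

x = [1, 0, 1, 0, 1]     # d = 5 input bits
secret = [0, 2, 4]      # k = 3 secret indices
print(cot_labels(x, secret))  # -> [1, 0, 1]
```

Each example thus carries $k$ label entries instead of one, matching the $40 \times (d + k)$ input size described above.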
> **Permutation-invariant**
That's an interesting question. As the task has the same compositional structure, our theoretical framework still applies. However, as you noted, the number of possible tasks increases by a factor of $k!$, so the constants in the theory may differ. We ran experiments with $d = 15$ and $k = 3$, varying the number of training tasks. The results are summarized in the table in this [link](https://files.catbox.moe/63l1hh.png). (The number of training tasks is $n \cdot 15 \cdot \ln(15)$, where $n = 2, 3, 4$, and the total number of possible tasks is ${15 \choose 3} \cdot 3! = 2730$.) As expected, the generalization behavior roughly follows the pattern observed in the non-permutation case. However, we found that training was more difficult and required more steps to converge. | Summary: This paper presents the simple but useful and intuitive idea that *chain-of-thought generation reduces the theoretical complexity of compositional problems, and therefore leads to theoretically and empirically less required samples for strong generalisation* to unseen functions. Specifically, prior work has shown that transformers fail with this type of generalisation, but the authors of this work show they can in fact do it. The question the authors investigate is when can transformers generalise to a large number of unseen tasks from a smaller number of tasks? They show for an autoregressive compositionally structured task formulation how many tasks a model needs to theoretically see to generalise to all heldout tasks, and empirically demonstrate that this holds for three examples of such tasks. Even when task space gets larger (e.g. through more steps in the sequence), the amount of tasks needed to generalise follows the theoretically found scaling law. The authors further show that this is only the case when the model is trained on i.i.d. tasks and examples of these tasks, and has worse generalisation performance when specific examples/tasks are structurally held-out.
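The task counts in this reply can be checked with simple arithmetic (a verification sketch, not part of our experiments):

```python
import math

d, k = 15, 3
# Ordered CoT variants: choose k of d indices, then permute them.
total_tasks = math.comb(d, k) * math.factorial(k)
print(total_tasks)  # -> 2730, i.e., C(15, 3) * 3!

# Number of training tasks used above: n * d * ln(d) for n = 2, 3, 4.
for n in (2, 3, 4):
    print(n, n * d * math.log(d))
```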
## Update after rebuttal
My points have been addressed after the rebuttal and I am still in favour of accepting this paper.
Claims And Evidence: All claims are supported by clear and convincing evidence. The authors derive a theoretical result and show it empirically holds for three separate tasks for which the critical assumption in the theory holds.
This is not central to the thesis of the work, but the claim that is not supported is the following. In Section 5.2.2. the authors show results on a toy word translation task, and claim that *"This example illustrates that autoregressive compositional structures naturally arise in real-world languages, even without explicit CoT."* But this is a bit of a stretch, given that sequential translation of words to different languages is not a real-world language task.
One suggestion I would make to make the evidence stronger is to show that the scaling law does not hold for a task that violates the assumption about compositional identifiability.
Methods And Evaluation Criteria: All methods and evaluation criteria make sense.
Theoretical Claims: I did not check the correctness of any proofs and theoretical claims, but the empirical results validate them and they make intuitive sense. E.g., if a problem is compositional, in that each next step is independent of the previous steps provided you keep track of a state, chain-of-thought can break the problem down and leads to a lower-complexity task. Generalisation is then independent of the overall task complexity in parameters like sequence length, because one can compositionally apply the algorithm to each step.
Experimental Designs Or Analyses: The experimental design is sound and valid.
Supplementary Material: Yes, appendix sections C and D, which show additional experiments motivating some of the claims in the main paper (e.g. that ICL fails without CoT, and that context length is important for generalisation), plus details of the experimental setup.
Relation To Broader Scientific Literature: The authors successfully demonstrate how transformers can show strong generalisation in cases that prior work said they struggle with, and come up with a new theoretical result that explains why and how, which can potentially connect to generalisation properties of LLMs.
Essential References Not Discussed: Some papers the authors could consider citing are the following (just a suggestion, I leave it up to the authors if they are worth discussing):
- https://arxiv.org/abs/2307.03381 (related in that they show CoT is crucial for strong generalisation)
- https://arxiv.org/abs/2406.02550 (related in that they focus on task generalisation)
Other Strengths And Weaknesses: **Strengths**
- Very clear writing
- Neat connection of theory to multiple empirical results
- The idea presented here contributed to my understanding of generalisation in transformers
**Weaknesses**
- No real-world tasks
- As mentioned above, would be interesting to show a failure case (different number of tasks required than predicted by theory for a task that violates compositional identifiability)
Other Comments Or Suggestions: List of typos:
- 117r typo finte -> finite
- 315r typo ambien -> ambient
- 346-348l needs a rewrite (mostly in terms of verbs and determiners)
- 365-366l as well
- 111r typo hide -> hides
- 193l "can only access to" -> "can only access"
- 202l "it accesses to" -> "it has access to"
- 211r "two stage" -> "two stages"
- Typo title figure 3 arithmatic -> arithmetic
Questions For Authors: - Do you have any idea why the compounding of errors does not show up in tasks other than the word translation one?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's support and valuable suggestions. Below, we address each point in detail.
> The sequential translation task is not a real-world language task.
Thank you for pointing this out. We agree the translation example is synthetic and will clarify that our goal was merely to illustrate how decompositional structure can emerge without explicit CoT, as long as compositional structure holds, rather than to suggest that this task reflects real-world language use. We will clarify this in the final version.
> A failure case where the compositional identifiability assumption is violated so that the scaling law fails.
Thank you for the suggestion. At a high level, the compositional identifiability assumption is required to ensure that any subtask is identifiable regardless of the other subtasks it is composed with. To illustrate what happens when this assumption is violated, we present an extreme failure case where a subtask is identifiable only in the presence of specific preceding subtasks. In this setting, task complexity grows exponentially with $T$:
Let the vocabulary be $\mathcal{Y} = \{0, 1\}$. Consider the following autoregressive compositional task class where the output is deterministic and independent of the input $x$.
For timesteps $1 \leq t \leq T-1$, there are two possible tasks:
- $\theta_t^0$: Always output $y_t = 0$, regardless of $(x, y_{<t})$.
- $\theta_t^1$: Always output $y_t = 1$, regardless of $(x, y_{<t})$.
At timestep $t = T$, there are also two tasks:
- $\theta_T^0$: Output $y_T = y_1 \land y_2 \land \cdots \land y_{T-1},$ where $\land$ denotes the AND operation.
- $\theta_T^1$: Always output $y_T = 1$, regardless of $(x, y_{<T})$.
In this case, identifying the subtask $\theta_T^0$ as distinct from simply outputting $y_T = 0$ requires that the specific task $(\theta_1^1, \theta_2^1, \dots, \theta_{T-1}^1, \theta_T^0)$ appears in training. The probability of sampling this task decreases exponentially with $T$, making the task complexity exponentially large for successful learning. We will briefly discuss this in the final version.
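A tiny enumeration (our own sketch, not part of the rebuttal's experiments) makes the blow-up concrete: $\theta_T^0$ disagrees with constant-0 behavior on exactly one of the $2^{T-1}$ possible prefixes, so a uniformly sampled training task exposes the distinction with probability $2^{-(T-1)}$:

```python
import itertools

# Sketch: count the prefixes y_1..y_{T-1} on which theta_T^0 (the AND of the
# prefix) differs from a model that always outputs 0 at step T.
def distinguishing_prefixes(T):
    return sum(
        1
        for prefix in itertools.product([0, 1], repeat=T - 1)
        if int(all(prefix)) != 0  # AND is 1 only for the all-ones prefix
    )

for T in [3, 5, 8]:
    # Exactly 1 distinguishing prefix out of 2**(T-1).
    print(T, distinguishing_prefixes(T), 2 ** (T - 1))
```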
> Why the compounding of errors does not show up in tasks other than the word translation one
This is a great and challenging question. One plausible explanation is that the implicit biases of Transformers vary significantly across tasks. In the word translation task, the model may adopt undesirable shortcuts—such as only partially solving the intended task (e.g., learning to translate from Chinese to French without generalizing from English to Chinese)—which increases the likelihood of errors at each generation step. In autoregressive generation, an error rate of $p$ per step compounds over time, meaning that achieving correct sequence generation with constant probability requires $p = o(1/T)$. In contrast, algorithmic tasks like parity and arithmetic appear to induce stronger biases toward learning the correct underlying computation, potentially mitigating this issue. That said, we stress that this compounding error argument is merely a hypothesis and does not constitute a rigorous explanation. We do not fully understand why these tasks exhibit different scaling behaviors, and we leave a complete explanation for future work.
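The compounding arithmetic in this hypothesis is easy to check numerically (a minimal sketch assuming independent per-step errors):

```python
def seq_success(p, T):
    """Probability that a length-T autoregressive generation is error-free,
    assuming an independent error rate p at each step."""
    return (1 - p) ** T

# A fixed per-step error rate compounds: success decays exponentially in T ...
print(round(seq_success(0.01, 10), 3))   # 0.904
print(round(seq_success(0.01, 200), 3))  # 0.134
# ... so constant success probability requires p to shrink with T (p = o(1/T)).
print(round(seq_success(0.001, 100), 3))  # 0.905
```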
> No real-world tasks.
Our goal is to establish a foundation where the compositional structure of tasks is clearly defined and generalization to unseen tasks is fully controllable. To this end, we deliberately focus on synthetic tasks to isolate the core principles underlying ARC-style generalization, explicitly specifying seen tasks and rigorously evaluating generalization on truly unseen tasks, free from leakage or contamination from pretraining.
We also note that several influential works studying transformer behaviors similarly focus on synthetic or simplified settings, such as parity functions [1] and modular arithmetic [2,3], which have proven effective for uncovering key insights.
While extending this framework to broader NLP tasks is important, it incurs substantial computational cost, which is beyond the scope of this paper. We view our current results as a necessary first step and are actively exploring such extensions.
[1] Bhattamishra et al., *Understanding in-context learning in transformers and LLMs by learning to learn discrete functions*, ICLR 2024.
[2] Power et al., *Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets*, 2022.
[3] Nanda et al., *Progress measures for grokking via mechanistic interpretability*, ICLR 2023.
> Typos and Related Papers in the paper.
We thank the reviewer for the careful feedback and suggestions. We will correct the typos and include the suggested reference in the final version of the paper. | Summary: This paper investigates when models trained on a small set of tasks can generalize to a much larger task family. The authors approach this through the lens of "autoregressive compositional structure" (ARC), where tasks are composed of T sequential operations, each chosen from D possible subtasks, creating a total task space of D^T. The key theoretical contribution is proving that generalization to all D^T tasks can be achieved by training on only O(D log DT) tasks. Empirically, the authors demonstrate that Transformers can achieve this exponential generalization on sparse parity functions and arithmetic operations.
Claims And Evidence: 1. The theoretical claims are supported by the proof.
2. The empirical claim on the dependency on the function space (D) and the number of compositions (T) is not sufficiently convincing: for each task, only a handful of D and T choices are tested, which is too small a range to verify the bound.
Methods And Evaluation Criteria: Yes the proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand.
Theoretical Claims: 1. I briefly read the proof but didn't check the details. One confusing thing is that it seems to be more of a general approach than one specific to autoregressive prediction, which is probably also the reason the authors observe a different dependency on T in the language translation task.
2. In the paper, it is assumed that the number of demonstrations (nx) goes to infinity, which seems unrealistic.
Experimental Designs Or Analyses: 1. About the experimental setup: There is no detail on the arithmetic task. Is it still following the ICL format and what does it look like?
2. About the empirical results: Besides the number of tested D and T being small, the visualizations showcasing the dependency on the number of training tasks are also relatively confusing. The current plots show test acc (the y axis) vs. number of training tasks / D log(DT) (the x axis), which makes it hard to see whether the scaling is linear or logarithmic. A more natural visualization would be the number of training tasks needed to achieve ~100% test acc vs. D log(DT), with the actual linear/log line as a reference.
Supplementary Material: I briefly check the proof.
Relation To Broader Scientific Literature: The key theoretical contribution is proving that generalization to all D^T tasks can be achieved by training on only O(D log DT) tasks. I feel this result is relatively new, but it would be great if the authors could clarify how the proposed framework and the theoretical result differ from the compositional generalization literature. For the empirical work, though the authors claim that it verifies the theory, I still feel it is not very convincing for the reasons mentioned in Claims And Evidence & Experimental Designs Or Analyses.
Essential References Not Discussed: Most recent works in this area that I am familiar with are cited.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces a structured mathematical framework that quantifies compositional complexity through parameters D and T and provide theoretical result for the sample complexity.
2. The paper designs ICL experiments to verify the theoretical results.
Weaknesses:
1. The dimensions tested are relatively small compared to the parameter spaces of modern LLMs. The generalization properties might behave differently at much larger scales.
2. The experiments focus on relatively synthetic tasks with clear compositional structure. It remains unclear how these findings might transfer to more complex, natural language tasks.
Other Comments Or Suggestions: 1. I recommend making the experimental section more comprehensive with clearer visualizations for the linear/log dependency.
2. The paper would benefit from more details on experimental setups for each task.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and valuable suggestions. Below, we address each point in detail.
>Dependency on $d$ and $k$ in Experiments
We agree that a broader range of $d$ and $k$ provides additional support for our findings. However, both $d$ and $k$ appear in the base and exponent when determining the total number of tasks, leading to exponential growth. For instance, in the parity task, even with $d=25$ and $k=10$, the number of distinct parity functions is $\binom{25}{10} \approx 3$ million. We have strengthened our empirical support by adding this new experiment on parity tasks with $d=25$ and $k=10$:
| $d$ | $k$ | \# Training Tasks | \# Total Tasks | Accuracy (%) |
|---|---|-|-|-|
| 10 | 5 | 69 | 252 | 98.51 |
| 15 | 7 | 121 | 6,400 | 99.12 |
| 25 | 10 | 241 | 3.2M | 98.60 |
These results clearly indicate that the number of training tasks grows approximately linearly while the total number of tasks grows exponentially.
It is still a valid question to ask why we cannot scale to significantly larger $d$. The main challenge comes from context length. Unlike prior works focusing on learning a fixed parity function, where $d$ simply reflected input dimensionality, we are in an ICL setup where the model needs to learn a general algorithm that infers *any* parity function drawn from a large function family. For the problem to be identifiable, the number of in-context examples $N$ must scale linearly with $d$. For example, when $d=100$ and $k=50$, $N=100$ examples result in an input length of roughly $N(d+k) \approx 15,000$ tokens -- which surpasses our available computational resources. This quadratic scaling of context length is the main bottleneck limiting us from larger $d$ and $k$.
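As a quick sanity check of the counts above (our own sketch, not the authors' code):

```python
import math

# The number of distinct k-sparse parity functions in dimension d is C(d, k);
# for d=25, k=10 this is the ~3.2M total tasks in the table above.
print(math.comb(25, 10))  # 3268760

# ICL context length grows like N * (d + k) tokens; with N scaling linearly
# in d this is quadratic in d, e.g. for d=100, k=50, N=100:
d, k, N = 100, 50, 100
print(N * (d + k))  # 15000
```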
>Theoretical Claims and Autoregressive Specificity
While the proof may appear general, our framework is specific to the autoregressive (ARC) setting. The distribution of $y_t$ explicitly depends on $y_{<t}$, and our constructed learner solves each subtask autoregressively, only using $(x,y_{\leq t})$ from the input sequence. Setups without ARC structure can be seen as a special case (when $T=1$), but compositionality, which is central to our analysis, only emerges through ARC composition when $T>1$.
>Assumption of Infinite Demonstrations
The assumption $n_x\to\infty$ in the main text is for simplifying exposition. We also provide a non-asymptotic guarantee in Appendix B (Remark 3.4, Theorem B.2). Specifically, if each subtask distribution is separated from incorrect hypotheses by a total-variation margin $\epsilon>0$, then $n_x=\widetilde{O}(1/\epsilon^2)$ demonstrations suffice. Thus, our analysis covers the finite-sample regime.
>Visualization of Experimental Results
The visualization suggested by the reviewer—plotting the number of training tasks needed to reach a given accuracy—is a valid alternative. However, we believe there is no single best way to present the results. Our current presentation is chosen to align closely with the theoretical prediction on task complexity. It also provides both lower and upper bounds for a given accuracy. For instance, Figure 1 shows that in the parity task, all setups achieve ≥98% accuracy within $[2D\log D,4D\log D]$ training tasks.
>Relation to Prior Literature on Compositional Generalization
Our framework differs from prior work in two key ways. First, most existing work is largely empirical, focusing on assessing compositional abilities of pretrained LMs, while we explicitly study how compositionality emerges during training when structured task families are presented. Second, we shift the focus from standard sample complexity to *task complexity*, aiming for generalization to unseen tasks where training on a small number of tasks generalizes to exponentially many unseen tasks.
>Experimental Details: Arithmetic Task
The arithmetic task fully follows the in-context learning (ICL) setting, consistent with the parity task. We will add more experimental details in the final version.
>Use of Synthetic Tasks
We chose synthetic tasks to ensure that the compositional structure is fully controlled and interpretable. The goal is to establish a foundation where the compositional structure is clearly defined, and generalization to unseen tasks is fully controllable. We also note that several influential works studying transformer behaviors similarly focus on synthetic or simplified settings, such as parity functions [1] and modular arithmetic [2,3], which have proven effective for uncovering key insights. In our case, the key insight is *exponential task generalization*.
Extending this framework to real-world NLP tasks is important and an active direction we are exploring. However, doing so requires substantially more work, which we believe deserves a dedicated follow-up study.
[1] Bhattamishra et al. (https://arxiv.org/abs/2310.03016 )
[2] Power et al. (https://arxiv.org/abs/2201.02177)
[3] Nanda et al. (https://arxiv.org/abs/2301.05217)
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I think the question "About the experimental setup: There is no detail on the arithmetic task. Is it still following the ICL format and what does it look like?" is not answered. Could you verify?
---
Reply to Comment 1.1.1:
Comment: As we briefly mentioned in the second-to-last response, the arithmetic task indeed follows the same in-context learning (ICL) setup as the parity task, and we are happy to provide more details here.
To elaborate, consider the case where the input dimension is d = 5. Each task consists of an operation sequence of length d - 1 = 4, represented by a tuple of arithmetic operations (either `+` or `*`), such as (`+`, `+`, `*`, `+`). A sample input could be the binary vector `11110`.
The corresponding chain-of-thought (CoT) reasoning involves applying the operations step-by-step, like this:
- 1 + 1 = 2 — Step 1 of CoT
- (1 + 1) + 1 = 3 — Step 2 of CoT
- ((1 + 1) + 1) * 1 = 3 — Step 3 of CoT
- (((1 + 1) + 1) * 1) + 0 = 3 — Step 4 (final answer)
So the CoT steps are `2, 3, 3`, and the final step gives the answer `3`. The final string looks like this: $11110\rightarrow\underbrace{233}_{\text{CoT steps}}3.$
The rest follows the same ICL format as the figure on page 5, identical to the parity setup, i.e. we concatenate \(N\) such examples as in-context demonstrations, followed by a query input, and ask the model to produce the query's CoT steps and final answer.
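For concreteness, the string construction above can be sketched as follows (a hypothetical helper, not the authors' implementation; `->` stands in for the arrow token):

```python
def cot_string(bits, ops):
    """Build one training string: the input bits, then the intermediate CoT
    values, with the last value being the final answer. ops[i] in {'+', '*'}
    combines the running value with bits[i + 1]."""
    acc = bits[0]
    steps = []
    for op, b in zip(ops, bits[1:]):
        acc = acc + b if op == '+' else acc * b
        steps.append(acc)
    return ''.join(map(str, bits)) + '->' + ''.join(map(str, steps))

# The example above: input 11110 with operation sequence (+, +, *, +).
print(cot_string([1, 1, 1, 1, 0], ['+', '+', '*', '+']))  # 11110->2333
```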
We appreciate your thoughtful feedback and hope that our response has addressed your concerns and helped shift your assessment in a positive direction. We’re happy to clarify further during the rebuttal window if needed. | Summary: This paper investigates task generalization in large language models (LLMs) through the lens of AutoRegressive Compositional (ARC) structure. The central question explored is: When can learning from a small set of tasks enable generalization to a much larger task family? The authors propose that LLMs, particularly transformers, can achieve exponential task generalization when tasks are structured compositionally.
Claims And Evidence: The paper makes several claims that are appropriately supported by empirical results.
Claim: Generalization to exponentially large task sets is possible with limited training tasks when tasks follow ARC structure.
- Empirical results in sparse parity and arithmetic tasks show that transformers
Claim: Task selection significantly impacts generalization.
- When tasks are sampled adversarially (e.g., omitting tasks with a specific coordinate value), transformers fail to generalize even when trained on an otherwise large task set.
Methods And Evaluation Criteria: Yes, their proposed framework and the evaluation criteria make sense and support their theroem.
More specifically,
- the ARC framework is clearly defined and analyzed mathematically
- the experimental setup isolates core factors influencing generalization (number of secret indices, ambient dimension, task selection, CoT usage)
- uses linear probing for subtask identification.
Theoretical Claims: Yes. I checked Theorem 3.3 and the proof in the supplementary material seems correct.
Experimental Designs Or Analyses: The experimental setup is well-designed, focusing on synthetic benchmarks that allow controlled evaluation of task generalization.
(See Methods and Evaluation Criteria part).
Supplementary Material: I checked the supplementary material Part A (Proof of Theorem 3.3) and Experimental details (Section
Relation To Broader Scientific Literature: The paper connects well to prior work in:
- Compositional generalization in neural networks
- In-context learning and task transferability
- CoT reasoning and its impact on LLM expressiveness
Essential References Not Discussed: I am not aware of essential references not discussed
Other Strengths And Weaknesses: ### Strengths
- Theoretical contributions are rigorous, providing a well-defined framework for studying task generalization.
- Scaling laws are empirically validated, demonstrating consistency between theory and practice.
- Task selection ablations are insightful, highlighting the importance of diverse training examples.
### Weaknesses
- Scope is limited to synthetic tasks, making it unclear whether the results translate to broader NLP applications.
- Lacks model scale experiments
- The range of k (number of secret indices), and d (ambient dimensions) is narrow.
Other Comments Or Suggestions: - (minor) Typo in line 933 Wadam -> adamW?
- model scale experiments (rather than with 3-layer, 1-head model) could enhance the empirical results.
Questions For Authors: - How does task generalization change with model scale?
- Would pretraining on structured reasoning datasets improve ARC generalization?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive review. Below, we address each point in detail.
> Scope is limited to synthetic tasks
Our goal is to establish a foundation where the compositional structure of tasks is clearly defined and generalization to unseen tasks is fully controllable. To this end, we deliberately focus on synthetic tasks to isolate the core principles underlying ARC-style generalization, explicitly specifying seen tasks and rigorously evaluating generalization on truly unseen tasks, free from leakage or contamination from pretraining.
We also note that several influential works studying transformer behaviors similarly focus on synthetic or simplified settings, such as parity functions [1] and modular arithmetic [2,3], which have proven effective for uncovering key insights. In our case, the key insight is *exponential task generalization*: the model generalizes to a combinatorially large set of unseen tasks after training on only a small subset.
While extending this framework to broader NLP tasks is important, it incurs substantial computational cost, which is beyond the scope of this paper. We view our current results as a necessary first step and are actively exploring such extensions.
[1] Bhattamishra et al., *Understanding in-context learning in transformers and LLMs by learning to learn discrete functions*, ICLR 2024.
[2] Power et al., *Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets*, 2022.
[3] Nanda et al., *Progress measures for grokking via mechanistic interpretability*, ICLR 2023.
>The range of $d$ and $k$ is narrow
We agree that a broader range of $d$ and $k$ provides additional support for our findings. However, both $d$ and $k$ appear in the base and exponent when determining the total number of tasks, leading to exponential growth. For instance, in the parity task, even with $d=25$ and $k=10$, the number of distinct parity functions is $\binom{25}{10} \approx 3$ million. We have strengthened our empirical support by adding this new experiment on parity tasks with $d=25$ and $k=10$,
| $d$ | $k$ | \# Training Tasks | \# Total Tasks | Accuracy (%) |
|-----|-----|------------------|----------------|--------------|
| 10 | 5 | 69 | 252 | 98.51% |
| 15 | 7 | 121 | 6,400 | 99.12% |
| 25 | 10 | 241 | 3.2M | 98.60% |
It is still a valid question to ask why we cannot scale to significantly larger $d$. The main challenge comes from context length. Unlike prior works focusing on learning a fixed parity function, where $d$ simply reflected input dimensionality, we are in an ICL setup where the model needs to learn a general algorithm that infers *any* parity function drawn from a large function family. For the problem to be identifiable, the number of in-context examples $N$ must scale linearly with $d$. For example, when $d=100$ and $k=50$, $N=100$ examples result in an input length of roughly $N(d+k) \approx 15,000$ tokens -- which surpasses our available computational resources. This quadratic scaling of context length is the main bottleneck limiting us from larger $d$ and $k$.
>Lack of model scale experiments
In early experiments, we varied the number of layers and heads. We observed that while increasing layers from 2 to 3 improved generalization, going beyond 3 layers yielded no additional benefit. Similarly, increasing attention heads did not improve performance. This motivated our choice of a 3-layer, 1-head model for faster optimization without sacrificing generalization. A supporting plot from our early experiments is available in this [link](https://files.catbox.moe/x1r11m.png).
We also conducted a new set of experiments comparing models with width 192 vs. 384, 3 layers vs. 6 layers, and 1 head vs. 4 heads, across different numbers of training tasks (120 and 160), with fixed input dimension \(d = 15\) and \(k = 3\). The results were consistent, showing comparable performance across configurations, in this table [Link](https://files.catbox.moe/dlsa4k.png).
Importantly, our model already exhibits the near-linear task complexity predicted by theory. Larger models would not meaningfully improve this scaling.
>Would pretraining on structured reasoning datasets improve ARC generalization?
This is an excellent question. Recent work on R1 distillation has shown that strong generalization can be achieved from a relatively small number of SFT samples [4,5]. Inspired by this, we hypothesize that pretraining on structured reasoning datasets could further enhance ARC generalization.
However, this direction requires significant additional work and resources, as pretraining large models on diverse reasoning datasets is far more demanding than our current ICL setting. Nonetheless, we believe this is a promising avenue for future research.
[4] Ye et al., *LIMO: Less is More for Reasoning*, 2025.
[5] Muennighoff et al., *s1: Simple test-time scaling*, 2025.
>Typo in line 933
Thank you for pointing this out. We will fix it. | null | null | null | null | null | null |
FrameBridge: Improving Image-to-Video Generation with Bridge Models | Accept (poster) | Summary: This paper introduces FrameBridge, a novel approach to improve image-to-video (I2V) generation using diffusion models. The authors address the mismatch between the noise-to-data generation process of traditional diffusion models and the I2V task, which can lead to suboptimal results. FrameBridge proposes a bridge model that adopts a data-to-data generative process, enabling better utilization of the given image and enhancing consistency in video generation. The paper also introduces two innovative techniques: SNR-Aligned Fine-tuning (SAF), which adapts pre-trained text-to-video models for I2V tasks, and a neural prior to further improve the model when training from scratch. Experimental results on WebVid-2M and UCF-101 demonstrate that FrameBridge outperforms existing diffusion models in synthesis quality, with significant improvements in FVD scores. The SAF and neural prior techniques also contribute to the enhanced performance of bridge-based I2V models.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes. I checked the basics of DDBM, the parameterization of FrameBridge, SNR-Aligned Fine-tuning, and the training objective in the Appendix.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes. I reviewed the proof, implementation details, and more qualitative results.
Relation To Broader Scientific Literature: This paper mainly focuses on improving the appearance and temporal consistency of video diffusion models and has shown its effectiveness on various models.
Essential References Not Discussed: [1] VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion Models. arXiv:2403.05438
[2] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models. arXiv 2407.10285
These two papers also investigate how to improve the performance of various video diffusion models, and the authors are encouraged to discuss them in this paper.
Other Strengths And Weaknesses: 1. The technical contributions are somewhat limited. The proposed FrameBridge seems like a simple combination of video diffusion models and bridge models.
2. The paper only provides qualitative experimental comparisons with earlier video generation models (e.g., DynamiCrafter), but does not include comparisons with the latest models (e.g., CogVideoX, HunyuanVideo).
3. In Fig.1, the key idea of FrameBridge is to take duplicated images as the initial video. The reviewer is curious: what is the effect of directly adding noise to the initial video and using a video diffusion model to denoise it? Will FrameBridge perform better than this baseline?
Other Comments Or Suggestions: 1. The authors are encouraged to provide more video comparisons in the supplementary materials and specify the names of the pre-trained video diffusion models.
2. The number of decimal places in the table should be kept consistent as much as possible, e.g., Table 1 and Table 2.
Questions For Authors: Please see weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer 2c9b,
We sincerely appreciate your recognition of the strengths and effectiveness of our work and the valuable suggestions to help us improve that. We are happy to have a discussion and hope it could address your concerns. Tables are provided in https://framebridge-icml.github.io/rebuttal-demo-page/.
## Contributions of FrameBridge
**1. Reply to W1**
We would like to gently point out that the contributions of our work are not limited, and our FrameBridge is a novel bridge-based I2V framework:
(1) **SAF technique: Applying bridge to I2V generation is not trivial.** As shown by Table 4 of our paper, directly fine-tuning a **T2V diffusion** model to an **I2V bridge** may cause suboptimal video quality, and we propose the SAF technique to align the two processes. **As far as we know, it is the first attempt to fine-tune bridge models from diffusion models.** A concurrent work [13] also shows the importance of aligning different processes in fine-tuning. **Tables 3 and 4 in our link** provide more evidence for the effectiveness of SAF.
(2) **We propose the neural prior to further improve the performance of bridge models.** As bridge models benefit from informative priors, it is well-motivated and effective to further investigate the choice of prior.
(3) We provide theoretically sound proof and derivations for all the techniques we proposed.
**In conclusion, we are the first to apply bridge models to I2V generation; bridge models alone may not achieve superior generation quality without the two novel proposed techniques (i.e., SAF and the neural prior).**
**2. Discussions about other related works**
We thank the reviewer for pointing out the related works we unintentionally left out, and will add citations and discussions in our revised paper. VideoElevator [14] proposed to improve T2V generation quality with T2I diffusions, achieving high spatial quality and motion smoothness at the same time. Noise Calibration [15] proposed to enhance the video quality of SDEdit [16] with iterative calibration of the initial noise. These are valuable works investigating the improvement of T2V diffusion models, laying foundations for better I2V generation by building enhanced base models and providing viable techniques for improving diffusion-based I2V generation.
## Qualitative Comparisons
**Reply to S1 and W2:** We thank the reviewer for the instructive and valuable suggestions and add a later academic baseline ConsistI2V (TMLR 2024) [7] in our qualitative comparisons. Although we fine-tune a FrameBridge model from CogVideoX, we choose CogVideoX-2B as base model and fine-tune for several hundred GPU hours, which is $< 1\%$ of that used to train CogVideoX-I2V (fine-tuned from CogVideoX-5B), HunyuanVideo and other industry-grade models. We provide more qualitative comparisons in https://framebridge-icml.github.io/rebuttal-qualitative/, and pre-trained models are specified. Please feel free to tell us if you still have any concern, and we are willing to provide more experimental evidence or have a further discussion.
## Comparing FrameBridge with Inference-Time Noise Manipulation
**Reply to W3:** If we understand correctly, the provided baseline is an inference-time noise manipulation, which only changes the prior when sampling and does not modify the diffusion process. (We are not certain whether we have understood correctly; feel free to point out any misunderstanding, and we are willing to provide more experimental results.) This baseline will generate almost static videos, and thus its quality is much inferior to that of FrameBridge and diffusion I2V models.
Table 1. Zero-shot metrics on MSR-VTT and VBench-I2V score.
|Model|FVD|CLIPSIM|PIC|VBench-I2V Score|
|-|-|-|-|-|
|Baseline|644|0.2249|0.56|77.36|
|FrameBridge|**99**|**0.2250**|**0.70**|**85.37**|
Due to the character limit, we kindly invite you to read our response to Reviewer fXNN (**Comparison between bridge model and inference-time noise manipulation** part), where we provide more intuitive explanations. In case you are interested in the comparison between bridge models and flow-matching (which shares some similarities with the mentioned baseline), we kindly invite you to read our response to Reviewer g6Xo (**Comparing bridge and flow-matching** part).
## Other Discussions
**Reply to S2**: Thanks for the suggestion; we will revise that in our paper later. (In this response, we temporarily align the decimal places with the current version.)
Due to the character limit, we cannot elaborate on all the details. Please feel free to tell us if you still have any concerns or questions. We are willing to continue a kind and thorough discussion.
References: See response to Reviewer g6Xo. | Summary: This paper introduces FrameBridge, reformulating the image-to-video task as data-to-data generation through a bridge model. Different from image-to-video as first frame condition (i.e., noise-to-data generation), data-to-data generation achieves better consistency and can well preserve information in the first frame. Besides, the authors further introduce SNR-Aligned fine-tuning and neural prior to improve the performance of FrameBridge.
Claims And Evidence: The experiments in Table 4 and Table 5 demonstrate part of the claim. However, the authors do not directly compare noise-to-data and data-to-data in the ablation studies. Although the authors compare their approach to other noise-to-data paradigms, like ExtDM, the training settings and architectures are not matched.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: I think most of the experimental designs are sound. However, it would be better to directly compare noise-to-data and data-to-data in the ablation study.
Supplementary Material: I have read all parts of the supplementary material.
Relation To Broader Scientific Literature: The contributions are broadly related to distribution transfer between arbitrary two data distributions. This paper resorts to a bridge model to translate data-to-data distribution, while this idea can also be implemented with flow-matching.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Strengths: 1) FrameBridge is well-motivated and the neural prior is a novel idea for initializing data distribution.
Weaknesses: 1) The authors should carefully ablate noise-to-data and data-to-data paradigms with fair training settings. Does data-to-data learn a shortcut to more static motion? It would be better to analyze the differences between these two paradigms to provide more insights for readers.
Other Comments Or Suggestions: 1) The authors should discuss/experiment with flow-matching, since flow-matching can also be adapted to data-to-data transfer. Is there any benefit to using a bridge model over flow-matching?
Questions For Authors: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer g6Xo,
We sincerely appreciate your acknowledgement of the strengths of our work and instructive suggestions to help us improve that. We hope the following discussions could address your concerns and questions. Tables are provided in https://framebridge-icml.github.io/rebuttal-demo-page/.
## Ablations between noise-to-data and data-to-data
We thank the reviewer for the suggestion and provide the experimental results of this ablation.
1. For the fine-tuning setting, we provide the results in **Table 3, 4**. Here we use the same base model, network architecture, training data, and training budget for the noise-to-data and data-to-data models. As we need to fine-tune bridge models from pre-trained diffusion models, it is important to align the two processes with our proposed SAF technique. A concurrent work [13] reaches similar conclusions when fine-tuning from a flow-matching to a diffusion process.
2. For the training-from-scratch setting, since VDT-I2V has the same network architecture (Latte-S/2), training dataset, and training budget as FrameBridge, we would like to gently point out that Tables 3 and 5 in the paper can offer experimental evidence for the ablation between noise-to-data and data-to-data. We appreciate the reviewer's question, which helps us identify the lack of clarity in the setup (details are now dispersed in Sec. 5.2, App. D.3.3, D.3.4), and we will revise it for better clarity.
## Does data-to-data learn a shortcut to more static motion?
This is a thoughtful question. Bridge models will not learn a shortcut to more static motion. Due to the character limit, we kindly invite you to read our response to Reviewer fXNN for more discussions (**Reply to Q2** part).
## Comparing noise-to-data and data-to-data
We would like to offer more intuitive comparisons between the two frameworks.
1. When considering the differences between noise-to-data and data-to-data frameworks, we suggest viewing them from the perspective of the whole forward-backward process instead of pointwise denoising. Theoretically, denoising score matching guarantees a perfect score function for each $(z_t, t)$. But in practice the score function is not perfect at each point, and we should consider the complexity of modeling the whole process.
2. Differences in process: For noise-to-data I2V generation, the image can only be injected into the model by network conditioning. For data-to-data I2V generation, the whole diffusion process is changed to a bridge process, and now we can utilize the image condition in two ways (network conditioning and the process itself). As shown by Figure 2 in our paper, the marginal distribution $z_t$ in the bridge process carries more information than in diffusion. In alignment with our experiments, **modeling the whole bridge process** with the bridge score function could be easier than **modeling the whole diffusion process** with the diffusion score function.
We hope this discussion could provide some intuitive insights for the readers, and we are willing to have a further discussion if there are other concerns or questions.
## Comparing bridge and flow-matching
We conduct experiments with 3 flow-matching models and compare them with diffusion and FrameBridge:
1. Vanilla flow-matching: The noisy latent takes the form of $z_t = (1-t)z_0 + t \epsilon, \epsilon \sim \mathcal{N}(0, I)$
2. Coupling flow-matching with $\sigma=1.0$ [18]: $z_t = (1-t)z_0 + t (z^i + \epsilon)$
3. Coupling flow-matching with $\sigma=0$ [19]: $z_t = (1-t)z_0 + t z^i$
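To make the three interpolants above concrete, here is a minimal NumPy sketch (toy 1-D latents; `vanilla_fm` and `coupling_fm` are illustrative names, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
z0 = rng.normal(size=4)   # toy stand-in for the target video latent
zi = rng.normal(size=4)   # toy stand-in for the first-frame image latent
eps = rng.normal(size=4)  # Gaussian noise

def vanilla_fm(z0, zi, eps, t):
    # 1. Vanilla: z_t = (1 - t) z0 + t * eps -- the prior at t = 1 is pure noise
    return (1 - t) * z0 + t * eps

def coupling_fm(z0, zi, eps, t, sigma):
    # 2./3. Coupling: z_t = (1 - t) z0 + t (zi + sigma * eps)
    # sigma = 1 gives a Gaussian prior centered on the image latent;
    # sigma = 0 gives a deterministic prior equal to the image latent.
    return (1 - t) * z0 + t * (zi + sigma * eps)

# At t = 1 each parameterization reduces to its prior:
assert np.allclose(vanilla_fm(z0, zi, eps, 1.0), eps)
assert np.allclose(coupling_fm(z0, zi, eps, 1.0, sigma=1.0), zi + eps)
assert np.allclose(coupling_fm(z0, zi, eps, 1.0, sigma=0.0), zi)
```

The sketch only checks the endpoint behavior that distinguishes the three priors; training objectives and samplers are omitted.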
Results are shown in **Table 12** of our link. We compare bridge and flow-matching in both methods and experiment results as below:
1. The algorithm and performance of Vanilla flow-matching are quite similar to those of diffusion.
2. Coupling flow-matching with $\sigma=0$ generates almost static videos. In [19], it is used to refine the results of a diffusion model and needs condition augmentation, which is not suitable for single-stage I2V.
3. Coupling flow-matching with $\sigma=1$ performs better than diffusion models. Unlike bridge models, its prior is still Gaussian (with non-zero mean), whereas the bridge prior is a deterministic point. Empirically, as shown by the experiment results, bridge models can utilize the condition more effectively.
Due to the character limit, we cannot elaborate on all the details. Please feel free to tell us if you still have any concerns, and we are willing to have a further discussion.
References (Due to character limit, we can only provide abbreviated forms of some works where there is no ambiguity):
[1] VBench++
[2] Stable Video Diffusion
[3] CogVideoX
[4] Conditional Image Leakage
[5] I^2SB
[6] DDBM
[7] ConsistI2V
[8] RPGDiffusion
[9] MovieDreamer
[10] VideoGen-of-Thought
[11] VideoDirectorGPT
[12] VIDIM
[13] SANA-Sprint
[14] VideoElevator
[15] Noise Calibration
[16] SDEdit
[17] Flow Matching for Generative Modeling
[18] Stochastic Interpolants with Data-Dependent Couplings
[19] Boosting Latent Diffusion with Flow Matching | Summary: This paper proposes a bridge model-based image-to-video generation model. It first formulates image-to-video generation as data-to-data generation instead of noise-to-data generation. Under this formulation, the generation should be easier because it starts from a strong prior of the image instead of the Gaussian prior. Furthermore, two designs are proposed to help finetune a text-to-video model for image-to-video generation: an SNR-aligned parameterization and a neural prior that predicts the mean image using a neural network. The proposed method is evaluated on WebVid-2M, UCF-101 and VBench-I2V, showing advantageous performance over different base models.
## update after rebuttal
The authors have addressed all of my concerns with clarifications and experimental results. Therefore, I am raising my score to Weak accept.
Claims And Evidence: The paper makes three main claims: 1) image-to-video generation can be formulated as a data-to-data generation task, 2) SNR-aligned parameterization helps finetune a text-to-video model to image-to-video generation, 3) neural prior helps train the image-to-video model from scratch. While the first two claims are supported by extensive experimental evidence, I have concerns about the third claim:
* **W1**: The predicted neural prior looks blurry (as shown in Figure 4), which could lead to inferior detail in the generated video. The authors seem to adopt an implementation of concatenating the neural prior with the original image to preserve the details (the second row of Figure 4), but this is more of a hack and weakens the contribution of this design.
Methods And Evaluation Criteria: The proposed method is suitable for image-to-video generation because it translates the noise-to-data generation problem into an easier data-to-data generation form. The evaluation criteria are also quite comprehensive, including FVD, IS, and PIC on UCF-101, FVD, CLIPSIM, and PIC on MSR-VTT, and various metrics in VBench-I2V.
Theoretical Claims: The paper makes two theoretical claims: 1) SNR-aligned parameterization aligns with a VP diffusion process, 2) the neural prior actually predicts the mean value of subsequent frames. The proofs of these two claims are provided in Appendix A, and the proofs look solid.
Experimental Designs Or Analyses: The experimental designs could be improved in the following directions:
* **W2**: Inconsistent base model and setup across experiments. The current ablation studies use a different base model (VDT-I2V) and setup (non-zero-shot UCF-101) than the main experiments, which is confusing. It would be better if the authors could perform ablation studies using the same base model and setup (i.e., DynamicCrafter with zero-shot UCF-101). Even more, it is unclear to me whether the DynamicCrafter baseline is properly initialized in the main experiments, since the convergence curves in Figure 6 do not seem to share the same starting point.
* **W3**: Evaluate on long video generation. The current method is evaluated only on 16-frame (2s) video generation, where the proposed strong image prior is clearly beneficial, as short videos don't deviate much from the starting frame. However, long video generation (e.g. $\ge$ 5s) requires the generation of new content that is significantly different from the starting frame, where a strong image prior could become a limitation. It would be better if the authors could investigate its effectiveness across different generation lengths.
Supplementary Material: The supplementary material includes theoretical proofs, detailed experimental settings, and additional results alongside ablation studies, all of which enhance the overall completeness of the paper.
Relation To Broader Scientific Literature: The paper leverages the diffusion bridge model [Zhou’23] for image-to-video generation, extending its previous success in image-to-image translation [Liu’23, Zhou’23].
Essential References Not Discussed: There are a few seminal works on bridge models that are not cited [De Bortoli'21, Peluchetti'21].
---
[1] De Bortoli, et al. Diffusion Schrödinger Bridge with Applications to Score-based Generative Modeling. NeurIPS 2021.
[2] Peluchetti. Non-Denoising Forward-Time Diffusions. 2021.
Other Strengths And Weaknesses: Strengths:
* The idea of applying the bridge model to image-to-video generation is novel and elegant. It reduces the gap between prior distribution and target distribution in video generation.
* The theoretical analysis on SNR-aligned parameterization and neural prior enhances the completeness of the paper.
Weaknesses:
* **W4**: The proposed method shows a much inferior dynamic degree on VBench-I2V (48.29 vs. 81.95). This is a fundamental limitation associated with the proposed strong static image prior. The authors should carefully discuss this limitation and propose possible solutions.
* **W5**: The results on VBench-I2V seem to be different from the original paper [Huang'24] or the official website. I suggest the authors further check the evaluation protocols and make sure that the baseline results are consistent.
---
[1] Huang, et al. VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models. 2024.
Other Comments Or Suggestions: Judging from the demo samples provided on the website, the generated videos do not have a high dynamic degree or large camera motion. It would be better if the authors could include more demo samples with high motion.
Questions For Authors: * **Q1**: Intuitively, the difference between noise-to-data generation and data-to-data generation is not that great, since the model outputs differ only by a constant offset value $z^i$, which is given in the image condition. It should not be difficult for the model to copy the image value $z^i$ and add it to its output. What do you think contributes to the performance gap, is it due to the closer value range of source and target latents, which makes it more friendly for the neural network?
* **Q2**: How does the model handle scene cuts, since it imposes such a strong static image prior?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer WAVy,
We sincerely appreciate your recognition of the strengths of our work and providing valuable suggestions to help us improve it. We hope the following discussions can address your concerns. Tables and demos are provided in https://framebridge-icml.github.io/rebuttal-demo-page/.
## Performance on VBench-I2V
**1. Reply to W5**
We find there are discrepancies between the results of Table 2 in our paper and the latest version of VBench-I2V [1] and thank the reviewer for bringing this to our attention. We present the modified version in **Table 1**. Note that **two FrameBridge models still achieve higher total score than baselines**.
**2. Reply to W4**
It is a valuable consideration and we will provide our empirical results and theoretical discussions. Before that, we would like to gently point out that the much higher DD of SparseCtrl may lead to inferior overall video quality as shown by **Table 1 and 6**. We kindly invite you to read the discussions in our response to Reviewer fXNN (**Reply to Q2** part) due to the character limit.
We would appreciate it if you could kindly take the time to read that, and it may also provide intuitions about why the samples can "deviate" from the prior, which is also applicable to the scenarios involved in W1 and W3.
## Implementation Details
**Reply to W1**
Firstly, we would like to clarify that the neural prior predicts the **mean video conditioned on the first frame** instead of a "mean image" (i.e., frames can be different). We list the model input, condition, and priors of diffusion and bridge models below.
|Model|Input|Condition|Prior|
|-|-|-|-|
|Diffusion|$z_t,t,c,z^i$|$c, z^i$|$z_T\sim\mathcal{N}(0, I)$|
|Bridge|$z_t,t,c,F_\eta(z^i, c)$|$c, z^i,F_\eta(z^i, c)$|$z_T=F_\eta(z^i,c)$|
Here $F_\eta(z^i, c)\in \mathbb{R}^{F \times h \times w \times d}$ has the same shape as the video latents, and we do not concatenate $F_\eta(z^i, c)$ with the original image when it serves as a prior (as illustrated in our pseudo code, line 846); this design is not a hack. However, both diffusion and bridge models should take the original image as an input to ensure they are learning the conditional score function. A blurry prior will not degrade the quality of the final videos (as shown by our experiments), and bridge models here can be seen as a refiner. (e.g., bridges can be applied to deblurring [5].)
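As a toy sketch of the prior difference summarized in the table above (the latent shape and the names `diffusion_prior`/`bridge_prior` are illustrative, not from the paper): at $t = T$ a diffusion model draws fresh Gaussian noise, while the bridge starts deterministically at the neural prior $F_\eta(z^i, c)$.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 8, 8, 4)  # hypothetical (frames, h, w, channels) latent shape

def diffusion_prior(shape, rng):
    # Diffusion: sampling starts from fresh Gaussian noise, z_T ~ N(0, I)
    return rng.normal(size=shape)

def bridge_prior(neural_prior):
    # Bridge: sampling starts deterministically at F_eta(z^i, c);
    # no noise is drawn at t = T
    return neural_prior

F_eta_out = rng.normal(size=shape)  # toy stand-in for F_eta(z^i, c)

# Two diffusion runs start from different points; two bridge runs from the same one.
d1, d2 = diffusion_prior(shape, rng), diffusion_prior(shape, rng)
b1, b2 = bridge_prior(F_eta_out), bridge_prior(F_eta_out)
assert not np.array_equal(d1, d2)
assert np.array_equal(b1, b2)
```

This only illustrates the start point of sampling; the learned score networks and the backward process are omitted.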
## Experimental Design
**1. Reply to W2**
To address the concern, we conduct ablations of SAF in zero-shot setting and provide results in **Table 3, 4**. Neural prior is designed for training from scratch and the ablation setting is aligned with the main experiments in our paper.
We would like to gently point out that the ablation with VideoCrafter is already provided in Table 1 of the paper (lines 342, 344, 345). We thank the reviewer for the suggestion and will improve the clarity of the ablation studies. For Figure 6, the first point is evaluated when the model has been fine-tuned for 3k steps. When initialized, both models generate noise and thus we omitted it. To address the concern, we provide the CD-FVD curve before 3k steps in **Figure 1**.
**2. Reply to W3**
This is a thoughtful suggestion to help us show the effectiveness of FrameBridge more thoroughly. Due to the constraints of time and computational resources, we temporarily provide the experiment results with 24 frames in **Table 5**.
Conducting experiments with long videos requires significant computational resources: the base model should be large enough to handle long videos, and training with more frames also adds to the computational cost. As far as we know, baselines and benchmarks under this setting (I2V for longer than 16 frames) are not mature enough. Taking all the above into account, we compare FrameBridge with a diffusion I2V model fine-tuned from the same CogVideoX-2B, and find FrameBridge outperforms the diffusion counterpart. Please feel free to tell us if you still have concerns, and we are willing to provide more experimental evidence and discussions.
## Other Discussions
**1. Reply to Q1**: This is an insightful question and we think the input/output parameterization is not the most important difference. We kindly invite you to read our response to Reviewer g6Xo (**Comparing noise-to-data and data-to-data** part.)
**2. Reply to Q2**: We kindly invite you to read our response to Reviewer fXNN (**Reply to Q1** part).
**3. Additional references**: We thank the reviewer for pointing out the related works we unintentionally left out, and will add citations and discussions in our paper later.
**4. More demos of FrameBridge**: We provide more demos generated by FrameBridge in **Table 9-11**, which we hope could address the concern of dynamic degree.
Due to the word limit, we could only elaborate on some important points. Please feel free to tell us if you still have any concerns and we would be happy to have a kind and thorough discussion further.
References: See response to Reviewer g6Xo.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. However, I still have a few remaining concerns:
1. Related to W4: Since the DD metric is not very reliable as you suggested, a user study would be necessary to confirm that there is no disadvantage in terms of dynamic degree. Additionally, the comparison in Table 6 on the rebuttal website between FrameBridge-CogVideoX and previous baselines is not entirely fair, as the improvement may come from differences in the base model rather than the method itself.
2. Related to Q1: The advantage of bridge models over **conditional** diffusion models is not so clear to me. Specifically, in your response to Reviewer g6Xo, you mention that "the marginal distribution $z_t$ in bridge process carries more information than in diffusion". However, $z_t$ in the bridge model is an interpolation of the source distribution, target distribution, and noise, which doesn't necessarily contain more information than in conditional diffusion models where the clean condition is directly encoded.
3. Related to Q2: Using LLM planners to handle scene cuts may not be a good solution, since some transitions are smooth rather than abrupt.
**Update**: Thank you for addressing my concerns, I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer WAVy,
We appreciate your time in reading our response and engaging in the constructive discussions. We would like to provide additional experimental evidence and related discussions, and we hope this could address your remaining concerns.
## Related to W4
To address the concern, we conduct a user study to compare with four models: DynamiCrafter-VC (VC for fine-tuned from VideoCrafter), FrameBridge-VC, FrameBridge-VC-FS, FrameBridge-VC-NC. **The first two models use the same base model (i.e., VideoCrafter) and fine-tuning setups**, and comparisons between them can fairly show the effectiveness of FrameBridge. The last two models are variants of FrameBridge-VC, which we proposed in our initial response to offer possible methods to further improve the dynamic degree of FrameBridge.
We randomly sample 50 prompts from VBench-I2V and generate videos with the mentioned 4 models. For each group of videos (i.e., the 4 videos generated with the same prompt by different models), participants are asked 2 questions:
(1) Rank the videos according to the dynamic degree. Higher rank (i.e. lower ranking number) corresponds to higher dynamic degree.
(2) Rank the videos according to the overall quality. Higher rank (i.e. lower ranking number) corresponds to higher quality.
We recruited 18 participants (which is a reliable setting in this domain [1, 2, 3]) and use Average User Ranking (AUR) as a preference metric (lower for better performance).
|Model|AUR of DD|AUR of overall quality|
|-|-|-|
|DynamiCrafter-VC|2.85|3.04|
|FrameBridge-VC|2.74|**2.26**|
|FrameBridge-VC-FS|**2.12**|*2.34*|
|FrameBridge-VC-NC|*2.29*|2.35|
The results show that:
(1) By comparing DynamiCrafter-VC and FrameBridge-VC, FrameBridge does not have inferior dynamic degree, and the overall quality is better.
(2) By comparing FrameBridge-VC and the two variants, the proposed solutions are effective.
Meanwhile, we would like to respectfully clarify a possible misunderstanding in the comment. We stated "the much higher DD of SparseCtrl may lead to inferior overall quality", but did not intend to challenge that DD is reliable for measuring dynamic degree or that the VBench-I2V total score is reliable for measuring overall quality. The results of the user study are largely consistent with the DD score: DynamiCrafter-VC and FrameBridge-VC have similar dynamic degree, and the dynamic degree of the two variants is much higher.
## Related to Q1
We would like to provide another explanation of the differences from an empirical perspective. Previous works on diffusion show that, although theoretically equivalent, even linear changes in the parameterization of the diffusion process can lead to significant performance gaps in sample quality [5]. The sampling process corresponds to ODE/SDE trajectories, and linear transformations can induce complicated changes in these trajectories, resulting in empirical inequivalence (commonly used samplers are finite-order approximations, which cannot handle these changes losslessly). For bridge models, the start point of the sampling process is shifted to a deterministic point closer to the target data, and the bridge process causes empirically non-trivial changes to the sampling trajectories. The primary contribution of our work is also in the empirical domain. We provide experimental evidence demonstrating that bridge models are empirically more suitable for I2V generation, which aligns with intuition, and propose specific techniques to enhance I2V generation performance.
## Related to Q2
**We offered the possible solution of applying an LLM planner since we understood "scene cuts" to mean abrupt changes. To address the concern thoroughly, we conducted a preliminary experiment (due to time constraints, we used limited computational resources) to show that the bridge model itself has the ability to deal with scene cuts without incorporating other tools**. We manually construct a dataset by concatenating two random videos and changing the middle frames to interpolations of the two videos, simulating 3 types of scene cuts (both abrupt and smooth), and find that FrameBridge can model these scene cuts after training on the constructed dataset. We show some samples in: https://framebridge-icml.github.io/rebuttal-supplementary/ (**Table 1,2,3**).
The intention of this preliminary experiment is to show that scene cuts do not pose a limitation to the bridge model's expressive capacity. Our discussion from the perspective of generative models (i.e., the second part of our reply to Q2 of Reviewer fXNN) is still applicable to this scenario. We also show an example of the sampling process, which we hope could offer some intuitive explanation (**Table 4** in the above link). We would like to focus our discussion on this; a more detailed investigation into scene-cut generation is another valuable topic beyond the scope of our paper.
[1] DynamiCrafter
[2] InstructVideo
[3] Conditional Image Leakage
[4] VBench++
[5] DPM-Solver++ | Summary: This work focuses on the mismatching issue of diffusion models and I2V generation tasks, and propose FrameBridge, which build a data-to-data generation process with bridge model, making the generation procedure more in line with the frame-to-frames nature of I2V task. For fine-tuning scenario, a SNR-Aligned Fine-tuning is proposed to take advantage of pretrained diffusion models. For training from scratch, a trained prior model provides initial prior of non-first frames.
## update after rebuttal
I have carefully read the response from authors and the comments of other reviewers. My major concerns about the adaption to multi-shot generation and limited dynamics were also raised by other reviewers. My concerns have been addressed. I will keep my initial rating.
Claims And Evidence: Yes. This work proposes to model the I2V generation task from frame-to-frames to a data-to-data framework based on the schr\"odinger bridge theory, which is implemented by a SNR aware finetuning or an advanced prior. The effectiveness of the proposed bridge based I2V method is validated on various benchmarks across different metrics.
Methods And Evaluation Criteria: Yes. The benchmarks used to validate are commonly used in this field, including the MSR-VTT and UCF-101, which are standard benchmarks for video generation task.
Theoretical Claims: Yes, the theoretical claims in the appendix is reviewed, including the analysis of parameterization of the bridge optimization objectives and the analysis of SAF and the proposed prior for training from scratch.
Experimental Designs Or Analyses: The proposed method is validated on MSR-VTT, webvid and UCF-101 compared with previous I2V works. The details of these datasets and the evaluation metrics are elaborated in the appendix.
Supplementary Material: The demo website is provided in the supplementary material.
Relation To Broader Scientific Literature: This work is mainly related to I2V video generation task, but implemented based on the schr¨odinger bridge theory. This theory has been explored these years for synthetic tasks like style transfer, I2I generation, etc. The bridge theory is more in line with these data-to-data task, compared with the commonly used diffusion method.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. This paper identifies a clear mismatch issue between diffusion models and the frame-to-frames I2V task, which is well-motivated. A Schrödinger bridge-based method is proposed to deal with I2V task.
2. The proposed SAF and neural prior are practical and effective to take advantage of pretrained diffusion models and enable efficient from scratch training.
3. The paper is well-written and easy to follow. The details of experiments are elaborated.
Weaknesses:
please refer to the question part.
Other Comments Or Suggestions: - In line 168 right side, the $i$ in $(z_T|i, c)$ is not explained.
Questions For Authors: 1. I certainly understand the underlying bridge theory and the proposed method for I2V, considering that some modern video generation methods can process multi-shot or multi-scene video generation, in which scenario the content of multiple shots may be much different with the given first frame, dose the hypothesis of starting from repeated first frame via bridge still make sense? Is there any more improvement or consideration of the issue?
1. In the I2V task, repeating the first frames across the temporal dimension may influence the final motion of the generation result, limiting achieving high dynamic motions. Could the authors discuss about this issue?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer fXNN,
We sincerely appreciate your acknowledgement of our methods and proposed techniques as "well-motivated", "practical and effective". We are happy to engage in a thorough discussion and hope it will address your concerns and questions. Tables and demos are provided in https://framebridge-icml.github.io/rebuttal-demo-page/.
## Reply to Q2
**Firstly, we will offer some empirical solutions to improve the dynamic degree of FrameBridge.** The limited dynamic degree is a well-studied issue for diffusion I2V models [2,3,4]. We adapt two techniques used in diffusion I2V models to FrameBridge: (1) add a frame-stride condition; (2) add noise to the image condition. As demonstrated by our experiments on FrameBridge-VideoCrafter, these techniques are also effective for FrameBridge and can **improve the DD score from 35.77 to 48.62**, which is much higher than the diffusion counterpart (38.69 for DynamiCrafter). (See **Table 2,7,8**; -MI for Motion-Improved.)
**Secondly, we would like to mention that the potentially limited dynamic degree is an empirical issue instead of an inherent weakness of bridge-based methods.** Just as with diffusion models, if we have the perfect bridge score function and a perfect sampler for the backward bridge SDE (Eq. 5 in the paper), we can sample from the true data distribution, which is a mathematically well-established conclusion in [6]. This may be counterintuitive. To understand it **more intuitively (but not that rigorously)**, we would like to provide discussions from three aspects:
1. **Marginal:** Consider the marginal $z_t=a_t z_0 + b_t z_T + c_t \epsilon$, where $a_0=b_1=1, a_1=b_0=c_0=c_1=0$. Different from the diffusion process, the coefficient of noise $c_t$ first increases and then decreases as $t$ varies from $T$ to $0$ in the sampling process. So the latent $z_t$ **will continuously become noisier in the first half of the sampling process**.
2. **Forward-Backward SDE:** In [6] a forward diffusion process
$$
\mathrm{d}z_t = f(t)z_t \mathrm{d}t + g(t)\mathrm{d}w,
$$
is used to construct the forward bridge process
$$
\mathrm{d}z_t=[f(t)z_t + g(t)^2 \nabla_{z_t} \log p_{T,diff}(z_T|z_t)]\mathrm{d}t+g(t)\mathrm{d}w,
$$
Compared with the diffusion process, the additional term $g(t)^2 \nabla_{z_t} \log p_{T,diff}(z_T|z_t)$ gradually "pushes" $z_t$ toward the value $z_T$, and finally the prior $z_T$ becomes a fixed point. Reversely, compared with the diffusion backward process, the backward bridge process will additionally "pull" $z_t$ away from $z_T$.
3. **Comparison between bridge model and inference-time noise manipulation:** In FrameInit, the prior $z_T$ is constructed by combining low-frequency information from a static video with Gaussian noise. (Similar to FrameBridge at first glance.) Then, $z_T$ is denoised with a **trained diffusion model**. Although $z_T$ has been influenced by the low-frequency components of a static video, the diffusion process will sample from it in the same way as a Gaussian noise, **since the training process of diffusion model is "ignorant" of the noise manipulation and still learns the score function of a standard diffusion process $z_t=\alpha_tz_0+\sigma_t\epsilon$, which may cause train-test mismatch and limit the dynamic degree**. (This may also explain why the Dynamic Degree score of ConsistI2V is lower.) However, for bridge model, we change the prior $z_T$ **along with the diffusion process. The model is "aware" of that and will learn the score function of a bridge process $z_t=a_0z_0+b_tz_T+c_t\epsilon$ with bridge score matching loss.**
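As a small numerical illustration of point 1 above, assuming the Brownian-bridge coefficients $a_t = 1 - t$, $b_t = t$, $c_t = \sqrt{t(1-t)}$ with $T = 1$ (one common instantiation satisfying the stated boundary conditions; the actual schedule used in the paper may differ):

```python
import numpy as np

# Bridge marginal z_t = a_t z_0 + b_t z_T + c_t eps with Brownian-bridge
# coefficients (T = 1): a_t = 1 - t, b_t = t, c_t = sqrt(t (1 - t)).
a = lambda t: 1.0 - t
b = lambda t: t
c = lambda t: np.sqrt(t * (1.0 - t))

# Boundary conditions from the text: a_0 = b_1 = 1, a_1 = b_0 = c_0 = c_1 = 0.
assert a(0.0) == 1.0 and b(1.0) == 1.0
assert a(1.0) == 0.0 and b(0.0) == 0.0 and c(0.0) == 0.0 and c(1.0) == 0.0

# Traversing the sampling direction t: 1 -> 0, the noise coefficient first
# rises (latents get noisier) and then falls back to zero.
ts = np.linspace(1.0, 0.0, 101)
cs = c(ts)
peak = int(np.argmax(cs))
assert 0 < peak < len(ts) - 1               # maximum lies strictly inside (0, 1)
assert np.all(np.diff(cs[:peak + 1]) >= 0)  # noisier in the first half
assert np.all(np.diff(cs[peak:]) <= 0)      # cleaner in the second half
```

Under this choice the noise level peaks at $t = 1/2$, matching the "first noisier, then cleaner" intuition described above.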
## Reply to Q1
This is a valuable question; multi-shot/scene I2V generation is a highly demanding task that currently lacks mature benchmarks or baselines. It will also be challenging for FrameBridge, as repeating the first frame as the prior may not be an appropriate design. Inspired by RPGDiffusion [8] and other previous works on long video generation [9, 10, 11], we propose a possible hierarchical solution as below:
1. Use LLM planner to divide the whole video into several sub-video clips $v_1, v_2, ..., v_M$, and generate image prompts for the start of each clip.
2. Generate the start of each clip $i_1, i_2,..., i_M$ with a T2I diffusion model.
3. For $v_t, 1 \leq t \leq M - 1$, construct a neural prior by interpolating between $i_t, i_{t + 1}$ and train a FrameBridge for video interpolation (which is also a typical task for I2V generation [12]) to generate it. Generate $v_M$ with the static prior constructed by $i_M$ using FrameBridge.
Our consideration is to generalize the neural prior with LLM planner and use FrameBridge to leverage the locally reliable prior.
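The three steps above could be prototyped as follows; every function name and latent shape here is a hypothetical stand-in for the LLM planner, the T2I model, and FrameBridge:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, CLIP_LEN = 8, 16  # toy sizes; real latents are much larger

def llm_plan(prompt, num_clips):
    # Step 1 (stub): split the video into clips, one image prompt per clip.
    return [f"{prompt} (shot {i + 1})" for i in range(num_clips)]

def t2i(image_prompt):
    # Step 2 (stub): latent of the start frame of a clip.
    return rng.standard_normal(LATENT_DIM)

def interp_prior(start, end):
    # Step 3: neural prior for one clip, linearly interpolating between
    # the boundary frames, one latent per frame of the clip.
    w = np.linspace(0.0, 1.0, CLIP_LEN)[:, None]
    return (1.0 - w) * start + w * end

M = 3
image_prompts = llm_plan("a day at the beach", M)
starts = [t2i(p) for p in image_prompts]
priors = [interp_prior(starts[i], starts[i + 1]) for i in range(M - 1)]
priors.append(np.tile(starts[-1], (CLIP_LEN, 1)))  # static prior for v_M
# FrameBridge would then sample each clip v_i from its prior.
```

The final clip reuses the static-prior construction from the original FrameBridge, while the earlier clips get interpolated priors bounded by consecutive start frames.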
## Other Discussions
Line 168 contains a typo and should read $(z_T | z^i, c)$. We thank the reviewer for pointing it out and will correct it in the revision.
Due to the word limit, we cannot elaborate on all the details; please feel free to let us know if you have any remaining concerns.
References: See response to Reviewer g6Xo. | null | null | null | null | null | null |
Explicit Preference Optimization: No Need for an Implicit Reward Model | Accept (poster) | Summary: The paper presents a new objective for preference optimization that replaces DPO-like objectives. The new objectives (EXPO) are designed to address two issues with DPO-like objectives: (1) the learned policy shifting away from the reference policy even when the reference policy closely matches the optimal policy, and (2) failing to smoothly interpolate between the optimal and reference policies, where the model's behavior should more closely match that of the reference policy as lambda increases. The analysis presents a generalized Quasi Preference Optimization (QPO) framing for DPO-like objectives and then breaks down how the structure of the objective contributes to the two limitations. The two EXPO algorithms are then introduced, one a combination of multiple objectives and the other a regression objective. An empirical study evaluates how well EXPO meets the two goals: not moving the learned policy away from a high-quality reference policy, and smoothly interpolating between the optimal and reference policies based on the value of lambda. The evaluations are conducted on a synthetic dataset and two real-world preference datasets. The conclusions are that EXPO better exhibits the two properties than the QPO objectives and has a higher win rate on the real-world tasks.
Claims And Evidence: The main claim of the paper is that the EXPO preference objective better meets the two goals of preserving the things the reference model already does well and smoothly interpolating between the reference and optimal policy that the QPO objectives. Experiments on synthetic data are used to specifically test how well EXPO meets the two goals relative to a subset of specific QPO objectives. The EXPO and QPO methods are then assessed using real data and win rate compared to the chosen responses in the corresponding dataset's test split.
Not something mentioned in the paper, but from the results it looks like a benefit of the EXPO objectives is that they are less sensitive to the value of the lambda hyper-parameter than the QPO objectives. This means that there are values of lambda for which the QPO results closely match those of the EXPO-learned models. The results in Figure 4 support this. However, the results in Figure 5 (win rate comparisons) do not account for the performance differences with respect to the value of lambda in the case of "real world" data. So it is unclear how beneficial the additions of EXPO are in real-world scenarios where lambda is carefully tuned.
Following the paper and the claims requires following a lot of detailed proofs. It would be beneficial to include a higher-level intuitive explanation for the differences between the QPO and the EXPO objectives.
It is difficult to assess the claims made around Figure 8 without lines marking the optimal policy. If the lines for the optimal policy match those in Figure 3 then it looks like all methods converge to the optimal policy, not reference.
Methods And Evaluation Criteria: It is not clear to me what exactly about the QPO objectives is problematic and exactly how EXPO addresses the issue.
In motivating the QPO generalization, it would be beneficial to explicitly spell out (possibly in the appendix) how IPO and DPO are expressed in terms of the QPO objective.
Theoretical Claims: I did not. My background is more empirical than theoretical.
Experimental Designs Or Analyses: - Given that the similarity in performance between the QPO and EXPO methods varies with the value of lambda, it would be beneficial to show how the different values of lambda impact performance on the real world tasks.
- It is not clear to me how Figure 3 addresses the interpolation goal. The results are only provided in relation to the optimal policy, but the interpolation is in relation to the reference and the optimal policies.
Supplementary Material: I reviewed appendix A - E.
Relation To Broader Scientific Literature: The paper attempts to address issues faced with DPO-like objectives.
Essential References Not Discussed: The correct papers appear to be discussed.
Other Strengths And Weaknesses: - The paper is well written and the text is easy to follow.
Other Comments Or Suggestions: - It would be beneficial to either add the distance between the reference and optimal policies to the "Good Cases" plot in Figure 4 or describe where it is.
Questions For Authors: 1. How well tuned are the lambda parameters for the Anthropic-HH and IMDB experiments? What values were used and how were they selected? This question pertains to my concerns around the QPO and EXPO methods having similar performance in Figure 4 for certain values of lambda.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are appreciative of the reviewer's helpful comments.
**Comment:**
*For the right $\lambda$, QPO can closely match EXPO ... results of Figure 4 support this ... so how beneficial is EXPO in real-world cases with tuned $\lambda$.*
**Response:**
Actually, there are no values of the QPO hyperparameter that can *simultaneously* match EXPO in both the good and bad input prompt regimes as depicted. For example, consider IPO: for the good cases $\log \lambda > 0$ is optimal, while for the bad cases $\log \lambda \approx -1$ is optimal. We provide another illustration of this point at this [link](https://anonymous.4open.science/r/ICML2025_rebuttal-15908/Figure4_extention_preservation.pdf); the only change is adjusting $\pi^*$. And overall, we regard reduced sensitivity to $\lambda$ as a positive even in real-world scenarios, as hyperparameter tuning can be expensive.
**Comment:**
*Ambiguity of Figure 8 ... it looks like all methods converge to the optimal policy, not reference.*
**Response:**
Figures 7 and 8 are meant to be viewed in tandem (and we can add explicit text to better convey their relationship). From Figure 7, when $\lambda$ is large we observe training convergence to $\pi_{ref}$ values (0.2 and 0.4). Meanwhile in Figure 8 we show fully converged solutions for a range of $\lambda$ values varying from small to large. For large $\lambda$ (right side of each plot) we observe that all models produce $\pi_{ref}$ as expected. Conversely, for small $\lambda$ (left side of each plot) we see that only EXPO maintains the optimal policy. Again, we can easily clarify this in a revised appendix.
**Comment:**
*What exactly about the QPO objectives is problematic and exactly how EXPO addresses the issue ... higher-level intuitive explanation would be helpful.*
**Response:**
Good suggestion. We agree that accessible explanations are valuable for conveying some of the subtle messages associated with our work. In this regard, the paragraph beginning on Line 172 (righthand column) may be helpful. There we provide a more intuitive perspective for the specific QPO cases of DPO and IPO; similar ideas hold more broadly, but are decidedly more involved to present, hence the technical aspects of our submission. There is also another natural entry-point for understanding where QPO objectives begin to fall short. For simplicity, consider only the special case of DPO, which is advertised as producing the minimal solution to equation (4) instantiated with an optimal reward. Next observe how (4) behaves as $\lambda \rightarrow 0$. Basically, once the KL term is ignored, the remaining loss will be trivially optimized by a degenerate policy assigning *all* probability mass to a single response; see equation (43) for the derivation. In contrast, the EXPO loss is explicitly designed to reflect the *full* optimal BT policy in this same limit (not just a mode). Appendix E.2 provides further details regarding why this distinction is important.
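The $\lambda \rightarrow 0$ behavior described here can be checked numerically via the well-known closed-form minimizer of the KL-regularized objective over a discrete response set, $\pi_\lambda(y) \propto \pi_{ref}(y)\exp(r(y)/\lambda)$; note this is the standard identity for the regularized optimum, not the EXPO loss itself, and the rewards below are toy values:

```python
import numpy as np

def pi_lambda(r, pi_ref, lam):
    # Closed-form minimizer of  E_pi[-r] + lam * KL(pi || pi_ref)
    # over a discrete response set: pi(y) ∝ pi_ref(y) * exp(r(y) / lam).
    logits = np.log(pi_ref) + r / lam
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

r = np.array([1.0, 0.8, 0.2])       # toy rewards for three responses
pi_ref = np.array([0.2, 0.5, 0.3])  # toy reference policy

small = pi_lambda(r, pi_ref, lam=1e-3)  # degenerate: one-hot on argmax r
large = pi_lambda(r, pi_ref, lam=1e3)   # recovers pi_ref
```

As $\lambda$ shrinks, all probability mass collapses onto the single highest-reward response (the mode-collapse failure discussed above), while large $\lambda$ reproduces $\pi_{ref}$; a loss satisfying SIC would instead target the full optimal BT policy in the small-$\lambda$ limit.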
**Comment:**
*... it would be beneficial to explicitly spell how IPO and DPO are expressed in terms of the QPO objective.*
**Response:**
Great suggestion, and this is easy to include in a revised appendix.
**Comment:**
*Given that the similarity in performance between the QPO and EXPO methods varies with lambda, it would be beneficial to show how the different lambda values impact performance on the real world tasks.*
**Response:**
As we have clarified in a previous response above, QPO and EXPO performance is quite distinct as $\lambda$ is varied across our synthetic experiments. And while we do agree that further ablations with real-world data would be interesting, such testing incurs significant time and computational cost that we unfortunately cannot accommodate. That being said, in Appendix B.3 we mention $\lambda$ stability for EXPO under a limited testing range.
**Comment:**
*It is not clear to me how Figure 3 addresses the interpolation goal ...*
**Response:**
Indeed Figure 3 only shows the interpolation extreme on the optimal policy side (i.e., when $\lambda$ is small). However, Figures 7 and 8 in Appendix B.2 complete the full picture, as we ran out of space in the main text. For reference, there is also a pointer on Line 346 (righthand column).
**Comment:**
*Good to add the distance between the reference and optimal policies to the "Good Cases" plot in Figure 4 ...*
**Response:**
By design, the distance between $\pi_{ref}$ and $\pi^*$ is zero in Figure 4, top plot. Please also see Line 355 (righthand side) in the text.
**Comment:**
*How well tuned is lambda for Anthropic-HH and IMDB experiments ... relationship with Figure 4.*
**Response:**
Please see our responses above regarding Figure 4, and the distinct performances of QPO vs EXPO once both good and bad cases are considered collectively. In terms of tuning $\lambda$ on real-world data, we only explored a few distinct values because of computational cost; see also Section B.3.
---
Rebuttal Comment 1.1:
Comment: Sorry for the delay in this message. I posted it in the wrong spot.
The authors have addressed a number of my concerns, so I have raised my score accordingly
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's continued engagement with our paper and decision to raise the score after viewing the rebuttal. In this regard, we notice that the score reported in Openreview has not yet changed as the discussion period is soon drawing to a close. Given that the reviewer mentioned originally posting in the wrong spot, we were politely wondering if the reviewer may have inadvertently missed updating the score for our paper? Thanks for your consideration. | Summary: This paper proposes a new direct preference optimization (DPO) method. The authors first formulate quasi-convex generalizations to unify some of existing DPO based methods. Then, they identify two limitations of existing DPO based methods under this formulation. One limitation is the failure to preserve optimal policies and the other one is suboptimal interpolation. To address these two limitations, the authors propose two new preference optimization objectives. Experiments on synthetic and real-world datasets validate the effectiveness of the proposed methods.
## update after rebuttal
The authors' responses address my concerns. I am leaning towards accept.
Claims And Evidence: 1. I am not convinced by the first limitation of existing approaches. The considered setting seems a bit synthetic to me as it only considers two types of prompts: good and bad, where $\pi^*=\pi_{ref}$ for good prompts. However, it is very likely that $\pi_{ref}$ is not optimal for all the prompts in the datasets. A more realistic setting is that the authors can prove the proposed methods can improve over $\pi_{ref}$ over all the prompts without requiring $\pi^*=\pi_{ref}$ for good prompts.
2. The synthetic evaluations support the limitation claim and the advantage of the proposed method over the existing method. However, more evidences are needed to demonstrate using the real-world dataset. For example, to address the second limitation. the authors can show the diversity of the proposed methods over existing methods while ensuring good generation quality.
Methods And Evaluation Criteria: The proposed methods are well motivated by the identified two limitations. However, my major concern is that the first limitation seems unreasonable to me. In practice, it is very likely that $\pi_{ref}$ is not optimal for all the prompts in the datasets. A more realistic setting is that the authors can prove the proposed methods can improve over $\pi_{ref}$ over all the prompts without requiring $\pi^*=\pi_{ref}$ for good prompts.
Theoretical Claims: I didn't check the proof.
Experimental Designs Or Analyses: The synthetic evaluations effectively highlight the advantages of the proposed methods in overcoming the two identified limitations. However, additional evidence is required to demonstrate these benefits on real-world datasets. For instance, to address the second limitation, the authors could showcase the diversity of the proposed methods compared to existing approaches while maintaining high-generation quality.
Supplementary Material: I reviewed Section A, B and C.
Relation To Broader Scientific Literature: The paper is broadly related to AI alignment.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: Please refer to the above questions for comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the constructive comments, and address the main points as follows (grouping where appropriate).
**Comment:**
*First/Main limitation: I am not convinced by the first limitation of existing approaches. The considered setting seems a bit synthetic to me as it only considers two types of prompts: good and bad, where* $\pi_{ref} = \pi^*$ *for good prompts. However, it is very likely that $\pi_{ref}$ is not optimal for all the prompts in the datasets. A more realistic setting is that the authors can prove the proposed methods can improve over* $\pi_{ref}$ *over all the prompts without requiring* $\pi_{ref} = \pi^*$ *for good prompts.*
**Response:**
We remark that it is commonplace for theoretical results to involve simplifying assumptions, otherwise many analytical steps become infeasible. That being said, in the present circumstance the point is not necessarily to reflect the most realistic scenario possible. Instead, the goal is to show that even in an idealized case DPO-like methods perform below expectations, the implication being that such subpar performance is unlikely to disappear even under broader conditions. We also reiterate that our assumption (for Theorem 3.1) is not that $\pi_{ref} = \pi^*$ for all prompts, but rather, only for so-called ideal good cases.
**Comment:**
*Second limitation: The synthetic evaluations support the limitation claim and the advantage of the proposed method over the existing method. However, more evidences are needed to demonstrate using the real-world dataset. For example, to address the second limitation. the authors can show the diversity of the proposed methods over existing methods while ensuring good generation quality.*
**Response:**
As demonstrated in Figure 5 of the submission, our proposed method achieves high-quality generation on multiple real-world datasets. However, to quickly explore diversity per the reviewer's suggestion with limited time during the rebuttal period, we evaluate using the basic setup, sampling method and metrics from Section 5.3 of [this paper](https://openreview.net/forum?id=2cRzmWXK9N) (although some experimental details, like max_token, are not reported in the paper; for these we adopt our settings). Results are shown in the new table below, where generally EXPO displays higher diversity relative to DPO. We also normalize the entropy by token length to avoid bias towards longer responses; other metrics are already implicitly normalized w.r.t. length.
| | **Normalized Entropy** $\uparrow$ | Self-Bleu $\downarrow$ | Distinct-1 $\uparrow$ | Distinct-2 $\uparrow$ |
| --- | --- | --- | --- | --- |
| DPO | 0.033 | 0.93 | 0.018 | 0.29 |
| EXPO (comp) | **0.036** | 0.90 | **0.025** | **0.35** |
| EXPO (reg) | 0.035 | **0.88** | 0.023 | 0.33 | | Summary: This paper works broadly on offline preference optimization methods, the most canonical of which is DPO, discusses a common weakness shared by all of these methods, and proposes a method that fixes this problem. The paper argues that DPO's uniform regularization to the reference policy creates problems: assume the space of prompts can be divided into two subsets, **set A**: one where the reference policy (also usually the policy one starts with during fine-tuning) is already close to optimal, and **set B**: one where the reference policy performs poorly. The paper then shows that DPO and its variants in general improves performance of the trained policy on $B$, but it inevitably comes with performance degradation on $A$. The paper then proposes their method, EXPO, that removes the reparaterization trick (designing an implicit reward function based on log probabilities), and shows that their method can improve performance on set $B$ while preserving performance on **set A**.
# Updates after Rebuttal
The authors have answered my concerns. Thanks for the additional experiments, specially the large scale experiments on Llama-3-8B.
I maintain my score at 3, and recommend acceptance of this paper.
Claims And Evidence: The claims made in this paper is clear and has convincing evidence.
Methods And Evaluation Criteria: The proposed methods make sense to me. I think the paper is lacking in benchmark datasets and models that it shows results on. The paper mentions multiple times that they use the same model and same datasets used in the original DPO paper. While that is great, the DPO paper is 2 years old at this point, and the same level of evaluation is no longer sufficient in my opinion. The LLM used in real-world experiments is Pythia-2.8B, which is very outdated by this point.
I would request the authors to show evidence that their method works with:
1. At least one more dataset, say UltraFeedback [1], which has ~60K prompts and (preferred, dispreferred) pairs, so it has reasonable scale.
2. At least one more LLM of 8B parameter size, say Llama-3.1-8B [2]. (It is possibly good to fine-tune the base model instead of the instruction tuned model, to show improvements resulting from EXPO).
The paper mentions that they use the same evaluation setup as the original DPO paper. But I would like to point out that the original DPO paper is around 2 years old now, so a paper submitted for review now should use more recent benchmarks/models for it to be accepted.
Theoretical Claims: I took a brief look over the theoretical proofs, but since my expertise is not theory, I cannot comment on the correctness of the proofs. However, the results surely are interesting and relevant.
Experimental Designs Or Analyses: Yes, I checked the soundness of the experiments, they seem correct to me and follows what is common in relevant literature.
Supplementary Material: I briefly looked over the loss formulations for EXPO in the appendix. I did not check any other part or the attached zip file.
Relation To Broader Scientific Literature: The paper deals with offline preference optimization methods for fine-tuning language models, a canonical example of which is direct preference optimization or DPO [3], or more broadly other direct alignment methods [4]. The paper studies specific problems with this class of methods, some of which were discussed by these papers:
1. DPO, even the online variant, underperforming PPO due to poor regularization [5]
2. DPO often reduces the log-likelihood of the preferred response, leading to unintentional unalignment [6, 7, 8]
3. DPO's offline regularization to the reference policy can be problematic and fixed using an online way of regularization [9]
Essential References Not Discussed: There have been prior work that has discussed the problems with DPO's implicit reward formulation. For example, [10] should be cited and discussed, but it is not. [11] should also be cited. I would request the authors to find more papers that talk about implicit reward formulations.
Also, all the papers mentioned in **Relation To Broader Scientific Literature** section above should be cited in related works, but it does not seem to be case.
Other Strengths And Weaknesses: # Strengths
Many other recent works have also found out different problems with DPO and similar methods. This paper uncovers a major problem with DPO-like methods, shows it both theoretically and empirically. I like the paper's theoretical results.
# Weaknesses
1. The biggest weakness is the scale of the LLM experiments in this paper. An outdated 2.8B LLM does not tell practitioners enough about how much to trust this paper's results.
2. It is pretty clear from recent papers such as Deepseek [12] and other works [5, 6] that offline preference learning algorithms simply don't perform as well as their online counterparts. Another recent work such as [13] confirmed this for RLHF, where we do not have access to gold reward during training (unlike Deepseek, which trains on verifiable problems with gold reward). It is very unclear what would be the value of this paper (or in general any other paper that gives an n-th variation of an offline preference learning algorithm), given that the community might shift to more online methods like GRPO [14] anyway.
**If the authors can show that their method works with a newer model with around 8B parameter count, I would be very happy to increase this paper's score to 4.**
Other Comments Or Suggestions: No other comments that I can think of.
Questions For Authors: # Questions
1. How does the memory and computational cost compare between EXPO, DPO and other key variants? It would be good to report these numbers on an 8B model against a standard dataset.
2. How does the log-probability of preferred and dispreferred responses change throughout training? Essentially I am interested in Figure 17 of [6] but for EXPO.
3. Is there any intuitive explanation of the difference between WIC and SIC conditions? I am not entirely sure if I grasp their difference and significance.
4. Does this paper's main observation, that performance improvement on set of bad prompts also result in performance decrease on set of good prompts, also happen for online preference optimization techniques? Why or why not? (**Note: this is outside the scope of this paper and the authors should not be penalized for discussing this in their paper, but I am curious and would love it if they have any explanation.**)
5. Most preference learning algorithms work on the single-turn setting. However, LLMs are being increasingly used in more and more multi-turn setting. Algorithms like DPO has a simple multi-turn extension: one can just take loss over the agent action tokens and discard the log-probabilities of the environment tokens, see [15]. Is there a way to extend EXPO to the multi-turn setting?
# References
[1] UltraFeedback: Boosting Language Models with Scaled AI Feedback, https://arxiv.org/abs/2310.01377. **Dataset link: https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized**
[2] The Llama 3 Herd of Models, https://arxiv.org/abs/2407.21783. **Model link: https://huggingface.co/meta-llama/Llama-3.1-8B**
[3] Direct Preference Optimization: Your Language Model is Secretly a Reward Model, https://arxiv.org/abs/2305.18290
[4] Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms, https://arxiv.org/abs/2406.02900v1
[5] Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study, https://arxiv.org/abs/2404.10719
[6] Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data, https://arxiv.org/abs/2404.14367
[7] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization, https://arxiv.org/abs/2410.08847
[8] Iterative Reasoning Preference Optimization, https://arxiv.org/abs/2404.19733
[9] The Importance of Online Data: Understanding Preference Fine-tuning via Coverage, https://arxiv.org/abs/2406.01462
[10] On the Limited Generalization Capability of the Implicit Reward Model Induced by Direct Preference Optimization, https://arxiv.org/abs/2406.01462
[11] Bootstrapping Language Models with DPO Implicit Rewards, https://arxiv.org/abs/2406.09760v2
[12] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, https://arxiv.org/abs/2501.12948
[13] All Roads Lead to Likelihood: The Value of Reinforcement Learning in Fine-Tuning, https://arxiv.org/abs/2503.01067
[14] DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models, https://arxiv.org/abs/2402.03300
[15] From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function, https://arxiv.org/abs/2404.12358
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out that our paper is clear, based on convincing evidence, and supported by interesting and relevant theory. The reviewer also provided many constructive comments; with limited space to reply, we prioritize the main critiques as follows:
**Comment:**
*Methodology is sound, but warrants testing on larger scale models, e.g., in 8B parameter range as opposed to 2.8B range.*
**Response:**
We agree with the reviewer that the community is trending towards experimentation with larger models, particularly on less theoretically driven papers. But even so, many recent papers still lean heavily on 2.8B-sized models or smaller for analyzing preference methods and drawing comparative conclusions between approaches. As representative examples, the DeepSeek GRPO paper [14, Figure 5] uses a 1.3B model to compare online vs offline preference learning; likewise 1.4B-2.8B models are used for a similar purpose in [9, Table 1] and [13, Figure 3]. We also note that other more theoretical works such as the IPO paper (Azar et al.,2024) contain no real-world experiments at all and yet remain highly influential to the field. In any event, we are not disputing the value of larger models; instead we are merely advocating that high quality contributions are still possible without them, especially when granting that compute resources and timing constraints are often limiting factors (as is the case for us within the rebuttal period).
Even so, during the rebuttal window we tried fine-tuning a Llama-3.1-8B model using Lora to reduce the computational burden, and indeed EXPO is better than DPO in this limited setting. However, complete testing with further baselines or a full parameterization is unfortunately not feasible at this time.
**Comment:**
*Recent evidence suggests that online preference learning generally performs better ... so relevance of offline approaches may be waning.*
**Response:**
This is a very reasonable point to consider, and addressing it helps to further advance the relevance of our paper. In this regard, the suggested references from the reviewer are extremely helpful. Our overall response is three-fold:
1) The inference (from references [5,6,12,13,14] and others) that online preference learning is generally superior is to date mostly derived based on testing with QPO approaches, and usually DPO alone (or in the case of [6], an even earlier rejection-sampling approach). Hence the open possibility remains that offline approaches that mitigate DPO deficiencies (like our EXPO) might reset the scales relative to online alternatives. And crucially, many of the specific arguments provided for why DPO is outperformed by online approaches like PPO do *not* apply to our EXPO. For example, in [5] it is argued that a key DPO limitation is that it does not exploit unlabeled prompt-only data, which can then lead to undesirable biases. However, as we point out on Lines 322-324, EXPO can naturally incorporate such unlabeled prompt-only data.
2) Many references that argue in favor of online preference learning nonetheless suggest that DPO is still valuable when reformulated as an online or related on-policy iterative approach; see for example [6,9,10,11,13]. Some (e.g., [11]) even explicitly advocate for generalizing beyond DPO to online versions of IPO and the like. Hence our analysis of QPO models in general, and EXPO in particular, remains quite relevant in a wide variety of revised online settings.
3) Even if we concede that offline preference learning is sub-optimal, in many scenarios it is still considerably simpler, possibly even with convergence guarantees. As such, for prototyping or resource-constrained environments offline learning may enjoy some durable advantages.
**Comment:**
*Relation to literature, missing references ... [10] should be cited and discussed, but it is not. [11] should also be cited ...*
**Response:**
References [10] and [11] are empirically-driven studies that advocate strongly for iterative versions of DPO, e.g., exploiting DPO implicit rewards over multiple on-policy rounds. See also our comments above that include reference to [10,11]. Overall, these works are quite complementary to our own, and we can definitely cite them in our revision. Likewise for other references the reviewer has kindly provided.
**Comment:**
*Questions 1-5 ...*
**Response:**
1. Memory consumption is the same as DPO.
2. Using unlabeled samples for (16) may help some, but further study is needed.
3. SIC and WIC differ strongly as $\lambda$ becomes small. For SIC the minimal loss equates to an optimal preference distribution, while for WIC we only obtain the *mode* of an optimal distribution. Appendix E.2 contains further details.
4. We have not tested this (and it could depend on the implementation), but intuition suggests that EXPO may still provide a benefit in some cases.
5. Multi-turn extensions of EXPO represent an interesting direction for future work.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thanks a lot for your detailed answers within the 5000 character limit.
In general, I think the responses are satisfactory. I agree with the authors that running experiments on an 8B model might not always be feasible, and therefore would recommend not holding this against the paper while deciding its acceptance or rejection.
However, it is still definitely true that certain phenomena appear on larger models, and hence it is important to see if the improvements hold on an 8B model. I hope the authors can add that to a later revision of this work.
Other than that, I have no further comments at this moment. The paper still requires some additional experiments, so I will keep my score. Kudos to the authors however, for a cleanly written paper!
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the detailed feedback and continued engagement with our rebuttal. Regarding the suggested LLaMA-3 8B (larger-scale) experiment, we can now provide additional preliminary results, to the extent feasible within the time constraints. Specifically, starting from the full base model, we performed SFT training on the Ultrachat-200k dataset; subsequently, we conducted preference fine-tuning with both DPO and EXPO. All training updates the full LLaMA-3 8B parameterization (i.e., no LoRA, unlike in our previous response).
For DPO, we set $\lambda$ (referred to as $\beta$ in the DPO paper) to 0.01 and used a learning rate of 5e-7 as suggested by (Meng et al., 2024) for optimal performance. For EXPO (reg version), we simply selected $\lambda = 0.2$, the same value we used for the Anthropic HH dataset with the Pythia 2.8B model as reported in our original submission (note that this indicates some further degree of stability across scenarios). We also maintained the same learning rate of 5e-7 for EXPO as applied to DPO. We used Alpaca-Eval2 to obtain length-controlled WinRate (a common metric for Alpaca-Eval2). Results are shown below:
| DPO  | EXPO (reg version) |
|------|--------------------|
| 16.7 | 20.3               |
The discrepancy between our DPO results above and those reported in (Meng et al., 2024) may be attributed to: (1) differences in batch size due to computational constraints, and (2) updates to AlpacaEval upon which evaluations depend. We hope these results help to further address reviewer scale-related comments. | Summary: This paper proposes a framework for aligning language models with human preferences without relying on implicit reward models. EXPO addresses limitations of existing methods like DPO and IPO, which suffer from suboptimal regularization and interpolation issues. The authors propose two variants: a compositional loss and a regression-based loss, which explicitly balance human preferences and reference policy adherence. Theoretical analysis shows EXPO preserves optimal policies in regions where the reference model performs well while improving elsewhere, and it satisfies strong interpolation criteria (SIC). Experiments on synthetic and real-world datasets (Anthropic HH, IMDb) demonstrate EXPO outperforms DPO, IPO, and other baselines in win rates and policy preservation.
Claims And Evidence: EXPO avoids suboptimal regularization is validated by synthetic experiments showing EXPO converges to BT-optimal policies, while DPO/IPO converge to degenerate solutions.
EXPO satisfies SIC is validated by empirical results show EXPO interpolates smoothly between $\pi_{ref}$ and $\pi^*$, unlike DPO/IPO.
Methods And Evaluation Criteria: EXPO has been tested on Anthropic Helpfulness and Harmlessness (HH) preference dataset and IMDb dataset, as well as synthetic experiments with controlled preference distributions.
Theoretical Claims: Theorems 3.1 and 3.6 are derived, showing QPO methods (DPO/IPO) cannot preserve optimal policies or satisfy SIC. Propositions 4.2–4.3 demonstrate EXPO’s advantages. I checked the proof of Theorem 3.1.
Experimental Designs Or Analyses: I checked both the LLM tasks and the synthetic data tasks, and the ablation studies and did not find any issues.
Supplementary Material: I reviewed proof of Theorem 3.1 and the ablation studies in the supplementary materials.
Relation To Broader Scientific Literature: EXPO builds on DPO/IPO but eliminates their dependency on implicit rewards via explicit regularization. It connects to RLHF’s KL-regularization but avoids multi-stage training. The work fills a gap in ensuring policy preservation and interpolation, addressing underappreciated limitations in prior preference optimization methods.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
Novel explicit regularization approach enhances interpretability and control.
Weaknesses:
Limited experiment results on real world tasks. Would like to see its performance on Llama-3-8B on AlpacaEval 2.
Other Comments Or Suggestions: The same algorithm name ExPO has already been used in "Weak-to-Strong Extrapolation Expedites Alignment".
Questions For Authors: Can EXPO handle non-BT preference models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for checking many of the technical details of our paper, including proof and empirical materials, while acknowledging the novelty of our approach in addressing underappreciated limitations in prior preference optimization methods. We respond to points of critique as follows.
**Comment:**
*Limited experiment results on real world tasks. Would like to see its performance on Llama-3-8B on AlpacaEval 2.*
**Response:**
During the short rebuttal window, we tried fine-tuning a Llama-3.1-8B model using LoRA to reduce the computational burden, and indeed EXPO outperforms DPO in this limited setting. However, complete testing with further baselines or a full parameterization is unfortunately not feasible at this time.
**Comment:**
*The same algorithm name ExPO has already been used in "Weak-to-Strong Extrapolation Expedites Alignment".*
**Response:**
Thanks for the notice, we could easily change the name to avoid any confusion.
**Comment:**
*Can EXPO handle non-BT preference models?*
**Response:**
Good question. In principle yes, but some theoretical results would need to be reconsidered. | null | null | null | null | null | null |
Optimal Decision Tree Pruning Revisited: Algorithms and Complexity | Accept (poster) | Summary: The authors investigate the computational complexity of pruning a decision tree.
More specifically the focus is on the algorithmic optimization of two pruning techniques: replacement (removing a subtree and assigning to the root the majority class in the leaves) and raising (removing the subtree rooted at an internal node and substituting it with the subtree rooted at one of its children). The study is on whether there are efficient algorithm that given a maximum number of internal nodes to remove can obtain a pruned tree with at most a given number of misclassification. Parameterized complexity is investigated with respect to single parameters and their combinations.
## Update after rebuttal
I kept my already positive score since there was no additional information from the authors that, in my opinion, justified an increase.
Claims And Evidence: The mathematical claims are soundly supported by the analyses presented.
Methods And Evaluation Criteria: The paper presents a theoretical investigation.
The experimental analysis is meant to investigate practically used heuristics with respect to the possible optimization of the number of pruning operations and the number of misclassifications allowed. This evaluation is meaningful.
Theoretical Claims: I checked the proofs in section 4. Both the DP arguments appear correct.
Experimental Designs Or Analyses: The esperiments are meant to provide evidence that the commonly used heuristics are not far from the optimal results given by the DP algorithms. As claimed by the authors it is a small scale study but, nonetheless provide interesting information.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper appears to take into consideration the relevant papers. In fact, the perspective proposed is, to the best of my knowledge, novel.
Essential References Not Discussed: I have no comments on this
Other Strengths And Weaknesses: The paper present a novel perspective on decision tree optimization, from the point of view of pruning.
The hardness results are in my opinion not so surprising.
The DP algorithms are interesting with respect to the idea that in practical cases the number of different thresholds used per feature is small. I think this point should be argued in more detail---maybe experimentally.
However, the complexity attained is significant only under this assumption, since, in practice, it could become proportional to $n^{2d}$.
Other Comments Or Suggestions: --
Questions For Authors: Can you elaborate a bit more on the fact that the parameter D does not explode in practical cases?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review!
Indeed, we agree that $n^{2d}$ running time would not scale to practical data, and this is precisely why we need more precise complexity and running-time analyses such as those we provide here. To add to these considerations, note that the implemented algorithm, instead of $n^{2d}$, has an exponential running-time part limited by $D_T^{2 d_T} \le D^{2 d_T}$, where $D_T$ and $d_T$ are smaller than $n$ and $d$, respectively; see below:
The maximal number $D$ of thresholds per feature is often much smaller than the number of examples $n$; see Table 3 of Staus et al. (2024, https://arxiv.org/abs/2412.11954) for concrete values. Moreover, often the features are only binary implying that $D=2$. On average, $D$ is smaller than $n/2$.
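As an illustration (our own sketch, not tied to the paper's implementation), the parameter $D$ can be read off a dataset as the maximum number of distinct values any single feature attains, which upper-bounds the number of useful thresholds for that feature; for binary features it is 2 regardless of the number of examples $n$:

```python
def domain_size(X):
    """D = max over features of the number of distinct values that feature takes.

    X is a list of example rows, each row a tuple/list of feature values.
    """
    n_features = len(X[0])
    return max(len({row[j] for row in X}) for j in range(n_features))

# All features binary, so D = 2 no matter how many examples there are.
X = [(0, 1, 1),
     (1, 1, 0),
     (0, 0, 1),
     (1, 0, 1)]
print(domain_size(X))  # → 2
```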
$D_T$ is the maximal number of different thresholds on cuts in feature $i$ on path $P$ over all features $i \in [d]$ and all root-to-leaf paths $P$. This parameter $D_T$ is much smaller than $n$: In our Table 3 in the appendix we measured $D_T$ for both the unpruned and the pruned trees (there is a typo; the column says $D$ but it should be $D_T$). Together with the values for $n$ from Table 2 we can see that $D_T$ is usually much smaller than $n/10$, in fact in more than half of the instances $D_T< 15$.
Additionally, the exponent in the running time only depends on $d_T$, the maximal number of different features per root-leaf path, which is usually much smaller than $d$; again, see our Tables 2 and 3.
We will emphasize this more in the next version of the paper. | Summary: Decision trees are widely used for tabular datasets. The authors conduct a comprehensive analysis of decision tree pruning operations, including subtree replacement and subtree raising. The paper provides a theoretical analysis of the complexity of each pruning operation, showing that optimal subtree replacement can be achieved in polynomial time and runs linearly in tree size. In contrast, optimal subtree raising is proven to be an NP-complete problem, making it significantly more computationally expensive. The study includes detailed theorems and proofs to support these findings.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: - Both Table 1 and Figure 10 illustrate the tradeoff between the number of pruned nodes and misclassification error. The results suggest that the heuristic method often appears on the Pareto front constructed by the proposed dynamic programming method, and the conclusion acknowledges that the heuristic is usually sufficient. If the heuristic method already achieves competitive performance with significantly lower computational cost, what is the practical value of using the optimal strategy? The optimal pruning strategy does not seem to provide meaningful performance gains while being more computationally expensive.
- In the optimal tree literature, simple and accurate trees are typically found through search-based methods, and there is not necessarily a tradeoff between simplicity and accuracy. However, in this study, even optimal subtree raising seems to have an almost linear tradeoff between the number of pruned nodes and the misclassification error. Does this suggest an inherent limitation of the subtree raising operation itself? Or does this indicate that the tree induction method is not a good option in the first place?
Theoretical Claims: The authors provide precise complexity classifications for pruning operations. The claims are supported by detailed proofs.
Experimental Designs Or Analyses: Please see above.
Supplementary Material: I briefly glanced at the entire appendix.
Relation To Broader Scientific Literature: Pruning strategies date back to the origins of tree induction methods (Breiman 1984, Quinlan 1986), where they were introduced to simplify trees and prevent overfitting. This paper presents a comprehensive theoretical analysis of the complexity of different pruning strategies and compares heuristic pruning methods with optimal ones. The paper also relates to the literature on sparse decision trees, which aims to achieve both accuracy and simplicity simultaneously.
Essential References Not Discussed: The paper appropriately discusses the related works.
Other Strengths And Weaknesses: Please see above.
Other Comments Or Suggestions: NA
Questions For Authors: Please see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and the insightful questions!
1) About the concern about the practical value of using the optimal strategy:
Prior to our work there was no evidence of how well these heuristics work in practice, that is, whether they are close to the optimum, or whether they can be outperformed substantially in quality.
Our efficient exact algorithms made such an evaluation feasible. Moreover, only with the help of our algorithms was the conclusion that the heuristics are almost always optimal possible.
As shown by our hardness results, this is quite surprising and in the future one should investigate this phenomenon further, that is, find explanations as to why these heuristics are so good on these data.
2) Almost linear tradeoff between the number of pruned nodes and the misclassification error:
We feel that the intuition of a linear tradeoff might be misleading. Consider Figure 10 in the appendix, where an alternative interpretation is that the relation is rather exponential: Intuitively, pruning one node has a smaller impact in a decision tree of size $s$ than in a tree of size 1. Thus, initially, there is an almost linear dependence between the number of raised nodes and the classification error, but after a certain barrier the number of errors increases exponentially (as the soybean dataset highlights). This is due to the fact that the initial tree is likely overfitting, so raising a few nodes only leads to few errors; but after a certain barrier not enough nodes are left, the data cannot be explained anymore, and many errors result.
Consequently, we don’t think there is an inherent limitation of these operations. Essentially, one should use them only to eliminate overfitting. | Summary: This manuscript analyzes the complexity gap between two tree pruning strategies: subtree replacement and subtree raising. The former is polynomially solvable, whereas the latter is NP-complete. This paper identifies the key parameters that can bridge the gap between these strategies and analyze their impact, providing precise boundaries between tractable and intractable cases. At last, some numerical experiments are utilized to validate the theorems.
Claims And Evidence: Most of the claims are relatively well-supported but not very clearly presented. Some of the theorems discuss the same concept under different conditions, particularly in Section 5. The authors could consider combining them into a single, more comprehensive theorem to cover all these cases for greater clarity.
Additionally, there is one claim about the Pareto front that is not well analyzed. The authors do not specify the multi-objective optimization method used to obtain the Pareto front. Since the pruning problem is an integer programming problem and non-convex, how do the authors ensure that the solutions are (even approximately) Pareto optimal and that the Pareto front is complete? Without a proper analysis, the Pareto front may be inaccurate, leading to potentially incorrect conclusions.
Methods And Evaluation Criteria: Yes, the pruning strategy is an effective way to improve the performance of decision tree models. The relationship between complexity and accuracy has always been a key concern for researchers. This paper analyzes two pruning strategies and provides valuable insights into reducing complexity.
Theoretical Claims: The authors provide extensive analysis to illustrate and prove the theorems. However, some theorems and lemmas could be combined—for instance, Lemma 4.4, Theorem 4.5, and Theorem 4.6, as they all discuss the time complexity of $DTR_{AIS_{\geq}}$under different conditions. It is recommended to consolidate them into a single theorem for clarity and conciseness. A similar issue exists in Section 5, which analyzes whether $DTR_{AIS_=}$ and $DTR_{AIS_{\geq}}$ are solvable (time complexity) under different critical parameter conditions and the ETH assumption.
Experimental Designs Or Analyses: The experimental analysis in this paper is not very strong. If the contribution of this work is to demonstrate that a better solution exists for both error minimization and complexity compared to current methods, the authors should focus more on multi-objective optimization, as this issue stems from non-Pareto optimality.
Supplementary Material: Yes, I have reviewed some proofs, additional numerical results in the appendix, and the main code in the supplementary material.
Relation To Broader Scientific Literature: This paper contributes to the complexity analysis of pruning strategies, specifically, subtree replacement and subtree raising. These strategies enhance the performance of decision tree models and can be applied to various heuristic approaches such as CART and C4.5.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. This paper provides interesting insights into the complexity of two pruning strategies, which are useful for improving the performance of decision tree models.
2. Figures 1, 2, and 3 are well-designed and effectively facilitate the understanding of the results.
3. The supplementary materials include historical results.
Weakness:
1. Throughout the paper, the mathematical expressions are not well expressed. This may stem from the problem statement being somewhat difficult to understand and some notations being unclear. Certain notations remain abstract and require further clarification—for example, the domain size $D$. Additionally, in the first paragraph, the expression $w\in V(T)$ appears. What does $V$ represent? Please ensure that every symbol is clearly defined and well explained.
2. The appendix contains some typos, such as the first line of C.6, "We only show the statement for...”. Please check the appendix.
Other Comments Or Suggestions: 1. In Table 1, information about the datasets should be provided (rather than in the appendix), including the number of samples, features, and classes. Although misclassification is used as the objective, it is recommended to report its percentage to give a clearer sense of the error magnitude.
2. The relationship between complexity and accuracy (misclassification errors) can be represented by the Pareto front, which may reveal certain patterns, as demonstrated with the Soybean dataset in Figure 10. However, only a few datasets are analyzed. There may be underlying rules that can be extracted from the Pareto fronts.
Questions For Authors: Why do you focus on bi-class problems? What will the results change in multi-class problems?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback! Allow us to respond to all your concerns:
Presentation:
- Theorems in Section 4 and 5: Note that all of these theorems correspond either to different combinations of parameters or different subsets of parameters that are fixed to at most some constant value. We have invested quite a lot of time into how to present them in an accessible way; Figures 2 and 3 are highly condensed versions of the statements that still convey most of the information. We feel that merging the statements into one would indeed decrease the readability since it would be more difficult to match the proofs to the corresponding statements.
- The $V$ in $w \in V(T)$ is a common notation for the vertex set of the tree $T$; we have clarified this.
- The domain size $D$ is the maximum number of different values that a feature can attain, we have also clarified this.
We have already incorporated your specific feedback into our local version of the submission and will ensure that the presentation of the final version is as accessible as possible.
Pareto-front:
- We indeed correctly compute the Pareto front or, more precisely, for each $k$ the point $(t, k)$ such that it is possible to prune exactly $k$ nodes to achieve exactly $t$ errors and every solution with at most $k$ pruned nodes has at least $t$ errors: Any algorithm solving the search problems $DTR_{AIS_{\geq}}$ and $DTR_{AIS_=}$ can also be applied to finding the optimum values for the corresponding optimization problems (see also the introduction). That is, with such an algorithm we can minimize the number of errors for a fixed number $k$ of pruning operations simply by incrementing the error parameter $t$, starting with 0, until the algorithm returns a solution. Solving these optimization problems for all $k$ will yield the Pareto front. Since our algorithms provably solve the search problems correctly, we are certain to find the optimum values. We will emphasize this more clearly in the final version.
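A minimal sketch of this procedure (our own illustration, not the authors' code): `prunable(k, t)` below is a hypothetical decision oracle standing in for the exact search algorithm, returning `True` iff some sequence of $k$ pruning operations achieves at most $t$ errors; scanning $t$ upward for each budget $k$ then traces out the Pareto front.

```python
def pareto_front(prunable, max_k, max_t):
    """For each pruning budget k, find the minimum achievable error t by scanning t upward."""
    front = []
    for k in range(max_k + 1):
        for t in range(max_t + 1):
            if prunable(k, t):       # first t that works is the minimum error for budget k
                front.append((k, t))
                break
    return front

# Toy oracle: pretend pruning k nodes forces at least k*k // 4 errors.
toy = lambda k, t: t >= (k * k) // 4
print(pareto_front(toy, 4, 10))  # → [(0, 0), (1, 0), (2, 1), (3, 2), (4, 4)]
```

Since each inner scan stops at the first feasible $t$, the cost is one oracle call per candidate $(k, t)$ pair up to the front, matching the rebuttal's description of repeated calls to the search algorithm.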
Datasets:
- Indeed, it would be desirable to extend the experiments; however, this is out of scope for the current paper where we mainly want to develop the fundamental theory. Thus we leave this for future work.
- We agree, it would be good to add more info from the appendix to the overview of the datasets given in Section 6. If the paper were to be accepted, we would be happy to utilize the extra page for this purpose; in any case, we will make a full version available on arxiv or another public repository.
Biclass vs. multiclass:
- The focus on the biclass setting is for clarity of the theory. All our hardness results directly apply to the multiclass setting. All our dynamic-programming algorithms and the algorithms for Lemma 4.4 and Theorem 4.5 also directly apply to the multiclass setting without changes. The algorithm for Theorem 4.6. can easily be adapted to the multiclass setting without changing the running time: In the tree-enumeration algorithm, when introducing a new leaf, we set the leaf to the class of the dirty example. The remaining algorithm remains the same. The algorithm in Theorem 3.1 can straightforwardly be adapted to the multiclass setting: In the first step we instead compute, for each node in the tree and each class $c$, the number of misclassified examples in this node, if the node were to be replaced by a leaf with class $c$. The rest of the algorithm remains the same. | Summary: The submission at hand aims at understanding the parameterized complexity of problems relating to editing decision trees to conform to a data set up to a bounded number of errors. The considered operations are either of raising and replacing and the considered parameters are numbers given on input, relate to the size of the input or relate to the number of features with certain properties. Apart from few specific combination the obtained parameterized complexity theoretic classification is exhaustive with respect to the combination of all considered parameters and utilizes a number of maybe not particularly groundbreaking but diverse and well applied algorithmic techniques and hardness reductions.
The theoretical results are complemented by a small empirical section on the quality of common heuristics for minimizing a number of edits given a certain error budget. These experiments might weakly indicate some weaknesses of the considered heuristics that deserve further investigation.
Claims And Evidence: No concerns.
Methods And Evaluation Criteria: Appropriate.
Theoretical Claims: I had at least a superficial look at all proofs and am convinced by their correctness up to minor details.
Experimental Designs Or Analyses: I only read what is not in the appendix.
Supplementary Material: The proofs in the appendix.
Relation To Broader Scientific Literature: This submission studies the complexity theoretic behavior of a problem relevant to ML. There have not been extremely many papers like this on this specific kind of problem but there is much precedent of similar analyses for a range of problems.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: Overall, this submission is topically relevant and solid in terms of its contributions and presentation. I recommend acceptance.
Other Comments Or Suggestions: - Maybe specify that you assume decision trees to be binary
- Line 417(right): replace both operations by each operation
Questions For Authors: Which of these results can be adapted when allowing both replacing and raising?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the review and the helpful feedback!
We now make clear in the introduction that we focus on binary trees and incorporated the comment about line 417.
> Which of these results can be adapted when allowing both replacing and raising?
That’s an insightful question. While the details depend slightly on how the problems are formulated (e.g., do we have a budget for each operation on how many cuts should be pruned or do we only care about the total number), our algorithmic and hardness results for the raising operation are generally adaptable to support also the replacement operation.
As to hardness, observe that in our reduction for showing the hardness of the raising problem (Theorem 5.1), each subtree rooted at an inner node has both a blue and a red leaf. Thus, a replacement operation on an inner node that replaces it by a red (correspondingly, blue) leaf can be simulated by a sequence of raising operations that raises a red (blue) leaf of the subtree in the place of the inner node. Consequently, allowing also replacement operations does not make the problem any easier or harder.
For our algorithms, we would need to adapt the dynamic programming recurrences to take into account the possibility of replacing the whole subtree, but other than that the details are similar. More precisely, recurrence (1) of the proof of Theorem 1 now has a fourth case to consider a tree replacement operation.
In summary, allowing both replacement and raising would yield hardness and tractability results similarly to when allowing only raising operations. | null | null | null | null | null | null |
The Batch Complexity of Bandit Pure Exploration | Accept (poster) | Summary: The authors derive an instance-dependent lower bound on the batch complexity required by any $\delta$-correct pure exploration algorithm. This lower bound is expressed as a function of the instance’s complexity, $T^\star(\mu)$, and shows that as sample efficiency improves, the minimal number of batches required increases. This result is novel in that it quantifies, for every instance, the minimal “round” or batch count needed.
Building on the Track-and-Stop methodology, the paper introduces a new algorithm called Phased Explore then Track (PET). The algorithm is designed in phases. In each phase, it first performs uniform exploration to reliably estimate the means and the instance-dependent complexity; then, if a predetermined complexity threshold is met, it transitions into a more adaptive sampling phase. The authors prove that PET is $\delta$-correct and provide upper bounds on both its sample complexity (which is nearly optimal, up to logarithmic factors in $1/\delta$ ) and its batch complexity.
The general results are then specialized to well-studied pure exploration tasks such as Best-Arm Identification (BAI), Top-k, and thresholding bandits. For these settings, the authors show that the algorithm’s performance nearly matches the derived lower bounds.
Empirical results are provided for both BAI and thresholding bandit problems. The results indicate that PET achieves a favorable balance between sample and batch complexities by using only a few batches while keeping the sample complexity near-optimal.
For the stopping rule, since they consider non-parametric distributions, they use the method of mixtures.
Claims And Evidence: Theorem 2.3 provides a lower bound on the necessary number of batches; Theorem 3.5 establishes the $\delta$-correctness of the PET algorithm; and Theorem 3.11 gives upper bounds on both the expected batch and sample complexities.
All the proofs are given in appendix, and seem correct.
Some numerical experiments support the theoretical findings.
## Update after rebuttal
I have read the discussions and keep my assessment unchanged.
Methods And Evaluation Criteria: The synthetic numerical experiments provide sufficient evidence given the theoretical nature of the work.
Theoretical Claims: proof coherence checked globally
Experimental Designs Or Analyses: no relevant here
Supplementary Material: the proofs in the supplementary material are correctly written
Relation To Broader Scientific Literature: The state of the art is correct
Essential References Not Discussed: nothing special
Other Strengths And Weaknesses: The use of batches in best-arm identification problems is real, and interesting to study theoretically.
The contributions are solid and rather clear, though pretty unsurprising.
This work hence seems to be a decent incremental contribution.
Other Comments Or Suggestions: On page~1, the introduction could benefit from clearer phrasing in the following sentence: "...samples is fixed and the objective is to minimize the probability of error, \textbf{we} talk about fixed-budget pure exploration (Audibert et al., 2010)." As the paper focuses on fixed-confidence settings, the use of the word \textbf{we} here may cause confusion.
A clearer motivation for investigating batched methods would enhance the paper. Fully adaptive algorithms, such as Track-and-Stop, require solving a min-max optimization problem at each round to determine optimal sampling proportions, which can be computationally intensive and impractical in many applications. In contrast, batched methods fix the sampling strategy for an entire batch, thereby reducing computational demands and enabling parallel sampling. Emphasizing this computational trade-off would further justify the focus on batched approaches, particularly in scenarios with delayed feedback or limited adaptivity.
In the paper "Optimal $\delta$-Correct Best-Arm Selection for Heavy-Tailed Distributions," a batch algorithm is introduced. The authors should cite this work and discuss the differences between their approach and the one presented in that paper.
Regarding Assumption 3.8 and line 5 of the algorithm, the paper employs an $l$-norm to quantify the distance between instances. Given that KL divergence often provides a tighter and more informative measure—especially in Gaussian settings—could the authors clarify the rationale behind this choice? Would it be possible or beneficial to replace the $l$-norm with a KL divergence measure in these parts of the analysis? Clarifying this point would help readers understand the trade-offs between tractability and the precision of the bounds.
The definition of $\mu^0$ as the initial instance appears to be missing. Including this definition would enhance clarity.
It would be helpful to add some intuition for Assumption 2.2, such as: "It essentially states that the ‘difficulty’ (as measured by the inverse of the complexity) scales as $x^2$ when the instance is linearly scaled in this manner. This property is useful for constructing sequences of instances and analyzing how the complexity evolves as the instances become progressively more challenging (i.e., as the gaps shrink)."
On page 7, it seems that the authors intended to refer to Lemma 3.7 rather than Theorem 3.7, as the latter does not appear in the paper. Clarifying this reference would improve accuracy.
Questions For Authors: Is it possible to directly recover the results of Garivier–Kaufmann (2016) by imposing some structure on the batches? Clarification on this point would be appreciated.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful suggestions.
The suggested paper, ``Optimal $\delta$-correct best arm selection for heavy tailed distributions'', contains a BAI algorithm for a different class of non-parametric distributions (given by a moment constraint).
They also use batches in a Track-and-Stop method. The main motivation for the batches seems to be to reduce the computational cost of the algorithm, and minimizing the expected batch complexity is not a goal they consider (and no bound is given).
They mostly use constant batch sizes, which would lead to a batch complexity that is proportional to $\log(1/\delta)$, which should not be the case according to the lower bound.
Thank you for the reference: we will include discussion of that paper in our revision.
On the usage of an $l$-norm instead of a KL divergence, it is first due to our goal of tackling the non-parametric sub-Gaussian setting, in which this is the natural distance measure.
If we were to consider parametric families of distribution, we would want to use a KL divergence.
However a hurdle with that lies in Lemma 3.10. Indeed, we would need to have guarantees about the variation of the KL in a confidence region around a mean estimate.
Investigating how to get past this problem and generalize the results for $\tilde{T}^*$ instead of $T^*$ would be interesting future work.
We do not understand the last question about recovering the results of Garivier and Kaufmann 2016 by imposing structure on the batches. Could you expand on what you mean by that question?
Their algorithm is not batched (or rather has all batches of size 1), so we don't see what the structure on the batches would be.
---
Rebuttal Comment 1.1:
Comment: > We do not understand the last question about recovering the results of Garivier and Kaufmann 2016 by imposing structure on the batches. Could you expand on what you mean by that question? Their algorithm is not batched (or rather has all batches of size 1), so we don't see what the structure on the batches would be.
The question was: when you allow the number of rounds to be as large as desired, does the complexity you obtain by your analysis tend to $T^*(\mu)$? But I agree that this is not so much the goal of your contribution.
---
Reply to Comment 1.1.1:
Comment: Thank you for your clarification. It is possible to have a main term (in $\log(1/\delta)$) for the sample complexity $(1+\varepsilon)T^*(\mu)\ln(1/\delta)$ for $\varepsilon > 0$ as small as we want with a few changes: multiplying the length of each subsequent phase by $1+\alpha$ with very small $\alpha$ instead of by 2; changing the definition of $l_1$ to reflect this change, and multiplying it by a constant; and multiplying the probability $p_r$ that appears in $\varepsilon_r$ by $\alpha$ as well.
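As a rough sketch of the arithmetic behind these changes (illustrative only, not the exact constants of the analysis): with phase lengths growing geometrically as

```latex
l_r = l_1 (1+\alpha)^{r-1}, \qquad \sum_{s=1}^{r} l_s = l_1\,\frac{(1+\alpha)^r - 1}{\alpha},
```

the budget at stopping overshoots the ideal sample count by a factor of at most roughly $1+\alpha$, which is how the main term can be brought down to $(1+\varepsilon)T^*(\mu)\ln(1/\delta)$, while the number of phases needed to reach a budget $T$ grows like $\ln(T)/\ln(1+\alpha) \approx \ln(T)/\alpha$.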
However, both the batch complexity and the lower order terms of the sample complexity become bigger with those changes. | Summary: The paper investigates the batch complexity of fixed-confidence pure exploration problems in stochastic multi-armed bandits (MAB), where the algorithm is allowed to change its sampling strategy only across batches. The authors derive novel instance-dependent lower bounds on the number of batches required by any sample-efficient algorithm for general pure exploration tasks. They propose a general batched algorithm named "Phased Explore then Track" (PET), inspired by the Track-and-Stop method, and provide upper bounds on its expected sample and batch complexities. Numerical results are presented to show the effectiveness of the algorithm.
Claims And Evidence: The claims presented in the submission are supported by convincing evidence. The authors provide theoretical proofs for the lower bounds on batch complexity (Lemma 2.1 and Theorem 2.3), as well as detailed analyses of their proposed algorithm's upper bounds. Experimental results validate the effectiveness of the proposed algorithm
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand.
Theoretical Claims: I skimmed through the proofs of the main theoretical claims. The proofs appear sound and mathematically rigorous, with no evident errors.
Experimental Designs Or Analyses: No significant issues were identified with these experimental designs or analyses.
Supplementary Material: Yes, I reviewed supplementary material provided in the Appendices.
Relation To Broader Scientific Literature: The paper's contributions relate to existing literature on batched bandit algorithms and pure exploration problems and collaborative bandits with communications (which can be interpreted as batched observation). It builds upon fundamental work on fixed-confidence pure exploration (Garivier & Kaufmann, 2016). It generalizes previous studies that focused specifically on BAI or Top-k identification to the class of pure exploration problems with batched observation.
Essential References Not Discussed: The most relevant references have been discussed.
Other Strengths And Weaknesses: See comments in the other sections.
Other Comments Or Suggestions: Typos:
Page 4 line 177: "or" → "of"
Page 5 line 329: "an other assumption" → "another assumption"
Page 8 line 439: "That effect that was first observed" → "That effect was first observed"
Questions For Authors: 1. How sensitive is the algorithm's performance to the choice of $T_0$? How significantly does it affect your empirical results?
2. Could you elaborate further on potential approaches to replace uniform exploration (Lines 6-9 in Algorithm 1) with more adaptive strategies for general pure exploration problems? Would such approaches significantly reduce practical sample complexities?
3. Have you considered computational aspects when implementing your algorithm? In practice, how costly is it to compute $\bar{w}^\star$ and $\bar{T}^\star(\cdot)$? Is there a way to solve it through bisection methods similarly to (Garivier & Kaufmann, 2016) for the Top-1 (BAI) setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are thankful to the reviewer for their helpful comments.
Firstly, we'd like to redirect the reviewer to the last two paragraphs of our response to reviewer yvDd: the bound on the sample complexity in Theorem 3.11 was not accurate in the regime of large $T_0$. The parameter $T_0$ impacts the size of the first batch, and needs to strike a balance between sample optimality for easy instances, and round complexity for hard instances.
We have not tried varying $T_0$ in the experiments, to try and keep all the compared algorithms with the same initial batch size so none would be particularly penalized by how easy/hard the tested instance was.
We have thought of a few ways to replace uniform exploration. One would be to try and generalize BAI arm elimination schemes (see for example Algorithm 2 of Jin et al 2023), in which suboptimal arms stop being sampled. In general pure exploration, one could imagine stopping an arm $i$ once $N_i\geq \overline{w}^*_i(B)\overline{T}^*(B)$, since no instance in the confidence set would require more samples of that arm.
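A minimal sketch of this elimination rule (names are purely illustrative, not the paper's implementation):

```python
def active_arms(N, w_star, T_star):
    """Keep sampling arm i only while its sample count N_i stays below the
    worst-case allocation w_star_i(B) * T_star(B): once it reaches that level,
    no instance in the confidence set B would require more samples of arm i."""
    return [i for i, n in enumerate(N) if n < w_star[i] * T_star]

# Toy usage: arm 0 and arm 2 have met their allocations, arm 1 has not
print(active_arms([10, 3, 7], [0.5, 0.3, 0.2], 20))  # -> [1]
```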
We could also forgo uniform sampling altogether, only sampling according to $\overline{w}^*(B)\overline{T}^*(B)$ at each batch. However, either of those methods would make the confidence set $B$ behave in more complex ways.
In the most general setting we consider, there is no obvious way of computing $\overline{w}^*(B)$ and $\overline{T}^*(B)$. Under Assumption 3.9, one would need to compute $w^*(\mu)$ and $T^*(\mu)$ over $B$ and find the supremum of $T^*$ (heuristically, finding the most difficult instance in $B$).
However, in BAI and thresholding this is much easier, since we know precisely where the hardest instance $b$ lies.
In BAI, it is the instance in which the best arm is moved down as much as possible, and the other arms moved up as much as possible.
In thresholding, it is the instance in which every arm is moved closer to the threshold. See Appendix C.5 for the proofs, and line 1330 for definition of $b$ in the BAI case.
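For concreteness, the BAI construction above can be sketched as follows (a toy illustration assuming a box-shaped confidence set; the function name and set shape are ours):

```python
import numpy as np

def hardest_instance(mu_hat, eps):
    """Illustrative sketch of the hardest BAI instance b inside a box
    confidence set [mu_hat - eps, mu_hat + eps]: move the empirical best
    arm down as much as possible and every other arm up."""
    mu_hat = np.asarray(mu_hat, dtype=float)
    eps = np.asarray(eps, dtype=float)
    b = mu_hat + eps                      # move all arms up ...
    best = int(np.argmax(mu_hat))
    b[best] = mu_hat[best] - eps[best]    # ... except the best arm, moved down
    return b

print(hardest_instance([1.0, 0.5, 0.2], [0.1, 0.1, 0.1]))  # -> [0.9 0.6 0.3]
```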
Because of this, computing $\overline{w}^*(B)$ and $\overline{T}^*(B)$ has exactly the same cost as computing $w^*(\hat{\mu}_t)$ and $T^*(\hat{\mu}_t)$ in Track-and-Stop, and any method used to compute those also applies to our algorithm. | Summary: This work studies the batch complexity of bandit pure exploration, including best arm identification, top-k arm identification, and thresholding bandit problems. The paper begins by establishing a lower bound on the number of batches required for an algorithm that is $\delta$-correct for any Gaussian instance with sample complexity in a certain range. It then proposes an algorithm, named PET, that works in phases. Theoretical analysis shows that its batch complexity is close to the lower bound, and its sample complexity is also asymptotically optimal in the high confidence regime.
Claims And Evidence: This paper is highly technical, not easy to read. I am not confident about my understanding of the claims made in this paper.
Methods And Evaluation Criteria: The suggested algorithm is sensible to me. The sample complexity and the batch complexity are key performance metrics for batched pure exploration problems.
Theoretical Claims: I had a hard time parsing and understanding the statements in the main body. I gently believe that the proofs are solid, but I am not sure.
Experimental Designs Or Analyses: - I believe some experiments on Top-K or TBP should also have been conducted.
- In addition, I hope to see some experiments supporting the statement “$T_{min}$ is a tradeoff between prior knowledge and batch complexity”.
Supplementary Material: I checked that valid Julia codes are submitted through supplementary material, but I did not try to run these codes.
Relation To Broader Scientific Literature: I believe the research question that this paper asks is interesting and impactful. Given that the batched queries can be thought of as a restriction in the adaptivity, the results in this paper suggest how much we can reduce the adaptivity while preserving the sample complexity. I would believe that high level insights can also be applied in different learning tasks.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: A clear weakness of this paper is readability. See my suggestions below.
Other Comments Or Suggestions: I would like to recommend the authors to refine the statements for better readability. For example,
- In Assumption 2.2, it is totally unclear whether $y$ is a vector or a scalar. I would rather write: For any mean vector $\mu$, there exists a vector $y$ such that $T^\star(x \mu + (1-x) y) = x^{-2} T^\star(\mu)$ for all $x \in [0,1]$
- Lemma 2.1 is also extremely cryptic. Are $T_{min}$ and $T_{max}$ specific to the algorithm? Should it be $S_N$ instead of $S_n$?
- In Algorithm 1, the variables $r$, $t$, $N_i$ are initialized but never updated.
- …
Questions For Authors: Although I was not able to follow all the details, I am curious about the meaning of $T_0$ in Algorithm 1. It seems like an algorithm parameter, but it appears in Theorem 3.11 only, which implies that a larger $T_0$ leads to smaller batch complexity and smaller sample complexity without losing anything. Does Theorem 3.11 hold only if $T^\star(\mu) \geq T_0$? Otherwise, why don't we set $T_0$ to be extremely large?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's insights and corrections.
Thank you for pointing out a few unclear passages.
In Assumption 2.2, $y$ (in normal font) is a scalar, and $\bf{y}$ (in bold) is the vector $(y,y,...,y)$.
We will make sure to clarify this.
In Algorithm 1, we indeed did not detail where $r$, $t$ and $N_i$ (respectively the round count, the timestep count, and the number of samples of arm $i$) are updated.
We will add a line to explicitly mention their update.
In Lemma 2.1, $T_{\min}$ and $T_{\max}$ are indeed specific to the algorithm (and to the degree of optimality considered, impacting the values of $\gamma$ and $c$): no algorithm is uniformly optimal everywhere. While $T_{\max}$ could be set to $+\infty$ for some algorithms, fixing some $T_{\min}$ is necessary to guarantee the finiteness of $\gamma$, as explained around line 209.
No algorithm can achieve a sample complexity that is within a constant multiple of $T^*$ if $T^* \approx 10^{-10}$, as every arm must be sampled at least once: $T_{\min}$ should be at least of order 1.
That Lemma should also indeed use $S_N$ instead of $S_n$, thank you for noticing that typo.
It is correct that $T_0$ should have more of an impact on the bound of Theorem 3.11.
This bound is indeed only valid for $T_b^*(\mu)$ large enough compared to $T_0$.
More precisely, in the Appendix, line 1173, $r_0$ and $r_1$ are necessarily strictly positive, which we forgot to reflect in the final bound.
To account for the case of large $T_0$, the batch complexity bound should be modified to be the maximum of 4 and the expression in Theorem 3.11.
In the sample complexity bound, one needs to replace every instance of $T_b^*(\mu)$ with $\max\{ T_b^*(\mu),32KT_0\ln(2\sqrt{2K}T_0)\}$.
We are very thankful to the reviewer for pointing out this mistake.
With that correction, one can see the need to balance $T_0$: a very small $T_0$ will lead to a small initial batch size, so we can efficiently solve even easy instances without wasting sample complexity, but at the cost of many more batches for complex instances. A very large $T_0$ will minimize the number of batches required, but the initial batch might be much bigger than necessary, requiring more samples than would be optimal. | Summary: The paper investigates pure exploration problems with a specific focus on batch complexity.
First, the authors establish a theoretical lower bound for batch complexity and characterize it in relation to sample complexity, which is well understood from previous work. They then propose the PET algorithm, which nearly attains the lower bound.
Both theoretical analysis and experimental results demonstrate the effectiveness of PET.
## update after rebuttal
Overall, I appreciate the technical novelty of this paper and will therefore maintain my current score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The work is mostly theoretical but also contains some experiments with a reasonable design.
Theoretical Claims: I checked the flow of some parts of proofs but I’m not certain all are correct.
Experimental Designs Or Analyses: The design of the conducted experiments seems sound.
Supplementary Material: I just reviewed the flow of theoretical lower bound.
Relation To Broader Scientific Literature: The key contributions of the paper are in the area of online decision-making (bandits).
Essential References Not Discussed: I am not aware of any missing prior works.
Other Strengths And Weaknesses: Strengths:
* The authors address three well-known problems—BAI, Top-K, and TBP—within a unified framework, with their approach and results applicable to all.
* Considering the problem for sub-Gaussian distributions is an interesting contribution, as most previous work has focused on the 1-parameter exponential family.
* The paper provides rigorous theoretical results along with experiments demonstrating the effectiveness of the proposed algorithm.
Other Comments Or Suggestions: It would be better to restructure Subsection 3.1. Perhaps defining the threshold first and then presenting Lemma 3.1 would improve clarity.
Line 117: "is or order" → "is of order".
Questions For Authors: 1. Can you elaborate on the inequality in Line 94? The explanation provided is quite brief.
2. I did not fully understand the reasoning behind Lemma 3.1. You assume a sub-Gaussian distribution and then use similar quantity from the Gaussian distribution (GLR), along with a new threshold, to derive a concentration inequality. Is this inequality a novel result? I am unsure whether the cited previous work has addressed this specific case. Could you clarify?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful suggestions.
The inequality on line 94 comes from Donsker-Varadhan duality, and can more precisely be found in Lemma 2 of Wang 2021, upper bounding the difference of means by the square root of the KL divergence for two sub-Gaussian variables.
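Concretely, assuming both laws are $\sigma$-sub-Gaussian, the Donsker-Varadhan variational formula gives, for every $\lambda \in \mathbb{R}$,

```latex
\mathrm{KL}(P \,\|\, Q) \;\geq\; \lambda\,\mathbb{E}_P[X] - \log \mathbb{E}_Q\big[e^{\lambda X}\big] \;\geq\; \lambda(\mu_P - \mu_Q) - \frac{\lambda^2 \sigma^2}{2},
```

and optimizing over $\lambda$ (in both signs) yields $|\mu_P - \mu_Q| \leq \sqrt{2\sigma^2\,\mathrm{KL}(P\,\|\,Q)}$.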
Lemma 3.1 is a version of the Chernoff stopping rule (relaxed with the same inequality as above), which can be found for BAI in Garivier and Kaufmann 2016.
One can find a generalization of the Chernoff stopping rule for any pure exploration problem for example in Theorem 1 of Degenne et al. 2019 (Non-asymptotic Pure Exploration by Solving Games).
Our concentration result is in Lemma 3.2. While this specific threshold beta is a new result, it employs the classic techniques used to derive such Chernoff stopping rule thresholds (we were surprised not to find such a threshold for the sub-Gaussian case in the literature). | null | null | null | null | null | null |
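For reference, in the unit-variance Gaussian BAI case of Garivier and Kaufmann 2016, the Chernoff stopping rule can be written as

```latex
\tau_\delta = \inf\left\{ t \in \mathbb{N} : \max_{a} \min_{b \neq a} \frac{\big(\hat\mu_a(t) - \hat\mu_b(t)\big)_+^2}{2\big(1/N_a(t) + 1/N_b(t)\big)} > \beta(t, \delta) \right\},
```

where $\beta(t,\delta)$ is the exploration threshold.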
Random Feature Representation Boosting | Accept (poster) | Summary: This paper introduces a Random Feature Representation Boosting method which uses random features at each layer and iteratively optimizes the functional gradient of the network representation. Experiments shows it outperforms traditional RFNNs and end-to-end trained MLP ResNets.
Claims And Evidence: My question is about the main differences between Random Feature Representation Boosting and existing Functional Gradient Boosting. My understanding is that the dense version of Random Feature Representation Boosting is actually equal to Functional Gradient Boosting. And in the later evaluation, the dense version performs better than the scalar and diag versions.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Theoretical results show that RFRBoost aligns with the theoretical framework of Generalized Boosting, satisfying its weak learning conditions and risk bound guarantees.
Experimental Designs Or Analyses: Can you provide insights into why XGBoost achieves a lower RMSE in regression tasks?
Supplementary Material: I reviewed the code but didn't run it.
Relation To Broader Scientific Literature: This paper tries to extends boosting theory beyond function approximation to feature representation learning, integrating random feature methods with layer-wise boosting.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work and for your comments. Below, we address your questions and clarify the novelty and key differences of our work.
**Q: What are the main differences between Random Feature Representation Boosting and existing Functional Boosting? Are dense Random Feature Representation Boosting and Functional Gradient Boosting the same?**
We appreciate the opportunity to clarify the distinctions. There are crucial differences between RFRBoost and what might be considered 'Existing Functional Gradient Boosting' (FGB), particularly the classical form used in methods like XGBoost.
* **Boosting Target (Label Space vs. Feature Space):** Classical FGB, as outlined in our Section 2.1, typically operates in *label space*. It iteratively adds weak learners (like decision trees) to directly improve the model's prediction $F_t(x)$ by fitting the gradient of the loss with respect to the current prediction. In contrast, RFRBoost operates in the framework of *Gradient Representation Boosting* (GRB), detailed in Section 2.2. GRB boosts in *feature space*, iteratively building a deep residual-neural-network-like feature representation $\Phi_t(x)$ by adding residual blocks $g_t$ that approximate the functional gradient of the loss with respect to the neural network representation itself. The final prediction is then made by a **single** linear model $W^T \Phi_T(x)$ trained on the final representation. RFRBoost in the $\Delta_{dense}$ regime is not equivalent to classical gradient boosting, because only a single linear model is trained on the boosted ResNet feature representation, rather than an ensemble of such models as in traditional gradient boosting.
* **Random Feature Residual Blocks $g_t$:** While the theory of GRB allows for general function classes $g_t$, RFRBoost specifically uses fixed random features, allowing for optimized computations and specialized theoretical results. More specifically, we have $g_t = A_t f_t$, where $f_t = f_t(x, \Phi_{t-1}(x))$ is generated using random weights, and only the linear map $A_t$ is trained using linear methods. This is fundamentally different from previous GRB methods that train bigger neural networks end-to-end as residual blocks using SGD. Our use of fixed random features is what enables efficient computational routines and allows for analytical solutions as per Theorems 3.1 and 3.2, while retaining the theoretical guarantees stemming from GRB (Theorem 3.3), and significantly improving performance (Section 4.1). A specific contribution within our gradient-greedy approach is the incorporation of a unit-norm constraint when learning $g_t$, differing from related literature. This ensures we correctly identify the optimal direction for the residual block in function space, which is crucial for the validity of the functional Taylor approximation of the risk. The special setting of random features is what allowed us to prove that this can be obtained by solving a constrained least squares problem (Theorem 3.2).
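As a concrete, purely illustrative sketch of this mechanism, the following NumPy snippet builds one random-feature residual block: the feature map uses fixed random weights, only the linear map $A_t$ is fit by least squares against a stand-in gradient matrix `G`, and the block is rescaled to unit norm before the residual update. The ridge penalty, step size, and norm handling here are hypothetical choices, not our exact routine.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, p = 200, 8, 32                      # samples, representation dim, random features

Phi = rng.normal(size=(N, D))             # current representation Phi_{t-1}(x)
G = rng.normal(size=(N, D))               # stand-in for the negative functional gradient

# Random features f_t: fixed random weights with tanh activation -- never trained
W_rand = rng.normal(size=(D, p))
b_rand = rng.normal(size=p)
F = np.tanh(Phi @ W_rand + b_rand)        # (N, p)

# Only the linear map A_t is learned, via ridge-regularised least squares
lam = 1e-3
A = np.linalg.solve(F.T @ F + lam * np.eye(p), F.T @ G)   # (p, D)

Gt = F @ A                                # residual block output g_t = A_t f_t
Gt = Gt / np.linalg.norm(Gt)              # unit-norm constraint on the block
Phi_next = Phi + 0.5 * Gt                 # residual update with a step size
```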
**Q: Can you provide insights into why XGBoost achieves a lower RMSE in regression tasks?**
It is challenging to pinpoint a single definitive reason why XGBoost achieves slightly lower RMSE than RFRBoost in our experiments, as RFRBoost and XGBoost operate under fundamentally different paradigms as explained above. (Note however that while XGBoost achieves a slightly lower RMSE than RFRBoost, it performs worse in terms of average rank.)
XGBoost uses classical gradient boosting (Section 2.1) to construct a large ensemble, comprising hundreds or thousands of decision trees. It learns by fitting trees to the residual errors (gradients) in label/logit space. Its strengths lie in its ability to capture complex interactions via hierarchical, axis-aligned splits and regularization techniques tailored for trees.
RFRBoost, conversely, uses gradient representation boosting (Section 2.2) to build a deep feature representation $\Phi_T(x)$ using random projections, followed by a single linear predictor. Its inductive bias stems from the nature of the random features used (e.g., dense projections with tanh activation in our experiments) and the layer-wise boosting process in feature space.
Potential factors contributing to the observed difference on these specific benchmarks might be due to different inductive biases of the different approaches, ensembling vs using a single predictor, or difference in regularization of discrete trees versus continuous (logistic) regression targets.
---
We hope this clarifies the methodological distinctions and provides context for the performance comparison. We aim to make this clearer in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification, I will raise my score to 3. | Summary: This paper studies an area of "Extreeme learning" where the weights of a non-linear transformation are set without any gradient calculations. In this instances, a Residual Network connection is used to derive the math for setting the procedure in a deep randomized network for MSE prediction problems specifically. The results improve upon prior approaches in the space and show tentatively close RMSE to XGboost.
--- Post Rebuttal ---
I still support this paper. I hope the larger scale results are done for camera ready, the preliminary results are encouraging.
Claims And Evidence: The theory appears sound but I have not checked it in detail. The experimental methodology is very thorough in a way that we can ensure the comparisons being made are valid (Finally, someone uses the Wilcoxon signed rank test for model comparisons in an ML paper!). However, sub-sampling all datasets to <= 5k observations significantly weakens the strength of the conclusions.
Methods And Evaluation Criteria: The scope of considered datasets is reasonable and the large number of datasets is good.
Theoretical Claims: I have not had time to check the theory, but the appendix seems to have step-by-step derivations. I'm particularly thankful for the inclusion of "Python code for this is" blocks in much of the appendix, a refreshing balance of math and code exposition. I'd encourage the authors to add this to the C.3 Categorical Cross-Entropy section as well for completeness and so that others may experiment with classification problems (indeed, it is a little saddening they were not included at the onset).
Experimental Designs Or Analyses: I have checked all the experiments. The average reader may not appreciate the difficulty of problem 4.2, and including results of ResNets of varying size trained with standard Adam, to demonstrate how much computational/representational power is needed to perform such separation, would be beneficial in educating the less-informed.
While I understand the use of sub-sampling due to 91 datasets being used, including at least _some_ datasets with multiple larger sizes of varying orders of magnitude would majorly strengthen this work (e.g., does performance benefit as dataset sizes increase? Accuracy? Runtime? Lots of questions to answer).
Supplementary Material: I reviewed all the supplemental material, but did not check the derivations.
Relation To Broader Scientific Literature: Not only are extreme learning machines useful in their own right, but they expose interesting scientific questions about network initialization strategies and the scope of what SGD can, and needs to, learn to achieve good performance. I'm quite surprised that such high performance can be obtained without any active gradient steps or iterative calculations on a per-layer basis, which is fundamentally interesting in its own right.
Essential References Not Discussed: I'm broadly familiar with basic extreme learning literature. I think discussing the related work in randomized/non-learned convolutional filters for CNNs would significantly improve the article and provide less enlightened readers with a better scope of the possible future works. In particular, Scattering networks perform the same task for CNNs specifically, and while I see no trivial path to including them in experimentation, I think they help establish the broader interest in this field of work:
See:
* Andreux, M., Angles, T., Exarchakis, G., Leonarduzzi, R., Rochette, G., Thiry, L., Zarka, J., Mallat, S., Andén, J., Belilovsky, E., Bruna, J., Lostanlen, V., Chaudhary, M., Hirn, M. J., Oyallon, E., Zhang, S., Cella, C., & Eickenberg, M. (2020). Kymatio: Scattering transforms in python. Journal of Machine Learning Research, 21(2012), 2012–2017.
* Bruna, J., & Mallat, S. (2013). Invariant Scattering Convolution Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1872–1886. https://doi.org/10.1109/TPAMI.2012.230
* Cotter, F., & Kingsbury, N. G. (2017). Visualizing and improving scattering networks. In N. Ueda, S. Watanabe, T. Matsui, J.-T. Chien, & J. Larsen (Eds.), 27th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2017, Tokyo, Japan, September 25-28, 2017 (pp. 1–6). IEEE. https://doi.org/10.1109/MLSP.2017.8168136
* Mallat, S. (2012). Group Invariant Scattering. Communications on Pure and Applied Mathematics, 65(10), 1331–1398. https://doi.org/10.1002/cpa.21413
* Oyallon, E., Belilovsky, E., & Zagoruyko, S. (2017). Scaling the Scattering Transform: Deep Hybrid Networks. 2017 IEEE International Conference on Computer Vision (ICCV), 5619–5628. https://doi.org/10.1109/ICCV.2017.599
* Oyallon, E., Zagoruyko, S., Huang, G., Komodakis, N., Lacoste-Julien, S., Blaschko, M., & Belilovsky, E. (2019). Scattering Networks for Hybrid Representation Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(9), 2208–2221. https://doi.org/10.1109/TPAMI.2018.2855738
* Trockman, A., Willmott, D., & Kolter, J. Z. (2022). Understanding the Covariance Structure of Convolutional Filters (No. arXiv:2210.03651). arXiv. https://doi.org/10.48550/arXiv.2210.03651
Other Strengths And Weaknesses: See the above discussion.
Other Comments Or Suggestions: Some of the early exposition is a bit hard to follow and lacks explanation. E.g., what is a "law of the data"? The notation makes it unclear when something is a _function_ vs. a weight matrix (e.g., $f_t$ looks like a function but isn't!). Making the notation more consistent in something like "greek for functions" or some other style would help.
Questions For Authors: * Can you run XGBoost and your best proposed methods on more of the full-sized datasets and report per-dataset and aggregate results? Results in both accuracy and runtime would be very valuable.
* It seems like all experiments are MSE only, even though cross-entropy is stated to exist. Please clarify? Such problems should be presented as distinct groups rather than merged if both were done.
* If you saved the results, can you report the sensitivity of XGBoost and your method with respect to the hyper-parameters? e.g., are there "reliable defaults" for your method you observe empirically, or evidence of less fluctuation due to using the approach?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your valuable comments and thoughtful points raised, especially the positive feedback regarding experimental rigour. We address your points below.
**Are results MSE only?:** We apologize if this was unclear. Regression tasks are reported in Table 1 and Figure 2, while classification tasks (cross-entropy loss) are reported in Table 2 and Figure 3. We will revise the text in Section 4.1 to explicitly state that the regression and classification results are presented separately. Additionally, we will also add scatter/box-and-whiskers plots to better display the variability of the models, an ablation study for SWIM vs iid features, and full dataset-wise results in the Appendix.
RFRBoost supports any convex loss function, and the main body of the paper was developed with this in mind. The exact-greedy analytical solutions are a special case for MSE loss, but the gradient-greedy algorithm of Sec 3.3 and Algo. 2 supports general convex loss functions. More importantly, in the gradient-greedy case we argued why previous work on SGD-based representation boosting overlooked the crucial detail of constraining the functional norm of the residual block to unit norm. This is essential to obtain a valid functional Taylor approximation of the risk, and we proved in Theorem 3.2 how to efficiently calculate and incorporate this constraint when using random features. This step provably involves solving a constrained least squares problem (where the functional gradient vector depends on the convex loss), which is where the MSE confusion might have stemmed from.
**References on CNNs and scattering networks:** We thank the referee for pointing out this related field of study. We will incorporate these topics into the related works section in the revised manuscript. We agree that there is no immediate way to include these in experimentation as they concern image data. However, this is an interesting and important direction for future research to build upon the foundational work presented in our paper.
**Code Blocks:** We are glad you found the embedded Python code snippets helpful for clarity and reproducibility. Following your suggestion, we have added corresponding code examples for the gradient calculation in Appendix C.
**Dataset Size and Scalability:**
The decision to truncate datasets at 5000 observations stemmed primarily from computational resource constraints, balanced against the desire to experiment on a large number (91) of diverse datasets. Note that this threshold is substantially larger than the median dataset size (approx. 2400) within the OpenML Benchmark Suite. The primary computational bottlenecks were the E2E ResNet, and the rigorous nested 5-fold cross-validation procedure with Optuna (requiring 5*100 fits per dataset fold per model). Conducting such extensive evaluation on 91 datasets at their full size was computationally infeasible for us. However, we argue that evaluating performance on datasets of this scale is valuable, as this regime is highly relevant for many practical applications and is a common domain for RFNNs and extreme learning machines. Restricting dataset size is not uncommon in published papers (e.g. Bolager et al. 2023); we aimed for transparency by stating it directly in the main paper, rather than leaving this to the appendix.
To further address scalability we will:
* Add an analysis of the computational complexity of Algorithms 1 and 2, noting its scaling behaviour $O(NTD^2p^2d)$ for training.
* Add experiments in the Appendix where RFRBoost and baselines are trained on larger datasets using increasing fractions of the data (e.g., 10k, 25k, 50k, 100k, 200k, 400k samples, Covertype, YP-MSD). These will be displayed in a plot with number of samples on the X axis, and results/time on Y, to demonstrate scaling behaviour.
Scaling RFRBoost efficiently to datasets with millions of rows would likely require distributed computing paradigms such as MapReduce and implementation optimizations beyond the scope of this initial work, which we identify as an important direction for future research.
**Terminology:** We have replaced the phrase "law of the data" with "distribution of the data". The original phrasing, while common in some areas of theoretical statistics, might be less familiar to the broader machine learning audience, and we have revised the manuscript accordingly.
**Sensitivity of Parameters:**
We observed that optimal hyperparameters were highly dataset-dependent for *all* models, including XGBoost, ridge/logistic regression, E2E MLP ResNets, and RFRBoost. This variability motivated our use of a rigorous nested 5-fold CV scheme coupled with Optuna for hyperparameter tuning, ensuring a fair comparison on all datasets. Anecdotally, we observed a general trend where RFRBoost performance tended to improve with increasing depth, whereas the optimal E2E MLP ResNets depth was more varied.
---
We hope these responses adequately address your questions.
---
Rebuttal Comment 1.1:
Comment: Could you post in reply some of the large-scale results on one or two datasets?
Are so many hyper-parameter steps _needed_ for the approach? Do you have the optuna plot/graph for performance over trials?
---
Reply to Comment 1.1.1:
Comment: **Larger datasets:** We have not yet been able to complete the additional larger-scale experiments. This took longer than anticipated, as the evaluation code needed rewriting for these additional experiments and thorough evaluation requires extensive, costly hyperparameter optimization.
However, we can provide some preliminary full-scale results on the largest dataset in the used OpenML repository: id 23517 (classification, approx 100k samples).
Model accuracy vs training set size: https://postimg.cc/HjWgK7Sj
Model fit time vs training set size: https://postimg.cc/Pp3F0Nx8
As seen in the plots, RFRBoost consistently outperforms RFNNs, E2E MLPs and XGBoost across all data sizes up to full size (we use the same fixed hold out test set). There is a healthy margin between traditional RFNNs and our deep random feature-boosted model at this scale, too. RFRBoost also exhibits lower variance across multiple runs compared to the other baseline models, especially E2E MLP ResNets. It is possible that E2E trained weights may eventually outperform random weights at some dataset size threshold. However, this is not yet the case here. We also emphasize that 55/91 of the datasets in our main evaluation were *not* truncated (being below threshold), and truncation does not impact the rigour of the evaluation procedure and statistical tests. RFNNs are often used in medium-sized data regimes, making our rigorous experiments highly relevant. We will nevertheless include some additional results on even larger-scale datasets in our revised manuscript, but we reiterate that this is not the main contribution of the paper. Rather, the main contribution is our novel methodology for constructing deep residual RFNNs for arbitrary random features and rigorous validation.
This training time plot interestingly shows that RFRBoost can train faster than a single-layer RFNN. This is because RFNNs fit the wide random feature layer directly to logits, which is computationally costly. RFRBoost can reduce this by instead training random feature residual blocks and then fitting the comparatively lower-dimensional final representation to the final logits.
**Time complexity correction:** In our previous reply, we unnecessarily simplified the training time complexity of RFRBoost. The detailed serial time complexity is $O(T(ND^2 + NDd + D^3 + dD^2 + Np^2 + NpD + p^3 + Dp^2))$, following Algorithm 2, but is highly parallelisable on GPU due to the heavy use of matrix multiplications. A more detailed breakdown will be provided in the revised manuscript.
**Hyperparameters:** Unfortunately, we did not save the Optuna performance-over-trials data, only the final nested CV scores. The large number of hyperparameters and 100 Optuna trials per fold were primarily chosen for rigorous and fair comparison against XGBoost and especially E2E ResNets, which are notoriously sensitive and perform poorly without thorough tuning. Using at least 100 (optuna) random search trials is generally considered standard practice to our knowledge. As for RFRBoost, we generally found that the most important hyperparameter is the l2 regularization, and the number of boosting iterations. We found the other parameters (boosting learning rate, SWIM scale) to have a marginal effect on performance. | Summary: The paper introduces Random Feature Representation Boosting (RFRBoost), a novel approach that combines random feature neural networks (RFNNs) and gradient boosting theory to construct deep residual neural network models. The main idea is to build a deep ResNet structure using random feature layers that explicitly approximate the functional gradient of the network representation. This technique allows the resulting models to maintain computational efficiency while offering theoretical guarantees and improved performance.
Claims And Evidence: Overall, the claims made in this papers are supported well. Yet, additional clarification (or empirical validation) may be beneficial:
1. Empirical validation primarily focuses on tabular datasets and one synthetic point-cloud dataset. This leaves unaddressed the applicability and performance on more structured or complex data (e.g., image, sequential, time-series).
2. The paper hypothesizes briefly that this improvement may arise due to the constraint on the functional gradient norm, but it lacks deeper analytical or empirical exploration of why this occurs.
3. While experimental evidence clearly supports improved computational efficiency relative to MLP ResNets, the scalability to larger or high-dimensional datasets remains unclear.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I did not check the correctness of the proofs in the Appendix. However, I have gone through the statement and claims in the main text of the paper, which looks good to me.
Experimental Designs Or Analyses: The experimental design and analyses presented in the paper were thoroughly reviewed for soundness and validity. Overall, the methodology employed by the authors is strong and convincing, particularly with their selection of datasets. The authors utilized a broad and diverse set of 91 benchmark tabular datasets sourced from OpenML, providing a comprehensive basis for evaluating the proposed method. Furthermore, they adopted a rigorous nested 5-fold cross-validation strategy, effectively minimizing biases related to hyperparameter tuning and yielding reliable estimates of model generalization. Bayesian optimization with Optuna for hyperparameter tuning further enhances the rigor of their evaluation process.
Supplementary Material: No.
Relation To Broader Scientific Literature: The key contributions of Random Feature Representation Boosting (RFRBoost) relate closely to existing literature on random feature neural networks (RFNNs), residual neural networks (ResNets), and gradient boosting theory. It integrates these approaches with gradient boosting concepts, which traditionally optimize ensembles of weak predictors (e.g., AdaBoost, Gradient Boosting Machines, XGBoost), but here, uniquely, optimize residual blocks within deep networks. Conceptually, the method builds upon recent theoretical advances that connect ResNets to gradient boosting and neural ordinary differential equations, generalizing these insights to deep RFNN architectures.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Although the method creatively combines random feature methods and gradient boosting, the primary conceptual ingredients—random features, residual blocks, and gradient boosting—are individually well-established. Thus, the originality of the contribution primarily arises from integration rather than fundamentally novel ideas.
The evaluation predominantly focuses on tabular datasets, limiting the significance of the method for broader machine learning contexts. Without evidence on structured datasets such as images, graphs, or sequential data, the broader significance of the proposed method remains unclear.
The surprising empirical advantage of gradient-greedy over exact-greedy approaches is not deeply analyzed theoretically or experimentally. Thus, despite its significance, the paper misses the opportunity to clarify why this approach works better, limiting conceptual clarity and interpretability.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your thorough and positive assessment of our work and for your valuable comments. Below, we address your questions:
**Empirical validation primarily focuses on tabular data:** The main question we set out to answer in our paper was whether one could build deep random feature neural networks (RFNNs) that significantly improve the performance of single-layer RFNNs while retaining their computational benefits. Since traditional RFNNs, using dense random projections, are primarily designed for tabular data, our paper focused on this domain. A careful study on tabular data is a necessary first step before looking at structured data like images or sequences, that would necessitate more complex random feature structures (e.g., random convolutional kernels or recurrent structures). Integrating such major architectural changes introduces complexities that make it difficult to address thoroughly within a single theory and within the scope of this single paper. We therefore consider this a valuable avenue for future research.
**Q: The paper hypothesizes briefly that this improvement may arise due to the constraint on the functional gradient norm, but it lacks deeper analytical or empirical exploration of why this occurs.**
We appreciate the request for further clarification, however we are not quite sure what "improvement" here is referring to. We give two possible interpretations:
1: Regarding the empirical observation in Table 1 where the gradient-greedy approach slightly outperforms the exact-greedy approach for regression tasks, we hypothesise that this could be due to an implicit regularisation effect. The gradient-greedy method, by focusing only on aligning with the functional gradient direction (under a norm constraint), might avoid overfitting compared to the exact-greedy method which directly minimizes the loss at each step. Providing a rigorous theoretical analysis of this phenomenon is challenging and would likely require fundamentally new tools, making it an interesting direction for future theoretical work.
2: Regarding why our gradient-greedy approach succeeds while Suggala et al. (2020) observed it performed worse for medium-sized SGD-trained networks, we refer to our discussion at the end of Section 2.2. The crucial difference lies in how the gradient step is implemented. Our gradient-greedy approach (Algorithm 2 and Theorem 3.2) explicitly solves an optimization problem constrained to find the unit-norm functional direction of the residual block $g$ that best aligns with the negative functional gradient $-\nabla \widehat{R}$ (see Theorem 3.2). This ensures that the learned residual block $g$ points in the correct direction in function space, allowing for a valid functional first-order approximation used in boosting (Eq. 4). In contrast, standard SGD training of a residual block to minimize the functional inner product $\langle g, \nabla_2 \widehat{R} \rangle_{L_2^D({\mu})}$ does not inherently enforce a constraint on the functional norm. We believe incorporating an explicit norm constraint, focusing on finding the optimal *direction*, is key to the success of the gradient-greedy strategy in our random feature setting.
**Dataset Size and Scalability:** Please see `Dataset Size and Scalability' in response to Reviewer XEnQ.
**Originality and Contribution**
We respectfully disagree that our contribution is merely an integration of established components. While random features, residual blocks, and boosting are known concepts, successfully constructing *deep* RFNNs presents unique and significant challenges, since naively stacking random layers has been observed to degrade performance. Our key contribution is providing the first principled framework for building deep RFNNs using *any general* random feature type by rigorously using ideas from gradient representation boosting (which differs from classical gradient boosting). The use of random features within this framework is not just an implementation detail; it is fundamental to enabling the computational efficiency and theoretical analysis central to RFRBoost. This includes the closed-form solutions of Theorem 3.1, and the theory leading to the quadratically constrained least squares problem of Theorem 3.2.
**Reviewer Comment: No supplementary material:** We would like to clarify that supplementary material containing the Python code for our model implementations (RFRBoost, baselines) and experimental procedures was provided with our submission. We commit to making this code publicly available in a repository upon acceptance to ensure full reproducibility.
---
We hope these clarifications adequately address your comments and further highlight the contributions of our work. | Summary: This paper studies the problem of boosting random features and its connection to ResNets. At stage $t$, a new group of random features $f_t = f_t(\Phi_{t-1})$ are generated and added to the previous stage's features $\Phi_t = \Phi_{t-1} + \Delta_t f_t$ where the matrix $\Delta_t$ gets optimized and the readout weights are also optimized. The authors study constraining $\Delta_t$ in various ways, optimization routines are derived for square and convex losses at the readout, and some theory is provided. In the last section, the method is compared to XGBoost, standard random feature methods, an end-to-end trained ResNet, and ridge regression applied to OpenML regression and classification tasks.
Claims And Evidence: A number of tables are shown comparing the mean RMSE and fit times (across tasks) for the different methods. In general, the proposed method is close to as fast as XGBoost and shows similar performance. However, I have some issues with how the data are presented:
* Fig 2 and 3 "critical difference diagrams" are not discussed within the text as far as I can see. I am not familiar with this method of presenting data. This needs to be explained, since I don't know what I'm seeing here.
* Tables 1 and 2 are presented without any error bars on the RMSE or time numbers. The reader cannot evaluate whether these differences are significant or not. Since these numbers summarize performance over a large number of datasets (91 across both tables), we don't know how things vary here.
* Rather than summary tables like 1 & 2, it would be better to see the entire distribution of RMSE and fit time on all tasks. This could be shown with a plot of accuracy versus method with all datasets plotted as points and perhaps a box and whisker on top. I'm not familiar with how regression targets are generated in the OpenML datasets, but if these are of drastically different scales then summarizing this with mean RMSE will be misleading.
* The methods are meant to be scalable, however the authors restrict their study to datasets with less than 200 features and restrict themselves to 5000 observations per dataset. I'm not sure this is justified! If the fit times are as small as are reported, it seems reasonable to at least test a few larger datasets.
Methods And Evaluation Criteria: The method is only used with SWIM random features. These seem pretty specialized. How much do the results depend on this? These features depend on the previous layer features and data, but many people are interested in random feature models with, say, iid random Gaussian weights. Can you comment on whether the method works with these?
I think the OpenML datasets that are used seem fine. The point cloud separation task is pretty standard from the classification literature, but other "point cloud separation" data could be chosen as well.
I have listed other critiques of the evaluation above.
Theoretical Claims: I checked the math for the least-squares problems that were solved for Theorem 3.1; I didn't find any issues. For Theorem 3.2, it could simplify matters to write the expression for the optimal $\Delta$ in terms of the pseudoinverse. If you change your definitions $F \to F^T$ and $G \to G^T$, then you get that
$$\Delta = \frac{\sqrt{n}}{ \| G \|_F} G F^\dagger$$
which is pretty evident from the minimization problem.
I'm not sure how useful the theoretical guarantee (Thm 3.4) is. I'm not sure if this is vacuous or not. How does this compare to other methods, for instance those in the Suggala et al (2020) paper that is referenced?
Experimental Designs Or Analyses: Listed above in claims/evidence
How were the methods implemented?
Supplementary Material: I skimmed it.
Relation To Broader Scientific Literature: There seems to be a pretty good review of the existing literature, at least to my knowledge.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: My main concern with the paper is that I think the presentation of the experimental results is weak. If strengthened, I think the paper would be much better supported. Perhaps too much of the paper is taken up by Section 2 discussing all the various boosting methods.
Other Comments Or Suggestions: I have a number of small issues that I hope the authors will address here:
* Notation $\Delta_i$, $\partial_i$ is confusing and not standard to me. I believe the authors are indicating "in terms of the second argument" but should explain this notation.
* Line 111 right col: $W_t$ shape $C \times d$ isn't consistent with the $D \times d$ you use later.
* Lines 204-207 left col: Unclear what you mean by "diagonal matrix" for $\Delta_t$ since it is rectangular. In general, how do $D$ and $d$ compare?
* Lines 232-233 left col: The language "sandwiched least squares" is evocative, but these solutions are known as "generalized Sylvester equation" solutions of the form $\sum_i A_i X B_i = C$ for unknown $X$. This could be worth mentioning.
* (throughout) I found the language "find amount of say" kind of distracting and unnecessary. I think you are just doing a line search; you might rephrase as "solve line search".
Questions For Authors: Given above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer fxoT for their careful reading and constructive feedback. Below, we provide detailed responses to each of the points raised.
**On Critical Difference Diagrams:** To facilitate a meaningful comparison across all baseline models and RFRBoost variants, we use the Wilcoxon signed-rank test with Holm correction, as mentioned in Sec. 4.1 Evaluation Procedure. This statistical test assesses significant pairwise differences between models, and the average relative ranks and statistically significant groupings are presented visually in critical difference diagrams (Figures 2 and 3). If two models are connected by bars, then the difference between the two methods is not statistically significant. This is a well-established methodology for comparing multiple algorithms across multiple datasets [Demšar, 2006; Benavoli et al., 2016] and is standard practice within the broader statistical learning and data mining literature (see also the enthusiasm of Reviewer JjfD regarding the use of Wilcoxon signed-rank test). We aim to explain this in more detail in the revised manuscript.
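As an illustrative sketch (not the authors' code) of the multiple-comparison step underlying such diagrams, the Holm step-down correction applied to pairwise Wilcoxon signed-rank p-values can be written in pure Python; the function name and example p-values here are hypothetical, and the raw p-values are assumed to come from a routine such as `scipy.stats.wilcoxon`.

```python
def holm_adjust(pvals):
    """Holm step-down adjustment: sort p-values ascending, multiply the
    i-th smallest (0-indexed rank) by (m - i), enforce monotonicity by
    carrying forward the running maximum, and cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted
```

Two models are then connected by a bar in the critical difference diagram when their Holm-adjusted p-value exceeds the chosen significance level.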
**Presentation of Results Tables:** We agree that presenting only mean scores limits the interpretation of the results. To address this, we will incorporate your suggestion of creating scatter plots overlaid with box-and-whiskers plots to visualize the distribution and variability of performance across all 91 datasets for each method. The critical difference diagrams (Figures 2 and 3) complement this by indicating which observed differences in relative rank are statistically significant. Furthermore, we will add a section to the Appendix with complete dataset-wise results for each model. All regression targets have been normalized to have mean 0 and variance 1 (Section 4.1), hence the results should be relatively comparable.
**Dataset Size & Scalability:**
Please see `Dataset Size and Scalability' in the response to Reviewer XEnQ.
**SWIM initialization vs iid:** We performed experiments using both iid random features and SWIM initialized random features. We initially presented only the SWIM results in the main paper due to slightly superior empirical performance and the 8 page space constraint. In the revised manuscript, we will add an ablation study with pairwise plots directly comparing RFRBoost with iid features versus SWIM features, quantifying the (small) difference in performance. For example, SWIM features only beat iid features 51 out of 91 times.
**How were the methods implemented?** We will add this missing detail to the revised manuscript. All baseline models were implemented in PyTorch, and RFRBoost in particular was implemented following Algorithms 1 and 2. We used L-BFGS as the minimizer for the log-likelihood loss, and standard torch linear algebra routines for the (sandwiched) least squares, following our derived formulas in the Appendix. E2E ResNets were trained with Adam as described in Section 4.1. The full code is currently contained in the supplementary material and we will additionally release it in a public repository upon acceptance.
**Pseudoinverse:** Thanks for this remark. We will point out the alternative expression in terms of the pseudoinverse in the revised version.
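To make the reviewer's suggested closed form concrete, here is a minimal NumPy sketch; the function name, the choice of $n$ as the number of columns of $F$, and the matrix shapes are our assumptions for illustration, not necessarily the paper's exact conventions.

```python
import numpy as np

def optimal_delta(G, F):
    """Reviewer's proposed closed form: Delta = sqrt(n) / ||G||_F * G F^+,
    where F^+ is the Moore-Penrose pseudoinverse (illustrative sketch)."""
    n = F.shape[1]  # assumption: columns index the n samples
    return np.sqrt(n) / np.linalg.norm(G, "fro") * G @ np.linalg.pinv(F)
```

A quick sanity check: when $F$ is the identity, the formula reduces to rescaling $G$ to Frobenius norm $\sqrt{n}$, consistent with a unit-norm constraint on the residual direction.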
**Theoretical Regret Bound:** Our regret bound in Theorem 3.4 is of the same order as similar results in the literature, e.g. Suggala et al (2020) Corollary 4.3. We will make this clearer in the revised manuscript.
Minor comments:
**Notational confusion: $\nabla_i, \Delta_i, \partial_i$:**
Our original motivation for using $\Delta_t$ stemmed from the connection to the discrete time step in the Euler discretization view of ResNets (Eq. 1). Recognizing the ambiguity, we have replaced $\Delta_t$ (the linear map within the residual block) with $A_t$ throughout the paper. We have also ensured that the text in Section 2 and Appendix B/C clearly specifies when partial derivatives or gradients are taken with respect to specific function arguments.
**Line 111 $W_t$ shape:** Thanks for spotting this. We will correct the dimensions in the revised manuscript.
**Lines 204-207 'diagonal $\Delta_t$':** We have improved the text to make it clearer that we only consider $\Delta_{\text{diag}}$ and $\Delta_{\text{scalar}}$ in the case when hidden size is equal to feature dimension size $D=p$. The dense case is free of this restriction, allowing for rectangular matrices.
**Sandwiched Least Squares:** We have modified the paper to properly address the 'sandwiched least squares' problems as Generalized Sylvester Equations.
**Usage of 'amount of say':** While "amount of say" is standard terminology used in the classical boosting literature, notably in AdaBoost [Freund & Schapire, 1997], we understand it may be unfamiliar to a broader audience. We have adopted your suggestion and replaced it with "line search step size" or similar phrasing for improved clarity.
---
We hope these responses and revisions address your concerns adequately.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns in a good way. I look forward to seeing the more detailed experimental results. Will it be possible to see these in a preliminary revision before I finalize my score?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response.
Some preliminary larger-scale experiments have now been provided in our response to Reviewer XEnQ.
Additionally, linked below are the two requested box-and-whiskers plots overlaid with scatter points:
Regression: https://postimg.cc/Q98d0Y9d
Classification: https://postimg.cc/tsvkX8bL
Due to the large number of diverse datasets, these plots unfortunately become quite dense and potentially less readable and informative than hoped. Therefore, while we will include them in the revision, we still believe the rigorous critical difference diagrams (Figures 2 and 3), combined with the full dataset-wise results tables (which we will add to the Appendix), offer a clearer and statistically more informative summary of the comparative results.
---
We hope that providing these additional plots and results, alongside addressing your previous comments, proves helpful for your final assessment and score. | null | null | null | null | null | null |
Automated Hypothesis Validation with Agentic Sequential Falsifications | Accept (poster) | Summary: This paper presents POPPER, an agent framework inspired by Karl Popper's principle of falsification that can automatically validate hypotheses in a statistically rigorous manner. POPPER validates a hypothesis by conducting analyses or experiments for each sub-hypothesis, calculating p-values and e-values, and determining whether to reject the global null hypothesis. Experiments were conducted on two benchmarks: TargetVal, which addresses genotype-phenotype hypotheses in biology, and DiscoveryBench, which spans six domains including sociology, biology, humanities, economics, engineering, and meta-science. Results show that the proposed POPPER succeeds in controlling the Type-I error under 0.1 across all datasets and also achieves significant power improvements over various baselines. Moreover, a human study shows that POPPER and humans perform equally well on selected tasks, with POPPER being more efficient, spending less time while conducting more statistical tests.
## update after rebuttal
Thank you for your response! The observations on the success mode are within expectation. Segmenting the hypothesis into a sequence of more verifiable sub-hypotheses is helpful, with rigorous statistical tests providing sufficient evidence for validation. It is good to add some rigorous experiments on how these two elements respectively affect the result, just as POPPER-NoReleCheck presents the result of removing the relevance checker. I decided to maintain my original score.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the research problem.
Theoretical Claims: Yes, I checked the correctness of the proofs for theoretical claims and there are no issues.
Experimental Designs Or Analyses: Yes, I checked the soundness of the experimental designs and analyses.
Supplementary Material: Yes, I reviewed all textual parts of the supplementary material.
Relation To Broader Scientific Literature: This paper proposes to validate hypotheses via statistical tests, while previous work mainly utilize natural language directly for hypothesis validation. Such method leads to more rigorous validation and can effectively reduce hallucinations generated by LLMs, which is a solid contribution of this paper.
Essential References Not Discussed: This paper proposes to automatedly validate hypotheses via Popper's principle of falsification. However, similar idea of utilizing falsification for scientific discovery was first discussed in [1], which should be an essential reference but not mentioned in this paper.
[1] Liu et al., 2024. AIGS: Generating Science from AI-Powered Automated Falsification. arXiv:2411.11910.
Other Strengths And Weaknesses: **Strengths**
1. This paper proposes to validate hypotheses automatically via statistical tests, which was not involved in previous work. Such a method successfully grounds the process of hypothesis validation in numerical analysis instead of purely language-based analysis, which is more rigorous and reliable, and helps reduce hallucinations in research tasks. Therefore, this paper may greatly advance the paradigm of research agents, and I believe it can be a great contribution to the community.
**Weaknesses**
1. This paper conducts analysis on potential failure modes of the proposed POPPER. However, a human study on the patterns by which POPPER outperforms the baselines is not presented. The absence of such analysis may lead to an incomplete understanding of why and how POPPER actually works.
2. The proposed statistical-test-based analysis is powerful; however, it is questionable whether it can be applied to more research areas and hypotheses. For example, for mathematical theorem-proving tasks or linguistic research, statistical analysis may not be applicable.
Other Comments Or Suggestions: 1. Typo: several left double quotation marks are incorrectly written as right double quotation marks in both text and tables.
2. Nitpick: the use of * to indicate the best results in Table 3 and Table 4 leads to the vertical misalignment of figures. (It is my personal preference that figures should be aligned, and this nitpick can simply be neglected.)
Questions For Authors: 1. Analysis for the action mode of POPPER is now only conducted on failures instead of successes. Can authors conduct analysis on how POPPER outperforms the baselines? Is it because that it can propose better experiments or the explicit calculation of p-values and e-values is the most important factor? Such analysis can be helpful to better understand the mechanism of POPPER.
2. As described in the paper, the design agent is presented with the details of how to conduct the experiment in a given domain. Can the authors elaborate on to what extent such assistance is provided to the agent, i.e., only a brief description of how the experiment might be conducted / a step-by-step explanation of how to conduct the experiment but without coding details / all detailed code or functions needed for the experiment? Furthermore, are all possible experiments provided as assistance, or only a subset of them? Do agents strictly follow the given instructions, or can they come up with experiments that are not present in the assistance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback! We respond to the specific comments below:
> **“Can authors conduct analysis on how POPPER outperforms the baselines?”**
We appreciate the reviewer’s insightful suggestion. Following the reviewer’s recommendation, we conducted a detailed manual examination and identified three main reasons for POPPER's superior performance:
1. The sequential experiments employed by POPPER significantly improve power over methods like CodeGen, ReAct, and Self-Refine, which use only 1-2 experiments. In particular, many of the hypotheses cannot be directly observed, so a direct attempt at validating the hypothesis often fails, whereas POPPER allows a sequence of carefully designed implication tests that can find more meaningful results.
2. The self-refinement and relevance-checker components also help refine the experiment design, whereas CodeGen, ReAct, and Self-Refine use the most obvious experiment, which can contain bias and lack rigor.
3. With its rigorous e-value-based approach, POPPER accounts for the sequential dependencies of the falsification experiments and safely aggregates p-values from multiple experiments. This enables POPPER to achieve better Type I error control than the Fisher combined test and the LLM-likelihood baseline: the Fisher combined test is not well-calibrated, and LLM-estimated scores often exhibit bias.
We will add these insights to our revised manuscript.
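To make point 3 concrete, here is a minimal sketch of how p-values from sequential sub-hypothesis tests can be calibrated into e-values and safely aggregated, in the spirit of the safe-testing literature; the power calibrator, `kappa`, and rejection threshold below are illustrative assumptions, not POPPER's exact implementation:

```python
def p_to_e(p, kappa=0.5):
    # Power calibrator e = kappa * p^(kappa - 1): a valid p-to-e
    # calibrator for any kappa in (0, 1). Small p-values map to
    # large e-values (evidence against the null).
    return kappa * p ** (kappa - 1)

def sequential_e_test(p_values, alpha=0.1):
    # Multiply e-values from conditionally independent sub-tests; by
    # Ville's inequality, stopping as soon as the running product
    # reaches 1/alpha keeps the overall Type I error at most alpha.
    e_product = 1.0
    for p in p_values:
        e_product *= p_to_e(p)
        if e_product >= 1.0 / alpha:
            return "reject", e_product
    return "fail to reject", e_product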
> **“The design agent is presented with the details of how to conduct the experiment in a given domain. Can authors elaborate on the extent of such assistance provided to the agent?”**
Thank you for this important clarification request. To clarify, we did not provide explicit experimental details or instructions to the agent for each domain. The only domain-specific information provided was: “You are an expert specialized in the field of {domain},” as outlined in Supplementary Notes Listing 2. Consequently, the LLM agent designs experiments solely based on its internal world knowledge without additional domain-specific guidance.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! The observations on the success mode are within expectation. Segmenting the hypothesis into a sequence of more verifiable sub-hypotheses is helpful, with rigorous statistical tests providing sufficient evidence for validation. It would be good to add rigorous experiments on how these two elements respectively affect the result, just as POPPER-NoReleCheck presents the result of removing the relevance checker. I have decided to maintain my original score.
Careful statistical control of Type-I errors and power analysis allows balanced understanding of the system's reliability, and the authors find that an instantiation of POPPER using GPT-4o significantly outperforms baselines on a set of static datasets as well as interactive simulations.
In a human expert study, their instantiation also matches the power and Type-I error rates of computational biologists and bioinformaticians on hypothesis testing on biology datasets.
Claims And Evidence: The core results of Section 4.1 appear supported by their results for their chosen settings.
The claim "POPPER compares with human experts" should be appropriately caveated with limitations given the small sample size of human experts (only 9 experts compared) and limited complexity of the settings tested (the static datasets with clean variable headers are likely easier than situations one might encounter in reality).
Methods And Evaluation Criteria: - POPPER as a framework is nicely designed and feels like a flexible framework for hypothesis validation.
- The use of Type-I error and Power as success metrics are sensible and useful.
Theoretical Claims: I did not get to check the theoretical claims or proofs, but these would be important for complete assessment of the paper.
Experimental Designs Or Analyses: - The experimental setup of Section 3 makes sense to me, i.e. the choice of static datasets as the environment for ease of implementation. In future work, I would be excited to see POPPER attempt something like DiscoveryWorld (Jansen et al) which gives a different flavor of experimentation than static data analysis.
Supplementary Material: I have skimmed the code and appendix, finding it comprehensive and useful.
Relation To Broader Scientific Literature: This paper fills a useful role in tying together work in hypothesis generation and experiment execution (already discussed in the appendix "Full related works" section), which are typically studied separately and lack a statistically-rigorous framework to tie everything together. A structured framework like POPPER will grow in usefulness as both hypothesis generation methods and experiment execution methods (as well as the underlying models) improve.
Essential References Not Discussed: In "LLM for hypothesis testing and experiments" (appendix), the authors may consider including literature on LLMs for automated ML experimentation, such as RE-Bench (Wijk et al) and MLE-bench (Chan et al), which pose open-ended problems for LLM agents to attempt making progress on.
Other Strengths And Weaknesses: Strengths
- The POPPER framework is very clearly presented and has an elegant form, which appears to broadly support any sort of hypothesis testing. I like the sequential application of "Experiment Design" and "Experiment Execution" on sub-hypotheses to yield evidence that can be accumulated to evaluate the main hypothesis.
- Impressive results in Table 3 as well as on the human experts study.
- I liked the additional analyses and ablations the authors provided: the comparison between different LLMs, ablation results of NoReleCheck (and human annotations), error analysis, and especially the comparison to human baselines was an important and useful addition.
Weaknesses
- When reading the paper, I was looking for a section on Limitations. I found this in the appendix as part of the supplementary material (thank you for including this!), but I think it is important to mention these limitations in the main paper, even if just deferring readers to the appendix. As it stands, the main paper sounds like POPPER can be treated as a "solve all" method for automating science, but as I understand from the appendix there are important caveats to be mindful of, e.g. "Type-I error v.s. false discoveries". Acknowledging this in the main text would help readers have a more measured understanding of the work.
Other Comments Or Suggestions: None.
Questions For Authors: None. Thank you for your paper!
Ethical Review Concerns: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback! We address each point in detail below:
> **“The claim "POPPER compares with human experts" should be appropriately caveated with limitations.”**
We appreciate the reviewer highlighting this important point. We agree and will explicitly note the caveats related to the small sample size and differences from real-world complex settings in the revised manuscript.
> **“The authors may consider including literature on LLMs for automated ML experimentation, such as RE-Bench (Wijk et al.) and MLE-bench (Chan et al.).”**
We thank the reviewer for suggesting additional relevant references. We will incorporate these references in our revised version.
> **“The main paper sounds like POPPER can be treated as a "solve all" method for automating science, but as I understand from the appendix, there are important caveats to be mindful of.”**
We thank the reviewer for this valuable feedback. We agree that clearly presenting the limitations is crucial to ensure responsible usage and prevent potential misuse. We will add detailed discussions on failure modes and Type I error considerations in the main text in the revised manuscript. | Summary: The paper introduces a framework called Popper, which leverages Large Language Models to validate hypotheses specified in natural language. The proposed framework makes use of two LLM agents; one decomposes the hypothesis of interest into smaller sub-hypotheses and proposes experiments to test them, and another that executes the experiments designed by the first agent. The results of these experiment executions are combined through the use of e-values, derived from p-values produced by the analysis performed by the second LLM agent. A theorem is provided that justifies that the method used to combine the e-values results in a controlled Type-I error for the whole system. The framework is experimentally validated using some recently proposed benchmarks spanning six domains, where it is shown to be the only effective approach. Moreover, a comparison is provided with (human) bioinformaticians, with the proposed framework behaving similarly to the humans while completing the analysis an order of magnitude faster.
Claims And Evidence: * The paper claims to investigate a novel problem setting---validating hypotheses specified in natural language. They discuss prior literature that investigates the most similar problem settings. As far as I am aware, this claim is true.
* It is claimed that the proposed framework is able to design and execute any type of experiment, including laboratory procedures, simulations, or data analyses. This claim is not justified. No evidence is provided demonstrating that the proposed agents are able to design experiments that can be carried out in a laboratory setting. Moreover, not enough details are provided to determine whether the evaluation includes simulation-based experiments. My understanding of the experimental evaluation is that only data analyses are included, but I am not certain that simulations are excluded.
* The manuscript claims that the Popper framework is able to maintain statistical rigor through the use of a novel sequential testing framework that aggregates evidence from tests. The evidence that this part of the framework is correct comes in the form of Theorem 4, which seems to be true.
* It is claimed that Popper achieves Type-I error control, substantiated by empirical evidence gathered using the DiscoveryBench and TargetVal-IL2 datasets. While I think the experiment referenced here is interesting, I think this claim needs to be toned down. Control of Type-I errors is usually established formally; one has the guarantees that, as long as the assumptions of the test are satisfied, the Type-I error is controlled. This is not the case here.
* It is claimed that Popper has significant power improvements, substantiated by empirical evidence gathered using the DiscoveryBench and TargetVal-IL2 datasets. For reasons similar to those for the Type-I error claim, I think this one needs to be toned down slightly; the power is determined either analytically or through, e.g., Monte Carlo simulations that converge towards the true power value. Moreover, I disagree with the exclusion of most methods from this comparison on the basis of poor Type-I error control.
* Popper is demonstrated to be comparable with human experts. This claim is supported by a user study based on an empirical comparison with nine bioinformaticians. I think the claim, as it pertains to power and Type-I error comparisons, is relatively well supported. The comments about efficiency gains should be slightly more nuanced; the improvement in time is positive, but I think one could justifiably interpret requiring more code and hypothesis tests as a decrease in efficiency.
Methods And Evaluation Criteria: The method is only presented at a high level of detail; it appears to consist of a pipeline of prompts coupled with standard tools from the e-values literature. It is not clear how the method (i.e., prompts) were constructed. The choice of benchmark datasets is sensible.
Theoretical Claims: The theorem seems to be correct, but I am concerned about novelty here. From what I can tell, this is a restatement of a fairly central result in the e-values literature. See, e.g., Ramdas et al. (2023), who provide an overview of this area and present essentially the same reasoning for why e-values can be used in this way.
Ramdas et al. "Game-Theoretic Statistics and Safe Anytime-Valid Inference". In Statistical Science, 2023. https://doi.org/10.1214/23-STS894
Experimental Designs Or Analyses: I think the design of the experiments for determining the Type-I error and Power of the methods is reasonably sound, and I think they allow for making interesting claims. However, I would have appreciated more discussion about the limitations of the conclusions given the source of the data. In particular, the extent to which the hypotheses included in the evaluation are already present in the pre-training data of the LLMs is unclear. As such, it is not obvious how well the proposed framework will generalise to novel discovery problems.
The comparison with human experts, which is based on a small sample size, is still interesting. There are quite limited details provided in the main paper about how this study was carried out, but, as with other experimental results, the uncertainties in the estimates are clearly quantified.
Supplementary Material: I looked at the proof in the supplemental material carefully. I skimmed over some other parts, such as the experimental setup for the user study and the expanded discussion on the relation to previous work.
Relation To Broader Scientific Literature: This submission addresses a novel formulation of the problem of using machine learning for scientific discovery. Rather than suggesting hypotheses, or focusing on executing experiments, the emphasis is on building a complete framework that can take a hypothesis and falsify it.
Essential References Not Discussed: I have no concerns in this area.
Other Strengths And Weaknesses: A major strength of this paper is the first demonstration of a framework that can tackle the problem of falsifying realistic natural language hypotheses.
The major weakness of this paper is the substantial amount of overclaiming. This cannot be overlooked just because of the substantial strength mentioned previously. The way the paper is currently written has the potential to be very misleading, and the claims should be toned down and appropriately caveated. For example, I think the level of emphasis on rigor and statistical guarantees of the proposed pipeline would likely lead some readers to believe that the LLM components of the pipeline are guaranteed to correctly identify and implement the appropriate hypothesis tests for each sub-hypothesis. It should be made much more clear that no such guarantees are provided.
Other Comments Or Suggestions: I think the experimental analysis could put a lot more emphasis on determining how robust the framework is. In particular, undertaking intrinsic analyses of the individual components of the system to determine where failures are introduced would be valuable.
Questions For Authors: Is there a distinction between hypothesis validation and hypothesis testing in the context of this work?
What are the two Power lines in the plot in Fig 4(2)?
Code Of Conduct: Affirmed.
Overall Recommendation: 1
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Research Integrity Issues (e.g., plagiarism)', 'Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)']
Ethical Review Concerns: There are two ethical issues with the paper:
1) The authors claim to develop a novel statistical testing framework, but this framework already exists. Key papers involved in the development of this framework were cited in the submission without acknowledging that these papers had already developed the proposed statistical framework. Moreover, when confronted with this, the authors now lie that the original manuscript acknowledges the prior work developed the framework.
2) The paper contains a human study, but there is no discussion of obtaining ethics approval. | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's thoughtful feedback and acknowledgment of our work's value in falsifying natural-language hypotheses. Below, we respond in detail to the specific points raised:
> **"Proposed framework claims to design and execute any type of experiments (laboratory, simulations, data analyses)"**
We appreciate the reviewer highlighting this point. Our theoretical framework is indeed valid across various experiment types. For wet-lab experiments, the data is collected on the fly, so it naturally satisfies the assumption stated in Section 2. We thus emphasize the broad scope of our framework. However, due to practical constraints (e.g., cost and time), we instantiated our approach specifically through data-analysis experiments in our large-scale evaluation. We will revise the text to explicitly acknowledge this practical limitation.
> **"Power determined analytically or via Monte Carlo simulations that converge toward true power values"**
Thank you for raising this point. Currently, our power analysis is conducted using Monte Carlo simulations with five random seeds, and we reported the mean and standard deviations in our result tables. While we acknowledge that additional runs could further reduce variance and improve convergence toward the true power, scaling up significantly is not feasible due to computational constraints.
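As a sketch of what such a Monte Carlo power estimate looks like (the trial function and run count below are hypothetical placeholders, not our actual pipeline), power is simply the fraction of independent runs under the alternative in which the procedure rejects the null:

```python
import random

def estimate_power(run_trial, n_runs, seed=0):
    # Power is estimated as the fraction of independent runs in which
    # the procedure rejects the null; the standard error shrinks as
    # sqrt(p * (1 - p) / n_runs), so more runs tighten the estimate.
    rng = random.Random(seed)
    rejections = sum(1 for _ in range(n_runs) if run_trial(rng))
    return rejections / n_runs

# Hypothetical trial that rejects the null 80% of the time.
power = estimate_power(lambda rng: rng.random() < 0.8, n_runs=1000)
```

With only a handful of seeds, as in our tables, the estimate carries nontrivial variance, which is why we report standard deviations alongside the means.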
> **"Disagreement with exclusion of methods lacking proper Type I error control"**
We respectfully clarify that since methods lacking proper Type I error control can inflate false positives, including them in power analysis would be an unfair comparison. We also stress the importance of valid error control as the first-order criterion. We welcome additional elaboration from the reviewer on the disagreement and are eager to discuss this further.
> **"Concern about novelty regarding the theorem presented"**
As explicitly stated in our original manuscript, “Theorem 4 is a standard result following Grunwald et al. (2020), included in Appendix A.2 for completeness.” Our novelty claim does not rest on the safe testing framework itself but on leveraging this framework to instantiate a sequential falsification framework to enable practical and rigorous validation of abstract, free-form hypotheses in LLM-driven experiments. We will clarify this distinction explicitly in the revision.
> **"Extent to which evaluated hypotheses might be present in LLM pre-training data is unclear"**
We appreciate this insightful comment. Our validation approach strictly relies on aggregated e-values derived from statistical analyses grounded in data. In our experiments, as detailed in section 4, we used data permutations for controlling Type I error, which ensures the resulting data is independent of the pre-training data. This experiment setup enforces that any discovery must be purely data-driven and not reliant on the agent's prior knowledge. We will clarify this critical detail in the revision.
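A minimal sketch of this permutation idea (function and variable names are illustrative): shuffling the outcome column independently of the features puts the dataset under the null by construction, so any "discovery" on the permuted data is a false positive regardless of whether the original data appeared in pre-training corpora.

```python
import random

def permuted_null_dataset(features, outcomes, seed=0):
    # Shuffling outcomes independently of features destroys any real
    # feature-outcome association; the rate at which a procedure still
    # claims a discovery on such data estimates its Type I error.
    rng = random.Random(seed)
    shuffled = list(outcomes)
    rng.shuffle(shuffled)
    return list(zip(features, shuffled))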
> **"Major weakness is substantial overclaiming"**
We thank the reviewer for this important feedback. Although we initially aimed to mitigate overclaiming by carefully delineating assumptions and providing comprehensive failure-mode analyses, we recognize the need for further clarity. In the revision, we will rigorously address ambiguous claims and explicitly highlight caveats to avoid potential misunderstandings. In particular, we will emphasize that the error control relies heavily on our assumptions, and several design components, such as relevance checker and using LLMs with strong capabilities, are intended to make sure the assumptions are satisfied. In practice, users should be careful in judging whether their system is sufficiently powerful to obey these assumptions.
> **"Intrinsic analyses of individual components to determine sources of failure would be valuable"**
Thank you for raising this important point. In section 4.2 and Supplementary Section G, we conducted human annotation on the quality of falsification experiment subhypotheses and the relevance checker’s performance. In Supplementary Section D, we conducted an extensive intrinsic failure-mode analysis by examining detailed logs from 128 failure cases, categorized into 10 distinct failure modes. Additionally, we performed trajectory analysis documented in Supplementary Section E. We will ensure this comprehensive analysis is more prominently highlighted in the manuscript.
> **"Distinction between hypothesis validation and hypothesis testing?"**
We appreciate this clarifying question. We used the terms interchangeably, recognizing that "hypothesis validation" tends to resonate more within scientific domains, whereas "hypothesis testing" is predominantly used in statistical contexts.
> **"Clarify the two power lines in Figure 4(2)"**
The upper line represents statistical power, and the lower line represents the Type I error rate, both plotted against the number of maximum tests conducted.
---
Rebuttal Comment 1.1:
Comment: >Our theoretical framework is indeed valid across various experiment types.
It is incorrect to claim that the proposed framework is valid in all of those settings without actually validating the framework in those settings. This is substantial overclaiming and needs to be changed.
> We also stress the importance of valid error control as the first-order criterion.
No justification is given in the paper or rebuttal stating why Type I errors are more important than Type II errors. This is likely very context dependent. If we instead decide that Type II is more important, the conclusions of the paper change completely.
> As explicitly stated in our original manuscript, “Theorem 4 is a standard result following Grunwald et al. (2020), included in Appendix A.2 for completeness.”
This text is not in the manuscript. In fact, the text before the statement of Theorem 4 cites Grunwald et al. (2020) only to establish a technical condition. There is no mention that the Theorem is already known.
> Our validation approach strictly relies on aggregated e-values derived from statistical analyses grounded in data.
The LLM agent is given the freedom to decide how this analysis is performed. If the experiments are based on known phenomena, for which we have already conducted successful analyses that appear in the training data, there is no guarantee that the proposed framework will generalise to new problems. This should be discussed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful follow-up! We recognize that our initial rebuttal did not clearly convey the scope and limitations of our claims, which may have led to misunderstandings. Below, we address each point in more detail to clarify our position.
> “It is incorrect to claim that the proposed framework is valid in all of those settings without actually validating the framework in those settings. This is substantial overclaiming and needs to be changed.”
We appreciate the reviewer’s attention to this important issue. We agree and acknowledge that our original phrasing was overly broad, and we will revise the manuscript to make our claims more precise.
Our intention was to convey that the framework is **theoretically** valid in a broader set of settings, provided that **all three key assumptions are satisfied**. The crucial remaining step—consistent with the reviewer’s observation—is to design a system that meets those assumptions and empirically demonstrates that the error control properties hold in practice. However, we recognize that our previous language may have blurred the line between theoretical potential and empirical evidence. We highlighted that wet-lab experiments may naturally satisfy Assumption 2 (new data collection being independent or conditionally independent of prior evidence), whereas this is a challenge in static data analysis and requires careful LLM design. Since we have not validated the framework in wet-lab settings, we will revise the manuscript to explicitly restrict our claims to static data analysis, where empirical validation has been performed, and significantly tone down any broader claims.
> “No justification is given in the paper or rebuttal stating why Type I errors are more important than Type II errors. This is likely very context dependent. If we instead decide that Type II is more important, the conclusions of the paper change completely.”
We thank the reviewer for this thoughtful comment. We fully agree that the relative importance of Type I versus Type II errors is context-dependent. Our intention was not to argue that Type I error is universally more important. In contrast, our intention was that **if a method fails to control Type I error, then apparent improvements in power (i.e., lower Type II error) can be biased**. For example, a method that accepts all hypotheses would show maximal power, yet its conclusions would be invalid due to uncontrolled false positives. Our emphasis on Type I error control is therefore to ensure fair and interpretable comparisons of Type II error performance. We will revise the manuscript to clearly articulate this rationale and avoid any implication that Type I error is inherently more important.
> “This text is not in the manuscript. In fact, the text before the statement of Theorem 4 cites Grunwald et al. (2020) only to establish a technical condition. There is no mention that the Theorem is already known.”
We sincerely apologize for the oversight. The discrepancy stems from referencing an updated internal draft that includes citations and clarifications not present in the submitted version. We will ensure the revised manuscript properly acknowledges Grunwald et al. (2020) and clearly states the novelty and context of Theorem 4. We’re grateful to the reviewer for highlighting this.
> “The LLM agent is given the freedom to decide how this analysis is performed. If the experiments are based on known phenomena, for which we have already conducted successful analyses that appear in the training data, there is no guarantee that the proposed framework will generalise to new problems. This should be discussed.”
Thank you for this insightful observation. We apologize for any confusion caused by our previous rebuttal.
We completely agree that the overlap between the training data and the experimental tasks poses a potential data leakage risk. In our earlier response, we intended to convey that **in our experiments, the evaluation setup of Type I error under the null is justified**. Specifically, we evaluated Type I error using permuted data, which ensures that the data is under the null, regardless of whether the original (unpermuted) data is present in the LLM’s training set, thus providing a way to isolate the framework’s behavior from potential data overlap. That said, we fully acknowledge that potential data leakage may impact our power estimation—a concern broadly relevant to any tasks using public datasets—and thus should be treated with care.
We will revise the manuscript to explicitly discuss this limitation, describe how we attempted to mitigate it in our experiments, and potential strategies such as using unpublished datasets or probing the likelihood of training data overlap.
⸻
Please let us know if any further clarification would be helpful. We sincerely appreciate the reviewer’s detailed and constructive feedback—the emphasis on rigor is especially valued and will greatly enhance the quality of our work. | Summary: Manuscript provides a contribution to the automated scientist literature. Premise is that free-form hypothesis positing and testing needs to be accomplished at scale and this necessitates automation. This task is accomplished using agentic/LLM flows which break-down a hypothesis into sub-hypotheses. Sub-hypotheses are sequentially tested and resulting e-values combined using a rigorous procedure which controls for Type I error. Proposal of sub-hypotheses from a free-form hypothesis is achieved via a sequence of LLM prompts with canonical – chain of thought – approaches to obtaining often-valid reasoning chains. Despite shortcomings, this approach compares favorably in terms of efficiency with trained experts performing the same task of hypothesis validation, while at the same time making just as few mistakes as data scientists and statisticians.
Claims And Evidence: The chief claim of the manuscript is that the method achieves statistical rigor as a result of a theoretically sound approach to combining e-values when executing a sequence of sub-hypothesis tests.
The key assumption is that hallucinations and the unintentional introduction of irrelevant sub-hypotheses would be caught through self-refinement (an LLM procedure) or a relevance check (another LLM-based procedure). Humans being susceptible to such mistakes as well, the authors compare the performance of their method, POPPER, to that of trained statisticians. However, sentences like "By integrating a sequential testing paradigm with automated experiment design and execution, POPPER delivers scalable, statistically rigorous hypothesis validation" extend the claim of statistical rigor, which is obtained under the assumption of sensible hypothesis selection, to settings where it is not obviously applicable, such as when an LLM can swap in a sub-hypothesis or report incorrect p-values.
Prompts in the provided code such as: "**IMPORTANT** You should ALWAYS report a p-value EXACTLY AS IT IS. If a p-value is 4.2e-01, report 4.2e-01, DO NOT REPORT 4.2e-02!" are indicative of the challenges of using LLMs to process the outputs of numerical methods.
Methods And Evaluation Criteria: Yes. Biological domains, where the cause of a difference in expression can range across a variety of factors (expression of causal genes, genetic variation, cell- and tissue-specific milieu, post-translational modifications, etc.), are well suited for assessing whether the proposed sub-hypotheses make biological sense. Comparing to trained experts is a sensible and practical baseline.
Theoretical Claims: I have read proof of theorem 4. No objections.
Experimental Designs Or Analyses: Using ChatGPT-o1 to assign failure modes and then sanity-checking a subset of those assessments seems too reliant on LLMs that need guidance, as shown above. Please assess more than 30 examples of hypothesis-validation failure modes, especially since this is not an expensive effort compared to what has been done already.
It would be very useful to see how susceptible POPPER is on simulated data where the truth is known ahead of time and performance can be assessed without recourse to LLM critics. Recalling the instruction to agents not to garble quantitative data, testing with a range of p-values and hypothesis sequences would reveal any susceptibility of the method to particular ranges of p-values. At the very least, permute the names of the diseases and genes and see whether the findings remain sensible or the generated hypotheses ignore the evidence.
Supplementary Material: Supplementary material is much more even-keeled regarding the challenges of using LLMs as agents and critics and the table of failure modes is very welcome.
Relation To Broader Scientific Literature: This work fits squarely into the automated-scientist literature, but it aims at a higher degree of rigor. Based on my understanding of the paper, it does not quite reach the desired levels, but it is nonetheless complementary to the existing work.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Taking a stab at statistical rigor when using LLMs is bold and laudable. With toned down language and claims, I think this paper can help start the conversation about how inherently noisy LLMs with poorly understood distributions can never-the-less be incorporated into statistically sane procedures.
In the opposite direction, mislabeling the method as statistically rigorous has the potential to devalue the label.
Other Comments Or Suggestions: Typo:
“Assumption 2 requires the e-value in each iteration is valid
conditional on prior information.
“
Missing “that” after requires
Questions For Authors: How hard would it be to create and run a synthetic experiment with a known ground truth and simulated data of expression or at least mock the p-values and results fed to POPPER? Would permuting gene names across the datasets violate any of the assumptions of the method? For a gentler approach, would it be informative to run POPPER on an alternative reality where genes are subtly renamed (interleukins and other signaling molecules exchanging their names) to see if the IL2 becoming IL9 or CXCL9 would be a bridge too far for POPPER?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive feedback and for recognizing our efforts to introduce rigor in the context of LLMs as bold and commendable. We address the thoughtful suggestions raised by the reviewer in detail below:
> **"Tune down the claim and specify the assumptions"**
We appreciate this suggestion and fully agree. We have explicitly highlighted all underlying assumptions in Section 2, discussed Popper's reliance on the base LLM's reasoning capabilities in Section 4, and addressed limitations and failure cases through detailed error analysis in the appendices of our initial submission. We will further clarify ambiguous statements and ensure all claims are appropriately moderated in the revised manuscript. In particular, we will emphasize that the error control relies heavily on our assumptions, and that several design components, such as the relevance checker and the use of LLMs with strong capabilities, are intended to ensure the assumptions are satisfied. In practice, users should be careful in judging whether their system is sufficiently powerful to obey these assumptions.
> **Prompts in the provided code are indicative of challenges in using LLMs to process outputs of numerical methods.**
We appreciate the detailed feedback. We acknowledge that p-hacking and misreporting p-values were indeed an issue in the initial experiments, especially when the base model is weaker. However, with the additional self-refine and other prompting mechanisms, we were able to consistently control the type I error rate with Claude 3.5 Sonnet. We discussed the performance variations across different backbone LLMs in section 4.1.
> **"Assess more than 30 examples of hypothesis validation failure modes"**
Thank you for highlighting this aspect. To clarify, we initially analyzed 20 failed experiments to derive 10 distinct failure-mode categories. Subsequently, we expanded this analysis using a comprehensive set of **128** failure cases collected from benchmark runs across TargetVal-IFNG, TargetVal-IL2, and DiscoveryBench, as stated in Appendix D. Indeed, these include all the failure cases from one run of our experiments. Therefore, our analysis already provides extensive coverage, which we will clearly emphasize in the revision.
> **"Run a synthetic experiment with known ground truth and simulated expression data"**
We thank the reviewer for this insightful suggestion. We would like to clarify that we indeed design the experiment setup for Type I error rate estimation to precisely address this concern. We simulated a null scenario by permuting rows of the dataset (e.g., shuffling gene names or expression values), thereby disrupting any real associations between variables. After permutation, all hypotheses become null—including those that may have been true positives in the original data—ensuring that our evaluation of the type-I error is faithful. Under this known null ground truth condition, POPPER consistently refrained from rejecting most null hypotheses, effectively controlling the Type I error rate. We will clarify this explicitly in the revised manuscript. | null | null | null | null | null | null |
Annealing Flow Generative Models Towards Sampling High-Dimensional and Multi-Modal Distributions | Accept (poster) | Summary: This paper proposes Annealing Flow, a method based on continuous normalizing flows for sampling from high-dimensional multi-modal distributions. The authors provide an efficient training and sampling algorithm, which can also be applied to Monte Carlo estimation. Various experiments are conducted to verify the efficiency of AF compared to previous sampling methods.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes. Some claims are not mathematically rigorous, see weakness part
Experimental Designs Or Analyses: Yes, in Section 6
Supplementary Material: Yes, the proofs of main propositions
Relation To Broader Scientific Literature: It is generally related to machine learning community
Essential References Not Discussed: [1] Qiu, Yixuan, and Xiao Wang. "Efficient multimodal sampling via tempered distribution flow." Journal of the American Statistical Association 119.546 (2024): 1446-1460.
[2] Maurais, Aimee, and Youssef Marzouk. "Sampling in unit time with kernel fisher-rao flow." arXiv preprint arXiv:2401.03892 (2024).
see weakness part for details
Other Strengths And Weaknesses: ## Strengths
The paper is written clearly and all the mathematical derivations are easy to follow. Apart from developing algorithms, the authors also present theoretical insights into the annealing flow in the limit of infinitesimal time. The proposed method is useful for handling multi-modal distributions.
## Weaknesses
### 1.
The idea is not very novel overall, and the paper lacks discussions and comparisons with some important previous works. For example, [1] considers a similar annealing flow, utilizing the L2 distance to train the map between $f_{k-1}$ and $f_k$ and claiming that the KL divergence used in this paper suffers from mode collapse. [2] uses a kernel trick to estimate the map instead of a neural network, which may reduce the training cost to some extent. The authors should include more comparisons, either from the theoretical side or the empirical side.
### 2.
The numerical experiments are not very convincing due to lack of more high-dimensional challenging distributions. The highest dimension in the experiments is merely 50, which is still too low in modern applications of machine learning. I suggest the authors include more high-dimensional experiments such as Bayesian neural networks.
### 3.
The theoretical claims in Propositions 3.3 and 3.4 are not mathematically rigorous. For example, eq (12) holds only in the sense of a limit as $h\to 0$, and thus it is not mathematically sound to write eq (12) directly. A better way to express this is, e.g., $\lim_{h\to 0} \left( v_k^*-(s_k-s_{k-1}) \right)=0$.
[1] Qiu, Yixuan, and Xiao Wang. "Efficient multimodal sampling via tempered distribution flow." Journal of the American Statistical Association 119.546 (2024): 1446-1460.
[2] Maurais, Aimee, and Youssef Marzouk. "Sampling in unit time with kernel fisher-rao flow." arXiv preprint arXiv:2401.03892 (2024).
Other Comments Or Suggestions: see weakness part
Questions For Authors: Are the neural networks trained in the block-wise manner also stored separately? If the number of annealing timesteps is high and the neural networks are large, the storage cost is not negligible.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your valuable comments and suggestions! Please see below for our response to your concerns.
> Summary of Review: The paper lacks novelty and misses key related works. [1] proposes an annealed NF using L2 distance. [2] employs kernel methods instead of neural networks to reduce training costs. The authors should provide more thorough comparisons.
Thank you for bringing these works to our attention! The novelty of our work lies in the 1. computational efficiency, 2. stability, and 3. unique theoretical property enabled by our annealing dynamic OT objective, along with 4. strong sampling performance—most notably on the challenging 50D ExpGauss distribution with 1024 far-separated modes, which goes well beyond the target densities used by existing sampling baselines.
While we tried our best to survey a broad range of NFs/CNFs and compared against three very recent NF methods (plus four beyond the NF/CNF scope), we acknowledge that some relevant works were missed.
We've now implemented **[1] ("TemFlow")** and evaluated it on GMMs (with radius 8, 10, 12) and the extreme 50D ExpGauss distribution. Below is a link to the figures and tables:
**Anonymous link**: https://drive.google.com/file/d/1M_o4XF5440n0G_k_PaHioncx-ro1mHhd/view?usp=sharing.
We sincerely appreciate your time in reviewing the link—*it took considerable effort and we would be grateful for your feedback!*
Summary on [1]:
- *Architecture sensitivity*:
TemFlow is an NF method whose performance depends on its spline layer design (e.g., number/type of bins, bounds), which must be adapted for each target density. In contrast, AF consistently uses a fixed 32-32 MLP without any network-specific tuning. As shown in Fig. 3 of the linked document, a suboptimal spline design can significantly degrade TemFlow's performance.
- *Annealing steps*:
TemFlow's official implementation uses 100 annealing steps on GMMs (radius = 4), while our AF achieves comparable performance using 10 steps on a harder GMM (radius = 12). Fig. 1 of the linked document compares TemFlow when matched to our step count.
- *Efficiency*:
TemFlow trains each annealing block quickly (0.39 min vs. our 0.70 min), but needs more steps—its total training time on 50D ExpGauss reaches 49.6 min versus 14.5 min for AF.
Additionally, [2] is conceptually similar to SVGD and MIED, and we compared the latter two in our paper. [2] is a kernel-based, gradient-driven method and not directly related to NF/CNF. For their comparison with SVGD, please see the first paragraph of their Page 5. We’ll provide further context as we investigate further.
We also evaluated **NUTS** mentioned by Reviewer Eiy4. We'd be grateful if you could also review that response!
> The experiments are not very convincing due to lack of more high-dimensional challenging distributions [...].
Thanks for the thoughtful question! Our primary contribution lies in advancing statistical sampling methods, rather than targeting specific real-world datasets. To our knowledge, prior work has not attempted sampling from highly challenging distributions such as the 50D ExpGauss distribution:
$$p(x_1,x_2,\cdots,x_{50}) \propto e^{10\sum_{i=1}^{10}\frac{|x_{i}|}{\sigma_{i}^{2}}+10\sum_{i=11}^{50}\frac{x_{i}}{\sigma_{i}^2}-\frac{1}{2}\|x\|^{2}},$$
which has 1024 far-separated modes (nearest two separated by 20, farthest by 63.25).
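The stated mode geometry can be sanity-checked per coordinate (a minimal sketch; the choice $\sigma_i = 1$ is our assumption, under which each of the first 10 coordinates has two symmetric maxima at $\pm 10$, giving $2^{10} = 1024$ modes, nearest separation 20, and farthest separation $20\sqrt{10} \approx 63.25$):

```python
import numpy as np

# 1D slice of the ExpGauss log-density along one of the first 10 coordinates,
# assuming sigma_i = 1:  log f(x) = 10*|x|/sigma^2 - x^2/2  (up to a constant).
sigma = 1.0
x = np.linspace(-15.0, 15.0, 30001)
logf = 10 * np.abs(x) / sigma**2 - 0.5 * x**2

# Detect grid-local maxima: they sit at x = +/- 10/sigma^2.
interior = (logf[1:-1] > logf[:-2]) & (logf[1:-1] > logf[2:])
modes = x[1:-1][interior]
print(modes)             # approximately [-10., 10.]
print(2 ** 10)           # 1024 modes across the 10 bimodal coordinates
print(20 * np.sqrt(10))  # farthest pairwise mode separation, ~63.25
```

The linear terms in coordinates 11–50 only shift the Gaussian mean in those dimensions, so they do not add further modes.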
For context, prior works primarily evaluate on lower-dimensional or less multimodal distributions: **TempFlow** [1] experiments on a 2D GMM with 8 close modes, a Copula distribution with 4 modes, and samples from up to a 32D latent space (usually unimodal). **FisherRao** [2] explores up to a 20D funnel and 2D close modes. **LFIS** (ICML 2024, compared in our work) focuses on a 2D GMM with 9 close modes and a 10D funnel. **AI-Sampler** (ICML 2024, compared in our work) tests on a 2D circular GMM (radius 5) and Bayesian regression up to 21 dimensions.
Statistical sampling is fundamental in areas like statistical physics, rare event analysis, and Bayesian modeling. Beyond testing on challenging distributions, we introduce the novel *Importance Flow framework* for rare-event probability estimation (Table 9); and through *Hierarchical Bayesian Modeling* on real datasets ranging from 8 to 61 dimensions (Table 3).
> Other Comments:
We have replaced the Eq. (12) in Prop. 3.4 with the following:
$$\lim_{h_k \to 0} v_{k}^*=s_k-s_{k-1}.$$
Thank you for the detailed observation!
> **Summary of our Contributions** (summarized in more detail at the end of our response to **Reviewer aCqJ**):
1. Our dynamic OT objective significantly reduces the number of intermediate annealing steps, compared to most recent NF/CNFs works.
2. Prop. 3.4 establishes $\lim_{h_k \to 0} v_{k}^*=s_k-s_{k-1}$, a desired property unique to our AF.
3. Our experiments on highly challenging densities show AF's superior performance on multi-modal and high-dimensional sampling, on tasks that go beyond prior works.
Thank you once again for your valuable time and thoughtful feedback—we truly appreciate it!
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' feedback. The comparison results between AF and TempFlow are convincing, but it seems that the authors didn't answer the question "Are the neural networks trained in block-wise manner also stored separately?" I'm willing to raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer mLHv,
Thank you sincerely for your reply to our response! We apologize for not fully addressing your concerns earlier due to character limitations.
> "Are the neural networks trained in block-wise manner also stored separately?"
Yes, in our algorithm, each $v_k$ is trained sequentially after $v_1, v_2, \dots, v_{k-1}$ have been trained. Consequently, each $v_k$ network needs to be stored separately. However, enabled by our unique dynamic OT loss, the total number of annealing steps ($v_k$) is significantly reduced compared to other recent NF/CNF works. Additionally, our algorithm requires a much simpler neural network architecture: a simple NN with two 32-32 hidden layers.
In our challenging 50D ExpGauss experiments, there are 20 $v_k$ trained in total, leading to a total storage of:
$$20 \ \text{NNs} \times 4338 \ \text{parameters/NN} \times 4 \ \text{bytes/param} = 347,040 \ \text{bytes} \approx 339 \text{KB},$$
which, including optimizer states and other training components, gives approximately **1.9-2.2 MB** overall storage size in total for our **AF**.
This highlights another key advantage of our dynamic OT-based loss: fewer annealing steps and simpler neural network structures compared to other normalizing flow methods. For instance, **other CNF methods** compared in our work require storage of **up to 120 MB**, due to the need for a 128-128 MLP and many more annealing steps.
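The storage arithmetic can be reproduced directly (a sketch; the 50 → 32 → 32 → 50 MLP shape with biases is our assumption, chosen because it reproduces the stated 4338 parameters per network):

```python
# Parameter count for one velocity net, assuming a 50 -> 32 -> 32 -> 50 MLP
# with biases; this matches the 4338 parameters/NN figure above.
layers = [(50, 32), (32, 32), (32, 50)]
params_per_net = sum(fan_in * fan_out + fan_out for fan_in, fan_out in layers)
total_bytes = 20 * params_per_net * 4  # 20 nets, 4 bytes per float32 parameter
print(params_per_net)  # 4338
print(total_bytes)     # 347040 bytes, i.e. ~339 KB
```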
*The stored CNF parameters also provide substantial advantages for sampling tasks*. With these parameters available, sampling is reduced to numerical computations, rapidly generating 10,000 samples in just 2.1 seconds for the 50-dimensional case (Tables 10 and 11). This stands in clear contrast to MCMC methods, which typically require long mixing times and frequently struggle in multimodal scenarios.
We sincerely appreciate the time and effort you’ve dedicated to reviewing our work! | Summary: This paper proposed a new flow-based sampler from continuous target density functions via combining several ideas.
More specifically, they introduce a method that they call Annealing Flow (AF), using continuous normalizing flows trained with a dynamic optimal transport objective. They use Wasserstein regularization and their method is guided by annealing the target pdf.
Claims And Evidence: To some extent.
My criticism is as follows:
While in the title and throughout the paper they claim their method is suitable for high-dimensional targets, their experimental models are quite toyish and low-dimensional. Clearly, reporting results for high-dimensional models would support their claim better (or would show its limitations that would still be valuable).
Also, (apart from one visual experiment in the supplementary) they do not compare their method against MCMC-based methods. For instance, it would be interesting to see the mode switches and MSE versus the model dimension for the AF and NUTS sampler. Also, it would be good to report the AF's training time and compare MCMC versus AF (and other flow-based methods) if the total time budget is limited and fixed.
Methods And Evaluation Criteria: Yes but see my comments on "Claims And Evidence"
Theoretical Claims: I did not closely check the correctness of the proofs. Also, it is hard for me to convince myself that the proposed density ratio estimation (Section 5.2) will perform better than the formula in line 289. It would be valuable if they would compare the performance of these two approaches.
Experimental Designs Or Analyses: Yes but see my comments on "Claims And Evidence"
Supplementary Material: I went through all parts but very briefly
Relation To Broader Scientific Literature: Combining several state-of-the-art algorithms in a smart way and proposing a more robust flow-based method that (apparently) can be used for high-dimensional and multi-modal models.
Essential References Not Discussed: Not that I am aware of
Other Strengths And Weaknesses: As I mentioned earlier, my main criticism is not testing the performance of their method on high-dimensional models (e.g. d > 100) as well as not comparing it with MCMC.
Clearly, every algorithm has its limitations, but it would be good to provide experimental results that give an idea of cases where AF outperforms MCMC (say NUTS) and cases where the opposite is true.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your helpful review and thoughtful comments! We address your concerns point by point below:
> It is hard for me to convince myself that the proposed density ratio estimation (Section 5.2) will perform better than the formula in line 289.
Thank you for your detailed observation on the extended Importance Flow framework. The traditional self-normalized Importance Sampling (IS) estimator in line 289 is generally biased. For example, in our IS experiment on estimating $\mathbb{E}_{X \sim \pi_0}[h(X)]$ with $h(x) = \mathbf{1}(\|x\|>c)$ and $\pi_0 (x) = N(x;0,I)$, the resulting optimal proposal is $\tilde{q}^* (x) = \mathbf{1}(\|x\|>c) \cdot N(x;0,I)$. We can immediately see that the self-normalized IS estimator is:
$$\hat{I}_{N}= \frac{\sum_i \frac{\pi_0 (x_i)}{\tilde{q}^* (x_i)} h (x_i)}{\sum_i \frac{\pi_0 (x_i)}{\tilde{q}^* (x_i)}} = \frac{\sum_i 1}{\sum_i \frac{1}{\mathbf{1}(\|x_i\|>c)}} = \frac{\sum_i 1}{\sum_i 1} = 1, \quad \text{where } x_i \sim q^* (x).$$
The estimator always equals 1 in this case. However, the true probability we want to estimate is $P_{X \sim N(0,I)} (\|X\|>c)$.
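The degenerate behavior of $\hat{I}_N$ is easy to reproduce numerically (a minimal sketch, with $d = 2$ and $c = 2$ as hypothetical settings; rejection sampling stands in for sampling from the learned flow):

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, n = 2, 2.0, 5000

# Rejection-sample from the optimal proposal q*(x) ∝ 1(||x|| > c) N(x; 0, I).
xs = []
while len(xs) < n:
    cand = rng.standard_normal((4 * n, d))
    xs.extend(cand[np.linalg.norm(cand, axis=1) > c])
x = np.asarray(xs[:n])

h = (np.linalg.norm(x, axis=1) > c).astype(float)  # h(x_i) = 1 for every sample
w = np.ones(n)  # pi_0/q~* = 1/1(||x|| > c) = 1 on the support of q*
snis = np.sum(w * h) / np.sum(w)  # self-normalized IS estimate

true_p = np.exp(-c**2 / 2)  # P(||X|| > c) in closed form for d = 2
print(snis)    # exactly 1.0, independent of c
print(true_p)  # ≈ 0.135, the probability we actually wanted
```

The self-normalized weights cancel on the support of $q^*$, so the estimate is pinned at 1; an unnormalized estimator with the true density ratio would instead recover $P(\|X\|>c)$.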
However, our Density Ratio Estimation (DRE) network introduced in Section 5.2 can theoretically recover the exact log-ratio $r^* (x) = \log \frac{\pi_0 (x)}{q^* (x)}$, where $q^* (x)$ is the normalized target density. This allows us to directly construct an importance sampling (IS) estimator:
$$\hat{I} = \frac{1}{N}\sum \frac{\pi_0 (x_i)}{q^* (x_i)} h (x_i)=\frac{1}{N}\sum \exp (r^* (x_i))\cdot h(x_i).$$
As shown in the preliminary results in Table 9, the Importance Flow yields a rare-event probability estimate that closely matches the true value, rather than defaulting to 1 as in $\hat{I}_{N}$.
> My main criticism is not testing the performance of their method on high-dimensional models (e.g. d > 100) as well as not comparing it with MCMC. It would be good to provide experimental results that give an idea of cases where AF outperforms MCMC (say NUTS) and cases where the opposite is true.
Thank you very much for your thoughtful feedback! We included comparisons with two MCMC-based methods in our experiments: **Parallel Tempering (PT)**, a classical tempered MCMC, and **AI-Sampler**, a neural-network-assisted MCMC. A functional comparison with MCMC is also briefly discussed in Section 4.2. That said, we agree that further comparison is valuable and have added more detailed analysis now in our draft. To summarize:
- *Sampling Efficiency*:
Unlike MCMC, which does not require pre-training but often suffers from long mixing times, AF requires one-time offline training, after which sampling reduces to efficient, deterministic numerical computation. We reported the **training and sampling times** of AF in Tab. 10 and Tab. 11.
- *Performance in High Dimensions*:
In our 50D ExpGauss experiment with 1024 far-separated modes, PT captures <10 modes and AIS captures around 100 (Table 3), illustrating MCMC’s limitations in exploring highly multimodal, high-dimensional spaces. AF significantly outperforms in this setting.
- *Sample Balance*:
Even when MCMC reaches multiple modes, it often fails to maintain proper sample weighting due to the stochastic nature of mode-hopping.
Further, we have now implemented **NUTS**; below is a link to the figures and tables:
**Anonymous Link:** https://drive.google.com/file/d/1vyciBojsTKuRfMfw0GZ6LYfMXx49g7Fe/view?usp=sharing
*It took us great effort, so we sincerely appreciate your time in reviewing it!*
As shown in Tables 1 and 2 of the linked document, MCMC methods—NUTS, PT, and AIS—fail immediately on the 50D target with 1024 well-separated modes. This is due to the exponential growth of the effective search space (roughly $O(\exp (d))$) for a Markov chain, making exploration of all far-separated modes infeasible in high dimensions.
> For your main criticism on not testing the performance on high-dimensional models (e.g. d > 100):
Notably, the 50D ExpGauss distribution we study (nearest two modes separated by 20, farthest by 63.25) is substantially more challenging than prior benchmarks.
For context, among very recent methods: **TemFlow** (referred to by Reviewer mLHv) experiments on a 2D GMM with 8 close modes, a Copula distribution with 4 close modes, and samples from up to a 32D latent space (usually unimodal). **Fisher-Rao** (referred to by Reviewer mLHv) explores up to a 20D funnel and 2D closely aligned modes. **LFIS** focuses on a 2D GMM with 9 close modes and a 10D funnel. **AI-Sampler** tests on a 2D circular GMM (radius 5) and Bayesian logistic regression up to 21 dimensions.
In summary, we compared the three major sampling paradigms—MCMC (NUTS, AI-Sampler, PT), particle-based methods (SVGD, MIED), and normalizing flows (CRAFT, LFIS, PGPS, TempFlow)—on significantly more challenging densities than prior works.
We have also summarized our key contributions at the end of our response to **Reviewer aCqJ**, and we’d be grateful if you could take a look!
Thank you sincerely for your thoughtful feedback—it helped us clarify both our MCMC comparisons and overall contributions! | Summary: The paper proposes a new method to learn a vector field v(x,t) such that the neural ODE dx = v(x,t) dt approximates the optimal transport between two given distributions p and q. This is laudable, because scalable and accurate optimal transport solvers in high dimensions are still an important research topic. The paper derives a tractable loss that leads to an efficient learning algorithm. The new method appears to be superior to existing algorithms on several benchmarks.
Unfortunately, the presentation contains severe errors and shortcomings that might not be fixable in a rebuttal phase, see below.
EDIT: In light of the rebuttal discussion, I raised my score.
Claims And Evidence: The paper defines the "annealing flow" as the sequence of functions f_k(x) = p(x)^{1-beta_k} q(x)^{beta_k} with two predefined distributions p and q and a sequence of inverse temperatures beta_0 = 0 < beta_1 < ... < beta_K = 1 (equation (4)). However, the claim that the f_k(x) are distributions is false, because the RHS is not normalized for 0 < beta_k < 1 (this is easily seen by setting p = normal(0,1) and q = normal(1,1)). Consequently, the KL divergence in equation (6) is undefined, as it requires normalized distributions.
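The normalization gap in this example is easy to verify numerically (a minimal sketch; completing the square gives $\int p^{1-\beta} q^{\beta}\, dx = e^{-\beta(1-\beta)/2} < 1$ for $0 < \beta < 1$):

```python
import numpy as np

# For p = N(0,1) and q = N(1,1), the geometric mixture p^(1-beta) q^beta
# integrates to exp(-beta*(1-beta)/2), strictly below 1 for 0 < beta < 1.
x = np.linspace(-10.0, 11.0, 200001)

def normal_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

for beta in (0.25, 0.5, 0.75):
    f = normal_pdf(x, 0.0) ** (1 - beta) * normal_pdf(x, 1.0) ** beta
    integral = np.sum((f[:-1] + f[1:]) / 2 * np.diff(x))  # trapezoid rule
    print(beta, integral, np.exp(-beta * (1 - beta) / 2))  # the two agree
```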
This mistake may not have severe consequences, because the authors subsequently allow q to be unnormalized, which requires the introduction of an explicit normalization constant. Since this constant is independent of the model, it has no influence on the gradient of the loss in equation (8) -- an incorrect normalization constant would thus be inconsequential. However, this is not quite clear, because the definition of tilde(E)_k after equation (7) makes no sense: The constant tilde(Z)_k and the energy function E_k(x) are both undefined at this point. The proof of proposition 3.1 in the appendix does not resolve the riddle: Contrary to equation (4), the f_k in the proof are defined by the probability path induced by the continuity equation (2). The thus defined f_k are normalized, but require that the vector field v(x,t) is already known. In addition, the proof uses a normalization constant Z_k instead of tilde(Z)_k, which is much more plausible.
These contradictions must be fixed.
Methods And Evaluation Criteria: The new method is compared to seven alternatives, using challenging synthetic data distributions, and wins most of the time. However, it seems that only one of these experiments uses an existing benchmark protocol, so results are hard to compare with the literature.
In the qualitative experiment on a 2D Gaussian mixture (figure 3), all methods except the proposed one perform terribly. This is highly implausible, as similar experiments with satisfactory results are frequently reported in the literature. It looks as if the hyperparameters of the competition have not been tuned properly -- this would be unacceptable. To be fair, it could also be a consequence of forcing all methods to use the same network architecture for comparability, but even if this were the case, the choice of network architecture would be suspicious. Of course, this also sheds a bad light on the other experimental comparisons.
Theoretical Claims: See above.
Experimental Designs Or Analyses: See above.
Supplementary Material: Excessive.
Relation To Broader Scientific Literature: The paper reviews a considerable body of existing literature. However, it is surprising that "Optimal flow matching" (Kornilov et al. arXiv:2403.13117) is not mentioned and not compared against.
The claim in the introduction "Discrete normalizing flows often suffer from mode collapse" is highly questionable. This only happens when either the hyperparameters are not chosen properly, or the network is trained with reverse KL. Properly chosen NF architectures trained with the forward KL (the standard approach) do not exhibit mode collapse.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: In additional to the errors discussed above, I'd like to raise the following issues:
* The authors hierarchically discretize time first into segments [t_{k-1}, t_k] and then into subsegments t_{k-1,s}. However, it is never discussed why this setup is necessary or beneficial or superior to alternatives, and what the underlying trade-offs are.
* In order to drop the constant c from equation (8), the objective in equation (10) must be an argmin. Moreover, it would be nice to illustrate how the divergence term and the energy term counterbalance each other such that this objective has a fixed point at the optimum (instead of converging to a trivial solution).
* In section 4.1, the authors implement the Hutchinson estimator in terms of discrete directional derivatives. However, the underlying expression epsilon^T J_v epsilon (see appendix C.1) can be evaluated exactly using Jacobian-vector primitives from the autodiff library. The proposed implementation introduces an unnecessary additional approximation error.
* Section 5 on importance flows seems out of scope, given that there are only preliminary results which are only reported in the appendix. Leave this topic for future work!
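On the third point, the exactness claim rests on the fact that $\varepsilon^\top J_v \varepsilon$ is a single Jacobian-vector product followed by a dot product, together with Hutchinson's identity $\mathbb{E}[\varepsilon^\top A \varepsilon] = \operatorname{tr}(A)$ for Rademacher $\varepsilon$. A minimal sketch of the identity, with a random matrix standing in for the Jacobian (in an autodiff library, $A\varepsilon$ would come from an exact JVP call rather than a finite difference):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))  # stands in for the Jacobian J_v(x)

# Hutchinson's identity: E_eps[eps^T A eps] = tr(A) for Rademacher eps.
n = 200_000
eps = rng.choice([-1.0, 1.0], size=(n, 5))
est = np.mean(np.sum(eps * (eps @ A), axis=1))
print(est, np.trace(A))  # the Monte Carlo estimate converges to tr(A)
```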
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and careful attention to our mathematical developments! We acknowledge that some symbols lacked rigor and appreciate you pointing them out! Please see below for our response to your concerns on both math rigor and experimental comparisons.
> The paper defines $f_k(x) = \pi_{0}(x)^{1-\beta_{k}} q(x)^{\beta_k}$ in Eq. (4). [...] However, f_k(x) are not normalized distributions. Consequently, the KL divergence in Eq. (6) is undefined, as it requires normalized distributions.
Thanks for the careful review! You are absolutely right—our original definition of $f_{k}(x)$ in (4) was unnormalized. We have now slightly revised the definition of $f_k(x)$ in (4) to:
$$\tilde{f}_k(x)=\pi_0 (x)^{1-\beta_k} \tilde{q} (x)^{\beta_k},$$
$$f_k (x)= Z_k \tilde{f}_k (x),$$
where $\tilde{q} (x)$ is the unnormalized target density and $Z_k = \frac{1}{\int \tilde{f}_k (x)dx}$ is an unknown constant.
- **The only change that needs to be made is in how we define $Z_k$.** Our algorithm solely relies on $\tilde{f}_k$, which is defined the same as in the original submission. The constant $Z_k$ of the normalized $f_k$ appears only in the constant term $c$ in Eq. (8) of Prop. 3.1, given by:
$$c = E_{x\sim \rho(x(t_{k-1}),t_{k-1})}[\log \rho(x(t_{k-1}),t_{k-1})]- \log Z_k,$$
which is a constant independent of our algorithm.
- With the revised definition of $f_k$, the KL divergence in Eq. (6) is well-defined, and all subsequent propositions and algorithmic steps remain correct.
Thanks for the detailed observation!
> Following $f_k (x)$, the $E_k $, $\tilde{E}_k$, and $\tilde{Z}_k $ are unclear [...]. Contrary to Eq. (4), the f_k in the proof are defined by the probability path (2). And it requires the vector field $v(x,t)$ already known.
Based on $f_k (x)$ above, the energy $E_k (x)$ is still defined as $E_k (x) = -\log f_k (x)$, and we've now made this clear in the draft. We have also removed the redundant definitions of $\tilde{E}_k$ and $\tilde{Z}_k $, now consistently used $Z_k$ in Prop. 3.1, and replaced all occurrences of $\tilde{E}_k$ in the draft with:
$\tilde{E}_k \mapsto -\log \tilde{f}_k (x).$
The density $\rho(\cdot, t)$ evolves according to Eq. (2). Under the constraints of the dynamic OT (3), we have: $\rho(\cdot, t_{k-1}) = f_{k-1},\ \rho(\cdot, t_k) = f_k$. We adopt an efficient block-wise training (as opposed to end-to-end), in which each $v_k (x,t)$ is trained sequentially after $v_1,\cdots, v_{k-1}$ have been learned, as given in Alg. 1.
> Your concerns about Methods And Evaluation Criteria:
We would like to clarify: **Fig. 3** shows GMM results for CRAFT, LFIS, and PGPS using the **same number of annealing steps (8–10)** as our AF, ensuring a fair comparison on **computational efficiency**. In contrast, **Fig. 5** presents these same methods with their **official settings**—using significantly more steps (128 for CRAFT and PGPS, 256 for LFIS) with PGPS using Langevin adjustments. They perform well in Fig. 5, but at a higher computational cost, as detailed in Tables 10 and 11.
Our experiments focus exclusively on *well-separated modes*, where methods without annealing (PT, SVGD, MIED, and AIS) naturally struggle. In contrast, *prior works commonly evaluate on closely aligned GMM modes*: e.g., radius 1 in LFIS (their paper's Fig. 2), radius 4 in both AI-Sampler (their Fig. 3) and the TemFlow (their Fig. 3) referenced by Reviewer mLHv.
By comparison, in our GMM settings (Fig. 3 and 5), modes are separated by radius 12. And in the extremely challenging 50D ExpGauss, by distances ranging from 20 to 63.25.
We also did ablation studies on **AF without annealing** (Fig. 10 and Tab. 12), where our performance drops notably. This shows the necessity of annealing.
To fairly evaluate AF’s *efficiency and performance*, results reported in **Tab. 3** (main text), **Tab. 7**, and **Fig. 5** (appendix) for other NFs uses their official (and much higher) annealing steps to reflect their full performance.
The purpose of **Fig. 3** is only to *compare performance under matched annealing steps to highlight AF's efficiency*. We've now added clearer references to Fig. 5 in the main text to make this distinction.
> Response to other concerns:
1. "Optimal Flow Matching" is designed for image generation and requires target samples for training, not target densities as in statistical sampling. We've further implemented **TemFlow** mentioned by **Reviewer mLHv** and **NUTS** mentioned by **Reviewer Eiy4**. Please see the link in the responses!
2. $t_{k-1,s}$ is defined for $W_2$ discretization purpose in Prop. 3.2. We've made it clearer and more concise.
3. Our code implements both torchdiff and the Hutchinson estimator for the divergence; torchdiff requires more computation with little improvement.
> Finally:
We have summarized our key contributions at the end of our response to Reviewer aCqJ, and we’d be grateful if you could take a look!
Thank you sincerely for your detailed review—it has indeed helped make our paper stronger!
---
Rebuttal Comment 1.1:
Comment: > In contrast, Fig. 5 presents these same methods with their official settings—using significantly more steps (128 for CRAFT and PGPS, 256 for LFIS) with PGPS using Langevin adjustments. They perform well in Fig. 5, but at a higher computational cost, as detailed in Tables 10 and 11.
You might want to move (part of) figure 5 to the main text, space permitting (e.g. by eliminating Section 5). It greatly contributes to the trustworthiness of your method. (In addition, make clear that figures 4 and 5 are in the Appendix.) You should also say explicitly in the caption that 128 steps for CRAFT etc. are the settings recommended by the original authors.
> "Optimal Flow Matching" is designed for image generation
This doesn't do justice to the paper. E.g. their figure 6 corresponds to your figure 1, and they explicitly establish an algorithm that requires only one step to generate the data.
The same example also features in figure 2 of arXiv:1808.04730, another one-step model that generates the distribution very accurately.
Please put your method into perspective properly. The two cited methods clearly do not "naturally struggle" with this setting.
> The authors hierarchically discretize time first into segments [t_{k-1}, t_k] and then into subsegments t_{k-1,s}. However, it is never discussed why this setup is necessary or beneficial or superior to alternatives, and what the underlying trade-offs are.
Your answer to this question is not yet satisfactory. It is clear that the discretization is needed in Prop. 3.2. My question concerns why your particular choices in this regard are good and outperform other possibilities.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer zGCn,
We sincerely appreciate your in-time reply to our rebuttal!
> You might want to move (part of) figure 5 to the main text, space permitting (e.g. by eliminating Section 5). It greatly contributes to the trustworthiness of your method.
Yes, thank you very much for the thoughtful suggestion! We plan to move Fig. 5 to the main text, potentially replacing Fig. 3 and moving Fig. 3 to the appendix.
We also appreciate your suggestion regarding Section 5 (Importance Flow). This section aims to provide the community with a potential framework for constructing an unbiased importance sampling estimator using flows. This is a relatively unexplored direction and offers an additional metric to assess sample quality. That said, we will consider removing it to better comply with the page limit!
> Concerns about "Optimal Flow Matching":
Thanks for the clarification. We should not have said that "Optimal Flow Matching" is solely designed for image generation, as it also includes illustrative GMM experiments.
However, a **key distinction** is: methods "Optimal Flow Matching" and arXiv:1808.04730 **train the flow model using samples from the target distribution**, and they are not designed for statistical sampling. For example, to learn a flow that maps to a GMM (as in their Fig. 2), they have access to samples from that GMM during training. As a result, *such methods cannot be used to sample challenging densities like the 50D ExpGauss in our experiments*, where obtaining target samples for training is infeasible.
In contrast, our Annealing Flow **(AF) requires only the target density, not target samples**. During training, AF samples exclusively from the initial distribution $\pi_0(x)= N(x; 0,I)$ and learns the velocity field $v_k$ using only forward-evolved particles $x(t_{k-1})$, but never relying on target samples $x(t_k)$.
Our method is specifically designed for the **statistical sampling** setting, where the target density is known but sampling from it directly for training is infeasible.
Besides, we have removed the words “Discrete normalizing flows often suffer from mode collapse”. Thank you for pointing out this misleading sentence!
> Response to concerns about the discretization in Prop. 3.2:
We apologize for not addressing it more clearly earlier due to character limits.
As you noted, discretization is necessary in Proposition 3.2 because the integral $\int_{t_{k-1}}^{t_k} \|v_k\|^2 \, dt$ cannot be directly computed in closed form, and approximating it numerically would require significantly finer discretization.
One could approximate the transport cost using only the endpoints, e.g., $\|x(t_k) - x(t_{k-1})\|^2$. However, incorporating intermediate points $x_{t_{k-1},s}$ allows for enforcing a smooth and more optimal transport path and reflects the *dynamic* nature of the transport cost, rather than reducing it to a static movement between two fixed points.
In our experiments, we consistently use two intermediate points for regularization: $x(t_{k-1} + h_k/3)$ and $x(t_{k-1} + 2h_k/3)$. This approach already yields strong empirical performance without increasing the computational burden much.
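For concreteness, here is a minimal NumPy sketch (hypothetical code, not the paper's implementation) of the discretized transport cost with the two intermediate points described above; `transport_cost` and `v` are names introduced only for illustration:

```python
import numpy as np

def transport_cost(x_prev, v, t_prev, h):
    """Approximate the kinetic cost (integral of ||v||^2 dt over [t_prev, t_prev + h])
    by Euler-stepping through intermediate points at h/3 and 2h/3 and summing
    per-segment costs ||dx||^2 / dt. A static endpoint cost would instead use
    only ||x(t_k) - x(t_{k-1})||^2."""
    ts = [t_prev, t_prev + h / 3, t_prev + 2 * h / 3, t_prev + h]
    x, cost = x_prev, 0.0
    for s in range(3):
        dt = ts[s + 1] - ts[s]
        x_next = x + dt * v(x, ts[s])            # one Euler step of dx/dt = v(x, t)
        cost += np.sum((x_next - x) ** 2) / dt   # segment estimate of the kinetic cost
        x = x_next
    return cost

# Sanity check: for a constant velocity field the discretized cost equals the
# exact kinetic energy h * ||v||^2 of a straight-line path.
v_const = lambda x, t: np.ones_like(x)
cost = transport_cost(np.zeros(2), v_const, 0.0, 1.0)  # -> 2.0
```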
We hope we have correctly understood and addressed your questions! If you have any further concerns, please edit your reply to rebuttal above, and we would be more than happy to answer more!
If you feel your questions have been addressed, we would be grateful if you would consider updating your review. Thank you sincerely for your time spent on our work! | Summary: The authors devise a new technique for sampling from high-dimensional multi-modal distributions. The assumed setting is that we are given an unnormalized analytical form of the density we wish to sample from. The proposed technique, dubbed Annealing Flow, is based on a continuous normalizing flow guided by an annealing procedure and is trained with a dynamic optimal transport objective. This has several benefits, such as enabling stable training without a score-matching objective and cutting down the number of annealing steps needed to achieve state-of-the-art performance on the tested datasets.
## Update after rebuttal:
I find the authors' answers satisfying and appreciate the extra effort put in to improve this work's clarity and benchmarking. Therefore, I'm happy to change my score to "Accept" to increase their chances for acceptance.
Claims And Evidence: Yes, the experiments do seem to prove empirically that methods/claims are valid in comparison to the baselines. Nonetheless, as I'm unfamiliar with the tasks in this field, I'm not certain how representative these unnormalized densities/sampling tasks are.
Methods And Evaluation Criteria: I'm not very familiar with this line of work, but after skimming through some of the references in the paper, I noticed that the authors performed extensive comparisons with quite a few representative recent baselines.
Theoretical Claims: I did go over some of the proofs in Appendix A and as far as I can tell they seem sound. Also, the theoretical results regarding the optimal velocity field with infinitesimal annealing steps make sense and feel overall intuitive.
Experimental Designs Or Analyses: I haven't thoroughly checked the details as I'm unfamiliar with the examined densities, but the overall experiments section seems sound.
Supplementary Material: Only partially, like the trace approximator in Appendix C, the equivalence to Wasserstein gradient flow in Appendix B, and some parts of Appendix A.
Relation To Broader Scientific Literature: The paper aims to provide an easier-to-train and overall more efficient sampler for multi-modal high-dimensional distributions. The paper seems theoretically solid as far as I can tell, however, I'm not sure how novel and impactful are the contributions regarding the dynamic optimal transport objective, and if these had been proposed elsewhere in prior works (other than the referenced ones).
Essential References Not Discussed: I'm not very familiar with this line of work and do not follow it closely. However, it does seem the authors did their due diligence and cited the existing literature properly.
Other Strengths And Weaknesses: In terms of mathematical rigor, the paper is written with a very high standard. Nonetheless, the writing could be made clearer and more concise for the benefit of the average reader, and some parts could be relayed to the appendix and replaced with more intuitive explanations. For example, how do you calculate $\tilde{E}_{k}(x(t_k))$ is still unclear to me in the current version.
Other Comments Or Suggestions: General suggestion: I think overall the method goes through so many objectives until it arrives at the final one, and this part could be made significantly more concise and clearer for the reader. For instance, a small paragraph explaining the task at the beginning of the method section would be very helpful.
Caught preceding shorthand: In line 26 in the abstract the MC shorthand is used before being defined as Monte Carlo.
Questions For Authors: 1) How do you calculate $\tilde{E}_{k}(x(t_k))$?
2) You mentioned your method requires more expensive pretraining, is this quantified somewhere?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you sincerely for your time and thoughtful review! Below, we respond to each of your questions in detail. We also provide a summary of our key contributions and the representativeness of our comparison experiments at the end of this response.
> Nonetheless, the writing could be made clearer and more concise for the benefit of the average reader, and some parts could be relayed to the appendix and replaced with more intuitive explanations.
Thanks for the helpful comments! We agree that the algorithmic development and theoretical claims can be made more concise and accessible. Following your suggestions, we have made the following revisions:
1. *Reorganized Fig. 2*: We moved Fig. 2 earlier in the paper to accompany the introduction of $f_k$ and $\tilde{f}_k$, making the exposition more illustrative.
2. *Clarified definitions*: We corrected the definition of $Z_k$ associated with $f_k$, as pointed out by **Reviewer zGCn**; We added Wasserstein distance $W_2$ definition in Section 2 to help general readers follow the paper more easily.
3. *Removed redundancy*: We eliminated the redundant definition of $\tilde{E}_k$ and replaced it with $-\log \tilde{f}_k$; We removed the redundant definition of $\tilde{Z}_k$;
4. *Streamlined the objective*: We made the final objective (Eq. 10) more explicit and moved Proposition 3.3 (Objective Reformulation) into Proposition 3.4 to clarify that the reformulation is used solely for the derivation of Proposition 3.4, avoiding unnecessary repetition.
> How do you calculate $\tilde{E}_k (x(t_k))$?
We originally defined $\tilde{E}_k (x) = -\log \tilde{f}_k(x)$ under (7), but upon revision found this definition unnecessary and have replaced it with the direct expression $-\log \tilde{f}_k$ throughout the paper. Given the unnormalized target density $\tilde{q}(x)$ and the definition of $\tilde{f}_k$ in Eq. (4), the form $\tilde{E}_k (x(t_k))=-\log \tilde{f}_k(x(t_k))$ is known.
In objective (10), the expectation $\mathbb{E}$ is taken over sample $x(t_{k-1})$ from the density $f_{k-1}$, while the evaluation of $\tilde{E}_k (x(t_k))=-\log \tilde{f}_k (x(t_k))$ requires the sample at the later time step $x(t_k)$. Therefore, a numerical integration is needed to compute $x(t_k)$, as shown in Eq. (13):
$$x(t_k) = x(t_{k-1}) + \int_{t_{k-1}}^{t_k} v_k (x,s) ds,$$
where $v_k$ is the learnable velocity field optimized in the objective (10).
We referenced the computation of $x(t_{k-1})$ in Line 5 of Alg. 1. We have now clarified this step further in the revised paper.
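For illustration, a bare-bones Euler integrator (hypothetical code, not the solver used in the paper) that produces $x(t_k)$ from $x(t_{k-1})$ could look like:

```python
import numpy as np

def evolve(x, v, t0, t1, n_steps=10):
    """Numerically integrate dx/dt = v(x, t) from t0 to t1 with forward Euler,
    yielding x(t_k) from x(t_{k-1}); -log f_k can then be evaluated at the
    returned point."""
    dt = (t1 - t0) / n_steps
    for s in range(n_steps):
        x = x + dt * v(x, t0 + s * dt)
    return x

# Sanity check: with v(x, t) = x the exact solution is x(t1) = x(t0) * exp(t1 - t0).
x1 = evolve(np.array([1.0]), lambda x, t: x, 0.0, 1.0, n_steps=10_000)
```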
> You mentioned your method requires more expensive pretraining, is this quantified somewhere?
Yes, we reported training and sampling times for AF and other Normalizing Flows (NFs) in Tables 10 and 11. While AF requires one-time pre-training, it can be done offline and reused for efficient sampling—producing 10,000 samples in 50D space in just 2.1 seconds (given in Tab. 10). In contrast, MCMC methods require significantly longer mixing times and yield poor performance.
We have also included AF vs. MCMC comparisons in our response to **Reviewer Eiy4**. We would greatly appreciate it if you could take a look. We hope these have addressed your concerns!
Besides, we have now further implemented **NUTS** (mentioned by Reviewer Eiy4) and **TemFlow** (mentioned by Reviewer mLHv), and would sincerely appreciate it if you could take a look at the anonymous links provided in our responses to them!
> Below, we summarize our **key contributions** and further highlight the **depth of our experimental evaluations**:
1. *Algorithmic novelty*:
Our annealing-based dynamic Optimal Transport (OT) objective is novel in the current literature, combining the strengths of both annealing and dynamic OT to yield several key advantages outlined below.
2. *Greatly improved training efficiency and sampling performance*:
With our dynamic OT loss, AF achieves superior performance using only 10 annealing steps—compared to 256 in LFIS, 128 in CRAFT and PGPS, and 100 in TemFlow. As shown in Tables 10 and 11, this results in significantly faster training.
3. *Theoretical advantages*:
In Proposition 3.4, we establish that the optimal velocity field satisfies $\lim_{h \to 0} v_{k}^* = s_k - s_{k-1}$, i.e., the score difference between successive annealing densities. This is a desirable property uniquely enabled by our annealing-based dynamic OT objective.
4. *Experimental evaluation*:
We benchmark AF against leading methods across three sampling mainstreams: MCMC (PT, AI-Sampler, and the new NUTS mentioned by Reviewer Eiy4), gradient-based methods (SVGD, MIED), and NF-based methods (CRAFT, LFIS, PGPS, and the new TemFlow noted by Reviewer mLHv). Notably, we also take a big step towards sampling very challenging cases: the 50D ExpGauss distribution with 1024 far-separated modes (separated by distances ranging from 20 to 63.25).
We sincerely appreciate your valuable time and feedback! We look forward to any further comments you may have! | null | null | null | null | null | null |
On the Training Convergence of Transformers for In-Context Classification of Gaussian Mixtures | Accept (poster) | Summary: This work studies the convergence and training dynamics of transformers for in-context classification tasks of Gaussian mixture data. The results show that a single-layer transformer trained by gradient descent converges to the global optimal at a linear rate. A quantification of how the training and testing prompt lengths affect the inference is provided. The analysis can also be extended to multi-classification problems.
--------------------------------
## Update after rebuttal
I appreciate the authors' clarification. I prefer to keep my current rating.
Claims And Evidence: Yes, the analysis is solid to me.
Methods And Evaluation Criteria: N/A. This paper is mainly theoretical.
Theoretical Claims: The theoretical analysis looks rigorous, and the conclusions make sense.
Experimental Designs Or Analyses: N/A. This paper is mainly theoretical.
Supplementary Material: I checked the proof sketch. The proof idea is reasonable to me overall.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: I would like to mention some weaknesses here.
1. Theoretical novelty. The technical contribution beyond [Zhang et al., 2023a] is not clear to me. You consider the classification problem with a different loss function, but why is it challenging enough compared with [Zhang et al., 2023a]? It would be better to include related discussions in the paper.
2. The practical insight of this work is not clear. For example, how can the theoretical analysis be used to improve ICL in practice? Are the theoretical conclusions aligned with any existing empirical finding? Or can the theory be used to explain any theoretical finding?
3. The writing needs some improvement. It is better to include some remark after each Theorem to provide an intuitive explanation. This can help readers understand the key point of each result.
Zhang et al., 2023a. Trained transformers learn linear models in-context.
Other Comments Or Suggestions: N/A
Questions For Authors: Can you prove that $\alpha>0$ in Eqn 18? Otherwise, the convergence analysis is not that strong.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >The technical contribution beyond [Zhang et al., 2023a] is not clear to me. You consider the classification problem with a different loss function. But why is it challenging enough compared with [Zhang et al., 2023a]. It is better to include related discussions in the paper.
Compared to [1], we use the cross-entropy loss for the classification problem. Thus, a critical distinction from [1] is that our model does not have a closed-form expression for the global minimizer, while [1] does. This distinction adds a significant challenge and leads to a different technical approach from [1]. By analyzing the Taylor expansion at the stationary point, we establish the convergence properties of $W^*$. Moreover, [1] considered optimizing the transformer using gradient flow. In contrast, our work proves the convergence of optimizing the transformer with the more practical gradient descent method. We will add more related discussions in our revised paper.
>The practical insight of this work is not clear. For example, how can the theoretical analysis be used to improve ICL in practice? Are the theoretical conclusions aligned with any existing empirical finding? Or can the theory be used to explain any theoretical finding?
Yes, our theoretical conclusions align with existing empirical findings. For example, in Figure 1, we conducted experiments of single-layer and multi-layer transformers for in-context classification of Gaussian mixtures. We can notice that both transformer models' ICL inference errors decrease as training prompt length ($N$) and test prompt length ($M$) increase, and increase as the number of Gaussian mixtures ($c$) increases. These behaviors are consistent with our theoretical claims. Moreover, the results also indicate that some of our insights obtained from studying this simplified model may hold for transformers with more complex structures, and studying this simplified model can help us have a better understanding of the ICL abilities of transformers. We hope this paper can provide valuable insights into the theoretical understanding of the ICL mechanisms of transformers. These insights may be helpful for potential architectural design and building safe and explainable AI systems.
>The writing needs some improvement. It is better to include some remark after each Theorem to provide an intuitive explanation. This can help readers understand the key point of each result.
Thanks for your suggestion. We will add more marks and intuitive explanations in the revised paper.
**References**:
[1] Ruiqi Zhang, Spencer Frei, and Peter L Bartlett. Trained transformers learn linear models in-context. arXiv preprint arXiv:2306.09927, 2023a.
[8] Yu Huang, Yuan Cheng, and Yingbin Liang. In-context convergence of transformers. arXiv preprint arXiv:2310.05249, 2023.
[9] Hongkang Li, Meng Wang, Songtao Lu, Xiaodong Cui, and Pin-Yu Chen. Training nonlinear transformers for efficient in-context learning: A theoretical learning and generalization analysis. arXiv preprint arXiv:2402.15607, 2024.
[10] Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, and Peter L Bartlett. How many pretraining tasks are needed for in-context learning of linear regression? arXiv preprint arXiv:2310.08391, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response. I prefer to keep my current rating. The theoretical analysis is solid. The contributions are not that exciting to lead to a higher rating. BTW, I left a question in "Questions For Authors", where I am actually asking whether a positive lower bound of $\alpha$ can be proved. If so, it can make the convergence analysis stronger.
---------------------------
Thank you for the further clarification.
---
Reply to Comment 1.1.1:
Comment: Thanks for your acknowledgment for our contributions.
>Can you prove that $\alpha>0$ in Eqn 18? Otherwise, the convergence analysis is not that strong.
Yes. Actually, we have proved it in Lemma D.2 and Lemma F.3. For example, in Lemma D.2, Line 789-802, we proved that, for any compact set $R_W$ of $R^{d\times d}$, for any $W\in R_W$, we have $\nabla^2 L(W)\succ C(\Omega)S(\Omega)I_d/4$ with $C(\Omega)S(\Omega)>0$. The definitions of $C(\Omega)$ and $S(\Omega)$ can be found in Line 772 and 792. Thus, with the initial point $W_0$ and the $R_W$ we defined in Line 1091, we have $\nabla^2 L(W)\succeq \alpha I_d$ with $\alpha>0$. We will make this point more clear in the revised paper. | Summary: This paper studies the in-context learning (ICL) capabilities of the transformer model. In particular, this paper shows that one layer of linear attention mechanism, after pre-training through gradient descent, can implement classification of Gaussian mixture data. The main results of this paper are the convergence guarantees and test-loss bounds of pre-training a (single-layer, sparse, linear) attention model for both binary and multi-class classification.
Compared to previous results such as [Bai et al. 2023], which only provided a construction, this paper presents rigorous guarantees on the GD pre-training convergence dynamics and additionally studies the case of multi-class classification.
Claims And Evidence: I find the overall writing of this paper to be good and the theoretical claims to be clear. The study of ICL for classification tasks is under-explored in the literature and this paper is a valuable addition towards that direction.
However, I find that the paper is lacking in a few areas:
1. The multi-class case in Section 4 is certainly a worthwhile exercise, but the setting and results are largely a direct extension of the binary case in Section 3. I find that there is very little benefit from essentially repeating Section 3. I suggest the authors condense this section, elaborate more on the technical difficulty of the multi-class case, and leave the rest of the details to the Appendix.
2. I find Assumptions 3.1 and 3.5 to be slightly problematic. It is okay to assume homo-scedasticity (same $\Lambda$ for both classes), but the assumption on the means having the same $\Lambda^{-1}$-weighted norms is hard to justify. I guess this assumption is made for the convenience of the proof. If so, the authors should be upfront about this and explain why the problem is intractable without this assumption.
3. The discussion of the prediction (around eq (12)) is rather lacking. It seems that the derivation is about to connect the pre-trained attention model to linear/quadratic discriminant analysis (LDA/QDA) but then abruptly stops. Actually, I think it is very helpful to describe what kind of learning algorithm is implemented by the pre-trained transformer. This type of argument is in fact a major selling point of [von Oswald, 2022] and really helped with its popularity. In fact, if I am not wrong, setting $W = 2\Lambda^{-1}$ implements the LDA decision criterion?
4. What happens when you stack multiple layers? We know this helps in the regression setting. Do you have reasons/hypothesis on why more layers does not seem to help much for the classification task?
And some minor points:
1. remarks E.1 and G.2 should be included as part of the main body.
2. The well-conditioned property (9) is proven in the paper, but currently it sounds like an assumption. Please clarify this.
Methods And Evaluation Criteria: n/a
Theoretical Claims: The theoretical claims and analysis look to be the natural extension of the techniques of [Zhang et. al. 2023], and the authors did a decent job at differentiating their works from the existing results. While I am not a fan of the sparsity assumption (5), almost everyone in the area imposes this assumption. So I will not complain about this too much.
I noticed that in Section D.1, the description of D.3 does not match the actual lemma, and I cannot seem to find a lemma that fits the description. So I would like to see a more elaborate proof sketch that correctly connects different steps of the proof.
Overall, I think the ideas presented in the paper are interesting and valuable. However, I have some reservations regarding 1) the assumptions on the data, 2) connection to classical Gaussian mixture techniques such as LDA, 3) the gap in the proof sketch. I think this paper barely falls short of the standard for publication, but I am happy to upgrade my score if the authors can address my concerns.
Experimental Designs Or Analyses: I know this is a theory paper, so the few proof-of-concept experiments in Section 5 are fine. However, I find that the discussion around Figure 3 to be extremely lacking.
1. How many classes are there?
2. For the "inference error" are you referring to the TV distance in equation 3? I don't think this notion is directly applicable to k-nearest neighbor or SVM.
3. I find this comparison to be unfair since the transformer models have been pre-trained on a lot (what is the exact number?) of data, whereas I assume the classical models are directly applied to the test prompt.
4. Why is LDA not part of the classical baseline? This is the perfect setting for LDA.
Also, you should include your source code even if they are short.
Supplementary Material: I read Appendix A-C and H and took a brief look at Appendix D.
Relation To Broader Scientific Literature: see above
Essential References Not Discussed: I think the coverage of the recent works on ICL is sufficient, but I think adding a few references on the classical methods for Gaussian mixtures, e.g. linear discriminant analysis, would be very helpful.
Other Strengths And Weaknesses: see above
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >I suggest the author to condense Section 4. Remarks E.1 and G.2 should be included as part of the main body.
Thanks for your suggestions. We will modify them in the revised paper.
>It is okay to assume homo-scedasticity (same $\Lambda$ for both classes), but the assumption on the means having the same $\Lambda^{-1}$-weighted norms is hard to justify. I guess this assumption is made for the convenience of the proof. If so, the authors should be upfront about this and explain why the problem is intractable without this assumption.
Yes, this assumption is made for the convenience of the proof. Actually, in Remark E.1 and G.2, we explained why we need this assumption and showed that if this assumption is not satisfied, the single-layer transformer cannot correctly perform the in-context classification tasks. We also verified this in experiments (Section 5.1, Figure 2) that the single-layer transformer cannot perform well for varying norms.
>It seems that the derivation is about to connect the pre-trained attention model to linear/quadratic discriminant analysis (LDA/QDA) but then abruptly stops. Actually, I think it is very helpful to describe what kind of learning algorithm is implemented by the pre-trained transformer. Setting $W = 2\Lambda^{-1}$ implements the LDA decision criterion?
Yes, we are connecting the pre-trained attention model to LDA and setting $W = 2\Lambda^{-1}$ implements the LDA decision criterion. We will add more discussions about the connection between the pre-trained attention model and LDA in the revised paper. Thanks for your suggestion.
>What happens when you stack multiple layers?
In our experiments (Section 5.1, Figure 2), we showed that multi-layer transformers have better robustness for varying covariances and norms. We believe developing a better understanding of multi-layer and more complex transformers is an intriguing direction for future research.
>The well-conditioned property (9)
Yes, we prove that there exist $\alpha>0$ and $l<\infty$ that satisfy $(9)$. In Lemma D.2, we prove that, in a compact domain, the strong convexity parameter $\alpha$ is larger than 0. In Lemma D.5, we show that $l\leq \frac{1}{4}\sum_{i\in[d^2]}E[(p_{t_1(i)}q_{t_2(i)})^2]$, where $p,q$ are some combinations and rotations of Gaussian random variables. Since Gaussian random variables have finite second moments, we have $l<\infty$. We will clarify this in the revised paper.
>I noticed that in Section D.1, the description of D.3 does not match the actual lemma
What we exactly did in Lemma D.3 matches what we described in Section D.1. We are analyzing the Taylor expansion of $L(W)$ in Lemma D.3. In line 892-894, we show that as $N\to\infty$, our loss function $L(W)$ point-wisely converges to $\widetilde{L}(W)$. In line 926, we also show that as $N\to\infty$, the global minimizer $W^*$ converges to $2\Lambda^{-1}$. The logic of the proof here is that we first show the loss function $L(W)$ converges point-wisely to $\widetilde{L}(W)$, which implies that the global minimizer $W^*$ converges to $2\Lambda^{-1}$. Then, given the property that the global minimizer $W^*$ converges to $2\Lambda^{-1}$, in Lemma D.4, we can derive a tighter convergence rate for $W^*$. We will clarify this in the revised paper.
>Figure 3: How many classes are there?
We mentioned in the paper that we compare the classification of three Gaussian mixtures.
>For the "inference error" are you referring to the TV distance in equation 3? I don't think this notion is directly applicable to k-nearest neighbor or SVM.
We used KNeighborsClassifier and SVC from sklearn, and they provided the prediction probability for given test data. We then use the prediction probability to calculate the TV distance. See the API of sklearn for details.
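To make the metric concrete, here is a small NumPy stand-in (hypothetical code; in the experiments we used sklearn's `predict_proba` directly) showing how a k-nearest-neighbour probability estimate feeds into a TV distance:

```python
import numpy as np

def knn_proba(X, y, xq, k):
    """Class-probability estimate of a k-NN classifier: the fraction of each
    label among the k training points closest to the query xq (mirroring what
    a predict_proba call returns)."""
    idx = np.argsort(np.linalg.norm(X - xq, axis=1))[:k]
    return np.array([np.mean(y[idx] == c) for c in np.unique(y)])

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

X = np.array([[-2.0, 0.0], [-2.1, 0.1], [-1.9, -0.1],
              [2.0, 0.0], [2.1, 0.1], [1.9, -0.1]])
y = np.array([0, 0, 0, 1, 1, 1])
p_hat = knn_proba(X, y, np.array([2.0, 0.05]), k=3)  # all 3 neighbours are class 1
tv = tv_distance(p_hat, np.array([0.0, 1.0]))        # 0.0 against the true label
```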
>I find this comparison to be unfair since the transformer models have been pre-trained on a lot (what is the exact number?) of data
The exact number of training data points can be found in Appendix H.2 (Experiment Details). It is somewhat unfair to compare Transformer models with classical models. However, the goal of our experiments is not to make a fair comparison between Transformers and classical methods, nor to prove that Transformers are superior. Instead, our main purpose is to demonstrate that trained Transformers have in-context learning capabilities and perform no worse than classical methods.
>Why is LDA not part of the classical baseline? You should include your source code
We upload our code and additional results in the following link. https://anonymous.4open.science/r/In-Context-Classification-of-Gaussian-Mixtures-2374
We added LDA in LDA_Comp.png
>Adding a few references on the classical methods for Gaussian mixtures, e.g. linear discriminant analysis, would be very helpful.
Thanks for this comment. We will cite and discuss a few references about Gaussian mixtures and linear discriminant analysis in the revised paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses.
> Yes, this assumption is made for the convenience of the proof.
I don't think this is a fatal weakness, but please be upfront about it in your revision.
> We will add more discussions about the connection between the pre-trained attention model and LDA in the revised paper.
**I want to see a precise statement for this before I can decide if I should change my rating.**
> What we exactly did in Lemma D.3 matches what we described in Section D.1.
After reading the Lemma more carefully, I think you are right. But please consider revise Appendix D.1 to make it easier to understand.
> We used KNeighborsClassifier and SVC from sklearn, and they provided the prediction probability for given test data
I see. If I recall correctly, the prediction probability of `SVC` is computed by doing K-fold validation, which I think is not a fair comparison to make. But the most important comparison is with LQA, so I will forgive this.
---
Reply to Comment 1.1.1:
Comment: >I don't think this is a fatal weakness, but please be upfront about it in your revision.
Thanks for your suggestions. We will do it upfront in the revised paper.
>I want to see a precise statement for this before I can decide if I should change my rating.
Thanks for your suggestions. We would like to add the following discussion in the revised paper to better illustrate the connections between the trained transformer and LDA.
Our pre-trained single-layer transformer can be viewed as approximately implementing Linear Discriminant Analysis (LDA). To see this, consider the binary classification case as an example. Suppose we are given $x_i, y_i, i=1,\ldots,M$ and $x_q$, and we need to predict the label $y_q$ for $x_q$. LDA assumes that $x_i, y_i, i=1,\ldots,M$ and $x_q, y_q$ are i.i.d. samples with $P(y=1)=P(y=-1)$, and that the conditional probability density functions $f(x|y=1)$ and $f(x|y=-1)$ are Gaussian with means $\mu_1, \mu_{-1}$ and the same covariance $\Sigma$. Under these assumptions, it can be derived that the optimal decision criterion for $x_q$ is to predict $y_q=1$ if $(\mu_1-\mu_{-1})^\top \Sigma^{-1}x_q+\frac{1}{2}(\mu_{-1}^\top\Sigma^{-1}\mu_{-1}-\mu_1^\top\Sigma^{-1}\mu_1)>0$ and $y_q=-1$ otherwise. LDA estimates $\hat{\mu}_1$ as the average of the $x_i$ with $y_i=1$, $\hat{\mu}_{-1}$ as the average of the $x_i$ with $y_i=-1$, and the covariance $\hat{\Sigma}$ from the within-class variances. The decision criterion then becomes $(\hat \mu_1-\hat \mu_{-1})^\top \hat \Sigma^{-1}x_q+\frac{1}{2}(\hat \mu_{-1}^\top \hat \Sigma^{-1}\hat \mu_{-1}-\hat \mu_1^\top\hat\Sigma^{-1}\hat \mu_1)>0$. The single-layer transformer can compute the in-context estimate $\hat \mu_1-\hat \mu_{-1}=\sum_{i=1,\ldots,M}y_ix_i/M$; however, it is hard for the single-layer transformer to compute $\hat \Sigma$ and $\hat \mu_{-1}^\top \hat \Sigma^{-1}\hat \mu_{-1}$ in context. Thus, in our paper, we make the following assumptions. We assume the pre-training data and test data have the same covariance $\Lambda$, so that the transformer can learn an approximation of $\Lambda$ during pre-training. Moreover, we assume the two class means $\mu_1$, $\mu_{-1}$ have the same $\Lambda^{-1}$-weighted norm, so that $\mu_{-1}^\top\Lambda^{-1}\mu_{-1}-\mu_1^\top\Lambda^{-1}\mu_1=0$.
Under these assumptions, the quadratic term cancels out, and the decision criterion simplifies to $(\sum_{i=1,\ldots,M}y_ix_i/M)^\top \Lambda^{-1}x_q$, which is very close to Eqn (12) in our paper; when we use $\hat W$ to approximate $2\Lambda^{-1}$, this becomes exactly Eqn (12). Therefore, when $\hat W = 2\Lambda^{-1}$ and the in-context examples are balanced across classes, the transformer's decision criterion is the same as that of LDA with exact knowledge of $\Lambda$. In our experiment (see LDA\_Comp.png), since the pre-trained transformer has already learned a relatively good approximation of $\Lambda^{-1}$, while LDA must estimate $\Lambda^{-1}$ in context, the trained transformer significantly outperforms LDA when the number of in-context examples is small. As the context length increases, LDA's performance approaches that of the trained transformer. Thus, our paper theoretically demonstrates that the trained transformer can approximately implement LDA, and our experiments (LDA\_Comp.png) corroborate this finding.
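The cancellation described above can be checked numerically. The following sketch is our own illustration (not the paper's code), taking $\Lambda = I$ and symmetric means $\mu_{-1} = -\mu_1$ so that the $\Lambda^{-1}$-weighted norms match; it compares the sign of the LDA criterion with exact covariance against the in-context criterion $(\sum_i y_i x_i / M)^\top \, 2\Lambda^{-1} x_q$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, Q = 4, 2000, 500
Lam = np.eye(d)               # shared covariance (identity for simplicity)
Lam_inv = np.linalg.inv(Lam)
mu1 = rng.normal(size=d)
mum1 = -mu1                   # equal Lambda^{-1}-weighted norms by symmetry

# in-context examples drawn from the two-class Gaussian mixture
labels = rng.choice([1, -1], size=M)
X = np.where(labels[:, None] == 1, mu1, mum1) + rng.standard_normal((M, d))

# query points drawn from the same mixture
qlabels = rng.choice([1, -1], size=Q)
Xq = np.where(qlabels[:, None] == 1, mu1, mum1) + rng.standard_normal((Q, d))

# LDA criterion with exact covariance (the quadratic term cancels here)
lda_scores = Xq @ (Lam_inv @ (mu1 - mum1))
# transformer-style criterion: in-context mean of y_i * x_i, weighted by 2 * Lam^{-1}
tf_scores = Xq @ ((2 * Lam_inv) @ (labels @ X / M))

agree = np.mean(np.sign(lda_scores) == np.sign(tf_scores))
```

With a long enough context, the two criteria agree on nearly all query points.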
>After reading the Lemma more carefully, I think you are right. But please consider revise Appendix D.1 to make it easier to understand.
Thanks for your suggestions. We will revise Appendix D.1 to make it easier to understand in the revised paper.
>I see. If I recall correctly, the prediction probability of SVC is computed by doing K-fold validation, which I think is not a fair comparison to make. But the most important comparison is with LQA, so I will forgive this.
Thanks. Yes, it is not a fair comparison. We will modify it in the revised paper. | Summary: This paper looks at ICL for classification using Gaussian mixtures (same covariance across classes but different means) by trained transformers. They show that under the condition that the training and test data come from the same covariate covariance distribution, using linear attention can provably work with a single layer. They also show experimental evidence that multiple layers can do better as well as work even when this assumption of the same covariance distribution is broken.
Claims And Evidence: Yes, although I must confess that I did not have time to read the full proofs or check them carefully.
Methods And Evaluation Criteria: Yes, sort of. There is a key missing comparison in my opinion: to just doing "least-squares for classification" --- what is sometimes called the LS-SVM --- where the shared covariate covariance from the training and test data is known to this LS-SVM as well (either directly by magic or from just doing a dumb natural algorithm involving averaging over the training data to extract it.)
Given the work in Zhang, Frei, and Bartlett (2023a), this feels like the critical question.
Theoretical Claims: Not really. Sorry.
Experimental Designs Or Analyses: Yes, see above.
Supplementary Material: No.
Relation To Broader Scientific Literature: The problem of understanding exactly why transformer layers work and what the limitations are is an important problem. The paper does a good job of surveying the literature.
Essential References Not Discussed: They made a good choice I feel.
Other Strengths And Weaknesses: Fundamentally, I felt that the lack of serious comparison to just the linear-regression approach is a big weakness. Treating classification as linear regression isn't optimal, but it can work decently well especially in the kinds of settings here.
So, what parts of the analysis are picking up extra nuances of what single-layer linear-attention transformers can do beyond linear regression and what parts are just casting this problem into its shadow linear-regression form in disguise? I can't tell from the discussion. But this is what I really want to know.
Other Comments Or Suggestions: None. Just answer my questions.
Questions For Authors: How does just doing "least-squares for classification" --- what is sometimes called the LS-SVM --- do for the experiment in Figure 3? First straight-up least-squares knowing nothing. Second, one that has learned the covariance $\Lambda$ by magic. Third, one that has learned the covariance $\Lambda$ using the same training data provided to the transformers?
If I think about how linear regression for classification works in a high-enough-dimensional space, when the means are large, it works quite decently. Anyone who studies mixture-models as toys for classification knows that the signal-to-noise ratio is important. Given that your plots have inference errors dropping to 5% or lower, it suggests that the SNR is decent. So how do the results change if we vary the SNR?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We conducted additional experiments and uploaded code in https://anonymous.4open.science/r/In-Context-Classification-of-Gaussian-Mixtures-2374
>Fundamentally, I felt that the lack of serious comparison to just the linear-regression approach is a big weakness. Treating classification as linear regression isn't optimal, but it can work decently well especially in the kinds of settings here.
So, what parts of the analysis are picking up extra nuances of what single-layer linear-attention transformers can do beyond linear regression and what parts are just casting this problem into its shadow linear-regression form in disguise? I can't tell from the discussion. But this is what I really want to know.
In SNR_LR_Comp.png, we conducted experiments comparing the transformer with linear regression for classification tasks. We observe that trained transformers generally perform better than linear regression, especially when the prompt length is small. This is not too surprising, since the trained transformers have, to some extent, learned $\Lambda$ during training.
>Given the work in Zhang, Frei, and Bartlett (2023a), this feels like the critical question.
Compared to [1], we use the cross-entropy loss for the classification problem. Thus, a critical distinction to [1] is that our model does not have a closed-form expression of the global minimizer, while [1] does. This distinction adds a significant challenge and leads to a different technical approach compared to [1]. By analyzing the Taylor expansion at the stationary point, we establish the convergence properties of $W^*$. Moreover, [1] considered optimizing the transformer using gradient flow. In contrast, our work proves the convergence of optimizing the transformer with the more practical gradient descent method. We will add more related discussions in our revised paper.
>First straight-up least-squares knowing nothing. Second, one that has learned the covariance $\Lambda$ by magic. Third, one that has learned the covariance $\Lambda$ using the same training data provided to the transformers?
In SNR_LR_Comp.png, we compared the trained transformer with straight-up linear regression knowing nothing. Actually, as reviewer aULa pointed out, our pre-trained attention transformer approximately implements LDA. If an LDA model learned the covariance $\Lambda$ by magic, or learned it from the same training data provided to the transformers, this LDA model should have performance comparable to or better than the trained transformer.
>If I think about how linear regression for classification works in a high-enough-dimensional space, when the means are large, it works quite decently. Anyone who studies mixture-models as toys for classification knows that the signal-to-noise ratio is important. Given that your plots have inference errors dropping to $5\%$ or lower, it suggests that the SNR is decent. So how do the results change if we vary the SNR?
In SNR_LR_Comp.png, by changing the magnitude of the covariance matrix of the testing data ($\Lambda, 4\Lambda, 16\Lambda$), we compared the accuracy of trained transformers and linear regression at different SNR levels. Our results show that a smaller SNR leads to worse accuracy for both the transformer model and the linear regression model.
References can be found in our reply to reviewer M6vU. | Summary: In this paper, the authors provide a theoretical analysis of in-context learning of linear classification tasks on Gaussian mixture models. By assuming a simplified linear self-attention structure and fixing some parameters during the whole training, the authors prove that linear attention can converge to the global minimum at a linear convergence rate when minimizing the population loss with gradient descent. Additionally, the authors prove that the global minimum achieves a small total variation between the output of the trained model and the true label when applied to a new testing prompt. Finally, the authors conduct simple numerical experiments supporting their conclusions.
Claims And Evidence: From my perspective, there exist some major concerns regarding the assumptions and conclusions of this paper. I will list them as follows and discuss the details in the next "Methods" and "Theory" sections.
1. By assuming an over-simplified attention structure, the optimization problem in equation (7) is obviously a convex optimization problem, which is almost completely understood. Additionally, given the loss function (cross-entropy) considered in equations (7) and (17), the optimization problem considered in this paper is essentially logistic regression in the binary and multi-class cases respectively, which is well understood. I do not feel that there exist any essential challenges in converting the conclusions from the binary case to the multi-class case. Therefore, I do not feel that the optimization problem considered in this paper is as highly non-linear (actually, this is a generalized linear model) and challenging as the authors claim.
2. Additionally, the authors propose utilizing total variation as the criterion to evaluate test performance on a new prompt. However, total variation only captures similarity between distributions, not between the random variables themselves. Consequently, Theorem 3.6 implies nothing regarding the testing performance.
Methods And Evaluation Criteria: 1. As I mentioned in the previous section, the attention layer considered in this paper adopts a simplified linear attention structure. Additionally, it fixes all the parameters in $W^V$ and $W^{KQ}$ except the top-left block in $W^{KQ}$. Therefore, all the nonlinearity of this model comes from the cross-entropy loss function. Such oversimplification renders this linear attention model equivalent to logistic regression, for which it is well established that the optimization problem is strongly convex within each compact set. Additionally, the data model of this paper is specified as a Gaussian mixture, which is obviously not linearly separable when the size of the training set $B$ is infinitely large. Therefore, the loss function is coercive and, given strong convexity, must have a unique minimum. Hence, the optimization problem considered in this paper is, from my perspective, well studied and trivial to some extent. I have reviewed the proof details and found that most of the proof in this paper focuses on establishing the strong convexity of the cross-entropy loss, which is a well-known fact. Note that even in the original paper studying in-context learning via training [1], the authors consider a more practical training strategy without fixing any parameters, which makes their loss non-convex and the theoretical analysis highly non-trivial. Besides, there exist multiple theoretical studies that consider training $W^V$ and $W^{KQ}$ simultaneously, even in the more challenging softmax attention setting [2, 3, 4, 5].
2. Additionally, as I mentioned in the previous section, total variation cannot be used to evaluate test performance. To illustrate this, consider a simple example where the true label $y$ is a Rademacher random variable, and I choose $\hat y = -y$ as the prediction of $y$. According to the total variation formula in equation (3), we have $\Delta(y, \hat y)=0$. However, the test accuracy for this prediction $\hat y$ is $0$. Consequently, the conclusion of Theorem 3.6 does not provide any meaningful insights regarding test performance.
[1]. Zhang, R., Frei, S. and Bartlett, P.L., 2024. Trained transformers learn linear models in-context. JMLR
[2]. Jelassi, S., Sander, M. and Li, Y., 2022. Vision transformers provably learn spatial structure. NeurIPS
[3]. Li, H., Wang, M., Liu, S. and Chen, P.Y., A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity. ICLR
[4]. Wang, Z., Wei, S., Hsu, D. and Lee, J.D., 2024, July. Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot. ICML
[5]. Zhang, C., Meng, X. and Cao, Y., 2025. Transformer learns optimal variable selection in group-sparse classification. ICLR
Theoretical Claims: Besides the major concerns I proposed in the previous sections, I still have the following concerns for the proof details.
1. The authors obscure several important conditions in their presentation of the theorems. What concerns me most is their decision to drop all terms and factors that are independent of $N$ during the training process. In most theoretical studies related to ICL or transformers, the embedding dimension $d$ is an extremely critical parameter. However, in this paper, the authors directly treat $d$, along with the norms of the mean vectors $\mu_1$ and $\mu_2$ and other factors related to the spectral distribution of the covariance matrix $\Lambda$, as constant order, neglecting these factors entirely. In comparison, in [1], the authors retain all these terms in the conclusions of their theorems, so readers can clearly understand how large the sequence length $N$ must be to cancel the effect of the other terms and achieve good performance.
2. Additionally, I also do not understand why the query token $q$ can appear on the RHS of the formula of Theorem 3.6 when you are taking the expectation over the query token pair.
3. I find that Assumptions 3.2 and 3.5, which state that the mean vectors $\mu_1$ and $\mu_2$ have the same $\Lambda^{-1}$ norm, are somewhat strong. Given these assumptions, I do not see a significant difference from directly assuming an isotropic Gaussian noise.
Experimental Designs Or Analyses: I have checked the experimental results. There is no major incorrectness.
Supplementary Material: Yes, I have carefully checked the details of the proof for the binary classification case. I also briefly reviewed the proof for the multi-class case and found no significant differences compared to the binary case.
Relation To Broader Scientific Literature: As I said in the previous section, even compared with the original paper [1], the model considered in this paper is heavily over-simplified.
[1]. Zhang, R., Frei, S. and Bartlett, P.L., 2024. Trained transformers learn linear models in-context. JMLR
Essential References Not Discussed: There are several theoretical studies on the optimization of one-layer transformers that consider training $W^v$ and $W^{KQ}$ simultaneously, even in the more challenging softmax attention setting [2, 3, 4, 5]. There are also results on general convergence guarantees of transformers such as [6]. In addition, [7] considers almost the same question as this paper. I suggest that the authors should compare their results with these works.
[2]. Jelassi, S., Sander, M. and Li, Y., 2022. Vision transformers provably learn spatial structure. NeurIPS
[3]. Li, H., Wang, M., Liu, S. and Chen, P.Y., A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity. ICLR
[4]. Wang, Z., Wei, S., Hsu, D. and Lee, J.D., 2024, July. Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot. ICML
[5]. Zhang, C., Meng, X. and Cao, Y., 2025. Transformer learns optimal variable selection in group-sparse classification. ICLR
[6]. Gao, C., Cao, Y., Li, Z., He, Y., Wang, M., Liu, H., Klusowski, J.M. and Fan, J., Global Convergence in Training Large-Scale Transformers. NeurIPS
[7]. Frei, S. and Vardi, G., Trained Transformer Classifiers Generalize and Exhibit Benign Overfitting In-Context. ICLR
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: Is there a typo in line 719? Should it be $\frac{1}{4N^2}$ instead of $\frac{1}{4N}$?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: >By assuming an over-simplified attention structure...
Setting some parameters to fixed values and considering sparse-form parameters is commonly used in ICL theory papers [7,8,9,10], and we adopt a parameterization similar to [7,8,10]. Even for this simplified structure, our analysis of the convergence of $W^*$ is nontrivial. For example, in order to obtain a tighter bound for $W^*$, in Lemma D.3 we first show that the loss function $L(W)$ converges pointwise to $\widetilde{L}(W)$, which implies that the global minimizer $W^*$ converges to $2\Lambda^{-1}$. Then, given the property proved in Lemma D.3, we can derive a tighter convergence rate for $W^*$ in Lemma D.4 than the bound obtained in Lemma D.3.
>total variation cannot be used to evaluate test performance.
We thank the reviewer for this comment, but we believe that total variation distance can be used to evaluate test performance in our setting. We illustrate this point with a binary-case example. Suppose for data $x_q$, the conditional distribution of the ground-truth label $y$ is $P(y=1|x_q)=p$, $P(y=-1|x_q)=1-p$. Suppose the prediction of the model is $z$, which depends on $P=(x_1, y_1, \ldots, x_M, y_M, x_q)$, and the conditional distribution of $z$ is $P(z=1|P)=q$, $P(z=-1|P)=1-q$. Since the $(x_i, y_i)$ are i.i.d. samples, for a given $P=(x_1, y_1, x_2, y_2, \ldots, x_M, y_M, x_q)$, $z|P$ and $y|x_q$ are independent. Thus, the example given by the reviewer is not valid in our setting. If we consider the accuracy, we can calculate it as $Acc(y, z)=pq+(1-p)(1-q)=1+2pq-p-q$. We can see that if $p>1/2$, the $q$ maximizing the accuracy is $q=1$, and if $p<1/2$, it is $q=0$. Thus, if the output of the model $\widehat y_{out} \mid P$ matches $P(y=1|x_q)$, we can output $z$ based on the value of $\widehat y_{out} \mid P$ to maximize the achievable accuracy. The smaller the difference between $\widehat y_{out} \mid P$ and $P(y=1|x_q)$, the better the model performs. The total variation distance in our case measures exactly this difference.
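The accuracy formula for independent $y$ and $z$ can be verified with a quick numeric check (our own sketch, not from the paper):

```python
import numpy as np

def acc(p, q):
    # accuracy when y and z are independent with P(y=1)=p, P(z=1)=q:
    # Acc = p*q + (1-p)*(1-q) = 1 + 2*p*q - p - q
    return p * q + (1 - p) * (1 - q)

qs = np.linspace(0.0, 1.0, 101)
best_q_high = qs[np.argmax(acc(0.8, qs))]  # p > 1/2: accuracy maximized at q = 1
best_q_low = qs[np.argmax(acc(0.3, qs))]   # p < 1/2: accuracy maximized at q = 0
```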
>What concerns me most is their decision to drop all terms and factors that are independent of $N$ during the training process...
We'd like to clarify that we did not drop all terms and factors that are independent of $N$ during the training process. For example, in Theorem 3.3 Eqn (10), we retained the coefficient of the $1/N$ term and showed that this coefficient is affected by the distribution of distance between two means $\mu=\mu_{\tau, 1}-\mu_{\tau, 0}$, the query, and the covariance matrix $\Lambda$. Intuitively, the classification problem becomes easier when the distance between the two class means $\mu$ increases and the variance $\Lambda$ decreases; conversely, it becomes harder when $\mu$ is small and $\Lambda$ is large. Eqn (10) reflects this intuition. For instance, when $\mu$ is large and $\Lambda$ is small -- corresponding to an easier classification problem -- the parameter $a$ becomes larger, and the derivative $\sigma'(a)$ decays exponentially with increasing $a$, thereby reducing the overall training error. On the other hand, when $\mu$ is small and $\Lambda$ is large -- corresponding to a harder classification problem -- the training error increases accordingly. This theoretical prediction aligns well with our intuition. We did not explicitly present results in terms of the dimensionality $d$, because intuitively, $d$ is not the primary factor determining the classification difficulty. Instead, it is the relationship between $\mu$ and $\Lambda$ that plays the central role. Our theory captures this key insight effectively.
>I also do not understand why the query token $q$ can appear on the RHS of the formula of Theorem 3.6 when you are taking the expectation over the query token pair.
We did not take the expectation over the query token $q$; please check Theorem 3.6. Our expectation is taken over $x_i, y_i$, $i=1,\ldots,M$, which does not include $q$.
>I find that Assumptions 3.2 and 3.5, which state that the mean vectors $\mu_1$ and $\mu_2$ have the same $\Lambda^{-1}$ norm, are somewhat strong.
See our second reply to reviewer aULa.
>There are several theoretical studies on the optimization of one-layer transformers that consider training $W^v$ and $W^{KQ}$ simultaneously, even in the more challenging softmax attention setting [2, 3, 4, 5]. There are also results on general convergence guarantees of transformers such as [6]. In addition, [7] considers almost the same question as this paper. I suggest that the authors should compare their results with these works.
Thank you very much for providing those related works. We will cite and discuss them in the revised paper.
>Is there a typo in line 719? Should it be $\frac{1}{4N^2}$ instead of $\frac{1}{4N}$?
No. This is not a typo. You can verify that the result is $\frac{1}{4N}$.
References can be found in our reply to reviewer M6vU.
---
Rebuttal Comment 1.1:
Comment: 1. The authors' rebuttal cannot address my concerns regarding the technical contribution of the optimization problem considered in this paper. As I mentioned very clearly, the over-simplified settings reduce the optimization of transformers to logistic regression. Under this classic setting, the existence of a global minimum, strong convexity, smoothness, and linear convergence have been well studied and demonstrated in previous works. The authors seem to be deliberately ignoring this comment in their rebuttal. Based on this discussion, the only theoretical contribution of the training optimization is to provide a characterization of the global minimum. However, we need to further note that there is a closed form for this global minimum. The authors only provide a range for this minimum. When $N$ is not large, this range is loose and cannot provide informative guidance about the global minimum. On the other hand, when $N$ is large, the LLN guarantees that $\frac{1}{N}\sum x_i y_i \to \frac{1}{2}(\mu_1 -\mu_0)$, which provides an intuitive choice of the global minimum. I do not feel this is a challenging issue based on all the previous discussion. As I mentioned, several existing works have addressed more practical and challenging settings compared to those presented in this paper.
2. I completely fail to understand why the independence between $\hat y$ and $y$ is of such great importance. Even following the authors' requirement of independence between $\hat{y}$ and $y$, I can still propose counterexamples. Let $\hat{y}$ and $y$ be two independent Rademacher random variables; then $\Delta(\hat{y}, y) = 0$, but there is still a 50% classification error. I believe I have clearly stated that, due to the existence of such counterexamples, total variation itself is not suitable for measuring test performance, nor is it a common choice, as it only addresses the distance between distributions rather than the relationship between random variables. There are standard metrics for evaluating population generalization, such as test loss (utilized in [1]) and test error (utilized in [7]). I suggest that the authors consider these metrics as alternatives to total variation.
3. Additionally, if you do not take the expectation over $q$, how can you guarantee the RHS of your Theorem 3.6 is small, given that $q$ is generated from an unbounded distribution and can take on extremely large values? If such a guarantee can only hold with high probability, it must be explicitly calculated and illustrated how it affects your final results.
4. I respectfully disagree with the authors' claim that they did not drop any factors. If not, why does a term $o(\frac{1}{N})$ appear? I wonder what the original numerator and denominator of this term are, and why this term could be represented as $o(\frac{1}{N})$ if you never use $N$ to cancel other factors. Indeed, I have checked the proof details, and the authors directly treat a term as $o(\frac{1}{N})$ if the denominator contains a factor $N$ with power larger than 1, regardless of the numerator. As I clearly mentioned, I do not feel it is reasonable to directly drop these terms, as the authors do not state any requirement or assumptions on the scale of $N$ compared to other parameters. I also do not accept the authors' claim that "$d$ is not the primary factor determining the classification difficulty". There is no doubt that the dimension $d$ is of great importance for studying the modern over-parameterized and high-dimensional regimes. I sincerely suggest the authors compare the presentation of their theorems with that of [1].
5. I know that your conclusion fails without such an assumption, and this is the issue. If your conclusion has to be established under such an assumption, what is the essential difference with the settings considering isotropic noise?
In summary, I'm not satisfied with the current manuscript or the rebuttal from the authors. All my previous concerns remain.
---
Reply to Comment 1.1.1:
Comment: 1. Since the reviewer claims there is a closed-form solution, we would greatly appreciate it if the reviewer could provide it. To the best of our knowledge, we are aware of an $l_2$-max margin solution in [7] for a setting similar to ours, which however is not a closed-form solution. Moreover, to ensure the max-margin solution in [7] is well-behaved enough for their theoretical analysis, they made additional assumptions, such as requiring a sufficiently large signal-to-noise ratio. In contrast, our work does not rely on such assumptions. Additionally, compared to [7], we consider a more general multi-class setting. We believe these distinctions highlight the independent contributions of our work.
2. If for a given $x_q$ the ground-truth label $y|x_q$ is a Rademacher random variable, i.e. $P(y=1|x_q)=0.5$, $P(y=-1|x_q)=0.5$, as the reviewer claimed, then in the setting of our paper this only happens when there are two Gaussian classes with means $\mu_1$, $\mu_{-1}$ and $x_q$ lies exactly on the line where $(\mu_1-\mu_{-1})^\top\Lambda^{-1} x_q=0$ (for example, perpendicular to $\mu_1-\mu_{-1}$ when $\Lambda=I$). In this situation, even the best classifier will inevitably have a 50% classification error, and this error is **intrinsic**. As a result, we do not consider this to be a problem. Total variation distance ($\Delta(y, \hat y)$) can serve as a loss measuring the difference between the distributions of two independent random variables. Moreover, if we consider the cross-entropy loss between $\hat{y}$ and $y$ ($CEL(\hat{y}; y)$), a bounded TV distance also implies a bounded $CEL(\hat{y}; y)$. One can easily prove that the minimum of the cross-entropy loss $CEL(\hat{y}; y)$ is achieved when $\hat{y}$ has the same distribution as $y$, which means $\Delta(y, \hat y)=0$. Moreover, denoting $\delta=\min_{i} P(\hat y=i)$, one can easily prove that $CEL(\hat{y}; y)\leq CEL(y; y)+\Delta(y, \hat y)/\delta$. We will clarify this in the revised paper.
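The claim that the cross-entropy between two distributions is minimized when they coincide (i.e. when the TV distance is zero) can be sanity-checked numerically; the following sketch is our own illustration:

```python
import numpy as np

def cross_entropy(true_p, pred_q):
    # E_{y ~ true_p}[-log pred_q(y)] for discrete distributions
    true_p = np.asarray(true_p, float)
    pred_q = np.asarray(pred_q, float)
    return float(-(true_p * np.log(pred_q)).sum())

p = np.array([0.7, 0.3])
ce_match = cross_entropy(p, p)              # equals the entropy of p
ce_uniform = cross_entropy(p, [0.5, 0.5])   # mismatched prediction
ce_skewed = cross_entropy(p, [0.9, 0.1])    # another mismatched prediction
```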
3. We showed how $q$ affects the inference error in Theorem 3.6. Actually, in most cases, **extremely large $q$ makes the classification problem extremely easy**. For example, let us consider the binary case and let $\Lambda=I$ without loss of generality. We can see that the main coefficient of the inference error in Theorem 3.6 is $\sigma'(\mu^\top\Lambda^{-1}q)=\sigma'(\mu^\top q)$. Thus, if $\mu^\top q = 0$, $q$ lies exactly on the line perpendicular to $\mu$. In this case, there is an equal 50% chance of belonging to class 1 or -1. However, in most cases, when $\mu^\top q \neq 0$, an extremely large $q$ makes $\sigma'(\mu^\top q)$ extremely small, because $\sigma'(x)$ decays exponentially as $|x|$ increases. This, in turn, causes the inference error in Theorem 3.6 to become extremely small as well (in Line 1134, you can see that the coefficient of the $o(1/N + 1/\sqrt{M})$ term is related to $\sigma''(\mu^\top q)$, which also becomes extremely small). This also matches the real-world behavior: as long as $q$ does not lie on the line perpendicular to $\mu$, an extremely large $q$ makes the classification problem extremely easy. This example demonstrates the consistency between our theory and reality.
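The exponential decay of $\sigma'$ invoked above is easy to verify directly; a minimal check (our own sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsigmoid(x):
    # sigma'(x) = sigma(x) * (1 - sigma(x)), which decays like exp(-|x|)
    s = sigmoid(x)
    return s * (1.0 - s)

peak = dsigmoid(0.0)   # maximum value 1/4 at x = 0
tail = dsigmoid(10.0)  # a large projection mu^T q drives this toward zero
```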
4. Actually, we kept the coefficient of the $1/N$ term and dropped only the coefficients of the $o(1/N)$ terms.
>I do not feel it's reasonable to directly drop these terms, as the authors do not claim any requirement or assumptions for the scale of $N$ compared to other parameters.
When we use the notation $o(1/N)$, the notation itself contains the assumption that $N$ is sufficiently large. Thanks for the reviewer's suggestion; we will clarify this in the revised paper.
>There is no doubt that the dimension $d$ is of great importance for studying the modern over-parameterized and high-dimensional regimes.
As we have clarified in the original rebuttal, **$d$ is not the direct and primary factor determining the classification difficulty**. Factors like the SNR (signal-to-noise ratio, here we can consider it as the relationship between the $\mu_1-\mu_{-1}$ and $\Lambda$) play a more critical role. For example, one can easily construct Gaussian mixtures with a large $d$ but low SNR, and compare with another one with a small $d$ but high SNR. In these cases, the latter can be significantly more difficult to classify than the former. The impact of $d$ in our setting is more on the memory and computation costs of the model. However, as our main focus is on the inference error, we have correspondingly focused on how relevant factors like $\mu, \Lambda, q, N, M$ affect the inference error.
5. We do not think our setting is equivalent to isotropic noise. We couldn’t quite understand this comment. Could you clarify? | null | null | null | null | null | null |
Understanding Generalization in Quantum Machine Learning with Margins | Accept (poster) | Summary: The authors address generalization in quantum machine learning by introducing a margin-based framework. The authors critique traditional uniform generalization bounds, which have been shown to be ineffective in both classical and quantum settings, and propose margin-based generalization bounds as a more reliable alternative. They extend classical margin-based theory to quantum neural networks by leveraging Lipschitz continuity and matrix covering techniques. Their experiments on quantum phase recognition datasets demonstrate a strong correlation between margin distribution and generalization performance, surpassing traditional metrics like parameter count. They also establish a connection between margins and quantum state discrimination, highlighting how maximizing the separability of quantum embeddings improves generalization. Their findings suggest that margin-based methods provide better theoretical insights and practical guidance for designing QML models with improved generalization capabilities.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem, but they can be improved with more datasets.
Theoretical Claims: No proofs in the main manuscript.
Experimental Designs Or Analyses: The optimized 8-qubit QCNNs and the generalization gap analysis are well-designed experiments. However, the discretization of the experiments is somewhat coarse, which may impact the precision of the results.
Supplementary Material: The additional experiments are well-constructed and appear sound.
Relation To Broader Scientific Literature: The authors relate their work to previous margin-based generalization results. The paper is missing a related work section.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The method is well-motivated, addressing the limited understanding of generalization in quantum machine learning. The connection between quantum state discrimination and generalization is a novel and valuable contribution to QML theory.
2. The paper is well-written, and the results are engaging and clearly presented.
3. The method outperforms the compared approaches, particularly in the comparative analysis. The experiments show a strong correlation between margin distribution and generalization performance, supporting the theoretical claims.
4. The application of QML to classical data using quantum embeddings is not novel but is a useful and relevant demonstration.
Weaknesses:
1. Limited experiments, broader testing across different parameters is needed for statistically robust validation.
2. Generalization gap results are mixed, especially in the appendix.
3. Lacks comparison with other methods, relying mainly on comparison with effective parameter analysis.
Other Comments Or Suggestions: 1. Add a relevant work section
Questions For Authors: 1. How would the method of margins compare to other methods in terms of robustness, i.e., how generalizable is this approach?
## Update after Rebuttal
The additional results and clarifications strengthen the message of this already interesting work. However, the new experiments are similar to the existing ones, and the sections need to be improved to better match the ICML style; e.g., clear Related Work and Limitations sections are missing. Additionally, I am skeptical about the significance of the results with regard to a venue like ICML, partly due to the limited experimentation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 5bgA for insightful comments. Below, we address each point directly and explain how we plan to incorporate your suggestions.
---
### 1) **Limited experiments; broader testing is needed for statistically robust validation**
To strengthen empirical validation, we have now conducted extensive additional experiments on two canonical quantum many-body datasets: **Transverse Field Ising Model** and **XXZ-Heisenberg chain**. We repeated all main-text experiments on these datasets; results are summarized clearly at the following anonymous links: [TFIM](https://tinyurl.com/47mpp43c) and [XXZ-Heisenberg](https://tinyurl.com/3r598nmm).
For both the TFIM and XXZ datasets, the margin consistently correlates with generalization performance. This strongly reinforces our central claims regarding the effectiveness of margin bounds for QNNs. These additional datasets significantly broaden our experimental scope, providing comprehensive validation of margin-based generalization theory.
We appreciate the reviewer highlighting the coarse discretization of our initial experimental results (originally QCNN layers = $\{1,5,9\}$). To comprehensively validate our claims, we now provide fine-grained experimental results covering $\{1,3,5,7,9\}$ layers. We also extend these detailed margin distribution analyses to the TFIM and XXZ-Heisenberg model. Results are summarized [at this anonymous link](https://tinyurl.com/5n6j7us3). We will explicitly incorporate these detailed results into the appendix of our revised manuscript.
---
### 2) **Generalization gap results are mixed, especially in the appendix**
We acknowledge the reviewer’s observation regarding mixed generalization gap results presented in the appendix. However, even within these initial results, margin-based metrics (both Q1 and mean) still consistently outperform all parameter-based metrics.
To further address this concern, we conducted additional experiments using the TFIM and XXZ-Heisenberg datasets, available [at this anonymous link](https://tinyurl.com/mrx8943h). These new results demonstrate highly stable and consistent correlations between margin-based metrics (Q1 and mean) and the generalization gap, without mixed outcomes. This further supports our central thesis that margin-based metrics offer stable and robust predictors of generalization in QNNs.
---
### 3) **Lack of comparison with methods beyond parameter-based metrics**
We appreciate the reviewer highlighting this point. Compared to classical machine learning, Quantum Machine Learning (QML) is a much newer field, and currently has relatively few established methods for evaluating generalization. While several recent works have proposed bounds from different theoretical perspectives, most ultimately fall within the class of parameter-based or uniform generalization bounds.
For instance:
- [Banchi et al.](https://tinyurl.com/yjx386x9) derive information-theoretic bounds based on the mutual information between the training data and the parametric quantum states. The analysis is based on Rademacher complexity, reflecting limitations shared with other uniform bounds, such as that proposed by [Caro et al](https://tinyurl.com/mrsrrjk4).
- [Abbas et al.](https://tinyurl.com/mw2ua99y) propose using the effective dimension, defined via the Fisher information matrix, to capture model complexity. This approach captures model expressivity through parameter sensitivity and thus also remains fundamentally parameter-based.
Given this, we chose to compare against more recent work by [Caro et al.](https://tinyurl.com/mrsrrjk4) as a representative of this broader class of approaches. Their bounds are constructed using covering numbers and Rademacher complexity, directly in terms of trainable parameters and model capacity. This makes their work both relevant and representative of the prevailing generalization bounds in QML to date.
Moreover, the recent critical analysis by [Gil-Fuster et al.](https://tinyurl.com/4kjnvb44) directly points to inherent shortcomings of parameter-based uniform bounds, motivating us to benchmark against parameter-based methods explicitly.
We will clearly outline these justifications in our revised manuscript, contextualizing our choice of comparisons and the current state of generalization theory in QML.
---
### **Additional Points**
**Related Work Section:** We agree and thank the reviewer for this suggestion. We will include a concise **Related Work** section in the revised manuscript, situating our contributions within recent quantum generalization literature.
**Proofs in the Main Manuscript:** To maintain readability, detailed proofs were provided in Appendix A.1. We will refer readers to these proofs from the main text, clearly guiding navigation without compromising readability.
---
We greatly appreciate Reviewer 5bgA’s valuable suggestions and believe these adjustments significantly enhance the clarity, robustness, and overall quality of our manuscript.
---
Rebuttal Comment 1.1:
Comment: The paper is interesting and presents novel aspects, but I still find the experimentation somewhat limited. I will maintain a skeptical 3.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer 5bgA for highlighting the concern regarding the scope and precision of our experiments. Here, we would like to emphasize additional experiments we performed in our initial rebuttal and present further experiments.
---
## **1. Additional Dataset**
In our original rebuttal, we expanded our empirical analysis by introducing two widely recognized quantum many-body benchmark datasets: the Transverse Field Ising Model (TFIM) and the XXZ-Heisenberg model. We systematically reproduced all main-text experiments using these additional datasets, ensuring the broad applicability and robustness of our margin-based generalization theory. Detailed results can be accessed via the following links:
- [Github Link for TFIM experiments](https://tinyurl.com/47mpp43c)
- [Github Link for XXZ experiments](https://tinyurl.com/3r598nmm)
## **2. Fine-Grained Experiments: QCNN Layers**
To address your initial concern regarding coarse experimental discretization, we refined our experimental design by expanding the number of QCNN layers studied. Originally, our manuscript included results for QCNN layers $1, 5, 9$. In our initial rebuttal, we enhanced this by incorporating layers $1, 3, 5, 7, 9$. We repeated these fine-grained analyses on both the TFIM and XXZ datasets, demonstrating the robustness and consistency of our results. Results can be found at:
- [Github Link for Fine-Grained Experiments: Number of QCNN Layers](https://tinyurl.com/5n6j7us3)
## **3. Fine-Grained Experiments: Noise Levels**
While our initial rebuttal focused on adding more datasets and refining QCNN depth, we have conducted additional experiments in this second round to further broaden the scope of experimental testing, in response to the continued concern regarding limited experimental scope. Specifically, we enhanced our study of label noise by performing more fine-grained experiments across five levels of label randomization: 0%, 25%, 50%, 75%, 100%. This extends our original setup, which included 0%, 50%, 100%, and provides greater experimental granularity.
These newly added experiments were repeated on both the TFIM and XXZ datasets to confirm the consistency and robustness of the observed trends across different parameter settings and data distributions. Detailed results are available here:
- [Github Link for Fine-Grained Experiments: Noise Levels](https://tinyurl.com/3s4xtu5j)
---
## **Comparative Analysis with Existing QML Literature**
To illustrate the comprehensiveness and depth of our experiments compared to the most well-known and foundational results in generalization in Quantum Machine Learning (QML), we summarize key aspects of prior works in the following table:
| Work | Datasets Used | Variational Ansatz | Number of Layers | Label Randomization |
|------------------|-----------------------------------------------------------------|-----------------------------------------------------------------|----------------------------|---------------------------------------------|
| [Abbas et al.](https://tinyurl.com/mw2ua99y) | 1 classical (Iris) | 1 (Strongly Entangling Layers) | 1 (fixed) | N/A |
| [Caro et al.](https://tinyurl.com/mrsrrjk4) | 1 quantum (cluster) | 1 (QCNN) | 1 (fixed) | N/A |
| [Banchi et al.](https://tinyurl.com/yjx386x9) | 1 quantum (TFIM), 1 classical (2-moon) | 2 (Fidelity Classifier (TFIM), Single-qubit data-reuploading (2-moon)) | 1 (fixed) | N/A |
| **Ours** | 3 quantum (cluster, TFIM, XXZ), 3 classical (MNIST, Fashion-MNIST, Kuzushiji-MNIST) | 3 (QCNN, parameter sharing QCNN, Strongly Entangling Layers) | 5 (1, 3, 5, 7, 9 layers) | 5 levels (0%, 25%, 50%, 75%, 100%) |
This comparison illustrates that our experimental setup is designed to provide broader empirical coverage—spanning multiple datasets, architectures, and levels of label randomization—extending beyond the experimental analyses conducted in prior state-of-the-art studies.
Finally, we would like to note that, following the anonymous review process, we plan to release our code and experimental data on a public repository, to support reproducibility and facilitate further exploration by the community.
We hope these additional clarifications and comprehensive experimental refinements fully address your concerns. | Summary: This paper establish a margin-based generalization bound for multiclass classification with Quantum Neural Networks, adapting techniques from classical neural networks to the quantum domain. This approach interprets quantum measurements as nonlinear activations and extends matrix covering techniques to complex-valued spaces. Through experiments on quantum phase recognition datasets, they demonstrate that margin-based metrics strongly correlate with generalization performance, even when traditional metrics like parameter count fail. Also they conduct experiments on three quantum embedding methods and showing that Neural Quantum Embedding (NQE), a classical-quantum hybrid approach, enhances generalization by yielding larger margins through increased data distinguishability.
Claims And Evidence: The claims made in the paper are generally well-supported by evidence through both theoretical development and experimental validation. The margin-based generalization bound is mathematically derived with clear steps, extending established techniques from classical to quantum settings, and the experimental evidence strongly supports the predictive power of margin-based metrics for generalization. This evidence includes the use of multiple datasets, testing of various models, and challenging scenarios like randomized labels.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for studying generalization in quantum machine learning. Using a margin-based approach is sensible given its success in classical deep learning and the demonstrated limitations of uniform bounds. The theoretical framework also acknowledges quantum-specific properties like POVM measurements. For the evaluation, the authors test on both quantum and classical data to show broader applicability, and comparing against parameter-based metrics establishes relative improvement over existing approaches.
Theoretical Claims: There are no significant issues in the theoretical proofs to the best of my knowledge.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound: multiple experimental setups test different aspects of the theory (generalization gap prediction, margin distribution analysis, quantum embedding comparison); appropriate statistical measures (mutual information, Kendall rank correlation) quantify relationships; variability is controlled by averaging results over 15 repetitions with different training samples; and hyperparameters (layers, noise levels, embedding strategies) are explored systematically.
Supplementary Material: The supplementary materials mainly provide the detail of proofs and experiments and these materials substantiate the main paper's claims.
Relation To Broader Scientific Literature: This paper brings margin-based framework to the quantum machine learning area, continuing the trend of non-uniform generalization bounds that better predict real performance. Also it links fundamental quantum information theory (trace distance, Helstrom measurements) to machine learning performance, connecting QML to established quantum information concepts.
Essential References Not Discussed: No to the best of my knowledge.
Other Strengths And Weaknesses: Strengths: It successfully bridges classical margin theory with quantum information theory, creating a unified framework for understanding QML generalization. It provides actionable guidance for designing better quantum embeddings based on trace distance maximization. It tests across multiple datasets, model architectures, and hyperparameters. Mathematical derivations are well-structured and the experimental results are clearly presented.
Weaknesses: Experiments on small qubit systems (8 qubits) may not fully represent challenges of larger quantum systems. No discussion of computational costs for calculating margins versus parameter counts. Lacks analysis of how real quantum hardware noise might affect the margin-based approach.
Other Comments Or Suggestions: It would be good to include a diagram illustrating the conceptual relationship between quantum embeddings, trace distance, and margins. Figure labels in Figure 2 are somewhat difficult to understand.
Questions For Authors: How would your margin-based approach scale to larger quantum systems (beyond 8 qubits)? Do you anticipate any theoretical or practical challenges?
What is the computational overhead of calculating margin-based metrics compared to parameter-based metrics?
How might real quantum hardware noise affect your margin-based approach to generalization?
Your small training set (20 samples) for QPR experiments raises questions about statistical significance. Have you verified these results hold with larger datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer rwMH for thoughtful evaluation.
---
### 1) **Experiments on small qubit systems**
We agree that our experiments were conducted on relatively small quantum systems (8-qubit QCNNs). While it is possible to increase the number of qubits by one or two, we chose 8 qubits as a practical design decision. First, powers of two are a natural choice for QCNNs due to their hierarchical structure. Second, we do not expect a small increase in the number of qubits to significantly affect the generalization behavior.
While extending the numerical studies to much larger systems (16 or 32 qubits) would be of interest, classical simulation becomes exponentially expensive with the number of qubits. For example, training a single 8-qubit QCNN requires ~**0.5 hours** on state-of-the-art commercial CPUs. Our experiments were performed across extensive hyperparameters: [1,3,5,7,9] for QCNN layers, [0, 0.5, 1] for label noise, and [QCNN, QCNN_shared, SEL] for the variational ansatz. Each setting was repeated **15 times** to control variability, resulting in a total time of ~**300 hours**. Increasing the number of qubits by one would double the simulation time, quickly becoming computationally impractical.
Nonetheless, our theoretical results are valid beyond 8 qubits. Our margin bound scales linearly with the number of qubits, suggesting that larger systems would exhibit analogous behaviors, provided the sample size grows proportionally. Thus, while explicit numerical validation at larger scales is limited by classical computational constraints, our theory remains scalable.
Moreover, [prior work](https://tinyurl.com/4kjnvb44) has shown that QNNs can efficiently memorize polynomially increasing training data, implying uniform bounds can remain vacuous even in larger quantum systems. Thus, the generalization challenges we address are fundamental to QML, not artifacts of small system size. This further motivates non-uniform measures, such as our proposed margin bound, which remain meaningful regardless of scale.
---
### 2) **Computational costs for calculating margins vs. parameter counts**
The margin for a single data point $(\rho, y)$ is defined as $h(\rho)_y - \max\_{j \neq y} h(\rho)_j$. In principle, evaluating margin distributions for an $m$-sample dataset requires $O(m)$ quantum circuit executions. However, practically, the predictions $h(\rho)$ are computed as part of the final step of model training. Therefore, calculating margin distributions **requires no additional computational costs** beyond what is already used during model training. We will explicitly mention this practical consideration in our revised manuscript.
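As a minimal sketch of this margin computation (our own illustration; the array layout of the per-class scores is an assumption, not taken from the paper), the margin distribution can be read off predictions already produced during training:

```python
import numpy as np

def margins(scores, labels):
    """Per-sample margin: score of the true class minus the best competing class.

    scores: (m, k) array of model outputs h(rho) for m samples and k classes.
    labels: (m,) array of true class indices.
    """
    m = scores.shape[0]
    true_scores = scores[np.arange(m), labels]
    masked = scores.copy()
    masked[np.arange(m), labels] = -np.inf  # exclude the true class
    runner_up = masked.max(axis=1)
    return true_scores - runner_up

# toy example: 3 samples, 2 classes
scores = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])
labels = np.array([0, 1, 1])
print(margins(scores, labels))  # positive margin = correct classification with room to spare
```

This makes the "no additional cost" point concrete: the only inputs are the score matrix and labels, both of which are already available at the end of training.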
---
### 3) **Lack of analysis regarding real quantum hardware noise**
Our work offers a new perspective on how quantum noise affects generalization by connecting margin bounds to quantum state discrimination. Quantum noise, which are contractive CPTP maps ($\Lambda$), reduces trace distance $D(\Lambda(\rho_1),\Lambda(\rho_2)) \leq D(\rho_1,\rho_2)$. In Section 4, we explicitly show that the margin mean is upper-bounded by the trace distance. Consequently, quantum noise shrinks trace distance and shifts margin distributions leftward, resulting in decreased generalization performance.
This connection is not captured by previous generalization bounds in QML and, represents a meaningful advancement in the theoretical understanding of generalization. The margin-based generalization framework naturally incorporates quantum noise effects, making it uniquely insightful. We agree that deeper experimental investigation of noise effects would be a valuable next step, and we believe our work provides a strong theoretical foundation for such studies.
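The trace-distance contraction described above can be checked with a small numerical sketch (our own illustration, using a single-qubit depolarizing channel as the contractive CPTP map $\Lambda$):

```python
import numpy as np

def trace_distance(rho1, rho2):
    # D(rho1, rho2) = (1/2) * sum of absolute eigenvalues of (rho1 - rho2)
    eigs = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 * np.abs(eigs).sum()

def depolarize(rho, p):
    # single-qubit depolarizing channel, a contractive CPTP map
    return (1 - p) * rho + p * np.eye(2) / 2

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # |0><0|
rho1 = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|

p = 0.3
d_before = trace_distance(rho0, rho1)
d_after = trace_distance(depolarize(rho0, p), depolarize(rho1, p))
print(d_before, d_after)  # for this channel, the distance shrinks exactly by (1 - p)
```

For the depolarizing channel the differences subtract to $(1-p)(\rho_1-\rho_2)$, so the trace distance contracts by exactly $(1-p)$; the accompanying margin shift is then a direct consequence of the margin-mean upper bound in Section 4.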
---
### 4) **Small training set (20 samples) and statistical significance**
We recognize that our QPR experiments employed relatively small training datasets (20 samples). This experimental choice was deliberate and aligned precisely with previously established works ([Caro et al.](https://tinyurl.com/mrsrrjk4) and [Gil-Fuster et al.](https://tinyurl.com/4kjnvb44)), which served as direct references. Specifically, Gil-Fuster et al. experimentally demonstrated that QCNNs could overfit randomized labels with small datasets, questioning the effectiveness of uniform generalization bounds presented in Caro et al. Since a core contribution of our work is to propose margin bounds as a tight, non-uniform alternative to these uniform bounds, we intentionally retained similar experimental setups for direct comparison.
Furthermore, to bolster empirical validation, our manuscript also presents additional extensive experiments on classical datasets (MNIST, Fashion-MNIST, and Kuzushiji-MNIST) in Section 4, each consisting of approximately **12,000 samples**. In this large-sample regime, we consistently observed strong correlations between margin metrics and generalization, confirming the statistical significance and robustness of our claims beyond the small-sample scenario.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response. I especially thank you for the explanation of 2); it's pretty clear. As for 3), it could be really exciting to see more results that consider noise; however, I also understand it could make your simulations even more expensive. Regarding 1), 8 qubits taking 0.5 hrs for a QCNN probably means you are not using an advanced simulator; you could try a stabilizer tensor-network simulator, which I assume could enable simulation of ~10–12 qubit systems. Since it's 2025, an 8-qubit small-scale experiment is somewhat unacceptable to me, but I do agree with the novelty of the paper, so I will stand with a positive score.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer rwMH again for your detailed and insightful engagement with our work. We're delighted to hear that our clarification of points (2) and (3) was helpful and clear. Following your advice, we are currently preparing additional experiments on larger quantum systems (10–12 qubits), and we plan to include these results in the revised version of the manuscript, before camera-ready. We greatly value your input and thank you again for your constructive review. | Summary: The manuscript describes a theoretical and experimental analysis of quantum machine learning models, with the focus on generalization bounds. The authors build on prior quantum machine learning results indicating vacuity of bounds based on parameter count or other measures of complexity of the hypothesis space. They adapt bounds based on margin distribution from classical to quantum ML, and show that it explains generalization gap more accurately than parameter-based metrics.
Claims And Evidence: The two main claims are:
1) Generalization in QML theoretically depends on margin. Here, the authors adapt existing margin-based generalization bound to take into account the characteristics of quantum models. The evidence, in the form of proof of the bound, is sound. The assumptions relating to the quantum nature of the model architecture (model is a parameterized unitary followed by measurement, projective or more generally, POVM) are also sound.
2) The bound from 1) has practical relevance. Here, the authors perform an experimental analysis evaluating to what extent the inverse margin aligns with the generalization gap. The experimental evidence is convincing: the inverse margin aligns well with the generalization gap (qualitatively in Fig. 2 and quantitatively in Fig. 3) for varying numbers of layers in the model, varying internal architecture, and varying randomization of the dataset. The evidence is, however, limited to one dataset.
The two results above are focused on a quantum model working on quantum input data. The paper also extends the margin-based generalization analysis to the classical data – quantum model setup, and provides a margin-based explanation of the previously observed relationship between generalization and the choice of how classical data is embedded into a quantum state. The theoretical link between mean margin and quantities previously shown to lower-bound loss is sound, and the authors show experimental evidence for the link between margin and test accuracy. The experimental evidence is limited (three MNIST-based datasets, one model: QCNN). The presentation in the main manuscript is in terms of test accuracy, without showing training set accuracy, which does not directly support the claims related to the generalization gap. Some limited exploration of the generalization gap is provided in the Appendix (A.2, Fig. 6), but the description lacks details (e.g., which MNIST dataset is used in Fig. 6?).
Methods And Evaluation Criteria: Empirical results are somewhat limited in scope, and (in section 4) not perfectly aligned with theory (see above for details).
Theoretical Claims: Correctness of proofs in Appendix has not been checked in detail.
Experimental Designs Or Analyses: The overall choice of experimental setup (models and dataset in Sect. 3 & 4) is sound, though limited in breadth.
Supplementary Material: I reviewed the experimental section (A.2).
Relation To Broader Scientific Literature: The paper extends prior findings on generalization bounds for quantum data-quantum model, and prior findings about quantities that affect how quantum embeddings related to effectiveness of classical data – quantum model; in both cases they extend the prior findings by introducing margin as an explanatory variable. This mirrors earlier developments in classical machine learning.
Essential References Not Discussed: None noted.
Other Strengths And Weaknesses: In terms of strengths, the paper extends our understanding of generalization in quantum machine learning.
One weakness of the work is the limited nature of the experimental evidence. Another weakness is the separate treatment of classical data scenario (Sec. 4) from the earlier parts (Sect. 2). The paper would be stronger if it could present a unifying theoretical framework that incorporates properties of the embedding circuit as parameters in the bounds, in the same manner as different options for measurements (properties of E) are incorporated.
Other Comments Or Suggestions: The authors should consider moving the generalization gap-focused results into the main manuscript (Fig. 6), providing more detailed description.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer uadb for thorough evaluation and insightful suggestions.
---
### 1) **Experimental Scope and Additional Results**
We acknowledge the reviewer's concern regarding the breadth of empirical evidence. To address this directly, we conducted extensive additional experiments on two canonical quantum many-body benchmarks: **Transverse Field Ising Model** and **XXZ-Heisenberg Model**.
These additions go beyond the original QPR dataset and validate that our margin-based framework generalizes well across distinct data distributions. All main-text experiments were systematically repeated on these datasets. Here are the anonymous link for additional experimental results: [TFIM](https://tinyurl.com/47mpp43c) and [XXZ-Heisenberg](https://tinyurl.com/3r598nmm).
We consistently observe that margins strongly correlate with generalization gaps across multiple datasets and experimental conditions, significantly strengthening the relevance and robustness of our margin bounds for QNNs.
---
### 2) **Unified Theorem Incorporating Quantum Data Embedding**:
With [Neural Quantum Embedding](https://tinyurl.com/4nyhdv5k), the quantum classifier for classical data is constructed through a two-step training procedure: first optimizing only the embedding circuit, and subsequently training the quantum neural network while keeping the embedding circuit fixed. Developing a fully unified theory would require separately deriving generalization bounds for the embedding optimization stage—which fundamentally differs from the classification task—and then integrating these results into a bound for overall classification performance. Such an analysis is inherently non-trivial and is currently beyond the scope of our work.
Note that our existing margin-based bound implicitly captures the embedding circuit parameters, as these parameters directly influence the margin distribution of the trained model. Nevertheless, we agree that developing unified theoretical bounds explicitly incorporating embedding circuit properties is an interesting direction for future research.
Moreover, many quantum machine learning applications involve quantum data directly, with no classical-to-quantum embedding required. In these cases, explicitly incorporating embedding circuit properties is not relevant.
---
### 3) **Margin and Generalization Gap Analysis in Figures**:
We confirm that Figures 1 and 4 initially emphasized test accuracies for clarity. However, margin distributions also strongly correlate with generalization gaps in these experiments. We have updated these figures to include both test accuracy and generalization gaps on this [anonymous link](https://tinyurl.com/4xtxt2a7).
As predicted by our theory, we observe in Figure 1 (QPR) that generalization gaps consistently decrease as margin distributions shift rightward. For Figure 4 (MNISTs), we see the same trend explicitly for MNIST and Fashion-MNIST. For KMNIST, we observe a slight discrepancy, where the gap marginally increases.
This minor deviation can be explained by the significantly larger number of samples (~12,000) relative to system complexity (8-qubit QCNN). Our margin bound scales as $1/\sqrt{m}$, where $m$ is the sample size. Thus, when the number of samples greatly exceeds system complexity, generalization gaps become naturally small, and margin effects appear less pronounced. Indeed, the gaps observed for the MNIST-based datasets are substantially smaller compared to those with the QPR dataset, which includes only 20 samples following the experimental setup of [Caro et al.](https://tinyurl.com/mrsrrjk4) and [Gil-Fuster et al.](https://tinyurl.com/4kjnvb44). Therefore, this subtle deviation aligns with our theory rather than contradicting it. We will explicitly clarify this nuance and present both test accuracies and generalization gaps clearly in the revised manuscript.
In addition, we would like to clarify that generalization gap analysis was already presented in the main manuscript through Figures 2 and 3. These figures present the relationship between margin-based metrics and generalization gap across various architectural choices and data corruption levels, supporting our theoretical claims. This is also why we originally placed Figure 6, which provides supplementary generalization gap analysis comparing parameter-based metrics with the lower quartile and mean of the margin for the QPR problem, in the appendix.
Lastly, our work substantially broadens the experimental scope compared to prior state-of-the-art studies, such as [Caro et al.](https://tinyurl.com/mrsrrjk4) and [Gil-Fuster et al.](https://tinyurl.com/4kjnvb44). While these focus on a single task involving quantum data, our experiments span multiple QML architectures and a range of both classical and quantum datasets. This broader scope enables a more comprehensive assessment of the practical relevance of margin-based generalization theory and deepens our understanding of generalization in QML.
---
Rebuttal Comment 1.1:
Comment: Thank you for adding additional experiments; I am increasing my score to 3.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer uadb for your thoughtful evaluation and for recognizing our additional experiments. We greatly appreciate your constructive feedback.
---
Summary: This paper provides generalization error upper bounds for parameterized quantum neural networks using arguments from the construction of Bartlett et al. (2017).
Claims And Evidence: I find all the claims in the paper to be reasonable. What helps this work is that there is a long line of work in deriving generalization error bounds based on margins starting from Bartlett (1996) [cited in the paper] and Bartlett et al. (1998) [1]. This also constitutes a drawback of the paper, in my opinion - the theoretical contributions of this paper are a very straightforward generalization of existing techniques. I invite the authors to correct me on this point if I have missed any non-trivialities in A.1 that they feel should be highlighted as an important contribution.
[1] Peter Bartlett, Yoav Freund, Wee Sun Lee, and Robert E. Schapire. "Boosting the margin: a new explanation for the effectiveness of voting methods." Ann. Statist. 26(5): 1651-1686, October 1998. https://doi.org/10.1214/aos/1024691352.
Methods And Evaluation Criteria: I find the proposed evaluation criteria to be acceptable. However, since I am not an experimentalist, I will defer to the judgment of other reviewers.
Theoretical Claims: I have gone over the theoretical parts of this paper in detail (and I am already familiar with the existing papers in the literature) and I don't see any obvious issues with the theoretical claims of this paper.
Experimental Designs Or Analyses: No issues.
Supplementary Material: Yes I have reviewed A.1 and A.3 in detail.
Relation To Broader Scientific Literature: This paper makes a nice addition to the literature on the generalization bounds of parameterized QNNs.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper should only be accepted at a premier ML conference like ICML if the experimental section is worthy of being highlighted, since the theoretical contributions are incremental at best in my opinion. This is not meant as a targeted criticism of the author's efforts - my issue is that margin bounds have long since been shown to be loose as generalization bounds (especially for overparameterized circuits).
---------------------------------------------
Post Rebuttal
---
I am still unconvinced on the second point. However I am raising my score to reflect that I no longer stand by the "incremental advance" part.
Other Comments Or Suggestions: See the above comments. I am willing to raise my score after the rebuttal period based on discussions with other reviewers on the novelty in the experimental section. As it stands, despite the paper being well written, its contributions are incremental (merely quantizing existing analysis does not constitute a good contribution in my opinion), and I cannot justify recommending acceptance.
Questions For Authors: See Claims section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank Reviewer tcJi for constructive feedback.
We understand the concern that our theoretical contribution could appear incremental, given the established history of margin-based bounds in classical ML. However, our work is the first to systematically extend this theoretical framework to QNNs. This extension is neither straightforward nor trivial. The translation of classical techniques to the quantum setting requires addressing unique mathematical and physical challenges that arise from quantum information processing, and in doing so, our work makes the following nontrivial technical advances:
1. **Lipschitz Continuity of POVMs** (Sec.2): Unlike classical neural networks, QNNs employ POVMs. Handling the Lipschitz continuity of quantum measurements required specialized analysis unique to quantum states, involving spectral norms of POVM operators.
2. **Covering Number of Complex Unitary Matrices** (A.1): Classical margin bounds rely on covering numbers of real-valued neural networks. By contrast, QNNs require derivations for complex-valued unitary matrices constrained by quantum mechanics. This differentiates our theoretical approach from existing classical frameworks.
3. **Normalization properties of quantum states** (A.1): Quantum states possess normalization constraints that substantially simplify and alter complexity arguments. We exploit this property to derive tighter bounds, a simplification with no direct classical counterpart.
4. **Extension to mixed input states via vectorization** (A.1): Our margin bound is further generalized to mixed quantum states utilizing vectorization arguments. This step is fundamentally quantum-specific, with no classical counterpart.
Notably, our work addresses key open problems raised by two recent publications in Nature Communications ([13:4919, 2022](https://www.nature.com/articles/s41467-022-32550-3) and [15:2277, 2024](https://www.nature.com/articles/s41467-024-45882-z)), which represent the state of the art in QML generalization theory. The first introduces a generalization bound based on Rademacher complexity, while the second demonstrates the vacuity of such uniform bounds through extensive randomization experiments, concluding that existing theoretical tools are insufficient to explain generalization in QML.
Our work goes beyond these foundational studies by introducing a margin-based framework that more accurately predicts generalization behavior in QNNs. In addition, our analysis applies to arbitrary quantum states—pure or mixed—whereas the prior works provide theoretical guarantees only for pure states. Rather than being a routine extension of classical results, our contribution directly addresses the theoretical gap identified in the second Nature Communications paper and significantly advances the understanding of generalization in QML beyond what is provided in the first.
Moreover, our work establishes a previously unexplored theoretical connection between margin bounds and quantum state discrimination. This provides not only conceptual insight but also practical guidance for optimizing quantum data embedding. Specifically, by directly linking margins to trace distances, our theory offers a principled route for improving generalization performance when applying QNNs to classical datasets. This perspective also enables us to systematically optimize the quantum feature maps introduced in [Nature 567, 209–212 (2019)](https://www.nature.com/articles/s41586-019-0980-2), which proposed using quantum-enhanced feature spaces for supervised learning. We experimentally demonstrate that our margin-based approach significantly improves classification accuracy compared to the method in the Nature paper (Fig. 4). Thus, our work not only advances the theoretical foundations of QML, but also delivers a concrete performance gain over one of its most recognized experimental baselines.
Additionally, our experimental section provides novel and extensive evidence showing that margin-based metrics outperform standard parameter-count metrics across multiple challenging scenarios, including randomized labels, varying network depths and ansatz, and different embedding methods. In response to Reviewer 2’s suggestion, we have added additional experiments using the [Transverse Field Ising Model](https://tinyurl.com/47mpp43c) and [XXZ-Heisenberg Model](https://tinyurl.com/3r598nmm), further validating the robustness and practical relevance of our margin-based framework.
We believe these clarifications demonstrate the depth and originality of our contributions, which go beyond a trivial adaptation of classical techniques. By addressing open questions in QML with quantum-specific analysis and expanded empirical evidence, we believe our work offers timely and meaningful progress. We are grateful for the reviewer’s thoughtful feedback and hope this response helps convey the full value and relevance of our submission.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I am raising my score to 3 for now, pending further discussions in post-rebuttal period.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank Reviewer tcJi for reconsidering your evaluation and raising your score. We appreciate your thoughtful engagement and constructive discussions.
Regarding your remaining reservation about the “second point,” we are not entirely certain whether it refers to the technical treatment of covering numbers in the complex unitary setting or the theoretical looseness of margin-based generalization bounds. To ensure completeness, we address both interpretations below.
---
### **1. On covering number of complex unitary matrices:**
Firstly, the quantum measurement function $g:\mathbb{C}^N \mapsto \mathbb{R}^k$ is $2 \sqrt{\sum_i \vert\vert E_i \vert\vert_\sigma}$-Lipschitz, meaning:
$\vert\vert g(u) - g(v) \vert\vert_2 \leq 2 \sqrt{\sum_i \vert\vert E_i \vert\vert_\sigma}\, \vert\vert u - v \vert\vert_2.$
Note that the latter norm $ \vert\vert\cdot\vert\vert_2 $ denotes the complex vector 2-norm. Consequently, when peeling off quantum measurements, and reducing it to a matrix covering, it becomes essential to consider matrix coverings with respect to complex 2-norms.
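As an illustrative numerical spot-check of this Lipschitz property (not part of the formal proof — the POVM below is a simple basis-projector measurement chosen for concreteness, and the states are random pure states):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # Hilbert-space dimension (e.g. 3 qubits)

# A simple POVM for illustration: projectors onto the computational basis.
povm = [np.outer(e, e.conj()) for e in np.eye(N, dtype=complex)]

def g(u):
    # Outcome probabilities p_i = u^dagger E_i u for a pure state u.
    return np.array([np.real(u.conj() @ E @ u) for E in povm])

# Lipschitz constant from the bound above (||.||_sigma = spectral norm;
# here each projector has spectral norm 1, so L = 2*sqrt(N)).
L = 2 * np.sqrt(sum(np.linalg.norm(E, 2) for E in povm))

for _ in range(100):
    u = rng.normal(size=N) + 1j * rng.normal(size=N)
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    assert np.linalg.norm(g(u) - g(v)) <= L * np.linalg.norm(u - v) + 1e-12
```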
To rigorously achieve this, several subtle quantum-specific modifications were introduced in Appendix A.1:
- We applied Maurey's Sparsification Lemma by introducing a discrete set:
$$
V = \lbrace V_1, \dots, V_{4N^2} \rbrace = \lbrace gY e_i e_j^\mathrm{T} : g \in \lbrace+1, -1, +i, -i\rbrace,\ i \in [N],\ j \in [N]\rbrace.
$$ This discrete set, including complex constants $g \in \lbrace+1, -1, +i, -i\rbrace $, is essential to properly cover the complex matrix space.
- We introduced a complex-adapted norm:
$\vert\vert B \vert\vert_* = \sum_{i,j} \left(|\text{Re}(B_{ij})| + |\text{Im}(B_{ij})|\right)$,
which is then upper-bounded using the complex Hölder’s inequality.
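A small numerical illustration of this complex-adapted norm (the matrix is random and purely illustrative). Entrywise, $|z| \le |\mathrm{Re}\,z| + |\mathrm{Im}\,z| \le \sqrt{2}\,|z|$, which is where the extra $\sqrt{2}$ factor originates:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# The complex-adapted entrywise norm defined above:
star = np.sum(np.abs(B.real) + np.abs(B.imag))
# The ordinary entrywise 1-norm of the complex entries:
l1 = np.sum(np.abs(B))

# |z| <= |Re z| + |Im z| <= sqrt(2)|z| holds entry by entry,
# so the two norms differ by at most a sqrt(2) factor.
assert l1 <= star <= np.sqrt(2) * l1
```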
These quantum-specific adjustments lead to subtle yet necessary modifications in the covering number bound (introducing factors such as 2 from $ g \in \lbrace+1, -1, +i, -i\rbrace $ and $ \sqrt{2} $ from the norm adjustments). Although subtle and hidden inside big-O notation at the conclusion, these careful steps were necessary to rigorously generalize the classical covering number analysis into the complex domain required for quantum machine learning.
### **2. On the looseness of margin-based generalization bounds:**
We would like to emphasize that our work does not claim tight worst-case theoretical guarantees. Instead, we focus on showing that margin distribution–based quantities correlate more strongly with generalization behavior than conventional parameter-based uniform bounds commonly used in QML.
In this sense, our framework offers a practical and interpretable alternative that empirically outperforms other generalization metrics in diverse QML scenarios. Nevertheless, we agree that exploring tighter bounds, particularly tailored to quantum settings, remains a valuable direction for future theoretical work.
---
We hope this clarification effectively addresses your concerns. | null | null | null | null | null | null |
ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think
Decision: Accept (poster)
---
Summary: This paper investigates continual learning for deep neural networks when the gradient is not accessible -- instead of using backpropagation (first-order (FO) methods), the gradient is approximated by forward-pass methods (zeroth-order (ZO) optimization). The paper presents the ZeroFlow benchmark, where different zeroth-order approaches are proposed (combinations of different approximation methods, direction computations, and parameter updates). The benchmark uses a ViT-B/16 model pre-trained on IN21K and performs 10-task class-incremental learning on multiple datasets (CIFAR-100, CUB, ImageNet-A, OmniBenchmark). Based on experimental results, this work proposes 3 different enhancements that can be used to further improve ZO approaches for continual learning.
Claims And Evidence: The claims are clearly supported by the appropriate experimental results. Mostly are easy to follow by the reader, however, some description can be improved for a better readability, e.g. Figure 4 - presenting last-task accuracy, however the bolded description "Results for Forgetting".
Methods And Evaluation Criteria: The proposed methods seem like a comprehensive combination of multiple ZO forward-pass methods. But not an expert here if the selection is good.
For the dataset and evaluation metrics -- this work uses standard CL performance evaluation metrics. However, here, maybe interesting would be to see forward/backward transfer and values for the particular tasks (in appendix). That would be interesting and make the benchmark more informative.
Theoretical Claims: Most of the claims are supported by empirical analysis. No theoretical claims with proofs.
Experimental Designs Or Analyses: Experimental designs follow the standard class-incremental strategy of splitting datasets into multiple tasks. In this case, it's always ten.
Supplementary Material: Yes. Both. More focusing on A1 and discussion on memory usage.
Relation To Broader Scientific Literature: The relation to forward-pass only optimization and appropriate methods are well discussed and introduced in Sec.3.
CL literature is well-covered and referenced.
Essential References Not Discussed: In Sec. 4.1. when EASE and APER appears, there are no references. One sentence explanation for each method would be also helpful for the reader (or a longer one in different place -- Sec 3?). Currently, the reader just receives the acronyms.
Other Strengths And Weaknesses: Strengths
1. Quite comprehensive benchmark with multiple methods, datasets, and ViT-based backbone.
2. Interesting insights and proposal of further enhancements.
Weaknesses:
1. Not presenting longer sequences than 10 tasks.
Other Comments Or Suggestions: 1. Presenting joint training for all the methods - as an upper-bound.
2. Would be interesting to see if there's a different knowledge transfer between the tasks in ZO vs FO methods. Maybe that can be added to the appendix, with one of the most interesting plots in the main paper?
Small ones:
1. A mistake in the last sentence of the Conclusion: 3 enhancements were proposed, not 2.
2. Formatting of Table 1: what does the bolding mean? Some sections have no bolding at all - it is confusing for the reader in the current form.
Questions For Authors: Enhancements are presented alone, separately posing improvements. It's not clear if they can be combined together and what will be the results. Maybe the authors already tried this and can share the results in the appendix?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **Q1: Caption of Figure 4**
Thanks for pointing out! We've corrected it.
**Q2: Missing Quotes and Descriptions**
We've included quotes (EASE, CVPR-24 and APER, IJCV-24) and use one sentence to describe them.
**Q3: Longer task sequences**
As shown below, we evaluated the results on a task sequence of 20. ZO method still provides comparable performance across learning metrics. Besides, we will add evaluations on three other datasets for further comparison in the appendix.
**CIFAR-100 (20 tasks)**
| Optimizer | Strategy | Avg | Last | FWT | BWT |
|--------|--------|---|----|---|----|
|**SGD**|FO|87.32|80.20|-6.89|-6.79|
| |ZO|82.65|75.98|-8.33|-7.71|
| |Sign|83.47|76.13|-8.01|-7.22|
| |Conserve|82.20|75.94|-8.64|-7.93|
|**Adam**|FO|86.67|78.19|-7.17|-6.80|
| |ZO|84.07|76.89|-7.92|-7.19|
| |Sign|84.16|76.90|-7.95|-7.20|
| |Conserve|83.82|76.76|-8.04|-7.07|
|-|Forward|82.84|76.32|-8.25|-7.84|
**Q4: Knowledge Transfer**
Following [7], we provide the results of BWT and FWT of EASE with 20-split CIFAR-100 datasets above. We'll add these results and analysis into the Appendix of the main paper as an interesting plot. Thanks for your great suggestions to further enhance the quality of our manuscripts!
[7] A Comprehensive Survey of Continual Learning: Theory, Method and Application, TPAMI 2024.
**Q5: Upper-bound for ZeroFlow**
Thanks for your nice advice. We agree that the upper bound is significantly valuable for ZeroFlow. We added a linear layer for classification after the pretrained ViT-B/16-IN21K backbone and then fine-tune it with several ZO methods as the upper bound, as shown below. The updated results will be included in Table 1 of the revised manuscript. Moreover, we're opening up a leaderboard to the community, and will offer more upper bound results on various datasets and CL method. Thank you again!
| Joint-training | FO | ZO | Sign | Conserve | Forward |
|----------------|------|------|------|----------|----------|
| SGD | 92.01| 90.66| 91.71| 91.89 | 91.65 |
| Adam | 91.97| 91.46| 91.63| 91.50 | 91.65 |
**Q6: Enhancement together**
The results shown below indicate that the proposed enhancements do not conflict with each other and can be integrated. Specifically, FO training benefits from fine-tuning using hybrid ZO optimization, as illustrated in Figure 7. Additionally, it can be further enhanced by utilizing historical gradients and sparsity perturbation to reverse forgetting and achieve stable optimization, as analyzed in R2Q3. Thanks for your great suggestions.
| | Hybrid | Historical | Sparsity | Avg | Last |
|----------------|--------|------------|----------|-------|--------|
| FO-SGD | - | - | - | 61.24 | 51.02 |
| ZO-SGD | - | - | - | 57.87 | 48.32 |
| | ✔ | | | 61.40 | 51.34 |
| | | ✔ | | 58.90 | 49.04 |
| | | | ✔ | 59.47 | 49.24 |
| | ✔ | ✔ | ✔ | 62.07 | 51.94 |
**Q7: Small Typos**
We've corrected the typo in the Conclusion and added explanations of the bolding in Table 1.
---
Summary: This submission presents a novel benchmark, ZeroFlow, for evaluating how catastrophic forgetting can be overcome under a gradient ban. The key insight is that forward-pass optimization alone can also mitigate forgetting, which challenges the conventional reliance on backpropagation-based optimization. The study evaluates seven forward-pass optimization algorithms across various forgetting scenarios, datasets, and evaluation metrics. The key findings are: (i) insights into the trade-offs between forgetting and learning; (ii) the effectiveness of forward-pass methods against forgetting; (iii) efficiency and memory management that make these methods more practical for CL scenarios. The submission further proposes three new enhancement techniques -- hybrid optimization, historical information, and a sparsity-induced estimation method -- to improve the efficiency and stability of the forward pass.
Claims And Evidence: The claims made in the submission are indeed well-supported by clear and convincing evidence, as follows,
* The visualization of optimization trajectories (Figure 3/6) supports the claim that forward pass methods can effectively balance learning and forgetting.
* The comparable or better performance supports the claim that effectively overcomes forgetting (Table 1/2).
* The proposed enhancements are well-supported by experimental results that demonstrate improved performance in overcoming forgetting (Tables 3/4, and Figure 7).
Methods And Evaluation Criteria: The manuscript employs reasonable methods and evaluation criteria, including common datasets in forgetting scenarios (CIFAR-100, CUB, ImageNet-A, and OmniBenchmark) and well-recognized metrics (accuracy, last-task accuracy, and forgetting measures, as well as memory, query, and runtime efficiency).
Theoretical Claims: The manuscript includes reasonable theoretical claims, such as the process of gradient estimation (Equation 1, Algorithm 1) and the definition of zeroth-order optimization in CL scenarios (Section 3.2). Moreover, the authors provide empirical evidence to support their theoretical claims (Figure 3).
Experimental Designs Or Analyses: The soundness and validity of the experimental designs and analyses in this submission are well-constructed, such as the comprehensive benchmark experiment (Table 1, Figure 2/4), efficiency analysis (Table 2), and reasonable enhancement designs (Figure 7, Table 3/4). It is commendable that the manuscript provides ample illustrative observations, such as Figures 2/5/6/8/10, which add to its value.
Supplementary Material: Yes, I have reviewed the entire supplementary material.
Relation To Broader Scientific Literature: The submission provides a thorough discussion of the relevant literature. Prior to this study, substantial efforts relied on gradient information to overcome forgetting; this work tries a new technical pathway. Its key contribution is overcoming forgetting using forward-pass methods, which relates to catastrophic forgetting and optimization in CL. The authors discuss literature related to these concepts, including CL, FO methods, ZO methods, and others.
Essential References Not Discussed: The literature discussed by the authors appears to be sufficiently comprehensive and closely related to the topic. The cited works cover the necessary background and prior findings, providing a thorough context for their contributions.
Other Strengths And Weaknesses: **Strengths,**
* The manuscript proposes an interesting and practical scenario, that is overcoming catastrophic forgetting under a gradient ban. The concept of the gradient ban is an intriguing idea. In short, this manuscript introduces a new topic to the community that could potentially inspire some upcoming work.
* The authors propose a benchmark to explore the optimization principles of overcoming forgetting under gradient bans. They evaluated a series of methods that enable learning new tasks and retaining old ones using only forward passes. More valuable, the authors provide insightful observations and build their motivation on these findings.
* The authors provide insights into the evolution of gradient trajectories for new and old tasks during optimization, such as managing task conflicts. These observations are inspiring for understanding how the forward pass alone balances learning new tasks and retaining old knowledge, and they would likely inspire further research in CL.
* The manuscript is logically structured and engagingly written, with clear presentations. Notably, the authors acknowledge the potential risks of this new topic and discuss them thoroughly. Moreover, they also fully explore the advantages of this benchmark, such as reduced memory usage, faster training, and less forgetting.
* Building on their empirical observations, the authors propose three enhancement techniques that achieve promising performance.
**Weaknesses,**
* Compared to the significant contribution of introducing a new technological pathway for the community, the contributions of the three enhancements are somewhat weak, but they still represent valuable attempts. As I understand it, the mechanisms of Enhancements 1, 2, and 3 seem to also offer shorter training times or lower memory usage. I speculate that this efficiency might stem from a reduction in the average number of queries. Unfortunately, the authors did not conduct such an analysis. I believe completing it would be highly valuable, as it would align well with their observations and the advantages of these optimization principles.
* The different behaviors of the optimization principles on new and old tasks are insightful for researchers looking to follow this topic. However, the legends in the figures (Figures 3, 7, and 8) are too small, including the optimization endpoints and the joint optimization center for new and old tasks; this makes it difficult to fully appreciate the insights conveyed by these visualizations.
* The authors clearly explain the motivation for selecting these forward-pass methods. Given my experience, I can easily follow them. However, for those unfamiliar with these optimization algorithms, there may be a learning curve. Providing brief explanations would be beneficial for improving the accessibility of the manuscript.
Other Comments Or Suggestions: The key claim of the manuscript is that forward passes alone are sufficient to mitigate forgetting, which challenges the traditional reliance on BP-based optimization idea. Based on my experience, this work offers a new technological pathway for the CL community. And, the insights provided are quite valuable for understanding this topic. Overall, I tend to be positive about this work. Given some concerns and weaknesses (Please see Weaknesses and Questions), I'm willing to discuss them in the rebuttal.
Questions For Authors: Here are a few questions that interest me,
* Why does Forward-Grad consume slightly more memory than other forward-pass methods?
* Could the authors provide the brief explanations of the optimization principles behind the used forward pass methods in benchmark? (Please see Weakness 3)
* Could the mechanisms of Enhancements 1, 2, and 3 provide shorter total training times or reduced total memory usage? Or do only specific enhancements offer benefits? (Please see Weakness 1)
Code Of Conduct: Affirmed.
Overall Recommendation: 5
---
Rebuttal 1:
Rebuttal: **Q1: More Discussion of Enhancements**
Indeed, as you note, Enhancement 3 has a potential acceleration advantage, which benefits from the reduction in average queries. More analysis see Enhancement 3 in our response 3 to Review HREV. Thank you for your suggestion, which effectively improves our proposed enhancement technique.
**Q2: Visualization of Trajectory**
We fully acknowledge that the small legend hindered the clarity of key insights. We will enhance figure readability by (i) enlarging all legends (old and new task), and (ii) zooming in on optimization endpoints (black, red and blue mark) and joint optimization centers (star mark). These revisions will ensure the visualized insights about task-specific optimization behaviors are more intuitively conveyed.
**Q3: Concise Overview of ZO Series**
Zeroth-order optimization aims to minimize/maximize an objective function $f: \mathbb{R}^n \to \mathbb{R}$ without derivative information. The core problem is formulated as $ \min_{\theta \in \mathbb{R}^n} f(\theta) $, where $\theta$ denotes the optimization variable. To enable gradient-based updates, Simultaneous Perturbation Stochastic Approximation (SPSA) is a commonly used technique to approximate gradients by perturbing the input variables. Specifically, the gradient $ \nabla f(\theta) $ at point $ \theta $ is estimated as:
$\nabla L(\theta, \xi; B) = \frac{f(\theta + \epsilon \xi; B) - f(\theta - \epsilon \xi; B)}{2 \epsilon} \cdot \xi^{-1},$
where $ \xi \sim \mathcal{N}(0, I) $ is a random perturbation vector, and $ \epsilon > 0 $ is a small perturbation step size (typically adjusted during optimization).
**ZO-SGD:** Using the gradient estimator $\nabla L(\theta, \xi; B)$, zeroth-order algorithms, such as ZO-SGD, follow the iterative update rule:
$\theta_{t+1} = \theta_t - \alpha_t \nabla L(\theta_t, \xi_t; B),$
where $\alpha_t$ is the learning rate at step $ t $. ZO-SGD bypasses explicit gradient computation through local function evaluations, making it suitable for high-dimensional, non-convex optimization problems.
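A minimal runnable sketch of this update rule (the quadratic objective and hyperparameters are purely illustrative; Rademacher perturbations are used so that $\xi^{-1} = \xi$ elementwise, a common SPSA choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    # Illustrative black-box objective; only forward evaluations are used.
    return np.sum((theta - 1.0) ** 2)

def spsa_grad(theta, eps=1e-3):
    # Two-point SPSA estimate with Rademacher perturbations (xi^-1 = xi).
    xi = rng.choice([-1.0, 1.0], size=theta.shape)
    return (f(theta + eps * xi) - f(theta - eps * xi)) / (2 * eps) * xi

theta = np.zeros(5)
for _ in range(200):
    theta -= 0.05 * spsa_grad(theta)  # ZO-SGD update

assert f(theta) < 1e-2  # approaches the optimum at theta = 1
```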
**ZO-SGD-Sign:** A variant of ZO-SGD, known as ZO-SGD-Sign, improves upon the original approach by approximating the gradient direction using the sign of the gradient estimate. The update rule becomes:
$\theta_{t+1} = \theta_t - \alpha_t \, \text{sign}(\nabla L(\theta_t, \xi_t; B)),$
where $ \text{sign}(\cdot) $ denotes the element-wise sign function. This approach often leads to faster convergence in some problems where the magnitude of the gradient is not as important as its direction.
**ZO-SGD-Conserve:** Another variant is ZO-SGD-Conserve, which conserves the gradient information over multiple iterations. The update rule for this method is:
$\theta_{t+1} = \theta_t - \alpha_t \cdot \frac{1}{k} \sum_{i=0}^{k-1} \nabla L(\theta_{t-i}, \xi_{t-i}; B),$
where $ k $ is the number of past iterations used to average the gradient estimates. This method is beneficial when the gradient updates are noisy, and averaging helps stabilize the optimization process.
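The two variants above differ only in how the SPSA estimate is post-processed before the step. A toy sketch under illustrative assumptions (Rademacher-perturbation estimator, quadratic objective, hand-picked learning rates):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    return np.sum((theta - 1.0) ** 2)  # illustrative objective

def spsa_grad(theta, eps=1e-3):
    xi = rng.choice([-1.0, 1.0], size=theta.shape)
    return (f(theta + eps * xi) - f(theta - eps * xi)) / (2 * eps) * xi

start_loss = f(np.zeros(5))

# ZO-SGD-Sign: step along the elementwise sign of the estimate.
theta_sign = np.zeros(5)
for _ in range(200):
    theta_sign -= 0.01 * np.sign(spsa_grad(theta_sign))

# ZO-SGD-Conserve: average the k most recent estimates before stepping.
k, history = 5, []
theta_cons = np.zeros(5)
for _ in range(200):
    history.append(spsa_grad(theta_cons))
    theta_cons -= 0.02 * np.mean(history[-k:], axis=0)

assert f(theta_sign) < start_loss  # both variants improve on the start
assert f(theta_cons) < start_loss
```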
**Forward Gradient:** The forward gradient is an approximation technique where the gradient is computed by evaluating the function at a perturbed point and using a linear approximation. Specifically, the gradient $\nabla f(\theta)$ at point $\theta$ is estimated as:
$\nabla f(\theta) \approx \frac{f(\theta + \epsilon \xi) - f(\theta)}{\epsilon} \cdot \xi^{-1},$
where $\xi \sim \mathcal{N}(0, I)$ is a random perturbation vector, and $\epsilon > 0$ is a small perturbation step size. The forward gradient is generally used when the optimization problem requires accurate directional updates without relying on an explicit derivative.
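A hedged sketch of this idea: exact forward-mode implementations compute the directional derivative $\xi^\top \nabla f$ with a Jacobian-vector product in a single forward pass; the finite difference below stands in for that JVP on a toy objective, and the averaging illustrates that the estimator is unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    return np.sum(theta ** 2)  # illustrative objective; true gradient 2*theta

def forward_grad(theta, eps=1e-4):
    # Directional finite difference along a random Gaussian direction xi;
    # scaling the directional derivative by xi gives an unbiased estimate.
    xi = rng.normal(size=theta.shape)
    return (f(theta + eps * xi) - f(theta)) / eps * xi

theta = np.ones(3)
# Unbiasedness: averaging many one-sample estimates recovers 2*theta.
est = np.mean([forward_grad(theta) for _ in range(50000)], axis=0)
assert np.allclose(est, 2 * theta, atol=0.1)
```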
**Q4: Extra Memory of Forward-Grad**
Forward Gradient requires more memory than ZO-SGD because it involves computing gradients through the Jacobian-vector product, which necessitates storing all intermediate activations during the forward pass. In models like ViT, this includes storing large attention matrices and other intermediate results. In contrast, ZO-SGD only requires two forward passes with perturbed inputs, without needing to store these intermediate activations, resulting in lower memory usage.
---
Rebuttal Comment 1.1:
Comment: The authors addressed my concerns, given their interesting idea and contribution to understanding the optimization behavior of overcoming forgetting, I am inclined to accept this submission and raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal and for raising your score. We're glad to hear that our response addressed your concerns. We truly appreciate your recognition of our work as interesting and inspiring, and its contribution to the community.
We are also grateful that the experiments suggested by other reviewers have helped make ZeroFlow more solid. We would be greatly encouraged if our work could benefit researchers working on this topic. Thank you again!
---
Summary: Claimed contributions:
- Contrib 1: benchmark (called ZeroFlow) of continual learning using two previously published strategies: EASE and APER, but only using zero order estimation of descent directions, on vision tasks
- Contrib 2: insights into the role of forward pass in managing task conflict, and trade-offs between forgetting and memory efficiency
- Contrib 3: 3 tricks to improve zero order continual learning:
- Contrib 3A: Hybrid zero order
- Contrib 3B: Leverage historical gradients
- Contrib 3C: Sparse update directions: randomly set some directions to zero
Claims And Evidence: Contrib 1 is, as far as I can tell, a new contribution, but it seems rather limited to compare only 2 strategies: EASE and APER.
I was not able to fully appreciate contrib 2: I assume that it refers to figures 3 and 6 and section 3.2, but the figures are not adequately explained (e.g., what do the axes represent? What is the setup studied here?).
I was not able to appreciate the proposed methods to improve zero order continual learning (contrib 3) either: the techniques are barely described in section 6, with not enough detail to be self-contained, whereas they should be discussed in more detail with ablation studies in order to benefit the community.
Methods And Evaluation Criteria: The benchmarked datasets seem reasonable, as they are commonly used in the evaluation of continual learning methods. A limitation is that there are only vision tasks in the benchmark.
Theoretical Claims: Not applicable: there is no theoretical claim in the paper.
Experimental Designs Or Analyses: From experience, the performance of continual learning techniques at mitigating forgetting is greatly dependent on specific values of hyperparameters. An extensive discussion of the strategy to choose these hyperparameters is currently missing so I don't think the current state of the benchmark provides any reliable conclusion
Supplementary Material: I did not review it.
Relation To Broader Scientific Literature: Previous literature in continual learning and in zero order optimization is adequately mentioned as far as I can tell.
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: Many statements are rather imprecise, or even do not make sense, and would benefit from proofreading (examples in the next field below). Some definitions are missing and new terms seem to be introduced without much discussion.
Other Comments Or Suggestions: - "gradient ban" is, as far as I can tell, a phrase coined in the paper.
- please define a "forward pass method"
- please define "forgetting measure" in the benchmark
- figure 8: what is the "function value" ?
- "genetic" (section 3) should it be "generic" ?
- discussion regarding the ImageNet input dimension in section 3 which is irrelevant for defining the number of parameters in conv nets (convolution kernels do not depend on the input size)
- in eq. 1, what does it mean to compute the inverse of the vector xi ?
- BP-free and BP-based in section 4.2. This probably refers to backpropagation, but it is never defined, neither used elsewhere in the paper.
Imprecise statements:
- section 3.2 "ZO optimization for catastrophic forgetting" sounds like forgetting is a desirable property that the method tries to amplify
Questions For Authors: I would suggest some revisions to make the paper more self-contained and easy to read: limit the number of repeated statements and instead carefully define the material needed to understand the benchmarked methods as well as the proposed improvements.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: **Q1: Extensions to Contrib 1**
We extended the experimental scope to enhance Contrib 1. In detail, we evaluated ZeroFlow on extra strategies: memory replay CL and VLM-CL (see **Q5/7 of Reviewer see4**).
**Q2: Explanation of Contrib 2**
In Section 3.2, ZeroFlow examines how ZO optimization helps mitigate catastrophic forgetting by comparing its optimization landscape to that of FO optimization. Figures 3 and 6 show the optimization paths of both methods as they balance preserving prior knowledge and adapting to new tasks. The axes represent the two most influential feature vectors, with the blue and red "X" markers indicating the optima for old and new tasks, respectively. The black dot represents the learned parameters, initially biased toward the old task, while the black star marks the optimal balance between both tasks. We will explain it in the revision.
Additionally, **Reviewer Js2L** provided a clear explanation of Contrib 2 (see the **Strengths** and **Designs Or Analyses** sections) for your consideration.
**Q3: More Discussion of Enhancements**
**Enhancement 1: Hybrid Optimization** begins by leveraging first-order gradients for fast adaptation to new tasks. Once the parameters are sufficiently close to the optimal solution, ZO is employed to fine-tune the solution, addressing forgetting. i) FO ensures rapid convergence and efficient adaptation to new tasks, while ZO refines the solution in regions where gradients are unreliable or sparse. ii) The inherent randomness of ZO helps avoid sharp minima, akin to the principles of SAM, fostering more stable and generalizable solutions.
**Enhancement 2: Historical Utilization** employs an online Exponential Moving Average (EMA) to retain the past update information of old-task gradients, adjusting them to minimize deviations from the historical direction. By weighting the historical gradients, it reduces the impact of fluctuations induced by new tasks, effectively alleviating forgetting. Moreover, it enhances the stability of ZO optimization, ensuring smoother convergence and preserving knowledge from both old and new tasks.
**Enhancement 3: Sparse Perturbation** introduces sparsity into ZO by setting a fraction of perturbation dimensions to 0, thereby reducing the number of perturbed parameters. i) This mitigates the instability inherent in ZO by lowering variance in gradient estimation, leading to more consistent updates. ii) Sparsity reduces overhead, making ZO methods more practical for high-dimensional CL settings.
Overall, we will include the above in the revision for clarity.
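To make the sparse-perturbation idea concrete, here is a minimal NumPy sketch (our own illustration, not the authors' released code; `sparse_zo_grad` and the toy quadratic objective are hypothetical names):

```python
import numpy as np

def sparse_zo_grad(f, x, sparsity=0.75, eps=1e-3, rng=None):
    # Zero out a fraction `sparsity` of the perturbation dimensions before
    # forming the usual two-point ZO estimate (fewer perturbed parameters).
    rng = rng if rng is not None else np.random.default_rng()
    xi = rng.standard_normal(x.shape)
    xi = xi * (rng.random(x.shape) >= sparsity)  # keep ~(1 - sparsity) dims
    return (f(x + eps * xi) - f(x - eps * xi)) / (2 * eps) * xi

quad = lambda v: 0.5 * np.dot(v, v)  # toy objective; true gradient is v
x = np.array([2.0, -1.0, 0.5, 3.0])
rng = np.random.default_rng(1)
# Averaged over many draws, the sparse estimate recovers the gradient
# scaled by the keep ratio (1 - sparsity), here 0.25 * x.
avg = np.mean([sparse_zo_grad(quad, x, rng=rng) for _ in range(20000)], axis=0)
```

In practice one would rescale by the keep ratio to keep the estimate unbiased; the sketch omits this to stay close to the mechanism described above.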
**Q4: Effects of the Hyperparameters from CL**
We provide results for varying projection dimension $r$ and trade-off parameter $\alpha$ of EASE. As shown below, ZeroFlow remains robust, which supports the reliability of our conclusions; we'll provide more analysis in the revision. 'Cons' below refers to ZO-Conserve.
| $\alpha$ | FO_64 | FO_32 | FO_16 | ZO_64 | ZO_32 | ZO_16 | Sign_64 | Sign_32 | Sign_16 | Cons_64 | Cons_32 | Cons_16 |
|--------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| $\alpha$ 0.3 | 91.11 | 91.02 | 91.37 | 78.25 | 78.14 | 78.97 | 82.77 | 83.16 | 83.84 | 82.77 | 82.15 | 82.19 |
| $\alpha$ 0.1 | 91.23 | 91.30 | 91.47 | 78.62 | 78.81 | 79.21 | 83.21 | 83.90 | 83.58 | 82.22 | 82.25 | 82.46 |
| $\alpha$ 0.05| 91.37 | 91.39 | 91.54 | 78.45 | 78.82 | 78.94 | 83.12 | 83.25 | 83.15 | 82.46 | 82.18 | 82.41 |
**Q5: More Precise Statements**
We've carefully reviewed the manuscript, clarifying unclear phrases, as follows,
- Gradient ban: We clarify that we've defined this concept in the original manuscript (see Lines 16, 54 and 40), and we've described its usage scenario to make the concept easier to understand (see Lines 23 to 40).
- Forward pass method: this concept refers to gradient-free optimization methods that rely only on forward passes, specifically the ZO and Forward-Grad methods. We'll define it in the revision.
- Forgetting measure: The forgetting measure we used is a common metric in CL [5]. We will define it again for clarity.
- Function value: It represents the optimization objective in Figure 7, where values approaching zero indicate proximity to the global optimum.
- $x^{-1}$: denotes the element-wise inversion of the perturbation vector, which is necessary to ensure that the expected value of the gradient estimate aligns with the true gradient when $x$ follows a broader asymmetric distribution [6].
- Other misc: Small errors are fixed, e.g., the subtitle of section 3.2 corrected to ZO optimization for overcoming forgetting.
[5] A Comprehensive Survey of Continual Learning: Theory, Method and Application, TPAMI-24.
[6] Multivariate Stochastic Approximation using a Simultaneous Perturbation Gradient Approximation, TAC 1992.
---
Rebuttal Comment 1.1:
Comment: I don't see the claimed update of the manuscript; did you update the PDF?
Figure 3 is still missing a clear legend and axis labels, which makes it difficult to parse. Your additional comment in the rebuttal that the axes are the "two most influential feature vectors" raises additional questions: how do you define this influence? Why is it relevant to observe function-space trajectories? There are symmetries in the parameter space: e.g., swap two rows in the weights of a linear layer, and the corresponding two columns in the weights of the following linear layer, and you obtain the same function. So how is looking at trajectories in parameter space relevant?
Another very vague statement noticed in my last review, at the end of page 4:
"[...] ZO methods [...] naturally facilitate the exploration of flat regions in parameter space" => this is never actually checked
I am also a bit surprised by the very good reviews given by other reviewers.
I don't think the paper in its current state meets the standards of ICML.
---
Reply to Comment 1.1.1:
Comment: We would like to kindly remind you that, according to ICML policy, **uploading a revised manuscript during the rebuttal phase is not allowed** (please refer to the ICML Reviewer Instructions for more details). Overall, we have thoroughly revised the paper based on the thoughtful feedback and are committed to **incorporating all the changes in the revised manuscript**, including the broadened experimental scope, detailed expansion on the enhancements, and clarification of vague statements.
**1. Re-clarify to Fig. 3**
We would like to respectfully remind you that analyzing optimization trajectories is a common practice in machine learning, particularly continual learning (CL) (***e.g., Fig. 3 in [1]***) and multi-task learning (MTL) (***e.g., Fig. 1 in [2], Fig. 1 in [3], Fig. 2 in [4]***), to study gradient conflicts between tasks.
**Strictly following the setup in [1, 3]**, we visualize the optimization behavior of first-order (FO) and zeroth-order (ZO) methods in overcoming forgetting. Specifically, we consider a two-dimensional parameter space $\theta = (\theta_1, \theta_2) \in \mathbb{R}^2$ with the following individual loss functions:
**$L_1(\theta) = c_1(\theta) f_1(\theta) + c_2(\theta) g_1(\theta)$ for old tasks**,
**$L_2(\theta) = c_1(\theta) f_2(\theta) + c_2(\theta) g_2(\theta)$ for new tasks**.
**Thus, the contour plot in Fig. 3 illustrates the overall objective function, defined as $L = L_1(\theta) + L_2(\theta)$, with the $x$- and $y$-axes representing $\theta_1$ and $\theta_2$, respectively.**
Adherence to [3],
$f_1(\theta) = \log \left( \max \left( |0.5(-\theta_1 - 7) - \tanh(-\theta_2)|, \; 0.000005 \right) \right) + 6$,
$f_2(\theta) = \log \left( \max \left( |0.5(-\theta_1 + 3) - \tanh(-\theta_2 + 2)|, \; 0.000005 \right) \right) + 6$,
$g_1(\theta) = \frac{(-\theta_1 + 7)^2 + 0.1 \cdot (\theta_2 - 8)^2}{10} - 20$,
$g_2(\theta) = \frac{(-\theta_1 - 7)^2 + 0.1 \cdot (\theta_2 - 8)^2}{10} - 20$,
$c_1(\theta) = \max \left( \tanh(0.5 \cdot \theta_2), \; 0 \right)$,
$c_2(\theta) = \max \left( \tanh(-0.5 \cdot \theta_2), \; 0 \right)$.
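For concreteness, the toy objective above transcribes directly into NumPy (a sketch of the visualization setup only; the function names are ours, not the authors' released code):

```python
import numpy as np

# Weighting terms (one task's component dominates depending on theta_2).
c1 = lambda t1, t2: max(np.tanh(0.5 * t2), 0.0)
c2 = lambda t1, t2: max(np.tanh(-0.5 * t2), 0.0)
# Individual components, following the formulas quoted above.
f1 = lambda t1, t2: np.log(max(abs(0.5 * (-t1 - 7) - np.tanh(-t2)), 5e-6)) + 6
f2 = lambda t1, t2: np.log(max(abs(0.5 * (-t1 + 3) - np.tanh(-t2 + 2)), 5e-6)) + 6
g1 = lambda t1, t2: ((-t1 + 7) ** 2 + 0.1 * (t2 - 8) ** 2) / 10 - 20
g2 = lambda t1, t2: ((-t1 - 7) ** 2 + 0.1 * (t2 - 8) ** 2) / 10 - 20

def L(t1, t2):
    # Overall objective L = L1 + L2, plotted as the contour in Fig. 3.
    L1 = c1(t1, t2) * f1(t1, t2) + c2(t1, t2) * g1(t1, t2)  # old tasks
    L2 = c1(t1, t2) * f2(t1, t2) + c2(t1, t2) * g2(t1, t2)  # new tasks
    return L1 + L2
```

Evaluating `L` on a 2D grid of $(\theta_1, \theta_2)$ values reproduces the contour landscape over which the FO and ZO trajectories are drawn.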
Note that a similar visualization of the optimization behavior in overcoming forgetting appears in [1]. **For clarity, we will add a description of the objective function, legend and axes in all the figures.**
[1] Embracing Change: Continual Learning in Deep Neural Networks, Cells 2020. (*Citations: 626*)
[2] Gradient Surgery for Multi-task Learning, NeurIPS 2020. (*Citations: 1208*)
[3] Conflict-averse Gradient Descent for Multi-task Learning, NeurIPS 2021. (*Citations: 361*)
[4] Independent Component Alignment for Multi-task Learning, CVPR 2023. (*Citations: 50*)
**2. Clarification of vague statement to "ZO methods naturally facilitate the exploration of flat regions in parameter space"**
Zeroth-order gradient estimates are known to be noisy approximations of the gradient (Sec. 4.3 in [5], Sec. 3.3 in [6]). Existing research (Sec. 3.1 in [7], Sec. 1.1 in [8], Sec. 1 in [9], Sec. 6 in [10]) has demonstrated (both experimentally and theoretically) that injecting noise into the gradient direction can help the algorithm escape bad or spurious local minima. Moreover, **[6] (Sec. 3.3: Zeroth-order updates may help to escape spurious and sharp local minima) explicitly shows that noisy gradients can assist the algorithm in finding flat minima and avoiding sharp local minima.** All these insights suggest that ZO methods **have the potential** to guide models toward better local minima. **We have carefully checked the statements to ensure they are rigorous and well-supported.**
[5] Randomized Gradient-free Methods in Convex Optimization, Encyclopedia of Optimization 2023.
[6] Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models, NeurIPS 2024 Workshop.
[7] Escaping from Saddle Points—online Stochastic Gradient for Tensor Decomposition, COLT 2015.
[8] How to Escape Saddle Points Efficiently, ICML 2017.
[9] Toward Understanding the Importance of Noise in Training Neural Networks, ICML 2019.
[10] Noisy Gradient Descent Converges to Flat Minima for Nonconvex Matrix Factorization, AISTATS 2021. | Summary: The paper explores the challenge of catastrophic forgetting in continual learning under a gradient ban setting, where gradients information is unavailable. To address this, the authors investigate zero-order optimization methods, which rely only on forward passes without requiring backpropagation. Their key finding is that zero-order optimization can mitigate catastrophic forgetting while improving computational efficiency compared to first-order optimization. The paper proposes three enhancements to further improve zero-order optimization on mitigating catastrophic forgetting: hybrid first- and zero-order optimization, integrating historical gradient to stabilize optimization, random sparsity in gradient estimation.
Claims And Evidence: Yes, the paper’s empirical coverage of ZO variants (sign-based, conservative, etc.) and ablations on query budgets (q=1,2,4…) reinforce the authors' main claims.
Methods And Evaluation Criteria: Yes. This paper is a benchmark paper, and the authors use metrics including average accuracy, final accuracy, and forgetting.
Theoretical Claims: The paper primarily focuses on benchmarking and algorithmic proposals rather than formal proofs. The authors reference known theoretical properties of ZO, such as the expected convergence behavior in high-dimensional spaces, but do not introduce new formal theorems.
Experimental Designs Or Analyses: Yes.
For evaluation on EASE and APER models, because the models are based on ViT-B/16 models pre-trained on the ImageNet-21k dataset, it is important to provide justification that the test datasets (CIFAR-100, CUB-200, ImageNet-A, and OmniBenchmark) do not overlap with the pre-training data, or that any overlap has been properly accounted for. This ensures that the reported results are not affected by potential data leakage.
Supplementary Material: Yes. All of them. The supplementary details memory usage as a function of batch size, further plots of training trajectories, and partial ablations.
Relation To Broader Scientific Literature: The paper draws on two lines of work: (1) zeroth-order optimization, especially in black-box or gradient-banned contexts (e.g., black-box LLM APIs, non-differentiable modules), and (2) catastrophic forgetting in continual learning. The authors cite standard CL approaches (e.g., EWC, replay-based methods), plus references on ZO from both classical (SPSA, random gradient-free) and modern contexts (MeZO for large language models). The synergy of these areas is interesting: while gradient-based solutions dominate CL, “ZeroFlow” demonstrates that forward-only methods can provide surprisingly strong performance plus memory savings.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- Interesting demonstration that catastrophic forgetting can be addressed without gradient signals, which is valuable for real-world “gradient ban” scenarios.
- Comprehensive coverage of ZO methods (SPSA-based, sign-based, conservative, etc.) and an ablation on query budgets.
- Detailed analyses with standard CL metrics (accuracy, forgetting), plus resource usage.
Weaknesses
- The experimental scope, while broad, focuses mostly on standard classification tasks. Future expansions into more diverse data modalities or multi-domain tasks would strengthen generalizability claims.
- ZO-based training can still be slow if the query budget or the model scale is large (though they partly address this with memory overhead analyses).
Other Comments Or Suggestions: none
Questions For Authors: 1. In Table 2, the performance difference between FO and ZO optimization methods on APER is quite marginal, whereas the difference is more pronounced for EASE. Could you provide a possible explanation for this discrepancy?
2. Investigating black-box LLM usage would be an interesting extension, especially given the paper’s references to LLM-as-a-service scenarios. Can the authors comment on this?
3. More examples of how ZO training interacts with memory replay or generative replay strategies might broaden applicability.
4. Potential tests on domain-incremental or cross-modal tasks (e.g., image → text) would help confirm the approach’s general utility.
5. Have the authors tested ZeroFlow on extremely large-scale models (like full ImageNet training or large pretrained transformers), and do the memory/time advantages still hold there?
Ethical Review Concerns: No ethics concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1: Justification of Datasets**
We follow the typical dataset setup to perform all evaluations [1,2]. In general, any overlap has been properly accounted for, and the domain gap is further considered (e.g., ImageNet-A and OmniBench are acknowledged to have a large domain gap with ImageNet; please refer to the Datasets sections of [1,2]). Thanks for your nice suggestion; we've stated this in the revision for clarity.
[1] Continual Learning with Pre-trained Models: A Survey, IJCAI-24
[2] Class-Incremental Learning: A Survey, TPAMI-24
**Q2: Concern about the Query Budget**
ZeroFlow has comprehensively addressed this concern. In detail: (i) Consistent Query Budget: ZeroFlow consistently operates under a query budget of 1 across all evaluations, as seen in Tables 1/2 and Figures 1/2/4, among others. (ii) Enhanced Training Efficiency: the proposed enhancement methods retain the same query budget of 1 (see Figures 7/8 and Tables 3/4) while further accelerating training (see Enhancement 3 in our response to Reviewer HREV). (i) and (ii) strongly support our claim that overcoming forgetting is easier. (iii) As you mentioned, the overhead analyses partly address this concern. Moreover, we provide extra insights on query budget dynamics in Figure 5 to ensure a full understanding. We expect such an analysis to inspire subsequent work to extend the scope of ZeroFlow. Overall, this concern has been addressed in the original manuscript.
Your expertise in raising this issue is much appreciated.
**Q3: Performance Discrepancy**
Although both EASE and APER are prototype-based PTM CL models, EASE incorporates adapter training for each incremental task, wherein the training process builds upon previously trained adapters. In contrast, APER adapts the PTM solely during the initial training stage and subsequently remains frozen for all following tasks. Consequently, the performance of later tasks in EASE is highly contingent on prior task training, whereas in APER, tasks remain independent of one another. Therefore, if a performance discrepancy arises between ZO and FO optimizers, EASE tends to amplify this gap, whereas APER is only marginally affected.
**Q4: Open Discussion about Black-box LLM**
First, our ZeroFlow naturally extends to LLM-as-a-service scenarios. Our tests on the VLM in the rebuttal (the results see Q6) have demonstrated this potential. Second, the forward-pass method we evaluate could be adapted to optimize prompts or lightweight external adapters through sequential API calls. In short, our findings on memory efficiency and forgetting mitigation potentially address key constraints of LLM deployment. Future work could establish benchmarks for black-box LLM continual learning.
**Q5: Extra Eval. on Memory Replay Method**
We further offer the performance of ZeroFlow on a typical replay-based method (MEMO [3], replay buffer=2000) to broaden applicability. As shown below, ZeroFlow remains stable in overcoming forgetting. Below, CFR and INA denote CIFAR-100 and ImageNet-A, respectively.
|Optimizer|Strategy|CFR-Avg|CFR-Last|INA-Avg|INA-Last|
|---------|--------|------|-------|------|-------|
|SGD|FO|87.43|79.66|53.15|38.97|
| |ZO|**85.92**|79.00|52.87|35.81|
| |Sign|85.72|79.10|53.31|38.18|
| |Conserve|85.86|**79.20**|49.20|36.51|
|Adam|FO|86.45|76.17|54.06|41.54|
| |ZO|85.86|**78.59**|52.70|39.01|
| |Sign|**86.16**|76.38|53.10|39.82|
| |Conserve|85.89|77.71|53.20|39.57|
|-|Forward|84.63|76.32|**53.59**|**40.64**|
[3] A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning, ICLR-23 (Spotlight)
**Q6: Potential Tests on VLM-CL**
We evaluated ZeroFlow on continual learning of vision-language models (using MoE4Adapter [4]; all training protocols follow [4]). As shown below, the general utility of ZeroFlow is reconfirmed. Furthermore, we are preparing to open a leaderboard to the community that will cover more tasks, including PTM-CL, VLM-CL, etc.
| Method | Strategy | CFR | |
|--------------|----------|----------------|----------------|
| | | **Avg** | **Last** |
| | FO | 84.32 | 76.89 |
| | ZO | 84.27 | **76.91** |
| MoE4Adapter | Sign | **84.38** | 76.73 |
| | Conserve | 84.26 | 76.75 |
| | Forward | 83.96 | 76.53 |
[4] Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters, CVPR-24.
**Q7: Memory/Time Advantages on Larger Transformers**
Yes, we employed two larger transformers (ViT-L/16 and ViT-H/14) to evaluate the efficiency of ZeroFlow, as shown below. All of them offer memory advantages, and ZO and ZO-Sign still run faster than FO.
|Opt|Base-Mem|Base-Speed|Large-Mem|Large-Speed|Huge-Mem|Huge-Speed|
|---|-------|-------|--------|--------|-------|--------|
|FO|12.08GB|59.3s|33.27GB|65.0s|78.09GB|190.1s|
|ZO(q=1)|2.41GB|32.4s|3.77GB|47.0s|6.45GB|118.7s|
|ZO(q=4)|2.41GB|111.7s|3.77GB|178.3s|6.45GB|442.6s|
|Sign|2.41GB|32.4s|3.77GB|48.7s|6.45GB|119.3s|
|Conserve|2.41GB|70.1s|3.77GB|108.9s|6.45GB|222.3s|
|Forward|3.94GB|45.9s|5.82GB|142.0s|9.85GB|372.5s| | null | null | null | null | null | null |
RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers | Accept (poster) | Summary: This paper proposes RePaViT, a method for accelerating Vision Transformers (ViTs) through structural reparameterization of the feedforward network layers. Specifically, this paper argues that the computation costs of FFN layers cannot be ignored, thus a structural reparameterization method on FFN layers are designed. By using the channel idle mechanism, the RePaViT reduces the computation complexity in the inference stage.
Claims And Evidence: The paper's claims are supported by some evidence, particularly through performance comparisons and ablation studies. However, some problems remain. The details are listed below:
Pros:
1. Table 1 shows that RePaViT can reduce the number of parameters, thereby improving the inference speed.
2. Table 3 shows that RePaViT achieves better inference speed and accuracy than previous reparameterization methods.
3. Ablation studies show the importance of the idle ratio and the training-time overparameterization.
Cons:
1. Table 2 tries to show the advantages of RePaViT compared with pruning methods. However, the numbers of model parameters of most pruning methods are not provided, which makes it hard to directly compare them with RePaViT.
2. Most of baseline methods are proposed in 2021. More recent baselines may make the experimental results stronger.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper make sense for the problem or application at hand.
Theoretical Claims: This paper does not include theoretical proofs.
Experimental Designs Or Analyses: The experimental designs are valid to support the method.
Supplementary Material: I have reviewed the supplementary materials.
Relation To Broader Scientific Literature: The key contribution of the paper is a novel reparameterization method on FFN layers, which is related to RepVGG (Ding et al., 2021); the authors analyze the main differences from RepVGG in Section 3.5.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate Reviewer 8Sgy's recognition of our method's high performance and would like to address the concerns raised:
---
__C1: Table 2 tries to show the advantages of RePaViT comparing with pruning methods. However, the number of model parameters of most of pruning methods are not provided, which means it is hard to directly compare them with the RePaViT.__
A1: We acknowledge the difficulty in directly comparing RePaViT with certain pruning methods due to the unavailability of parameter counts in their publications and/or unavailability of source code:
1. __X-pruner [1] does not report the number of parameters in its paper and has not released source code so far.__ So, we're unable to provide its number of model parameters.
2. __DC-ViT [2] does not report the number of parameters in its paper and had not provided its source code before the ICML submission deadline__. As a result, we were unable to provide its number of model parameters in our submission. However, we have since reproduced its code and report the updates in the table below. Notably, DC-ViT's model size and computational complexity depend on both the number of pruned blocks and a special MLP compression ratio. Unfortunately, the authors haven't released the MLP compression ratios for reproducing their reported results (except 0.751734186 for DeiT-Base), and searching for exactly the same ratios would require a search process exceeding the rebuttal phase. So we can only provide the number of parameters and complexity that yield the closest performance to that reported.
3. __LPViT [3], while its source code is available, has not released the hardware benchmarking code so far.__ It still employs masking to simulate pruning without reducing the actual parameter count, which remains equivalent to the original model. Our investigation of the source code confirms this. And this is also why LPViT only reports sparsity in its paper.
To facilitate a more comprehensive comparison, we have reproduced several pruning baselines using their latest released code. The results are summarized in the table below (italic for updated, bold for best).
|**Backbone**|**Method**|**#MParam. ↓**|**Compl. (GMACs) ↓**|**Speed improv. ↑**|**Top-1 acc. ↑**|
|-|-|-|-|-|-|
|**DeiT-Small**|WDPruning|13.3|2.6|+18.3%|78.4%|
|| X-pruner|-|2.4|-|78.9%|
|| DC-ViT|_16.6_|_3.2_| +20.0%| 78.6%|
|| LPViT|_22.1_|__2.3__| +16.3%| **80.7%** |
| | RePaViT/0.50 | 16.7 | 3.2| +34.9% | 79.1% |
| | RePaViT/0.75 | **13.2** | 2.9 | **+54.4%** | 79.6% |
| **DeiT-Base** | WDPruning | 55.3 | 9.9 | +18.2% | 80.6% |
|| X-pruner | - | **8.5** | - | 81.0% |
| | DC-ViT | _65.1_ | _12.7_ | _+18.4%_ | 81.3% |
| | LPViT | _86.6_ | 8.8 | +18.8% | 80.8% |
| | RePaViT/0.50 | 65.3 | 12.7 | +21.8% | **81.4%** |
| | RePaViT/0.75 | **51.1** | 10.6 | **+67.5%** | **81.4%** |
| **Swin-Small** | WDPruning | 32.8 | 6.3 | +15.3% | 81.3% |
| | X-pruner | - | 6.0 | - | 82.0% |
| | RePaViT/0.50 | 37.8 | 6.4 | +28.8% | **82.8%** |
| | RePaViT/0.75 | **29.9** | **5.1** | **+41.2%** | 81.6% |
| **Swin-Base** | DC-ViT | _66.4_ | _11.5_ | +14.9% | **83.8%** |
| | LPViT | _87.8_ | 11.2 | +8.9% | 81.7% |
| | RePaViT/0.50 | 66.8 | 11.5 | +24.5% | 83.4% |
| | RePaViT/0.75 | **52.8** | **9.0** | **+49.6%** | 82.6% |
We would like to emphasize that RePaViT still achieves the best trade-off in terms of model efficiency and accuracy.
---
__C2: Most of baseline methods are proposed in 2021. More recent baselines may make the experimental results stronger.__
A2: We respectfully disagree with this comment. All baseline methods compared in our study are published after 2021 as shown below:
| Method | Conference | Year |
| --------- | ---------- | ---- |
| WDPruning | AAAI | 2022 |
| X-Pruner | CVPR | 2023 |
| DC-ViT | CVPR | 2024 |
| LPViT | ECCV | 2024 |
| SLAB | ICML | 2024 |
And the backbones choices (DeiT/ViT/Swin/LV-ViT) follow the convention in token pruning and network pruning works.
---
We sincerely appreciate your review comments, hope our responses satisfactorily address your concerns, and kindly request a consideration of recommendation score.
---
Rebuttal Comment 1.1:
Comment: C1: The authors' responses can solve my concern.
C2: My comment "Most of baseline methods are proposed in 2021" corresponds to Table 1, and I mean that the authors may consider applying the proposed method to some more recent backbone networks.
Based on the authors' rebuttal, I would like to keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer 8Sgy for acknowledging our rebuttal responses and are glad to see your major concern on the baseline comparisons has been addressed. We also thank you for clarifying the suggestion on incorporating recent _"backbones"_, rather than _"baseline methods"_. Unfortunately, we do not have enough time to provide additional experiments on new backbones before the end of this discussion period. However, we respect your suggestion and are willing to include results on more recent backbones in the revised version when possible. Also, we will make our best effort to support more architectures in our released code.
Regarding the backbone selections, our method proves to work well with DeiT/ViT/Swin/LV-ViT, which are also the prominent choices in recent state-of-the-art methods for token pruning and network pruning. This is because these are the most widely used ViT architectures in today's VFM and VLM families [1-4]. We would like to point out that the coverage of these architectures in our experiment provides a clear indication of this work's impact and relevance.
The followings summarize the ViT backbones utilized in recently published works.
Among the methods we compared in our paper:
* __WDPruning__ [5]: DeiT-T/S/B and Swin-S.
* __X-Pruner__ [6]: DeiT-T/S/B and Swin-T/S.
* __DC-ViT__ [7]: ViT-T/S/B/L, DeiT-B and Swin-B.
* __LPViT__ [8]: DeiT-S/B and Swin-T/B.
* __SLAB__ [9]: DeiT-T/S, Swin-T/S/B and PVT-T/S/B.
Some other recent network pruning works for ViTs:
* __DIMAP__ [10]: Swin-T/S/B.
* __NViT__ [11]: DeiT-T/S/B and Swin-S.
And recent token pruning/merging works:
* __TokenReduction__ [12]: DeiT-T/B.
* __LTMP__ [13]: DeiT-T/S/B.
* __ToMe__ [14]: DeiT-S and ViT-T/S/B/L/H.
* __Nose__ [15]: DeiT-B.
* __Zero-TPrune__ [16]: DeiT-T/S/B and LV-ViT-S.
* __ToFu__ [17]: ViT-B/L and DeiT-S
As listed above, recent works still use backbones introduced as early as 2021. In our paper, we have adopted ViT-L/H, DeiT-T/S/B, Swin-T/S/B and LV-ViT-S/M, which already cover a diverse range of small-to-large and plain-to-hierarchical ViT architectures.
__Once again, we sincerely appreciate your recognition of our work and your insightful suggestions. We hope that our detailed explanation regarding the backbone selections will earn you stronger support.__
\
\
\
\
__References__
[1] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." ICML, 2021.
[2] Li, Liunian Harold, et al. "Grounded language-image pre-training." CVPR, 2022.
[3] Kwon, Gukyeong, et al. "Masked vision and language modeling for multi-modal representation learning." ICLR, 2023.
[4] Lin, Ji, et al. "Vila: On pre-training for visual language models." CVPR, 2024.
[5] Yu, Fang, et al. "Width & depth pruning for vision transformers." AAAI, 2022.
[6] Yu, Lu, and Wei Xiang. "X-pruner: explainable pruning for vision transformers." CVPR, 2023.
[7] Zhang, Hanxiao, et al. "Dense vision transformer compression with few samples." CVPR, 2024.
[8] Xu, Kaixin, et al. "Lpvit: Low-power semi-structured pruning for vision transformers." ECCV, 2024.
[9] Guo, Jialong, et al. "Slab: Efficient transformers with simplified linear attention and progressive re-parameterized batch normalization." ICML, 2024.
[10] He, Yang, and Joey Tianyi Zhou. "Data-independent module-aware pruning for hierarchical vision transformers." ICLR, 2024.
[11] Yang, Huanrui, et al. "Global vision transformer pruning with hessian-aware saliency." CVPR, 2023.
[12] Haurum, Joakim Bruslund, et al. "Which tokens to use? investigating token reduction in vision transformers." ICCV, 2023.
[13] Bonnaerens, Maxim, and Joni Dambre. "Learned thresholds token merging and pruning for vision transformers." TMLR, 2023.
[14] Bolya, Daniel, et al. "Token merging: Your vit but faster." ICLR, 2023.
[15] Lin, Sihao, et al. "Mlp can be a good transformer learner." CVPR, 2024.
[16] Wang, Hongjie, et al. "Zero-TPrune: Zero-shot token pruning through leveraging of the attention graph in pre-trained transformers." CVPR, 2024.
[17] Kim, Minchul, et al. "Token fusion: Bridging the gap between token pruning and token merging." WACV, 2024. | Summary: This paper proposes a novel structural reparameterization method -- RePaViT that targets the feedforward network (FFN) layers of Vision Transformers (ViTs) to accelerate inference. The key idea is a channel idle mechanism—during training, only a subset of FFN channels are activated (with the others kept “idle”), which creates a linear shortcut that can later be merged into a simplified, reparameterized structure at test time. This approach not only reduces computational complexity (both in terms of parameter count and FLOPs) but, in many cases, also improves or preserves accuracy.
Claims And Evidence: Claim: FFN layers are the major bottleneck in ViTs, and optimizing them can yield significant latency reductions.
Evidence: The latency analysis shows that FFN layers account for a growing portion of inference time as model size increases.
Claim: The channel idle mechanism and subsequent reparameterization can reduce both parameters and computation while maintaining accuracy.
Evidence: Extensive experiments on various backbones (DeiT, Swin, LV-ViT, ViT-Large/Huge) illustrate that RePaViT achieves up to 68% speedup and, in some cases, even higher accuracy compared to the original models.
Methods And Evaluation Criteria: Methods:
In each FFN layer, only a fraction of the channels pass through the nonlinear activation while the rest follow a linear path. During inference, the activated and idle branches are merged using structural reparameterization (with BatchNorm merging) to form a more compact FFN. The approach is applied on standard ViT architectures trained from scratch on ImageNet-1K, and evaluations span image classification, object detection, and semantic segmentation.
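For concreteness, the channel idle mechanism described above can be sketched numerically. This is a minimal NumPy illustration, not the paper's implementation: the shapes, the tanh-approximated GELU, and all variable names are assumptions; the point is only that the idle (linear) hidden channels collapse into a single shortcut matrix at inference time.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
d, hidden, k = 8, 32, 8            # k active channels of 32 -> idle ratio 0.75
W1, b1 = rng.standard_normal((d, hidden)), rng.standard_normal(hidden)
W2, b2 = rng.standard_normal((hidden, d)), rng.standard_normal(d)
x = rng.standard_normal((4, d))

# Training-time FFN with channel idle: only the first k hidden channels are activated
h = x @ W1 + b1
h_mixed = np.concatenate([gelu(h[:, :k]), h[:, k:]], axis=1)
y_train = h_mixed @ W2 + b2

# Inference-time reparameterization: the idle path is linear, so it folds
# into one d x d shortcut, leaving a much narrower nonlinear FFN
W_short = W1[:, k:] @ W2[k:, :]
b_short = b1[k:] @ W2[k:, :] + b2
y_infer = gelu(x @ W1[:, :k] + b1[:k]) @ W2[:k, :] + x @ W_short + b_short

assert np.allclose(y_train, y_infer)   # identical outputs, fewer inference FLOPs
```

With these toy shapes the merged model evaluates an 8-wide nonlinear branch plus an 8x8 shortcut instead of a 32-wide hidden layer, which is the source of the reported latency savings.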
Evaluation:
The paper reports throughput (images per second), parameter counts, FLOPs, and top-1 accuracy on ImageNet-1K. It further compares performance on dense prediction tasks (MS COCO, ADE20K) and benchmarks against network pruning and alternative reparameterization methods (e.g., SLAB).
Theoretical Claims: I do not think there are any theoretical claims. Most of the designs are empirical based.
Experimental Designs Or Analyses: Experiments:
The authors conduct comprehensive experiments across multiple ViT backbones (plain and hierarchical) on ImageNet-1K, as well as on downstream tasks like object detection (MS COCO) and semantic segmentation (ADE20K).
Analysis:
Detailed tables compare the original and reparameterized models in terms of speed, accuracy, and computational cost.
Supplementary Material: I have checked all of the appendix.
Relation To Broader Scientific Literature: Its focus on directly optimizing FFN layers through a novel channel idle mechanism distinguishes it from existing methods that mainly target attention layers or use pruning strategies.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The visualization is clear and understandable.
2. The experiments are solid and comprehensive.
3. The reparameterized models achieve good efficiency and accuracy.
Weaknesses:
1. Need more discussion about edge devices or resource limited scenarios.
2. Need more discussion about combination with other efficiency methods.
Other Comments Or Suggestions: See questions.
Questions For Authors: 1. How does RePaViT perform in deployment scenarios on edge devices?
2. I am curious about the performance of the proposed method when combined with other efficiency methods (e.g., quantization, pruning).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer d1Yn's detailed and careful review comments. We thank Reviewer d1Yn for pointing out the strengths of our work, including
* clear and understandable presentation
* solid and comprehensive experiments
* high performance
* and novelty compared to existing methods.
We would like to answer the question as below:
---
___W2&Q2: I am curious about the performance of the proposed method when combined with other efficiency methods (e.g., quantization, pruning).___
A2: We thank the Reviewer for this insightful question on combining our method with other kinds of existing efficiency methods.
* __Pruning__
Since our method primarily focuses on reducing channel-wise complexity for ViTs, we decided to test our method combined with a spatial-wise token reduction method, ToMe, rather than with network pruning methods. The results of ToMe-RePa-ViTs with different token reduction numbers (r) are shown in the Table below. In general, our method combines well with token reduction and yields a more significant improvement in the trade-off between accuracy and efficiency.
__Notably, ToMe-RePa-ViT-Large/0.75 with reduction number 8 achieves more than 200% acceleration (374.7 imgs/s vs 102.7 imgs/s) with merely a 0.8% accuracy drop.__
|Backbone|r|Reparam|#MParam.|GMACs|Speed (imgs/s)|Accuracy|
|-|-|-|-|-|-|-|
|ToMe-RePa-DeiT-Small/0.75|0 (No ToMe)|×|22.1|4.6|1279.1|77.1%|
||0 (No ToMe)|√|13.2|2.9|1975.5|77.1%|
||1|√|13.2|2.7|1983.3|77.0%|
||2|√|13.2|2.6|2077.5|76.9%|
||3|√|13.2|2.4|2118.0|76.8%|
||4|√|13.2|2.2|2225.3|76.6%|
||5|√|13.2|2.1|2286.5|76.4%|
||6|√|13.2|2.1|2370.1|76.1%|
||7|√|13.2|2.0|2438.9|75.9%|
||8|√|13.2|1.9|2542.3|75.4%|
|ToMe-RePa-DeiT-Base/0.75|0 (No ToMe)|×|86.6|17.6|393.8|81.8%|
||0 (No ToMe)|√|51.1|10.6|659.5|81.4%|
||1|√|51.1|9.6|660.6|81.5%|
||2|√|51.1|9.3|678.9|81.4%|
||3|√|51.1|9.0|697.4|81.4%|
||4|√|51.1|8.7|738.8|81.3%|
||5|√|51.1|8.4|746.9|81.2%|
||6|√|51.1|8.1|773.3|80.9%|
||7|√|51.1|7.8|812.0|80.6%|
||8|√|51.1|7.5|843.7|80.1%|
|ToMe-RePa-ViT-Large/0.75|0 (No ToMe)|×|304.5|59.8|102.7|82.0%|
||0 (No ToMe)|√|178.4|34.9|207.2|82.0%|
||1|√|178.4|32.8|208.6|82.0%|
||2|√|178.4|30.7|220.9|81.9%|
||3|√|178.4|28.6|236.3|81.9%|
||4|√|178.4|26.5|257.1|81.8%|
||5|√|178.4|24.4|278.9|81.8%|
||6|√|178.4|22.3|304.9|81.6%|
||7|√|178.4|20.2|337.2|81.5%|
||8|√|178.4|18.1|374.7|81.2%|
* __Quantization__
For simplicity, we employed TensorRT to perform FP32-to-FP16 quantization on our pretrained and reparameterized models. TensorRT provides a comprehensive ecosystem of tools designed to deliver high-performance deep learning inference via _post-training_ quantization. The evaluations before and after reparameterization of our RePaViT and RePaSwin models were conducted on the same hardware platform mentioned in the paper (NVIDIA A6000).
__Remarkably, when quantized to FP16, our method achieves more than 200% acceleration, and the acceleration becomes even more significant with reparameterization, demonstrating the effectiveness of combining quantization with our method in real-world scenarios.__
Model| Reparam | FP16 Imgs/s | FP16 Acc | FP32 Imgs/s | FP32 Acc
-|-|-|-|-|-
RePa-DeiT-Small/0.75|×| 11945.84 (+210.3%) | 76.41%| 3849.89| 76.41%
||√| 18329.13 (+248.4%) | 76.40%| 5260.94| 76.41%
RePa-DeiT-Base/0.75| × | 3135.53 (+158.3%) | 81.31%| 1213.87| 81.31%
|| √ | 5441.13 (+189.8%) | 81.32%| 1877.24| 81.32%
RePa-ViT-Large/0.75 | × | 935.75 (+172.6%) | 81.96%| 343.29| 81.96%
|| √ | 1650.12 (+210.8%) | 81.97%| 530.94| 81.95%
RePa-Swin-Tiny/0.75 | × | 7144.35 (+154.8%) | 78.43%| 2804.70 | 78.44%
|| √ | 13351.39 (+202.8%) | 78.43%| 4408.99 | 78.43%
RePa-Swin-Small/0.75 | × | 4225.21 (+152.0%) | 81.56%| 1676.96 | 81.56%
|| √ | 7782.22 (+191.7%) | 81.56%| 2667.86 | 81.56%
RePa-Swin-Base/0.75 | × | 2670.20 (+152.8%) | 82.58%| 1056.28 | 82.59%
|| √ |4873.75 (+205.7%) | 82.58%| 1594.54 | 82.58%
__In conclusion, our method combines well with quantization and pruning methods, exhibiting a more significant enhancement in the performance-efficiency trade-off.__ And the integration of TensorRT indicates the potential for deployment on edge devices.
---
___W1&Q1: How does RePaViT perform in deployment scenarios on edge devices?___
A1: We thank the Reviewer for this question, which is indeed interesting and essential for demonstrating the real-world practicality of our RePaViT.
Unfortunately, due to strict equipment management policies in our organization and time constraints, we were unable to obtain access to an edge device during the rebuttal period. As a result, we could not provide real-world inference speed measurements on actual edge hardware. We would like to report detailed performance metrics upon deployment to the Jetson AGX Orin platform in the follow-up discussion phase.
---
We hope our responses address your concerns and would sincerely appreciate your reconsideration of raising the score. | Summary: The paper introduces RePaViT, a method for accelerating Vision Transformers by applying structural reparameterization specifically to FFN layers. The key observation is that FFN layers significantly contribute to ViT inference latency, especially as the model scales. To address this, the authors propose a "channel idle" mechanism, maintaining a subset of channels inactive, forming a linear path that can be structurally reparameterized during inference. Experimental results demonstrate substantial speed-ups and even accuracy gains compared to the original ViTs. Notably, RePaViT achieves superior efficiency compared to state-of-the-art pruning and reparameterization methods, validating its practical value for real-world applications.
Claims And Evidence: The primary claims—that FFN layers dominate ViT latency, and that structural reparameterization of these layers significantly improves computational efficiency—are strongly supported by comprehensive experiments.
Methods And Evaluation Criteria: The proposed method—structural reparameterization applied directly to FFN layers via a channel idle mechanism—is clearly defined and justified by latency analyses. Evaluation criteria are appropriate and rigorously applied. Experiments with standard benchmarks like ImageNet, MSCOCO, and ADE20K provide credible validation.
Theoretical Claims: No proofs in the paper.
Experimental Designs Or Analyses: The experimental design, including various model sizes and comparative baselines (vanilla ViT, pruning methods, reparameterization methods like SLAB), is robust. However, additional transparency about how exactly throughput measurements were standardized across different methods would enhance reproducibility.
Supplementary Material: No.
Relation To Broader Scientific Literature: RePaViT effectively positions itself within current literature by contrasting clearly with:
1. Network pruning methods: emphasizing higher practical efficiency and hardware-friendliness.
2. Other reparameterization methods: highlighting methodological differences (vertical versus horizontal reparameterization) and targeting intrinsic ViT architectures rather than hybrid or CNN-augmented structures.
The paper situates its contribution well by emphasizing originality in applying structural reparameterization directly to FFN layers.
Essential References Not Discussed: The paper provides comprehensive references, and there were no obvious omissions of essential references directly relevant to the proposed method.
Other Strengths And Weaknesses: Strengths:
1. The idea of structurally reparameterizing FFN layers is original and clearly presented.
2. Clear real-world application potential, especially given that many vision foundation models use ViT backbones.
3. Thorough and convincing experimentation demonstrating scalability, accuracy trade-offs, and latency improvements.
Weaknesses:
Batch norm instead of Layer norm may limit the training efficiency.
Other Comments Or Suggestions: See questions.
Questions For Authors: 1. What motivated the default choice (0.75)? How sensitive is this choice in terms of generalization to other datasets/tasks beyond those tested?
2. Training Stability: Can you discuss any observed training instabilities or convergence issues with larger channel idle ratios or model scales, and how these might be mitigated in practice?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer NyjX for the recognition of our work, especially on
* clear novelty
* significant real-world application potential
* and thorough and convincing experiments.
We would like to answer and clarify the questions as below:
---
___Q1: What motivated the default choice (0.75)? How sensitive is this choice in terms of generalization to other datasets/tasks beyond those tested?___
A1: We thank for this insightful question.
Firstly, we would like to clarify that the channel idle ratio $\theta$ is a __static architectural hyperparameter__ designed to control the trade-off between model efficiency and performance. It is not meant to be universally optimal but rather user-configurable based on specific hardware constraints and application requirements. __More importantly, the choice of channel idle ratio is highly related to the model size__. As shown in Table 4, larger models (e.g., DeiT-Base) tolerate higher idle ratios with minimal accuracy drop. In particular, when the model size grows to DeiT-Base level and beyond, 0.75 idle ratio can result in high performance, which motivates our choice.
Secondly, **as for the method-level generalization across tasks**, we would like to emphasize that RePaViT has been evaluated on four diverse tasks: (1) image classification on ImageNet-1K, (2) object detection on MS COCO, (3) semantic segmentation on ADE20K, and (4) zero-shot image classification using CLIP on LAION-400M. These results collectively demonstrate the method’s robustness and applicability across different computer vision problems.
Thirdly, **as for the model-level generalization across datasets**, we have provided zero-shot classification results in Table 9, Appendix B, where RePaViT is applied to CLIP models with different idle ratios. Since CLIP is pretrained on the large-scale unlabelled LAION-400M dataset and directly evaluated on the ImageNet-1K validation set without finetuning, this setup reflects the model's generalizability on unseen data. Notably, RePa-CLIP-ViT-B/16 with an idle ratio of 0.5 can outperform the vanilla model by 0.5%.
Finally, **on the sensitivity of the idle ratio**, Table 4 presents a detailed analysis. We observe that performance remains stable or improves slightly up to a certain point, and then degrades as the idle ratio increases. Specifically, when the idle ratio exceeds 0.75, a noticeable accuracy drop occurs. We hypothesize that excessive pruning may cause insufficiency of nonlinearity in the model, negatively affecting the model representation capacity.
While our current study has provided extensive experiment results demonstrating the generalizability of our method and the sensitivity of idle ratio choice, we acknowledge that further evaluation could offer deeper insights into its robustness. However, conducting such experiments would require substantial additional time that can extend beyond the current rebuttal phase. We still welcome specific task/dataset suggestions from Reviewer NyjX and are happy to include additional results in the revised version.
---
___Q2: Training Stability: Can you discuss any observed training instabilities or convergence issues with larger channel idle ratios or model scales, and how these might be mitigated in practice?___
A2: We appreciate this insightful question.
Fortunately, we have not observed instability or convergence issues when training RePaViT with large idle ratios or larger models. As shown in Tables 1 and 4, and according to our training logs, RePaViTs converge smoothly across all configurations. We have also provided our source code and detailed instructions in the supplementary material for reproducing our results.
However, __reparameterizing before training can cause instability__, particularly when BatchNorm is merged early. Table 5 shows that while such reparameterization reduces training time, it degrades performance and may destabilize training—especially for large models—since normalization layers are crucial for mitigating issues like internal covariate shift, ensuring consistent signal scales across layers, which is particularly important in larger models.
---
___W1: Batch norm instead of Layer norm may limit the training efficiency.___
A3: We would like to clarify that BatchNorm is a dedicated architectural choice to facilitate a further reparameterization of normalization layers and shortcuts into the backbone. It enhances computational efficiency during inference, and we consider the increase in training time a worthwhile trade-off for the efficiency benefits achieved. Nonetheless, our method is still compatible with LayerNorm.
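To illustrate why inference-mode BatchNorm is reparameterization-friendly while LayerNorm is not, the standard BN-folding identity can be sketched as follows (a generic textbook construction, not code from the release; all names are illustrative). BN at inference uses frozen running statistics, so it is a fixed per-channel affine map that merges into the preceding linear layer; LayerNorm's statistics depend on each input token, so no such merge exists.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
W, b = rng.standard_normal((d, d)), rng.standard_normal(d)
gamma, beta = rng.standard_normal(d), rng.standard_normal(d)   # BN affine params
mu, var, eps = rng.standard_normal(d), rng.uniform(0.5, 2.0, d), 1e-5  # running stats
x = rng.standard_normal((3, d))

# Linear layer followed by inference-mode BatchNorm (frozen mu, var)
y_ref = gamma * ((x @ W + b) - mu) / np.sqrt(var + eps) + beta

# Folded into a single linear layer: scale each output channel, shift the bias
s = gamma / np.sqrt(var + eps)
W_fold, b_fold = W * s, (b - mu) * s + beta
y_fold = x @ W_fold + b_fold

assert np.allclose(y_ref, y_fold)   # BN cost disappears at inference
```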
---
We hope our responses address your questions and concerns, and would sincerely appreciate your consideration of raising the recommendation score. | null | null | null | null | null | null | null | null |
High Dynamic Range Novel View Synthesis with Single Exposure | Accept (poster) | Summary: The paper introduces Mono-HDR-3D, a framework for High Dynamic Range Novel View Synthesis (HDR-NVS) that operates effectively with only single-exposure Low Dynamic Range (LDR) images during training. The approach addresses limitations of previous multi-exposure methods by proposing a meta-algorithm that includes two dedicated modules: an LDR-to-HDR Color Converter (L2H-CC) and an HDR-to-LDR Color Converter (H2L-CC), forming a closed-loop design. Experimental results on synthetic and real datasets demonstrate significant improvements in HDR novel view synthesis quality compared to previous state-of-the-art methods (HDR-NeRF & HDR-GS).
## update after rebuttal
Many thanks for the rebuttal. Out of concern for the rigor expected from reviewers and the noticeable omission of many relevant related works, I maintain my recommendation to reject.
Claims And Evidence: The evidence presented is convincing and supports the claims made by the authors. The experiments span both synthetic and real datasets, and the comparisons are conducted in a fair and comprehensive manner. Additionally, the ablation studies effectively isolate the contributions of various components within the proposed framework.
However, while it may be relatively straightforward to outperform HDR-GS and HDR-NeRF when tailoring the design to a specific setting, the real challenge lies in demonstrating that the method can also surpass these baselines under their own conditions (i.e., with inconsistent exposure times). Addressing this would provide stronger evidence for the generalizability of the approach.
A major problem is that the authors consider ideal conditions, where multi-view images have the same exposure. However, in real-world scenarios, this is difficult because each camera has different exposure settings (i.e., ISO, exposure time). I believe that single-view reconstruction is more suitable for scenes with a single exposure, rather than multi-view reconstruction. This is my biggest confusion with the paper.
Methods And Evaluation Criteria: The proposed methods make sense for the problem of HDR-NVS with single-exposure LDR images. The architecture of Mono-HDR-3D, with its dedicated color conversion modules and closed-loop design, directly addresses the challenge of learning HDR representations from limited information.
The evaluation criteria (PSNR, SSIM, LPIPS) are standard for novel view synthesis tasks and appropriate for assessing both quantitative quality and perceptual performance, this part is OK for me.
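For reference, PSNR (the headline metric here) is just a log-scaled MSE. A minimal sketch for images normalized to [0, 1], with illustrative function and variable names:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(max_val**2 / mse)

gt = np.zeros((4, 4))
pred = np.full((4, 4), 0.01)        # uniform 0.01 error -> MSE = 1e-4 -> 40 dB
assert abs(psnr(pred, gt) - 40.0) < 1e-6
```

A 1 dB PSNR gain thus corresponds to roughly a 20% reduction in MSE, which helps calibrate the reported improvements.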
Theoretical Claims: The paper does not present extensive theoretical proofs but rather focuses on conceptual framework and experimental validation, for me this is not a big problem.
Experimental Designs Or Analyses: This paper utilizes appropriate benchmark datasets that include both synthetic and real scenes, compares its approach against state-of-the-art methods such as HDR-NeRF and HDR-GS, incorporates both quantitative metrics and qualitative visual comparisons, and conducts ablation studies to validate its design choices.
Supplementary Material: No supplementary material provided.
Relation To Broader Scientific Literature: Closely related to HDR-NeRF and HDR-GS, but improve with only need single exposure time.
However, I believe the innovation is not sufficiently strong. Revising the paper does not allow for the exploration of a more challenging setting, such as single-view, in addition to single-exposure. In an ideal single-exposure scenario, it often only captures a single view. The assumptions made in this paper are therefore too restrictive.
Essential References Not Discussed: Some LDR novel view synthesis methods may need to be discussed, namely those that handle lightness correction in the LDR domain and could be connected with 2D inverse tone mapping methods to serve as a baseline.
Like :
[1]. Lighting up NeRF via Unsupervised Decomposition and Enhancement , ICCV 2023
[2]. Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption, AAAI 2024
[3]. A Bilevel Optimization Approach for Novel View Synthesis, AAAI 2024
Other Strengths And Weaknesses: Would it be possible to include some video comparison results in the supplement? This could make the visual effects more apparent, especially regarding 3D consistency.
Other Comments Or Suggestions: It might be more reasonable to add combinations of 2D inverse tone mapping methods with basic 3DGS to the comparison methods.
Currently, the baselines are limited, and the paper lacks deeper analysis of how multi-view consistency is ensured.
Questions For Authors: See "Other Strengths And Weaknesses" and "Other Comments Or Suggestions"
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ## Reviewer ZUxK
**Q1: While it may be relatively straightforward to outperform HDR-GS and HDR-NeRF when tailoring the design to a specific setting, the real challenge lies in demonstrating that the method can also surpass these baselines under their own conditions.**
Great point! As suggested, we have now evaluated Mono-HDR-GS under the conventional multi-exposure setting. As shown in the results below, we show that overall, our method achieves superior performance for both HDR and LDR rendering. We will add this test in the final version.
HDR rendering results on the synthetic datasets.
| Method | PSNR ($\uparrow$) | SSIM ($\uparrow$) | LPIPS ($\downarrow$) |
| :----: | :----------: | :------: | :-------: |
| HDR-GS | 38.31 | 0.972 | 0.013 |
| Mono-HDR-GS | **38.66** | **0.976** | **0.012** |
LDR rendering results of observed exposure (LDR-OE) on the synthetic datasets.
| Method | PSNR ($\uparrow$) | SSIM ($\uparrow$) | LPIPS ($\downarrow$) |
| :----: | :----------: | :------: | :-------: |
| HDR-GS | **41.10** | 0.982 | **0.011** |
| Mono-HDR-GS | 40.55 | **0.983** | **0.011** |
LDR rendering results of novel exposure (LDR-NE) on the synthetic datasets.
| Method | PSNR ($\uparrow$) | SSIM ($\uparrow$) | LPIPS ($\downarrow$)|
| :----: | :----------: | :------: | :-------: |
| HDR-GS | 36.33 | 0.977 | 0.016 |
| Mono-HDR-GS | **36.43** | **0.979** | **0.014** |
**Q2: The authors consider ideal conditions, where multi-view images have the same exposure. However, in real-world scenarios, this is difficult because each camera has different exposure settings.**
Apologies for this misunderstanding (multi-view vs. multi-camera). Under the proposed single-exposure setting, even a single camera device with a single shutter time setup would suffice to acquire multi-view training imagery. This is more convenient than the multi-camera requirement as the reviewer mentioned. We will clarify this.
**Q3: Some LDR novel view synthesis methods maybe need to discuss, which handle lightness correction in LDR domain, and may could connect with 2D inverse tone mapping methods to serve as a baseline, i.e. [5-7]. It might be more reasonable to add some 2D inverse tone methods and basic 3DGS combinations to the comparison methods.
[5] Lighting up NeRF via Unsupervised Decomposition and Enhancement, ICCV 2023;
[6] Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption, AAAI 2024;
[7] A Bilevel Optimization Approach for Novel View Synthesis, AAAI 2024.**
Thanks for the suggestions. We will discuss them, though they are less relevant, as they either focus on luminance correction (vs. our color space transformation) or single-image cases (vs. our 3D scene modeling). They are thus not proper competitors, as they address different problems.
**Q4: Currently, the baselines are limited, and the paper lacks more depth analyze about how to ensure multi-view consistency.**
To the best of our knowledge, HDR-NeRF and HDR-GS are the only two state-of-the-art methods for HDR-NVS. We are more than happy to include more if suggested. The core of our model lies in how to learn the mapping from LDR to HDR, whilst multi-view consistency is ensured by the 3D scene model (e.g., 3DGS or NeRF) adopted. As a result, our approach is open and generic to integrate with any 3D representation models.
**Q5: Would it be possible to include some video comparison results in the supplement? This could make the visual effects more apparent, especially regarding 3D consistency.**
Great suggestion! We will add video results. | Summary: This paper studies the high dynamic range novel view synthesis problem with only single-exposure LDR images given.
The authors propose a generic framework, Mono-HDR-3D, that learns to capture the underlying camera imaging process for bridging LDR and HDR space effectively under the challenging single exposure scenario. Designed as a generic approach, this method can be integrated with different 3D scene models such as NeRF and 3DGS.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: No supplementary material is submitted by the authors
Relation To Broader Scientific Literature: This work studies the 3D HDR imaging problem. It is related to 3D reconstruction techniques like NeRF and 3DGS. The most related work is HDR-GS
Essential References Not Discussed: The references are fairly enough.
Other Strengths And Weaknesses: Strengths:
(i) This work studies a more difficult and novel problem, single-exposure high dynamic range novel view synthesis, which is more challenging as previous methods all require at least three exposure times to learn the mapping function from LDR to HDR in 3D space.
(ii) The idea of decomposing the tone-mapping function into the camera imaging process is interesting. Based on this decomposition, the authors design the low-dynamic-to-high-dynamic color converter (L2H-CC) and the high-dynamic-to-low-dynamic color converter (H2L-CC). This is good and insightful.
(iii) The writing is good and clear, especially the mathematical notations in Section 3. The presentation is well polished, especially the workflow paradigm of the pipeline in Figure 2.
(iv) The performance is good and solid. As shown in Table 1, the improvements over HDR-NeRF and HDR-GS are 19 dB and 3 dB, respectively. Very impressive.
Weaknesses:
(i) How to validate that the two MLPs L2H-CC and H2L-CC decompose the camera imaging process? There is no supervision in the loss function to ensure this part.
(ii) The HDR results in Figure 5 (c) look very terrible and are totally different from those in the original paper of HDR-GS.
(iii) The improvements on real scenes are marginal. Why? The authors do not explain this.
(iv) Code and models are not submitted. The reproducibility cannot be checked.
Other Comments Or Suggestions: I suggest the authors to re-organize the paper to remove the blank in Line 159 - 164.
Questions For Authors: How did you visualize the HDR results of real scenes? More visualization results should be provided.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: ## Reviewer nprx
**Q1: How to validate that the two MLPs L2H-CC and H2L-CC decompose the camera imaging process? There is no supervision in the loss function to ensure this part.**
Great question! It is exactly the absence of such supervision that makes the problem extremely challenging. It is the architecture of the two converters (Fig. 3 & 4) we introduce here that imposes the structural prior of camera imaging (Eq. (6)), driving the model to approximate the underlying camera imaging process. This has been validated in the ablation study (see Tab. 3) by comparing with a plain MLP without such structure.
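For intuition, a generic single-exposure LDR image formation model (a textbook gamma-CRF form, not necessarily the paper's Eq. (6); all names are illustrative) shows both the structural prior being imposed and why highlight information is hard to recover from one exposure:

```python
import numpy as np

def capture_ldr(hdr, exposure, gamma=2.2):
    """Generic imaging model: scale radiance by exposure time, apply a gamma
    camera response, and clip to the sensor's [0, 1] range."""
    return np.clip((hdr * exposure) ** (1 / gamma), 0.0, 1.0)

hdr = np.array([0.05, 0.5, 2.0, 8.0])   # scene radiance spans a wide dynamic range
ldr = capture_ldr(hdr, exposure=1.0)
# the two brightest radiances both saturate to 1.0, so they are
# indistinguishable in a single LDR observation
```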
**Q2: The improvements on real scenes are marginal. Why? The authors do not explain this.**
On real scenes without HDR ground truth, the improvement is indeed more challenging to achieve. However, to provide more evidence, we have now quantitatively evaluated two no-reference image quality assessment (NR-IQA) metrics: NIQE [3] and CLIP-IQA [4], which do not require HDR ground truth. We report the HDR results on real-world datasets below:
| Method | NIQE ($\downarrow$) | CLIP-IQA ($\uparrow$) |
| :----: | :---:| :------: |
| HDR-GS | 6.40 | 0.48 |
| Mono-HDR-GS | **3.63** | **0.50** |
This test further indicates the meaningful superiority of our method over previous alternatives. Please note, Tab. 2 (main paper) reports the results of LDR rendering, which is not the focus of this work.
[3] Mittal A, Soundararajan R, Bovik A C. Making a “completely blind” image quality analyzer. IEEE Signal processing letters, 2012.
[4] Wang J, Chan K C K, Loy C C. Exploring clip for assessing the look and feel of images. AAAI 2023.
**Q3: Code and models are not submitted. The reproducibility cannot be checked.**
Our code and models will be released later.
**Q4: I suggest the authors to re-organize the paper to remove the blank in lines 159-164.**
Thanks, we will.
**Q5: The HDR results in Fig. 5\(c\) look very terrible and are totally different from those in the original paper of HDR-GS.**
Great spot, but please note that our single-exposure setting is more challenging and more practical, compared to the conventional multi-exposure setting used in HDR-GS. This contrast illustrates exactly the challenges of our new setting (e.g., the limited luminance information cannot fulfill the Nyquist-Shannon sampling theorem requirements for dynamic range recovery). We used the official code of HDR-GS to ensure correctness (with the same code, we can reproduce the results of the multi-exposure setting).
Please note, this work does not fully solve the single-exposure HDR-NVS problem, but it marks a meaningful step forward and can foster more advanced research in the future.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. My concerns have been addressed. I raise my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your detailed review and constructive feedback, which greatly improved our work. We’re truly grateful that our response addressed your concerns and appreciate your updated score of 5. Your expertise has been instrumental in enhancing the quality of our work. | Summary: This paper introduces Mono-HDR-3D, a novel single-exposure HDR-NVS approach that reconstructs 3D HDR scenes in NeRF or 3DGS using only LDR images, eliminating the need for multi-exposure inputs. The method comprises two modules based on LDR image formation principle, which is LDR-to-HDR module that predicts HDR details from LDR images, and HDR-to-LDR module that allows the model to be trained with LDR images.
Claims And Evidence: This paper claims that having a single 3D representation for HDR and approximating the LDR-to-HDR process and HDR-to-LDR process with neural networks brings about performance improvement through its closed-loop design, even in cases where only LDR images are available. In this process, they argue the importance of modeling the architecture in likeness to the camera imaging mechanism. This claim is backed up in the experiment section at Table 3 and 4.
However, I question the authors about the case when only LDR images are available for optimizing the scene, which is the setting implied in the first paragraph of the **H2L-CC** section in Section 3.2. In this case, how is the L2H module able to learn the mapping from LDR to HDR? It seems that this module is not generalizable and is optimized per scene with NeRF or 3DGS, so I suppose it would not be able to receive any guidance signals for learning L2H in cases when no HDR images are available. I ask the authors to provide additional elaboration and experimental results for this setting.
Methods And Evaluation Criteria: Please view Claims and Evidence: I believe additional experiments would need to be performed to validate the performance of this method in cases where different ratios of LDR / HDR images are available, including extreme cases where only LDR or HDR images are available for 3D scene optimization.
Theoretical Claims: See Claims and Evidence.
Experimental Designs Or Analyses: The soundness and validity of experimental designs and analysis have been verified. However, I find qualitative results to be somewhat lacking for me to be fully convinced in the performance of this method.
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: This paper builds upon and extends several key areas of research in HDR imaging, Novel View Synthesis (NVS), and computational photography by addressing the limitations of existing HDR-NVS methods and introducing a new single-exposure-based approach.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - This paper is well-written and easy to follow.
- The architecture that emulates the real-life LDR-to-HDR and HDR-to-LDR processes is somewhat novel, though the idea of basing the model architecture on real-life physical properties is familiar and well-known in the field of novel view synthesis.
Other Comments Or Suggestions: N/A
Questions For Authors: I question the necessity of the H2L module: what happens if you render the image in HDR and simply convert it to LDR with existing modules or an analytic method, instead of approximating it with an MLP? Is this case not possible because there is no module which supports backpropagation for training? If not, how does the model perform with such a naive HDR-to-LDR conversion method? Please elaborate.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## Reviewer yk8h
**Q1: I question the necessity of the H2L module: what happens if you render the image in HDR and simply convert it to LDR with existing modules or an analytic method, instead of approximating it with an MLP? Is this case not possible because there is no module which supports backpropagation for training? If not, how does the model perform with such a naive HDR-to-LDR conversion method? Please elaborate.**
Great point! Please note analytical HDR-to-LDR conversion needs the camera CRF which is typically unknown. To address this, we thus learn to approximate. We will further clarify.
**Q2: I question the authors about the case when only LDR images are available for optimizing the scene, which is the setting implied in the first paragraph of the H2L-CC section in Section 3.2. In this case, how is the L2H module able to learn the mapping from LDR to HDR? It seems that this module is not generalizable and is optimized per scene with NeRF or 3DGS, so I suppose it would not be able to receive any guidance signals for learning L2H in cases when no HDR images are available. I ask the authors to provide additional elaboration and experimental results for this setting.**
Without HDR ground truth, any model, including ours, will be less properly constrained. However, compared with previous methods, our Mono-HDR-3D has advantages due to leveraging the inherent camera imaging mechanism (an imaging physics prior) along with the closed-loop design (H2L-CC followed by L2H-CC, forming a loop).
As suggested, we now quantitatively evaluate two no-reference image quality assessment (NR-IQA) metrics: NIQE [3] and CLIP-IQA [4], without the need for HDR ground truth. We report the HDR results on real-world datasets below:
| Method | NIQE ($\downarrow$) | CLIP-IQA ($\uparrow$) |
| :----: | :---:| :------: |
| HDR-GS | 6.40 | 0.48 |
| Mono-HDR-GS | **3.63** | **0.50** |
This test validates the efficacy of our model design.
[3] Mittal A, Soundararajan R, Bovik A C. Making a “completely blind” image quality analyzer. IEEE Signal processing letters, 2012.
[4] Wang J, Chan K C K, Loy C C. Exploring clip for assessing the look and feel of images. AAAI 2023.
**Q3: I believe additional experiment would have had to be performed to validate the performance of this method in cases when different ratio of LDR / HDR images are available, including extreme cases where only LDR or HDR images are available for 3D scene optimization.**
Great suggestions and many thanks! We have now conducted this suggested experiment, with the ratio of LDR / HDR images being 1/1, 2/1, 3/1, 5/1, 0/1, and 1/0, respectively. The results are reported below:
| Method | LDR / HDR | PSNR ($\uparrow$) | SSIM ($\uparrow$) | LPIPS ($\downarrow$) |
|:---:|:-------: | :--: | :---: | :---: |
| HDR-GS | 1/1 | 35.30 | 0.965 | 0.030 |
| Mono-HDR-GS | 1/1 | **38.57** | **0.975** | **0.012** |
| HDR-GS | 2/1 | 35.26 | 0.963 | 0.033 |
| Mono-HDR-GS | 2/1 | 37.97 | **0.975** | 0.013 |
| HDR-GS | 3/1 | 35.16 | 0.958 | 0.035 |
| Mono-HDR-GS | 3/1 | 37.53 | 0.974 | 0.014 |
| HDR-GS | 5/1 | 34.89 | 0.961 | 0.027 |
| Mono-HDR-GS | 5/1 | 35.51 | 0.963 | 0.023 |
| HDR-GS | 0/1 | 33.46 | 0.936 | 0.075 |
| Mono-HDR-GS | 0/1 | 33.93 | 0.925 | 0.050 |
| HDR-GS | 1/0 | 10.51 | 0.503 | 0.350 |
| Mono-HDR-GS | 1/0 | 13.50 | 0.507 | 0.359 |
We highlight that:
- As the amount of HDR data decreases, our model degrades only marginally, suggesting the merit of data efficiency.
- LDR supervision is useful by providing a better quality scene model to be converted.
- HDR supervision is most critical as expected.
- Compared with HDR-GS, overall our model is superior across all cases.
**Q4: I find qualitative results to be somewhat lacking for me to be fully convinced in the performance of this method.**
Reconstructing HDR radiance fields from single-exposure LDR inputs constitutes an ill-posed inverse problem, as the limited luminance information fails to satisfy the Nyquist-Shannon sampling theorem for dynamic range recovery. Therefore, the perceptual quality of synthesized HDR images remains fundamentally constrained by the limited luminance information in single-exposure inputs.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, and it seems my concerns have been addressed. I raise my score to 4. | Summary: This paper proposes a novel method for HDR scene novel view rendering with single-exposure LDR images. The approach involves two key components: an LDR-to-HDR (L2H) converter and an HDR-to-LDR (H2L) converter, both designed based on the camera imaging process. The L2H module first converts LDR images into HDR representations, while the H2L module generates LDR images for supervision using the input images. Experiments on synthetic datasets demonstrate that this method achieves superior rendering quality compared to existing approaches.
Claims And Evidence: The major claims are supported by experiments.
Methods And Evaluation Criteria: The proposed methods make sense for the problem.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental designs are valid.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper is significant in the field of NVS since it proposes a novel method to create HDR representations with single exposure LDF images.
Essential References Not Discussed: References are adequately discussed.
Other Strengths And Weaknesses: Strength
The design of HDR and LDR converters is novel. It’s interesting to use only single-exposure LDR images to generate HDR 3D representation, which can further inspire fields in view synthesis and inverse rendering. Results on synthetic data are promising.
Weakness
- The proposed L2H-CC converts an LDR model to an HDR model, however, after reading the paper, I'm not completely clear how its design prevents the module from learning a trivial solution of mapping to an LDR instead of HDR mode.
- Important technical details on implementation and experiments are missing and could benefit from more explanation:
1. How does Mono-HDR-GS use HDR loss? The paper claims to only use LDR images, there should be no HDR ground truth during optimization.
2. The ablation study of closed-loop design lacks implementation details. How does supervision work without “H2L-CC’? What supervision is used to learn “L2H-CC’?
3. It’s better if authors can add ablation on the losses, l_{ldr}, l_{hdr}, and l_{h2l} for a more comprehensive evaluation.
The paper presents an interesting approach to HDR novel view synthesis using only LDR images, without relying on any data-driven priors. However, my main concerns lie in the lack of clarity in certain technical explanations and the omission of crucial experimental details. Notably, while the paper claims to train solely with LDR images, the implementation in Section 3.3 appears to incorporate HDR image loss, which raises questions about the training setup. A more positive rating will be considered if the authors can thoroughly address these concerns.
Other Comments Or Suggestions: N/A
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ## Reviewer dY8v
**Q1: The proposed L2H-CC converts an LDR model to an HDR model, however, after reading the paper, I'm not completely clear how its design prevents the module from learning a trivial solution of mapping to an LDR instead of HDR mode.**
Let us summarize the key features of our method:
First, previous methods such as HDR-GS (Cai et al., 2024) and HDR-NeRF (Huang et al., 2022) are inferior in design for tackling this more challenging single-exposure HDR-NVS problem, since directly learning an HDR scene model from single-exposure multi-view imagery is extremely difficult (see L86-93).
With Mono-HDR-3D, we instead first learn an LDR scene model. More importantly, we impose the inherent camera imaging mechanism [1-2] (see the camera imaging formula Eq. (6), Sec 3.2) to facilitate HDR color estimation, enabling more robust translation from LDR to HDR by leveraging imaging physics prior knowledge (L190-216, Sec. 3.2). We will further clarify.
[1] Noise-optimal capture for high dynamic range photography[C]//CVPR, 2010: 553-560.
[2] Compressed-SDR to HDR Video Reconstruction[J]//TPAMI, 2023, 46(5): 3679-3691.
**Q2: How does Mono-HDR-GS use HDR loss? The paper claims to only use LDR images, there should be no HDR ground truth during optimization.**
Apologies for this misunderstanding. As stated in L255-265 that, our model supports both cases, with and without HDR ground truth. Given HDR data, the HDR loss $L_\text{hdr}$ is used along with the others as shown in Eq. (9).
**Q3: The ablation study of closed-loop design lacks implementation details. How does supervision work without “H2L-CC”? What supervision is used to learn “L2H-CC”?**
In the closed-loop ablation study (Tab. 4), when the H2L-CC module is omitted, the HDR loss $L_\text{hdr}$ directly supervises the L2H-CC module to learn the LDR-to-HDR transformation, and $L_\text{ldr}$ enforces that the input to L2H-CC is a valid LDR model. Additionally, if H2L-CC exists, an extra loss $L_\text{h2l}$ is added to supervise L2H-CC.
**Q4: It’s better if authors can add ablation on the losses, $L_\text{ldr}$, $L_\text{hdr}$, and $L_\text{h2l}$ for a more comprehensive evaluation.**
Thanks! As suggested, we have now conducted an exhaustive analysis of loss combination. The results of HDR rendering are reported below:
| Index | Loss | PSNR ($\uparrow$) | SSIM ($\uparrow$) | LPIPS ($\downarrow$) |
|:--:|:--------------:|:---------:|:-----:|:-----:|
| 1 |$L_\text{ldr}$ | - | - | - |
| 2 |$L_\text{hdr}$ | 33.93 | 0.925 | 0.050 |
| 3 |$L_\text{h2l}$ | 11.87 | 0.504 | 0.371 |
| 4 |$L_\text{ldr}$ + $L_\text{hdr}$ | 38.19 | 0.974 | 0.015 |
| 5 |$L_\text{ldr}$ + $L_\text{h2l}$ | 13.50 | 0.507 | 0.359 |
| 6 |$L_\text{hdr}$ + $L_\text{h2l}$ | 33.58 | 0.934 | 0.058 |
| 7 |$L_\text{ldr}$ + $L_\text{hdr}$ + $L_\text{h2l}$ | **38.57** | **0.975** | **0.012** |
We highlight that:
- HDR loss $L_\text{hdr}$ is basically important as expected;
- LDR loss helps clearly by properly supervising the LDR scene model optimization;
- The closed loop loss $L_\text{h2l}$ adds further value on top. | null | null | null | null | null | null |
ENSUR: Equitable and Statistically Unbiased Recommendation | Accept (poster) | Summary: This paper introduces ENSUR (Equitable and Statistically Unbiased Recommendation), a novel framework aimed at ensuring confidence and fairness in recommender systems. The authors propose a dynamic method for generating prediction sets that guarantee:
1. A user-predefined confidence level (e.g., 90%) for including the true item,
2. Fairness across different user groups,
3. Minimal average prediction set sizes.
To achieve these goals, the authors develop the Guaranteed User Fairness Algorithm (GUFA), which optimizes fairness and risk constraints efficiently. They establish theoretical guarantees for fairness control, risk control, and minimal prediction set size. Extensive experiments validate the ENSUR framework across multiple datasets and base models, demonstrating improved fairness and reliability without sacrificing performance.
Claims And Evidence: The paper’s claims are well-supported by rigorous theoretical analysis and empirical validation. The authors provide detailed proofs for fairness and risk control guarantees (Theorem 5.1) and minimal prediction set size (Theorem 5.2). Additionally, they derive upper bounds for risk and fairness metrics to accelerate optimization (Theorem 4.1 and Theorem 4.2). The experimental results across four datasets confirm the framework’s effectiveness, showing improved fairness and risk control compared to baseline methods.
Methods And Evaluation Criteria: The proposed methodology is well-founded and aligns with established fairness-aware recommendation frameworks. The authors adapt the Risk-Controlling Prediction Sets (RCPS) approach while incorporating fairness constraints, making their contributions novel and practically relevant. The evaluation criteria—risk control, fairness control, and minimal prediction set size—are well-justified, and the results demonstrate meaningful improvements over baseline models.
Theoretical Claims: The theoretical claims in this paper are sound and rigorously proved. Theorems 4.1 and 4.2 provide upper bounds for risk and fairness metrics, facilitating efficient optimization. Theorems 5.1 and 5.2 ensure fairness and risk constraints while maintaining minimal prediction set size. The derivations are mathematically solid, and the assumptions are well-motivated and clearly stated.
Experimental Designs Or Analyses: The experimental setup is comprehensive, covering four diverse datasets (e.g., MovieLens, AmazonOffice) and five base recommendation models. The comparisons with fairness baselines (NFCF, MFCF, GMF-UFR, NeuMF-UFR) are appropriate, and the results consistently support the authors’ claims. The parameter analysis further enhances the credibility of the findings. The efficiency comparison demonstrates ENSUR’s computational advantage over existing fairness methods.
Supplementary Material: No supplementary material is explicitly mentioned, but the paper is self-contained, and all necessary proofs are provided within the appendices.
Relation To Broader Scientific Literature: This paper extends prior work on fairness in recommendation (e.g., Yao & Huang, 2017; Abdollahpouri et al., 2019) and uncertainty quantification via risk-controlling prediction sets (e.g., Bates et al., 2021). ENSUR builds upon these foundations by integrating fairness and confidence guarantees into a unified optimization framework. The approach is novel in its combination of theoretical guarantees and practical implementation, making it a valuable contribution to fairness-aware recommendation research.
Essential References Not Discussed: The paper covers essential prior work, particularly in fairness-aware recommendation and statistical risk control. No significant omissions were identified.
Other Strengths And Weaknesses: Strengths:
• Theoretical rigor: Provides strong mathematical guarantees for fairness, risk control, and efficiency.
• Practical relevance: Demonstrates applicability across multiple datasets and base models.
• Computational efficiency: ENSUR outperforms fairness baselines in training time.
• Clear writing and well-structured methodology.
Weaknesses:
• Assumptions: Some fairness constraints may not hold universally across all domains.
• Evaluation scope: While diverse datasets are used, real-world deployment studies would further strengthen the impact.
Other Comments Or Suggestions: 1. It would be useful to discuss potential limitations when fairness groups are highly imbalanced.
2. Consider elaborating on real-world applicability beyond academic datasets.
3. Future work could explore alternative fairness constraints and their implications.
Questions For Authors: 1. How does ENSUR perform when fairness groups have significantly different sizes? Does it remain stable under severe imbalance?
2. Could the fairness constraints be extended to multi-group settings beyond binary group definitions?
3. Are there practical limitations to applying ENSUR in online recommendation scenarios?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their encouraging feedback and for appreciating the relevance, rigor, and practical efficiency of our framework. Below, we address the questions:
a) ENSUR's performance under significant group size imbalance:
While the empirical results presented in the paper cover moderately imbalanced scenarios, ENSUR remains robust under more severe imbalances. This is primarily due to its design: each user group's risk and fairness thresholds are independently optimized. As a result, ENSUR ensures stable statistical guarantees—even when groups differ significantly in size—by tailoring the learned calibration parameter ($\lambda$) to each group's distribution. However, underrepresented groups may require more conservative $\lambda$ values, leading to slightly larger prediction sets. We will clarify this behavior in the Discussion in the revised version.
b) Extending fairness constraints to multi-group settings:
Thank you for this insightful question. The optimization in GUFA can effectively handle multiple fairness constraints simultaneously by introducing a unique fairness threshold (η) for each group. Each group's constraint will correspond to its threshold ($\eta$), and the optimization problem remains structurally the same. For example, if a dataset has three user subgroups (e.g., by age, gender, and region), GUFA jointly calibrates all three using a separate fairness constraint. This makes the extension to multi-group fairness both practical and computationally efficient.
c) Practical limitations in applying ENSUR to online recommendation scenarios:
We appreciate this critical point. ENSUR, as a statistical calibration framework, remains practically applicable to online recommendation scenarios, but given dynamic changes in user behavior online, ENSUR would require periodic recalibration (e.g., daily, weekly, or monthly, depending on system dynamics and data drift). Given ENSUR's efficiency, such recalibration is feasible in large-scale systems. We will expand this point further in the Discussion section and explore this direction in our future work.
Once again, we are incredibly grateful to the reviewers for their valuable comments, suggestions, and positive recognition of our work. | Summary: The paper introduces ENSUR (Equitable and Statistically Unbiased Recommendation), a framework designed to enhance fairness and confidence in recommender systems. The core idea is to generate dynamic prediction sets that (1) ensure a high-confidence inclusion of the true item, (2) guarantee fairness across diverse user groups, and (3) minimize set sizes for efficiency. To achieve this, the authors propose the Guaranteed User Fairness Algorithm (GUFA), which optimizes prediction sets while maintaining statistical risk and fairness bounds. The paper provides a rigorous theoretical foundation, derives upper bounds for risk and fairness control, and supports claims with extensive empirical evaluations.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Theorem 4.1 and Theorem 4.2 establish upper bounds for risk and fairness, which are correct.
Experimental Designs Or Analyses: The experimental evaluation is comprehensive, covering multiple datasets (AmazonOffice, MovieLens, Last.fm, Book-Crossing) and a variety of base models (DeepFM, GMF, MLP, NeuMF, LightGCN).
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The authors draw on concepts from risk-controlling prediction sets (Bates et al., 2021) and fairness-aware collaborative filtering (e.g., Yao & Huang, 2017; Abdollahpouri et al., 2019).
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Novel contribution: The combination of risk control and fairness guarantees in recommendation is innovative. Rigorous theoretical foundation: Well-structured proofs and upper-bound derivations.Strong empirical validation: Comprehensive experiments demonstrate real-world applicability. Computational efficiency: ENSUR is significantly faster than existing fairness-aware baselines.
Fairness group assumptions: The approach assumes predefined user groups, which may not always be straightforward in practice.
Other Comments Or Suggestions: No.
Questions For Authors: 1. How does ENSUR adapt when fairness groups are not predefined but inferred dynamically from user interactions?
2. Could ENSUR be extended to handle multi-sided fairness constraints (e.g., fairness for both users and content providers)?
3. Are there any observed limitations when applying ENSUR to highly imbalanced user groups?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and for recognizing our framework's novelty, theoretical rigor, and practical efficiency. We appreciate their insightful comments and questions and address them below:
a) Adaptation of ENSUR when fairness groups are inferred dynamically:
We thank the reviewer for the important point. While ENSUR assumes predefined user groups, we can naturally extend it to dynamically inferred groups using clustering techniques based on user interaction patterns (e.g., click history, content preferences, time of activity). We can rerun ENSUR's statistical calibration step periodically or upon significant distributional changes in user behavior, reflecting these evolving group definitions. Since GUFA performs per-group calibration and is computationally efficient, such updates are feasible at scale. This approach ensures that fairness and confidence guarantees are preserved even as the definitions of user groups evolve over time. In the final version of the paper, we will add a Discussion on this adaptation strategy.
b) Extension to multi-sided fairness constraints
We are thankful for suggesting this interesting direction. Our framework can be generalized to multi-sided fairness by adding multiple fairness constraints simultaneously— i.e., fairness for users as well as content providers. Each side (user or provider) will have associated fairness thresholds ($\eta$ values), and the GUFA optimization strategy will jointly ensure fairness guarantees are maintained across all these dimensions while ensuring that the combined objective remains tractable. This extension maintains the modularity of the current approach and is practically beneficial in many multi-stakeholder platforms such as marketplaces.
c) Limitations when applying ENSUR to highly imbalanced user groups:
We sincerely appreciate this critical observation. In highly imbalanced settings, ENSUR will continue to maintain valid fairness guarantees as the calibration of risk and fairness constraints is done independently for every group. This will ensure statistical robustness. However, in such settings, due to underrepresentation, the minority group may require a more conservative learned parameter $\lambda$ to ensure statistical guarantees. This may eventually result in larger prediction set sizes for underrepresented groups. We recognize that extreme imbalance is a real-world challenge and will discuss ENSUR behavior under such cases and its implications in the Discussion section of the revised version.
We are again deeply grateful to the reviewer for their positive acknowledgment of our work and for their valuable questions. | Summary: This paper proposes a novel and reliable framework called Equitable and Statistically Unbiased Recommendation (ENSUR)) to dynamically generate prediction sets for users across various groups. This paper further designs an efficient algorithm named Guaranteed User Fairness Algorithm (GUFA) to optimize the proposed method and derive upper bounds of risk and fairness metrics to speed up optimization process. Rigorous theoretical analysis and extensive experiments are also provided.
## update after rebuttal
Authors have addressed my concerns.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I checked the proofs of this paper and do not find the errors.
Experimental Designs Or Analyses: Yes. I check the setting of experiments in Section 6.1.
Supplementary Material: Appendix B.1 and B.2.
Relation To Broader Scientific Literature: The paper builds on Risk-Controlling Prediction Sets (Bates et al., 2021) and fairness-aware recommendation literature.
Essential References Not Discussed: In my view, related works are currently cited/discussed in this paper.
Other Strengths And Weaknesses: Strengths:
a. This paper is well-written and easy to follow.
b. The proposed method and theory in this paper is solid.
c. Unlike existing frameworks, this paper offers rigorous theoretical guarantees.
d. It is highly efficient compared to current baselines.
e. The framework is lightweight and is model and dataset-agnostic.
f. Comprehensive experiments are conducted across multiple datasets, models, and user groupings. Extensive hyperparameters experiments are done to show how they impact coverage, performance and fairness.
Weakness:
a. I appreciate the proposed method and theory, but in my view, the techniques used in the method and in the proofs of the theorems are not surprising.
b. The authors could explain further on how the choice of hyperparameters is made in the main paper.
c. The framework illustration could be elaborated further to make a reader understand the work quickly.
Other Comments Or Suggestions: Some minor issues: Eq.(12) in line 226.
Questions For Authors: a. I need a complete overview of the technical innovations of this paper. In order to prove the theory in the article, what technique is used in this article, what problems does this technique solve, and how innovative is this technique?
b. How should the practitioners determine appropriate fairness thresholds (eta)?
c. How does guaranteeing a minimum average prediction set size improve recommendation quality? Some real-world example will be helpful
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their encouraging and supportive feedback. We are glad they found our paper well-written, rigorous, and efficient. Below, we aim to address their insightful queries and suggestions:
a) Technical Innovation:
Our technical innovation lies in ENSUR being a unified statistical framework for generating recommendation sets that satisfy both confidence and group fairness constraints. We achieve this by formulating the recommendation problem into the Risk-Controlling Prediction Sets (RCPS) and Fairness-Constrained Prediction Sets (FCPS) paradigm, thereby integrating them into a single calibration process. A key innovation is the development of the GUFA (Guaranteed User Fairness Algorithm), a powerful yet efficient algorithm that enables post hoc calibration without retraining the base recommender model. GUFA leverages theoretically derived upper bounds on risk and fairness violations (presented in Theorems 4.1 and 4.2), using derived Binomial and Bernstein concentration inequalities to make the optimization tractable. These bounds are further used to construct guarantees for risk and fairness control in Theorems 5.1 and 5.2 while ensuring minimal prediction set size.
While prior methods usually rely on heuristic or empirical approaches, our approach thereby results in a model and dataset-agnostic pipeline that is theoretically sound, computationally efficient, and practically deployable, with no need for any architecture-specific modifications or retraining.
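As a purely illustrative sketch (not the authors' GUFA implementation), the per-group post hoc calibration idea described above can be written as follows. The function names are our own, a simple Hoeffding bound stands in for the paper's tighter binomial/Bernstein bounds, and prediction sets are assumed to take the threshold form {items with score ≥ λ}:

```python
import math

def hoeffding_ucb(empirical_risk, n, delta):
    # One-sided Hoeffding upper confidence bound for a [0, 1]-bounded mean,
    # holding with probability at least 1 - delta.
    return empirical_risk + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def calibrate_lambda(true_item_scores, lam_grid, alpha, delta):
    """Return the largest score threshold whose risk bound stays below alpha.

    A user's prediction set is {items with score >= lambda}, so the
    miscoverage risk is the fraction of calibration users whose true item
    scores below lambda; this risk is nondecreasing in lambda.
    """
    n = len(true_item_scores)
    best = None
    for lam in sorted(lam_grid):
        risk = sum(1 for s in true_item_scores if s < lam) / n
        if hoeffding_ucb(risk, n, delta) <= alpha:
            best = lam  # larger lambda -> smaller sets, bound still holds
        else:
            break  # monotone risk: no larger lambda can satisfy the bound
    return best

def calibrate_per_group(group_scores, lam_grid, alpha, delta):
    # Independent calibration per user group, one threshold per group.
    return {g: calibrate_lambda(s, lam_grid, alpha, delta)
            for g, s in group_scores.items()}
```

Note that a smaller calibration group incurs a larger confidence term and hence a more conservative threshold, i.e., larger prediction sets for underrepresented groups, consistent with the behavior described for imbalanced settings.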
b) Selection of Fairness Threshold ($\eta$) by practitioners:
We thank the reviewer for the insightful question. To effectively choose a fairness threshold ($\eta$), practitioners can: 1) assess historical performance disparities between user groups, 2) conduct preliminary analyses on validation datasets to balance fairness requirements against recommendation performance, and 3) align the thresholds with institutional fairness standards. We will add a detailed discussion of this approach in the paper revision. For example, on the AmazonOffice dataset, we set $\eta = 0.2$ by selecting the smallest value satisfying fairness constraints across groups based on interaction count, using our defined fairness metrics verified on validation sets.
c) Real-world Value of Minimizing Prediction Set Size:
Minimizing the average prediction set size significantly improves recommendation quality because: 1) it reduces cognitive overload on users, thereby mitigating the problem of ad blindness; 2) on a streaming platform, a minimal yet accurate prediction set allows skipping the prefetching and caching of trailers for irrelevant content, thereby saving bandwidth and compute; and 3) a smaller yet better prediction set in ads will lead to higher click-through rates (CTR) and better returns.
We also thank the reviewer for highlighting the minor issue in Eq. (12) and suggesting improving our framework illustration. We will address them in the final revision.
Once again, we thank the reviewer for their thoughtful questions and encouragement, and we look forward to strengthening the final version of the paper based on the valuable feedback. | Summary: The paper introduces a comprehensive framework named ENSUR (Ensuring Statistical Fairness and Confidence in Recommendation Systems), which is designed to statistically ensure both fairness and confidence in the outcomes generated by recommendation systems. The authors propose that by utilizing two key components, RCPS (Recommendation Confidence Probability Sets) and FCPS (Fairness Constrained Probability Sets), the framework can generate high-confidence dynamic recommendation sets tailored to individual users. These sets not only meet the stringent requirements of confidence levels but also rigorously satisfy group fairness criteria, ensuring that the recommendations are equitable across different groups.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I think there is no problem.
Experimental Designs Or Analyses: The experimental designs are thorough and sound.
Supplementary Material: I have checked the materials.
Relation To Broader Scientific Literature: The paper extends the prior works in fairness in recommendation systems.
Essential References Not Discussed: I think the essential references are discussed in the paper.
Other Strengths And Weaknesses: The proposed framework, ENSUR, is designed to be both model-agnostic and dataset-agnostic, meaning it can seamlessly integrate with any base recommender model and adapt to diverse datasets across various domains without requiring specific modifications or constraints. One of the key strengths of ENSUR is its ability to provide rigorous theoretical guarantees for fairness, a critical aspect that has been notably absent in prior works, thereby addressing a significant gap in the field of fair recommendation systems. To validate the effectiveness of the proposed framework, the authors conduct extensive and comprehensive experiments that span multiple datasets and compare against numerous fairness baselines. Additionally, the study includes thorough and complete hyperparameter sensitivity testing, which meticulously examines the impact of different parameter settings on the framework's performance, further solidifying the reliability and generalizability of the results. The experimental outcomes demonstrate that ENSUR achieves significant improvements over existing baselines, not only in terms of recommendation performance but also in computational efficiency, highlighting its practical utility and superiority in real-world applications. Overall, the framework's versatility, theoretical rigor, and empirical validation make it a substantial advancement in the pursuit of fair and efficient recommendation systems.
While the paper presents a robust and well-structured framework, it could benefit from providing more detailed explanations regarding the selection and tuning of hyperparameters within the main body of the text, particularly addressing how these choices might vary across different dataset settings and domains. Such insights would offer readers a clearer understanding of the practical considerations involved in implementing the framework and how to adapt it to specific use cases. Additionally, while the supplementary material is comprehensive and contains valuable technical details, its density and complexity might make it less accessible to readers who are not deeply familiar with the mathematical foundations of the work. To improve readability and accessibility, the authors could consider summarizing the key proof ideas and theoretical insights in the main paper, using intuitive explanations and high-level overviews to convey the core concepts without overwhelming readers who may not have a strong mathematical background. This approach would make the paper more inclusive and engaging for a broader audience, while still preserving the rigor and depth of the technical content for experts in the field.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are thankful to the reviewer for their positive and encouraging feedback. We sincerely appreciate their recognition of our framework's rigor, versatility, and substantial empirical validation.
a) Hyperparameter Selection and Tuning:
We acknowledge that a detailed explanation of hyperparameter selection and tuning will significantly benefit the paper's clarity. The choice of our hyperparameters (e.g., the risk threshold $\alpha$, fairness threshold $\eta$, and confidence parameters $\delta$ and $\hat{\delta}$) is primarily guided by extensive empirical validation and standard practices from relevant literature such as Bates et al. (2021). For example, on the AmazonOffice Dataset, the risk threshold $\alpha$ is set to 0.2 to control over-coverage while ensuring robustness, and $\eta$ is chosen as 0.2 by selecting the smallest value satisfying fairness constraints across groups, based on interaction count using defined fairness metrics, verified on validation sets. Similarly, the confidence parameters $\delta$ and $\hat{\delta}$ were chosen from the set $\{0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5\}$ based on the value that consistently achieved the desired confidence level without unnecessarily large recommendation sizes. We followed this same procedure across the other datasets. We will include a detailed discussion of these guidelines in the final version.
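As an illustration, the "smallest value satisfying the constraint" rule described above can be sketched generically; the `is_feasible` callback below is a hypothetical stand-in for our validation-set fairness check, not the actual implementation:

```python
def smallest_feasible_threshold(candidates, is_feasible):
    """Return the smallest candidate threshold passing a feasibility
    check (standing in for a validation-set fairness constraint);
    None if no candidate passes."""
    for c in sorted(candidates):
        if is_feasible(c):
            return c
    return None

# Hypothetical check: suppose the group-fairness constraint is only
# met on the validation set for thresholds of at least 0.2.
eta = smallest_feasible_threshold(
    [0.1, 0.15, 0.2, 0.25, 0.3],
    is_feasible=lambda e: e >= 0.2,
)
print(eta)  # → 0.2
```

The same pattern applies to choosing $\delta$ and $\hat{\delta}$ from their grid, with the feasibility check replaced by the desired-confidence criterion.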
b) We thank the reviewer for their valuable suggestion to improve readability. In the final version, we will include intuitive explanations of the theoretical contributions in Section 5 alongside the formal results. Similar to the existing remarks for Theorems 5.1 and 5.2, we will explain how Theorems 4.1 and 4.2 provide a high-probability surrogate for population-level constraints and how these bounds are later used in Theorems 5.1 and 5.2, which further guide the design of the GUFA optimization procedure. We will illustrate this flow clearly for accessibility.
Based on the valuable feedback, we will (i) add a dedicated paragraph in Section 6 explaining hyperparameter selection across datasets, supported by our sensitivity experiments from Section 6.2.2, and (ii) give intuitive summaries of the main theoretical results for broader understanding.
Again, we are deeply thankful to the reviewer for the positive outlook of our paper and their insightful suggestions for further improving its accessibility and clarity. | null | null | null | null | null | null |
Rethinking Causal Ranking: A Balanced Perspective on Uplift Model Evaluation | Accept (poster) | Summary: This work focuses on building and evaluating models for uplift modeling. The work finds a critical limitation in existing evaluation metrics, as many of these models do not weigh negative outcomes enough. The work finds that this lead to biased evaluations due to incorrect orderings between persuadable and sleeping dogs with negative outcomes, potentially resulting in biased models receiving higher curve values. The authors show this through both empirical results and theoretical results. Given the limitation of existing evaluation metrics, the work proposes the principled uplift curve (PUC), and show that it properly weighs different individuals in both the positive and negative outcome groups. The authors propose PTONet by integrating the PUC into the objective function to reduce bias during uplift model training. Through experimental results, the efficacy of PUC is shown by its alignment with ground-truth evaluation in synthetic settings, and the efficacy of PTONet is established in both synthetic and real data.
## Update after Rebuttal
I appreciate the authors' work in answering the questions. Overall, I am satisfied with the response and have updated my score. I do hope the authors consider the limitations brought up by other reviewers and discuss them to ensure the paper does not overclaim the contributions of the proposed metric (which I do not believe they currently do).
Claims And Evidence: The primary claims of this work are:
1) Existing uplift model evaluation curves can result in suboptimal ordering
2) The proposed PUC provides a more balanced and unbiased evaluation compared to regular uplift and Qini curves
3) The proposed PTONet enhances a model's ability to rank CATEs effectively.
The authors provide sufficient evidence for all of these claims.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand, and the inclusion of both synthetic data and real data is appreciated.
Theoretical Claims: I examined the correctness of Proposition 4.1. Truthfully, it is a bit difficult to follow. The rough steps make sense, though it is not clear why we define the value function as the difference between the two bounds -- this is stated without justification in the proof. The proof of the second half of 4.1 is clearer.
Experimental Designs Or Analyses: The experimental design and analyses seem sound overall.
Supplementary Material: I reviewed the proofs and derivations in the supplementary material as well as the experimental details.
Relation To Broader Scientific Literature: The ability to accurately evaluate uplift models is critical across many fields, including marketing and advertising where personalized decision-making is key. The contributions in finding the weaknesses of existing evaluation metrics as well as the new proposed evaluation metric are hence quite important. The proposed PTONet also shows a new way to improve uplift modeling, which may be used in these downstream applications as well.
Essential References Not Discussed: I would appreciate more discussion on [1]. In the evaluation metrics considered in [1], every example will contribute to the TOC/AUTOC curve, which may mitigate some of found issues with most uplift modeling metrics.
[1] Yadlowsky, Steve, et al. "Evaluating treatment prioritization rules via rank-weighted average treatment effects." Journal of the American Statistical Association (2024): 1-14.
Other Strengths And Weaknesses: Strengths:
1. The authors find a really subtle yet interesting flaw in how most uplift evaluation metrics are evaluated
2. The clear examples of when an unbiased model is rated worse than a biased model in Table 2/3 is very compelling
3. The correlation between the ground-truth AUTGC and PUC in the experimental results is very convincing, and seems to support the theoretical findings for PUC
4. PTONet has strong performance in terms of PUC. The fact that the next best-performing model is the PU S-Learner lends more credence to the authors' findings regarding how to optimize the PUC.
5. The ablations of the proposed method are convincing in terms of showing the importance of every part of the objective
Weaknesses and Concerns:
1. The work does not mention the relationship to [1] (i.e., does TOC overcome many of the issues of past work?)
2. The organization of the proposed method is quite difficult to follow. Specifically, it is not clear how the loss in (11) is formulated. A more clear description of this would be useful as this is the crux of the proposed method.
3. The proof of Proposition 4.1 is not well formulated and very difficult to follow. Once again, as this is a major part of the proposed contribution, a more clear proof for this proposition would be appreciated.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. When proving Proposition 4.1, how does the difference between the two bounds prove the first half of Proposition 4.1? I would appreciate seeing the steps more clearly.
2. What is the exact intuition for why the loss (11) is useful? How does identifying g(t_i, y_i) from the estimated treatment effect help guide models to assign higher CATEs to persuadable individuals?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **References & Weaknesses 1:** I would appreciate more discussion on [1].
**Response 1:** Thank you for your concern. As far as we know, TOC/AUTOC can be understood as introducing a threshold $u$ and a logarithmic function to the conventional uplift and Qini curves, as shown in the following formula (from Equation (2.5) in [1]):
$\begin{aligned} & \operatorname{AUTOC}(S)=\mathbb{E}\left[\left(-\log \left(1-F_S\left(S\left(X_i\right)\right)\right)-1\right)\left(Y_i(1)-Y_i(0)\right)\right] \\ & \operatorname{QINI}(S)=\mathbb{E}\left[\left(F_S\left(S\left(X_i\right)\right)-\frac{1}{2}\right)\left(Y_i(1)-Y_i(0)\right)\right]\end{aligned}$
This indicates that the metric **places particular emphasis on the contribution of the top few individuals** (as shown in Figure 2 of [1], where, if only the top 10% of the population is considered, the overall gain from AUTOC is higher than that from QINI). In other words, this metric **amplifies the imbalance issue inherent in the uplift and Qini curves.** In contrast, the goal of our metric is the opposite—**we aim to address and mitigate this imbalance problem.**
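To make the weighting contrast concrete, here is a minimal numpy sketch of the two rank-weight functions from Equation (2.5) of [1]; the evenly spaced rank quantiles are a simplifying assumption for illustration:

```python
import numpy as np

# Rank-quantile positions F_S(S(X_i)) for n units, kept away from 1
# so the AUTOC weight -log(1 - q) - 1 stays finite.
n = 100
q = (np.arange(1, n + 1) - 0.5) / n

w_autoc = -np.log(1.0 - q) - 1.0   # AUTOC weight, Eq. (2.5) of [1]
w_qini = q - 0.5                   # Qini weight, Eq. (2.5) of [1]

# The top-ranked unit's AUTOC weight (~4.3) dwarfs its Qini weight
# (0.495), illustrating the top-of-ranking emphasis described above.
print(w_autoc[-1], w_qini[-1])
```

The logarithmic weight grows without bound as the quantile approaches 1, which is exactly the amplification of the top ranks that our metric is designed to counteract.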
**Weaknesses 2 & Questions 2:** What is the exact intuition for why the loss (11) is useful? How does identifying g(t_i, y_i) from the estimated treatment effect help guide models to assign higher CATEs to persuadable individuals?
**Response 2:** Thank you for your question. The intuition behind the loss function in equation (11) is derived from Table 1. Specifically, **the TP ($T=1, Y=1$) and CN ($T=0, Y=0$) samples should be ranked ahead of TN ($T=1, Y=0$) and CP ($T=0, Y=1$).**
To incorporate this constraint during training, we introduce $g(t_i, y_i)$ as a binary classification task label, where the labels for TP and CN samples are 1, and the labels for TN and CP samples are 0. Then, we use the estimated causal effect $\hat{\tau}(x)$ as the input to train this binary classification task. This approach effectively constrains the model such that **the estimated causal effects $\hat{\tau}(x)$ for TP and CN are as large as possible, while for TN and CP, $\hat{\tau}(x)$ should be as small as possible.** In this way, when the trained model is tested or during model selection, the model will rank TP and CN ahead of TN and CP based on $\hat{\tau}(x)$ in descending order. The persuadable group corresponds to the TP and CN samples, while the sleeping dog group corresponds to the TN and CP samples.
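A minimal numpy sketch of this intuition follows; the exact form of Eq. (11), including the network architecture and any additional weighting, is simplified away here:

```python
import numpy as np

def principled_uplift_loss(tau_hat, t, y):
    """BCE sketch of the constraint behind Eq. (11): label
    g(t, y) = 1 for TP (T=1,Y=1) and CN (T=0,Y=0) samples,
    g(t, y) = 0 for TN (T=1,Y=0) and CP (T=0,Y=1) samples,
    with sigma(tau_hat) as the predicted probability."""
    g = (t == y).astype(float)           # TP/CN -> 1, TN/CP -> 0
    p = 1.0 / (1.0 + np.exp(-tau_hat))   # sigma of the estimated CATE
    eps = 1e-12
    return -np.mean(g * np.log(p + eps) + (1 - g) * np.log(1 - p + eps))

t = np.array([1, 0, 1, 0])
y = np.array([1, 0, 0, 1])               # TP, CN, TN, CP
aligned = principled_uplift_loss(np.array([2.0, 2.0, -2.0, -2.0]), t, y)
flipped = principled_uplift_loss(np.array([-2.0, -2.0, 2.0, 2.0]), t, y)
print(aligned < flipped)  # → True: large tau_hat on TP/CN lowers the loss
```

As the example shows, the loss is minimized when $\hat{\tau}(x)$ is large for TP/CN samples and small for TN/CP samples, which is precisely the ranking constraint we want.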
**Claims&Weaknesses 3&Questions 1:** The proof of Proposition 4.1 is not well formulated and very difficult to follow. When proving Proposition 4.1, how does the difference between the two bounds prove the first half of Proposition 4.1?
**Response 3:** Thank you for your concern. Below, we will focus on explaining the meaning of these two bounds and why the difference between them is able to distinguish between the persuadable group and the sleeping dog group.
In appendix E, we define the number of persuadable individuals in total $k$ samples as $N^P(k)$ and the number of sleeping dog individuals in total $k$ samples as $N^S(k)$, then we have the two bounds that $N^P(k) \le R^T(D,k) + NR^C(D,k)$ and $N^S(k) \le R^C(D,k) + NR^T(D,k)$.
The first bound holds because the **persuadable group $(\tau(x) > 0)$ only includes the TP ($T=1, Y=1$) and CN ($T=0, Y=0$) groups.** Therefore, the number of individuals in the persuadable group, $N^P(k)$, will always be less than or equal to the sum of the number of TP individuals, $R^T(D,k)$, and CN individuals, $NR^C(D,k)$. **Similarly, the sleeping dog group only includes the TN ($T=1, Y=0$) and CP ($T=0, Y=1$) groups.** Thus, $N^S(k) \le R^C(D,k) + NR^T(D,k)$.
We aim for the evaluation metric PUC to correctly distinguish between the persuadable group and the sleeping dog group. This means we want **PUC to increase as the persuadable group grows, and decrease as the sleeping dog group increases.** Therefore, we define:
$V_{\operatorname{PUC}}(k,S) = R^T_{S}(D,k) + NR^C_{S}(D,k) - R^C_{S}(D,k) - NR^T_{S}(D,k).$
As we can see, the first two terms include all the individuals in the persuadable group, while the last two terms include all individuals in the sleeping dog group. **When the persuadable group is included in PUC, the value of PUC increases**; conversely, **when the sleeping dog group is included in PUC, the value of PUC decreases**. Thus, our metric can effectively distinguish between the persuadable group and the sleeping dog group.
We will include this explanation in Appendix E of the final version of the paper.
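To make the computation concrete, here is a raw-count numpy sketch of the value function $V_{\operatorname{PUC}}(k,S)$; any normalization used in the paper is omitted, and the example data are illustrative:

```python
import numpy as np

def puc_value(scores, t, y, k):
    """Raw-count sketch of V_PUC(k, S): among the top-k units ranked
    by score, count TP (T=1,Y=1) and CN (T=0,Y=0) units, minus the
    count of TN (T=1,Y=0) and CP (T=0,Y=1) units."""
    top = np.argsort(-scores)[:k]
    persuadable_like = (t[top] == y[top])   # TP or CN
    sleeping_like = (t[top] != y[top])      # TN or CP
    return int(persuadable_like.sum() - sleeping_like.sum())

t = np.array([1, 0, 1, 0])
y = np.array([1, 0, 0, 1])                  # TP, CN, TN, CP
good = puc_value(np.array([4.0, 3.0, 2.0, 1.0]), t, y, k=2)  # TP, CN first
bad = puc_value(np.array([1.0, 2.0, 3.0, 4.0]), t, y, k=2)   # CP, TN first
print(good, bad)  # → 2 -2
```

A score function that ranks the persuadable-like (TP/CN) units first attains a higher value than one that ranks the sleeping-dog-like (TN/CP) units first, matching the explanation above.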
Thank you once again for your valuable feedback. If you have any further concerns or questions, we are always happy to address them. If you feel that our responses have addressed your concerns, we would appreciate it if you could consider raising your recommendation score. | Summary: This paper proposes PTONet, a new uplift model that integrates the Principled Uplift Loss (PUL) to improve CATE ranking accuracy, outperforming existing models in experiments on simulated and real-world datasets.
Claims And Evidence: The paper effectively presents its claims and supports them with clear evidence.
Methods And Evaluation Criteria: Yes, the paper proposes a method to improve CATE ranking accuracy.
Theoretical Claims: The theoretical claims and their proof are correct.
Experimental Designs Or Analyses: The paper's experimental design appears methodologically sound.
Supplementary Material: I've reviewed the Appendix.
Relation To Broader Scientific Literature: The key contributions of this paper are positioned within the broader context of uplift modeling, which has been a subject of significant interest in domains like marketing, customer retention, and personalized treatment recommendations.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: Please refer to the above.
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback. If you have any additional concerns or questions, we would be happy to answer them. If you have no additional concerns, we would appreciate you considering increasing your recommendation score.
---
Rebuttal Comment 1.1:
Comment: I confirm that I have read the author's response to my review and will update my review in light of this response as necessary. | Summary: In an RCT with two groups, treatment and control, an uplift model is supposed to rank four types of units - treatment positive, treatment negative, control positive and control negative in alignment with their CATE (which is unobservable). This paper claims that existing evaluation metrics such as uplift curves and Qini curves are biased towards treating all negatives the same way, i.e., they ignore the potential that a control negative could potentially be a treatment positive and must be ranked at par with treatment positives. It then proposes a simple fix to the evalution metrics that leads to the proposed 'Principled Uplift Curve'. This idea is then also used to add an additional loss function for uplift modeling. Experimental results show that the proposed fix correlates best with true CATE rankings.
Claims And Evidence: I don't see enough clear explanation for the claim that equation 12 handles treatment assignment bias. While this is not the main point of the paper, given that it is included and important for the PTONet (the proposed uplift model), either it should be explained more or if it is from previous work, clear citations should be added.
Methods And Evaluation Criteria: The paper compares its fix against multiple uplift models and evaluation criteria present in the literature. A few specific issues: a) Why does the architecture in Figure 4 have the input from h(X,T) being added to g(T,Y)? It seems like in equation 11, the BCE only takes in $\sigma(x)$. b) The paper needs to address the text on treatment bias with some more explanation.
Theoretical Claims: The main theoretical claim that the paper makes is that the proposed uplift curve is sound in its ranking. This follows immediately from the proposed fix.
Experimental Designs Or Analyses: The experimental design is sound as per my understanding. A few issues a) I couldn't understand why the outcome in the synthetic data, in Appendix I is real-valued when the rest of the analysis is for a binary outcome. b) Also, the form of the functions don't seem to have a rationale mentioned in the text.
Supplementary Material: Yes, the supplementary material contains the related work section, proof of the theoretical claim and experimental details.
Relation To Broader Scientific Literature: This work challenges existing evaluation metrics in the uplift modeling literature. It draws specifically from Devriendt et. al. 2020 that uses helper functions to come up with loss functions for training uplift models.
Essential References Not Discussed: None that I know of.
Other Strengths And Weaknesses: Overall, the main contribution is to propose a fix to existing evaluation metrics in the uplift modeling literature. This is certainly important and impactful. The paper is written clearly overall. There are some parts which need more work that I specify later. My overall impression was that the proposed fix was 'obvious' and what anyone should do in the first place. On one hand, the presentation of the problem such that the solution is obvious, is a strength of the paper. However, in this case, I find that the paper lacks any further insight apart from this fix.
Other Comments Or Suggestions: 1) Section 2.2 has some notation that is not right. I(k) is assumed to be an ordered index but it is not made clear what the order is when the index i ranges from 1 to I(k). Is I_diff same as I?
2) SUC in line 124 occurs before it is defined below.
3) The plot in Figure 5 needs some more explanation about what the shaded region is and what the lines are.
Questions For Authors: I have mentioned most questions in the previous sections. Regarding the results, even the S-Learner with the loss function fix seems to be competitive with PTONet for the fixed evaluation metrics. It would interesting to see if this holds for other uplift models too.
### Update after rebuttal
I am satisfied with the responses that the authors provided. On the one hand, the experiments that they performed underscore an important point that existing learners with the loss function fix seem to be competitive with their proposed learner. However, on the other hand I feel it takes away from the utility of PTONet. So, I'll keep my score unchanged.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback. We will address each of your concerns one by one.
**Claims&Methods:** ... explanation for the claim that equation 12 handles treatment assignment bias. ... clear citations should be added.
**Response 1:** Thank you for your suggestion. We will revise the citation in line 315 of the original text to **"(please refer to Section 3 in Shi et al. (2019))"** for better readability.
Treatment Assignment Bias occurs when treatment assignment is influenced by systematic factors rather than being entirely random, potentially leading to biased results. Since this paper focuses on RCT data, where treatment is assumed to be random, Treatment Assignment Bias is not a concern. The **Targeted Regularizer** in PTONet improves scalability to non-RCT data, enhancing its applicability in industry and future research.
**Methods:** a) Why does the architecture in Figure 4 have the input from h(X,T) being added to g(T,Y).
**Response 2:** Thank you for your question. The function $\sigma(x)$ can be derived as:
$\sigma(x) = \frac{1}{1+\exp(-\hat\tau(x))} = \frac{1}{1+\exp(-(h_Y(x,1)-h_Y(x,0)))},$
The function $h_Y$ corresponds to the arrow in Figure 4. To avoid this ambiguity, we will modify the arrow in Figure 4 from **$h(X,T)\rightarrow g(T,Y)$** to **$h_Y(X,T)\rightarrow g(T,Y)$**.
**Analysis:** a) why the outcome in the synthetic data is real-valued. b) the form of the functions don't seem to have a rationale.
**Response 3:** Apologies for any confusion in your reading. We forgot to emphasize in Appendix I that the final observed outcome is generated as follows:
$Y_i = T_i \mathbb{I}(\tau_i > 0) + (1 - T_i) \mathbb{I}(\tau_i < 0) + \epsilon_i^y \mathbb{I}(\tau_i = 0) $
where $\epsilon_i^y \sim \operatorname{Binomial}(1, 0.5)$. The first term represents the observed outcome for the **persuadable** group, the second term corresponds to the **sleeping dogs**, and the third term accounts for the observed outcome of **sure things** and **lost causes**.
The rationale behind this data generation process is as follows:
$T_i$ is designed to simulate the real-world scenario in our business data, where **the number of treated samples is significantly smaller** than the number of control samples.
In outcome functions, the sine and cosine functions are introduced to incorporate nonlinearity, while the different coefficients are used to adjust the proportion of samples with $\tau(x) > 0$. This adjustment helps simulate our real-world business scenario where **positive outcomes are relatively rare.**
We will include these details in Appendix I of the final version of the paper.
**Comments 1:** what the order is when the index i ranges from 1 to I(k).
**Response 4:** Thank you very much for your comments. **The ordered index used in $I(k)$ is based on the descending order of the score function $S$.** Due to the character limit of the response, please refer to Appendix C of the paper, where we provide a detailed explanation of the calculation process for $I(k)$.
**Comments 1&2:** Is I_diff same as I? SUC in line 124 occurs before it is defined below.
**Response 5:** Thank you for your correction. The term $I_{diff}$ is a typo and will be revised to $I$ in the final version. Similarly, "SUC" in line 124 will be corrected to "uplift and Qini curve."
**Questions:** It would be interesting to see if this holds for other uplift models too.
**Response 6:** Thank you for your question. We have additionally included experiments with **T-Learner, TARNet, and EUEN models**. The results are as follows:
| Synthetic | PEHE (↓) | SUC (↑) | SQC (↑) | JUC (↑) | JQC (↑) | PUC (↑) | AUTGC (↑) |
| -------------- | ------------ | ------------------- | ----------------- | ---------------- | -------------- | --------------------- | ------------ |
| T-Learner (PU) | 0.867 ± 0.14 | 0.763 ± 0.17 | 0.536 ± 0.12 | 0.748 ± 0.15 | 0.537 ± 0.11 | 0.937 ± 0.20 | 0.952 ± 0.15 |
| TARNet (PU) | 0.893 ± 0.08 | 0.759 ± 0.12 | 0.533 ± 0.09 | 0.754 ± 0.11 | 0.534 ± 0.08 | 0.944 ± 0.14 | 0.957 ± 0.11 |
| EUEN (PU) | 0.781 ± 0.15 | 0.767 ± 0.16 | 0.538 ± 0.11 | 0.742 ± 0.15 | 0.538 ± 0.11 | 0.932 ± 0.19 | 0.948 ± 0.15 |
| PTONet | 0.883 ± 0.13 | 0.780 ± 0.14 | 0.547 ± 0.10 | 0.746 ± 0.13 | 0.546 ± 0.10 | 0.948 ± 0.15 | 0.961 ± 0.11 |
The performance of these three **models after incorporating the Principled Uplift loss function is comparable to that of PTONet**, with significant improvement observed across all metrics. We will add these experiments in the final version of this paper.
Thank you once again for your valuable feedback. If you have any further concerns or questions, we are always happy to address them. If you feel that our responses have addressed your concerns, we would appreciate it if you could consider raising your recommendation score. | Summary: This paper reveals the limitations of previous uplift and Qini curves in evaluating uplift models, demonstrating their susceptibility to manipulation by suboptimal ranking strategies that can artificially enhance the performance of biased models. To address this, the authors introduce the Principled Uplift Curve (PUC), a metric that accounts for both positive and negative outcomes, offering a new assessment of uplift models. Additionally, they propose PTONet, a PUC-guided uplift model that optimizes uplift predictions by directly maximizing the PUC value.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem or application at hand.
Theoretical Claims: I did not thoroughly check each proof. However, they do not contradict my prior understanding.
Experimental Designs Or Analyses: The paper evaluates the performance of uplift models and their corresponding evaluation metrics using both synthetic and real-world datasets. It conducts experiments on a synthetic dataset and the Criteo dataset (Diemert Eustache et al., 2018; Diemert et al., 2021) to assess model effectiveness in practical scenarios. Additionally, to examine the scalability of the proposed method in high-dimensional settings, the paper presents further experimental results on the Lazada dataset (Zhong et al., 2022).
Supplementary Material: All.
Relation To Broader Scientific Literature: The paper contributes to the broader literature on uplift modeling methods and evaluation metrics for uplift models. It examines limitations in existing evaluation approaches, particularly the susceptibility of uplift and Qini curves to biased rankings. By introducing the Principled Uplift Curve (PUC) and the PTONet model, the paper offers a refined evaluation metric and an optimization-based modeling approach, adding to the ongoing research on uplift modeling and causal inference.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper should provide a stronger justification for the proposed methods and explain why they outperform existing metrics. A key concern is that the approach does not improve the worst-case scenario, where ST^TP and ST^CP are ranked first. In many real-world applications, such as advertising and recommendation systems with budget constraints, this ranking can lead to significant opportunity costs. In contrast, conventional metrics at least ensure that PE^{TP} is ranked no lower than second place. While the proposed method distinguishes persuadable individuals from sleeping dogs, it also introduces a tradeoff in opportunity cost and does not guarantee improved decision-making performance in practical applications.
It is not surprising that the proposed PTONet achieves the highest PUC and AUTGC, as it is specifically designed based on PUC. However, it does not outperform other methods on alternative evaluation metrics.
Other Comments Or Suggestions: No.
Questions For Authors: Please see the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your feedback. We will address each of your concerns one by one.
**Weak 1:** A key concern is that the approach does not improve the worst-case scenario, ... conventional metrics at least ensure that PE^{TP} is ranked no lower than second place. ..., it also introduces a tradeoff in opportunity cost and does not guarantee improved decision-making performance in practical applications.
**Response 1:** Thank you for your concern. As stated in the original manuscript, although PUC is not a perfect metric—it cannot fully identify the four groups—**it is more suitable than conventional metrics for evaluating uplift models.** When the budget is limited, in the worst-case scenario, the PUC metric **at least ensures less harmful decisions** compared to conventional metrics. In the best-case scenario, PUC guarantees **both less harmful decisions and the highest possible gains**. Therefore, our metric outperforms conventional metrics.
Specifically, regarding 'conventional metrics ensuring that $PE^{TP}$ is ranked no lower than second place', it seems to overlook **the severe issue where $PE^{CN}$ is ranked behind $SD^{TN}$.** This means that using regular metrics for causal ranking in an uplift model forces decision-makers to target all potential customers ($PE^{TP}$ and $PE^{CN}$) at the expense of customers who would have originally clicked or purchased ($SD^{TN}$). Such a strategy should be avoided in practice as **it harms a large group of customers.**
Even with a budget that covers only $PE^{TP}$ and not $SD^{TN}$, the regular metrics-guided model **loses a large number of potential customers, $PE^{CN}$**. This misstep can mislead decision-makers into believing there are no further growth opportunities when, in fact, potential customers remain untapped. (Please refer to Figure 2 in the original paper; in the worst-case scenario, samples from **3. $SD^{TN}$ to 6. $PE^{CN}$** are considered neither beneficial nor harmful under conventional metrics.)
In contrast, the PUC metric does not exhibit this issue **in the worst-ranking case**. If decision-makers realize that a small-budget promotion won't yield incremental returns, they can continue to expand the budget until the PUC slows down or the promotion is scaled back, **without harming customer interests or missing potential customers**. (Refer to Figure 3 in the original paper; decision-makers can clearly identify that only the groups from **1. $ST^{TP}$** to **4. $PE^{CN}$** are yielding benefits.)
**Most importantly, in the best ranking case, PUC guided models can achieve the highest-gain decisions with the minimal budget, while conventional metrics cannot.**
Therefore, the **PUC metric should be used** over regular metrics to select an uplift model that accurately targets potential customers without alienating existing customers who are willing to purchase. **Future work can improve upon the limitation of PUC's inability to identify the four groups, but regular curves should no longer be used for uplift model evaluation.**
Finally, we appreciate your suggestion of the application scenarios in advertising and recommendation.
**Weak 2:** It is not surprising that the proposed PTONet achieves the highest PUC and AUTGC, as it is specifically designed based on PUC. However, it does not outperform other methods on alternative evaluation metrics.
**Response 2:** Thank you for your concern. Our simulated data is not specifically designed for PUC; our data generation process is simple and easy to follow:
$T_i \sim \operatorname{Binomial}(1,0.1)$ is designed to simulate the real-world scenario in our business data, where **the number of treated samples is significantly smaller** than the number of control samples. This also ensures differentiation from the Criteo and Lazada datasets.
The outcome functions are defined as:
$Y_i(0) = 0.5\sin\left(\sum_{j=1}^{q} X_i^j + 1\right) + \epsilon^0_i $
$Y_i(1) = 0.1\left(\sum_{j=1}^{5} \cos(X_i^j) + 2\right) + \epsilon^1_i $
Here, the sine and cosine functions are introduced to incorporate nonlinearity, while the different coefficients are used to adjust the proportion of samples with $\tau(x) > 0$. This adjustment helps simulate our real-world business scenario where **positive outcomes are relatively rare.** For details on the proportion of the treated group and the positive outcome rate, please refer to Table 10 in the original paper.
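A minimal sketch of this generation process follows. The covariate distribution, dimension $q$, and noise scales are assumptions for illustration, as they are not restated in this response:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 100_000, 10                      # sample size and covariate dimension (assumed)

X = rng.uniform(-1, 1, size=(n, q))     # covariate distribution (assumed)
T = rng.binomial(1, 0.1, size=n)        # treated samples are rare (~10% of the data)
eps0 = rng.normal(0, 0.05, size=n)      # noise scales (assumed)
eps1 = rng.normal(0, 0.05, size=n)

Y0 = 0.5 * np.sin(X.sum(axis=1) + 1) + eps0            # control outcome Y_i(0)
Y1 = 0.1 * (np.cos(X[:, :5]).sum(axis=1) + 2) + eps1   # treated outcome Y_i(1)
Y = np.where(T == 1, Y1, Y0)            # observed (factual) outcome
```

The `np.where` step keeps only the factual outcome, matching the standard potential-outcomes setup where the counterfactual is unobserved.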
PTONet performs suboptimally with regular metrics but achieves the best performance with the PUC metric, which further **validates the issue of bias in regular metrics highlighted in this paper, and confirms that the PU loss function can directly improve the model's performance on the PUC metric.**
Thank you once again for your valuable feedback. If you have any further concerns or questions, we are always happy to address them. If you feel that our responses have addressed your concerns, we would appreciate it if you could consider raising your recommendation score. | Summary: This paper proposes a new evaluation metric, the Principled Uplift Curve (PUC), which assigns equal importance to individuals with positive and negative outcomes and offers an unbiased evaluation of uplift models. The authors derive a new loss function with a new model architecture to reduce bias during
uplift model training.
Claims And Evidence: The paper claims that the traditional uplift and Qini curves might lead to biased evaluations and proposes the Principled Uplift Curve (PUC) and compares it with other evaluation metrics by their correlation with AUTGC.
The proposed method is demonstrated to outperform the existing method with both synthetic and real-world datasets.
Methods And Evaluation Criteria: The proposed architecture is evaluated in multiple real-world datasets benchmarks and evaluation metrics.
Theoretical Claims: I checked the derivation of individual contributions in Appendix D and did not find any issue with it.
Experimental Designs Or Analyses: I checked the experiment for the proposed model and evaluation metrics.
The model is shown to outperform the existing models in the proposed evaluation metric.
The proposed metric is shown to be more reliable with synthetic data. I think there could be more discussion and careful experiments on this.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: Evaluation is a challenging problem in this area. This paper proposes a new evaluation metric and shows it is more reliable than the existing ones, which could be a nice contribution to uplift modeling.
Essential References Not Discussed: I'm not aware of any essential references that are not discussed.
Other Strengths And Weaknesses: Pros:
- The paper tackles a challenging problem in uplift modeling and proposes a method to improve the modeling and evaluation.
- The proposed method is evaluated with many real-world benchmarks
Cons:
The proposed evaluation metric could be discussed in more detail since it is an essential part of the proposed method.
Other Comments Or Suggestions: NA
Questions For Authors: - Could you give some intuition about the discussion in Section 3? What does the max curve indicate in Fig. 3?
- Is the experiment for the correlation between AUUQC and AUTGC as described in Appendix I? Do the results still hold for other data distribution?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback. We will address your concerns and questions one by one.
**Cons:** The proposed evaluation metric could be discussed in more detail since it is an essential part of the proposed method.
**Response 1:** Thank you for your concern. Based on your suggestion, we provide additional discussion on the evaluation metric as follows:
- **Providing intuition behind** $S_{Max}(D_i)$:
$S_{Max}(D_i)$ represents the maximum PUC value, which is achieved when all samples with $y = 1$ and $t = 1$, as well as those with $y = 0$ and $t = 0$, are ranked ahead of samples with $y = 1$ and $t = 0$, as well as those with $y = 0$ and $t = 1$.
- **Repositioning the intuition behind the PUC metric (Proposition 4.1):**
We will move the first paragraph of explanation after Proposition 4.1 to the paragraph after formula (9) to improve readability.
- **Clarifying the intuition behind** $g(t_i, y_i)$:
We will clarify that $ g(t_i, y_i) $ assigns a value of 1 to samples with $ y = 1 $ and $ t = 1 $, as well as those with $ y = 0 $ and $t = 0$, while samples with $y = 1$ and $t = 0$, as well as those with $y = 0$ and $t = 1$, are assigned a value of 0.
Using $g(t_i, y_i)$ as the label, we train a binary classifier to constrain $\hat{\tau}(x)$, ensuring that samples with $y = 1$ and $t = 1$, as well as those with $y = 0$ and $t = 0$, have a larger $\hat{\tau}(x)$, whereas samples with $y = 1$ and $t = 0$, as well as those with $y = 0$ and $t = 1$, have a smaller $\hat{\tau}(x)$.
- **Providing intuition behind** $L^{PU}(D)$:
This loss function encourages the CATE of samples with $y = 1$ and $t = 1$, as well as those with $y = 0$ and $t = 0$, to be greater than the CATE of samples with $y = 1$ and $t = 0$, as well as those with $y = 0$ and $t = 1$.
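As a concrete illustration (a hypothetical helper, not the authors' code), the label assignment $g(t_i, y_i)$ described above reduces to checking whether $t$ and $y$ agree:

```python
import numpy as np

def g_label(t, y):
    """Assigns 1 to (t=1, y=1) and (t=0, y=0) samples, 0 otherwise."""
    return (np.asarray(t) == np.asarray(y)).astype(int)

labels = g_label(t=[1, 1, 0, 0], y=[1, 0, 1, 0])  # -> [1, 0, 0, 1]
```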
We appreciate your feedback and will incorporate these intuitions in the final version of our paper.
**Question 1:** Could you give some intuition about the discussion in Section 3? What does the max curve indicate in Fig. 3?
**Response 2:** Thank you for your question. The **intuition** behind the discussion in Section 3 is that we aimed to verify **whether SUC and other regular metrics reach their maximum values only when the causal effect ranking is completely accurate**. If this were the case, then SUC would be a reliable metric. However, we found that **even when the causal effect ranking is entirely correct, SUC does not always attain its maximum value.** On the contrary, certain **incorrect causal effect rankings can lead to SUC achieving its highest score** (refer to Tables 2 and 3). This observation led us to further investigate SUC and related formulas, ultimately inspiring this paper.
In Figure 3, the max curve corresponds to $S_{Max}(D_i)$ in Equation (8). We have supplemented its underlying intuition in Response 1.
**Question 2:** Is the experiment for the correlation between AUUQC and AUTGC as described in Appendix I? Do the results still hold for other data distribution?
**Response 3:** Thank you for your question. Yes, the experimental setup in this paper is described in Appendix I. **The results still hold for other data distributions, as long as the data is RCT data.**
Thank you once again for your valuable feedback. If you have any further concerns or questions, we are always happy to address them. If you feel that our responses have addressed your concerns, we would appreciate it if you could consider raising your recommendation score. | null | null | null | null |
Extracting Rare Dependence Patterns via Adaptive Sample Reweighting | Accept (poster) | Summary: This paper tackles independence testing, when there is rare dependence. Rare dependence is defined as the case when most of the data points exhibit independent behaviour between two variables, but a subset exhibits dependence. The authors propose to solve this problem by augmenting the dataset with weights that are a function of some reference variable, that reweight the samples by maximising the HSIC test statistic for dependence (maximising a measure of dependence). The authors derive the null distribution, show that it is too complex and calculate the required quantile using a permutation test. A conditional independence test version is also introduced. Experiments show that the tests perform better than baselines.
## update after rebuttal
On reading the response and the other reviews, I will keep my score.
Claims And Evidence: The only claim that is not thoroughly tested is the fact that rare dependence is actually an issue in reality. The authors also test their causal discovery method in the appendix, however, I would also like to see application of their method on a consensus benchmark (one that is not constructed to exhibit rare dependence), to see what is lost in this case.
Methods And Evaluation Criteria: The authors test only on examples where rare dependence is synthetically created. Some real world example of this would be beneficial. Furthermore, it would be interesting to see what is lost compared to baselines if the data does not actually exhibit rare dependence.
Theoretical Claims: I can't seem to find the proof for proposition 3.6. The proofs in the Appendix are also numbered differently than the main paper, this makes finding the exact proof for a claim a little difficult. Furthermore, it might make sense to write (at the start of the section or where the proof is written), exactly where this is proven.
The rest of the proofs seem correct.
Experimental Designs Or Analyses: The experiments seem sound (see above for some issues).
Supplementary Material: Yes.
Relation To Broader Scientific Literature: I have not seen the issue of rare dependence tackled in this way.
Essential References Not Discussed: Up to my knowledge the required references are discussed in Appendix B.
Other Strengths And Weaknesses: Strengths:
- The paper is tacking an issue that has not been tackled before.
- The solution is sound with experiments showing improvement over the baselines.
Weaknesses:
- The paper does not convincingly argue that rare dependence is a problem in practice.
- The splitting into test and train may lead to a loss of performance compared to other baselines. This has not been explored properly.
Other Comments Or Suggestions: It is not quite clear to me how to select the reference variable C. From the motivating examples, it seemed to me that it has to be either X or Y, but the experiments seem to suggest that it can be a third variable as well. How is this variable chosen in practice? What are the effects of choosing one variable over another given a certain causal graph?
Questions For Authors: - Figure 3 is not clear at all, n1 and n2 are not defined, the text around the figure is also confusing. I can see what point is being made, but unsure how the figure is helping here.
- Eq 1, why do you want the expected $\beta(C)$ to be equal to 1? This should be explained in the text.
- L169 LHS, it might make sense to say that the test statistic is being maximised (instead of optimised).
- KCIT and RKCIT are not formally defined in Section 4.
- I'm unsure why your rules are defined in terms of KCIT and RKCIT both? The relation and the compromises between KCIT and RKCIT should be explained properly here.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and constructive feedback. Below we address all raised points grouped by topic. All figures and tables are available at https://tinyurl.com/mwafx6kh.
- **Rare dependence in reality** (Claims & Methods & W1)
We clarify that we have evaluated our method on a real-world dataset (Sachs et al., 2005) in Sec. 5.2. We agree that real-world support is important and include additional real-world experiments (see first & second responses to Reviewer MSVk).
- **Consensus datasets** (Claims & Methods)
We tested our method on three standard dependence benchmarks: Data Generation III (DG III) in the Appendix with $\tau=1$, i.e. fully dependent; Sinusoid in (Ren et al., 2024); and ISA in (Gretton et al., 2007). As shown in the table, RHSIC slightly underperforms full HSIC due to the data splitting used to learn the reweighting function (split ratio = 0.5), but outperforms HSIC run on half of the data.
In practice, when we are unsure whether rare dependence exists, it is preferable to use HSIC first, and then apply RHSIC if HSIC fails.
|Data Generation|RHSIC Type I ↓|RHSIC Power ↑|HSIC Type I ↓|HSIC Power ↑|HSIC(n/2) Type I ↓|HSIC(n/2) Power ↑|
|-|-|-|-|-|-|-|
|DG III|0.03|0.99|0.04|1|0.04|0.67|
|Sinusoid|0.04|1|0.05|1|0.02|1|
|ISA|0.09|0.16|0.1|0.26|0.1|0.15|
- **Test-train splitting** (W2)
We agree that splitting for reweighting may cause a performance drop compared to full-data baselines, as shown above. However, RHSIC significantly outperforms them when rare dependence exists, suggesting the splitting loss is limited and outweighed by the gain in detecting rare dependencies.
To examine this effect, we evaluate different train/test split ratios on DG III with $\tau=0.1$. A 0.5/0.5 ratio generally performs well, consistent with prior work [1,2,3].
|Tr:Te|Type I ↓|Power ↑|
|-|-|-|
|7:3|0.07|0.68|
|5:5|0.05|0.8|
|3:7|0.01|0.74|
|HSIC|0.04|0.17|
- **Reference variable C selection** (Comments)
When performing unconditional or conditional independence (UI/CI) tests where only X and Y are observed, we take C = X or C = Y. Once more information is available, e.g., additional observed variables as in causal discovery, C can be a third variable that leverages such information.
Take DG III with $\tau=0.1$ as an example: in the causal graph X<--$\epsilon_b$-->Y<--Q, we observe X, Y, Q and want to test X $\perp$ Y. Proposition C.2 shows that C=X, Y, or Q are all valid here (i.e., they do not introduce spurious dependence). RHSIC with C=Q performs best since Q directly controls the dependence between X and Y, while C=Y, a child of Q, incurs some power loss, though still acceptable. C=X is ineffective since X $\perp$ Q.
|Method|Type I ↓|Power ↑|
|-|-|-|
|RHSIC (C=Q)|0.05|0.8|
|RHSIC (C=Y)|0.1|0.57|
|RHSIC (C=X)|0.01|0.06|
|HSIC|0.04|0.17|
In practice, if only X, Y are available, we recommend testing with both C=X and C=Y and selecting the lower p-value.
- **Proof for Prop. 3.6** (Theoretical)
We are sorry for omitting the proof for Proposition 3.6 in the submission, and we include it below. The proof uses the independence preservation under measurable transformations.
*Proof.* Since the data are i.i.d., $D_{tr} \perp D_{te}$. As $\hat{\beta} = f(D_{tr})$ is a measurable function of $D_{tr}$, it follows that $\hat{\beta} \perp D_{te}$ (Thm 4.3.5 in [4]). $\square$
We also agree that the appendix numbering could be improved, and we will revise it to clearly indicate where each proposition is proven.
- (Q1) We have updated Fig. 3 (see the link above) and will revise the surrounding text for clarity. We use it to visualize that the original HSIC may fail to detect dependence in rare dependence settings, even if we have infinite samples.
- (Q2) We must ensure that the reweighted p.d.f., $\tilde{\mathbb{P}}(X,Y)=\beta(C)\mathbb{P}(X,Y)$, remains a well-defined density, i.e. $\int_{\mathcal{X\times Y}}\tilde{\mathbb{P}}(X,Y)\,dX\,dY=1$, which gives us $\mathbb{E}[\beta(C)]=1$. We will clarify this in our manuscript.
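Empirically, this constraint is easy to enforce by rescaling the learned weights so their sample mean is 1. A minimal sketch, where `raw_w` stands in for the unnormalized outputs of a learned reweighting function:

```python
import numpy as np

def normalize_weights(raw_w):
    """Rescale nonnegative weights so their empirical mean is 1,
    mirroring the population constraint E[beta(C)] = 1."""
    raw_w = np.clip(np.asarray(raw_w, dtype=float), 0.0, None)
    return raw_w / raw_w.mean()

w = normalize_weights([0.2, 0.1, 3.0, 0.7])  # mean(w) == 1.0
```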
- (Q3&4) Thank you for your careful reading. We will correct the terminology in L169 and formally define KCIT and RKCIT in Section 4.
- (Q5) Thanks for your question. In our rules, we employ KCIT to detect non-rare dependencies and RKCIT to detect rare ones. While RKCIT is capable of identifying non-rare dependencies as well, its reliance on data splitting may introduce unnecessary statistical inefficiency. Therefore, we prefer KCIT in such cases. Assumption 4.1 guarantees the reliability of KCIT when it rejects the null hypothesis. However, if KCIT fails to reject, this could be due to the presence of a rare dependence, in which case RKCIT serves as a complementary test.
We sincerely appreciate your thoughtful feedback and recognition of our work. Thank you for your time.
[1] Interpretable distribution features with maximum testing power. NeurIPS 2016.
[2] An adaptive test of independence with analytic kernel embeddings. ICML 2017.
[3] Learning adaptive kernels for statistical independence tests. AISTATS 2024.
[4] Casella & Berger, Statistical Inference. | Summary: This paper considers the discovery of the dependence pattern in a specific small region, coined "rare dependence". The authors proposed a reweighting (importance sampling) -based approach and presented several statistical properties. They also demonstrated several applications, e.g., causal discovery, on synthetic and real-world datasets.
## update after rebuttal
I appreciate the author addressing some of my concerns, and I will keep my score.
Claims And Evidence: The theoretical claims are clear in general. However, it is unclear how the kernel choice might affect these claims. In particular, the value of HSIC is subject to the kernel choice, which should significantly impact the results, e.g., the independence test.
Methods And Evaluation Criteria: The proposed methods and evaluations are reasonable.
Theoretical Claims: The results are mostly intuitive despite the heavy mathematical notations. The proofs seem reasonable, though I did not check them line by line.
Experimental Designs Or Analyses: The experiment designs are reasonable.
Supplementary Material: The appendix contains detailed proofs and details of experiments, which are helpful for readers.
Only functions are provided in the code (zip file).
Relation To Broader Scientific Literature: The proposed approach could potentially help improve the performance of causal discovery, which has broader applications in scientific domains.
Essential References Not Discussed: Recent works on dependence and conditional dependence learning have extended Rényi's maximal correlation to general analyses of dependence and conditional dependence, e.g.,
[XZ2024] Xu, Xiangxiang, and Lizhong Zheng. "Neural feature learning in function space." Journal of Machine Learning Research 25.142 (2024): 1-76.
For example, the covariance ($\Sigma$) -based criterion also appeared in analyzing maximal correlation [XZ2024]. The construction of $\ddot Y$ (cf. Lemma 3.10 of the manuscript) also appeared in [XZ2024] without the kernel assumptions. It could be interesting to compare these analyses and approaches.
Other Strengths And Weaknesses: Strength: The theoretical claims are sound.
Weaknesses: see questions.
Other Comments Or Suggestions: The presentation can be improved. In general, the notations can be so heavy that key ideas are deeply buried in the equations.
Examples:
1. The characteristic kernel was not defined in its first appearance (end of Sec. 2);
2. The notation $\mathcal{H}$ was used to indicate both hypothesis and the RKHS.
Questions For Authors: Q1: How do kernel choices affect the results? In practice, how should we select the kernel (e.g., to compute the HSIC claimed by Examples 1.1 and 3.1)?
Q2: Why use two different examples (1.1 and 3.1)?
Q3: Section 3.2, first line: "two disjoint sets of variables": is X a set of random variables or a single random variable? Similarly, the definition of C as a "subset of X or Y" is problematic. Note that this definition of sets does not apply to examples 1.1. and 3.1, and from later contexts, X and Y seem to be random variables instead of sets.
Q4: Can the designed reweighting be extended to a more general case? Note that in the manuscript, it was assumed that the rare dependence happens in a rectangular region. (equivalently, in Eq. (1), C can only be a "subset" of X or Y, not arbitrary shapes.) This might not necessarily be the case when considering nonlinear dependence.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's constructive comments and helpful feedback. Please see below for our response. All figures and tables are available at https://tinyurl.com/mwafx6kh.
- **On kernel choice** (Comments & Q1)
Indeed, RHSIC/RKCIT performance depends on kernel choice. Our theoretical claims require a characteristic kernel. In practice (e.g., Examples 1.1 and 3.1), we use the Gaussian kernel with median heuristic width, as in standard HSIC.
We conducted additional experiments comparing kernels and found that Gaussian consistently performs best. The poor performance of the polynomial kernel is expected, as it is not characteristic. Laplace is characteristic but has heavier tails than Gaussian; its performance drop suggests that lighter-tailed kernels may be preferable.
|Method (Kernel)|Type I ↓|Power ↑|
|-|-|-|
|RHSIC (Gaussian)|0.05|0.8|
|RHSIC (Laplace)|0.08|0.6|
|RHSIC (Polynomial)|0.04|0.42|
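For reference, a minimal sketch of the (biased) HSIC statistic with the Gaussian median-heuristic kernel on which these variants build. The RHSIC reweighting step is omitted, and using the median of squared distances as the bandwidth squared is one common form of the heuristic:

```python
import numpy as np

def gaussian_kernel(x):
    """Gaussian kernel matrix with a median-heuristic bandwidth (1-D input)."""
    d2 = (x[:, None] - x[None, :]) ** 2   # pairwise squared distances
    sigma2 = np.median(d2[d2 > 0])        # bandwidth^2 via the median heuristic
    return np.exp(-d2 / (2 * sigma2))

def hsic(x, y):
    """Biased HSIC estimator: trace(K H L H) / n^2, H the centering matrix."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gaussian_kernel(x), gaussian_kernel(y)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x, z = rng.normal(size=200), rng.normal(size=200)
# the dependent pair (x, x) yields a larger statistic than the independent pair (x, z)
```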
- **On related work** (References)
We thank the reviewer for pointing out this related work. While our goals differ — ours focus on hypothesis testing, theirs on representation learning — both leverage dependence-maximization criteria to learn meaningful functions. We will cite and discuss [XZ2024] in the manuscript.
[XZ2024] proposes a feature learning framework maximizing dependence with the target using an extension of Rényi maximal correlation (RMC) in $L^2$ space, with functions parameterized via neural networks. In contrast, we focus on detecting rare dependence using HSIC/KCI, widely adopted in independence testing, and learn a reweighting function to assign sample-wise importance. Theoretically, RMC provides a general measure of nonlinear dependence via optimal function pairs in $L^2$. Modal decomposition captures the full spectrum of the cross-covariance operator, with the leading mode corresponding to RMC. Restricting to RKHS and aggregating all squared singular values yields HSIC. While RMC is more general, its estimation and statistical inference in $L^2$ with neural networks might be more challenging. In contrast, our methods inherit the kernel-based formulation of HSIC, allowing efficient estimation by kernel trick and asymptotic distribution analysis with statistical guarantees.
- (Q2) Examples 1.1 and 3.1 illustrate two types of rare dependence and show a shared solution. Example 3.1 is *local*, with $X \perp Y$ holding in a subregion. Example 1.1 shows *global but weak* dependence, where the signal exists throughout but is largely buried by noise. Both of them motivate our method, which adaptively reweights samples to better reveal underlying dependence patterns.
- (Q3) We thank the reviewer for pointing this out. We acknowledge the inconsistency in notation: X, Y are introduced as random variables in Sec. 2 but later referred to as variable sets. We will unify the notation to "random variables or sets of variables". Most examples in our paper treat X and Y as variables for simplicity and illustration purpose. The phrase “C is a subset of X or Y” assumes that X and Y are sets of variables; when they are individual random variables, C is either X or Y. We agree this was ambiguous and will rephrase accordingly.
- (Q4) We thank the reviewer for this question. Our reweighting framework is general and does not assume rare dependence lies in a rectangular region. In nonlinear Example 1.1, axis-aligned boxes are purely for visualization; they are not a limitation of the method. The nonlinearity is captured by the reweighting function $\beta(C)$, which is flexible and nonlinear. It assigns higher weights to samples more likely to exhibit dependence, based on their values of C. This allows the method to adapt to complex dependence structures - as long as some signal is present along the reference variable C. Hence, our method naturally handles nonlinear rare dependence.
We say that C can be either X or Y, but not both. As mentioned in Proposition 3.3, C being either X or Y preserves the independence between them and avoids introducing spurious dependence. However, using $C=(X,Y)$ in Eq. (15) makes the reweighted distribution $\tilde{\mathbb{P}}_{XY}=\beta(X,Y)\mathbb{P}_X\mathbb{P}_Y$ no longer factorizable in X and Y. This means the reweighed distribution changes the original independence relation. We will revise the manuscript to clarify these points.
- **On notation and presentation** (Comments)
We appreciate the reviewer's comments. We will define "characteristic kernel" upon its first mention (end of Sec. 2), and revise ambiguous notation — in particular, replacing $\mathcal{H}$ with $\mathcal{F}$, denoting RKHSs of X, Y, Z as $\mathcal{F}_X, \mathcal{F}_Y, \mathcal{F}_Z$. We will also provide intuitive explanations and examples for each theorem or proposition when it appears, and streamline the notation for clarity.
We sincerely thank the reviewer for your recognition and your valuable feedback that helps improve the presentation of our submission. | Summary: Existing conditional independence testing methods suffer to detect dependencies that occur in a small regions of the data which is referred to as rare dependence. This work aims to resolve this issue by proposing a kernel-based independence testing with an importance reweighting, which assigns higher weight to data point that involves such rare dependencies. Theoretical analysis and experimental results demonstrate the validity and effectiveness of the proposed method.
Claims And Evidence: The paper proposes a (conditional) independence tests for detecting rare dependencies and provides a theoretical guarantee regarding bound and asymptotic property. The effectiveness of the method is evaluated with two synthetic data generating process and one real-world dataset.
Methods And Evaluation Criteria: - The method and evaluation criteria are appropriate for the task.
Theoretical Claims: - The paper provides an asymptotic analysis on the importance reweighting statistics of the test. Theoretical claims seem to be convincing, though I did not check the proof in detail.
Experimental Designs Or Analyses: - Experimental setup looks reasonable. That being said, evaluation on only a single real-world dataset seems a bit narrow.
Supplementary Material: - I checked Appendix ~B (related works), but did not check in detail from Appendix C (omitted proofs)
Relation To Broader Scientific Literature: The dependency pattern within the data is heterogenous and imbalanced in many practical scenarios (e.g., economics, biology, social science). I believe the paper has potential impact to broader scientific literature.
Essential References Not Discussed: I think the motivation should be better presented since in it’s current form, it might be unclear whether such rare dependencies are prevalent in real-world applications which is related to the significance of the problem. Accordingly, I suggest the authors to include the literature on local independences [1-5] to discuss and better motivate the relevance of “rare dependencies” in practical applications.
Other Strengths And Weaknesses: **[Strengths]**
- The paper is well-written and easy to follow.
- The paper tackles an important problem and theoretical analysis looks solid. Another strength is the application to causal discovery and corresponding evaluation.
**[Weaknesses]**
- The motivation is somewhat weak in the current manuscript, since some readers might question the importance and relevance of “rare dependencies” in real-world datasets and scenarios.
- The authors mention that discovering local (in-)dependencies (e.g., context-specific independence [1, 2] or local independence [3]) are “opposite to our objective”. Can you elaborate on what this means? Intuitively, such local (in-)dependencies can be regarded as a special case of “rare dependence”, i.e., soft vs. hard, and I don’t see any reason why it is opposite to the scope of this work. It would be interesting to see how the proposed method performs in such settings.
- The evaluation considers only a single real-world dataset and the proposed method is limited to continuous variables.
Other Comments Or Suggestions: See above and below.
Questions For Authors: - Could the proposed method be evaluated on discovering local (in-)dependencies?
- One desirable property is to find a particular region of the data where the “rare dependence” exhibits (e.g., [-2, 2] in Fig. 1 and [0,0.25] in Fig. 2) as it provides an interpretability and could be further leveraged, e.g., for efficient [4], robust inference [5], and causal effect identification [6]. I assume the proposed method could be further extended to capture such specific subgroup of the data, e.g., by collecting datapoints with large importance weights; I would like to hear the thoughts from the authors.
- Could the proposed method be extended to discrete variables?
***References***
[1] Context-specific independence in Bayesian networks
[2] The role of local partial independence in learning of Bayesian networks
[3] On discovery of local independence over continuous variables via neural contextual decomposition
[4] Exploiting contextual independence in probabilistic inference
[5] Fine-grained causal dynamics learning with quantization for improving robustness in reinforcement learning
[6] Identifying causal effects via context-specific independence relations
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and constructive feedback. Please see below for our response. All figures and tables are available at https://tinyurl.com/mwafx6kh.
- **On Motivation** (Reference & W1)
We thank the reviewer for this valuable suggestion. We agree that the motivation can be strengthened, especially regarding real-world relevance. We reviewed the literature on local and context-specific independence [1–5], which supports the practical significance of rare dependence. For instance, [3] applies CSI to physical dynamics such as friction; [5] demonstrates local dependence in reinforcement learning, e.g., autonomous driving; [6] discusses CSI in biomedical inference, e.g., dose-response effects in antibiotic treatment. These works are insightful, and we will cite and integrate them as examples.
Rare dependence is common across domains. In economics, income and consumption may appear weakly related in low-income groups but become strongly coupled at higher income levels. In psychology[7], the impact of social media on adolescent mental health becomes significant only with excessive usage. Similar rare dependencies occur in medicine[8], physics[9], and sociology[10], highlighting the practical relevance of detecting rare dependencies.
We will revise the motivation accordingly and include references [1–5] and these real-world examples to better contextualize the relevance of rare dependence in practice.
- **On real-world datasets** (Experimental & W3)
We agree that additional real-world evaluation would further strengthen the work. In response, we have added an experiment **(see link above)** using monthly JPY/USD exchange rates (E) and U.S. federal funds rates (F) from 1990 to 2010, sourced from FRED. While the original HSIC fails to reject independence (p = 0.2174), RHSIC detects dependence (p = 0.0005) using F as the reference variable. The learned weights assign higher importance to the samples in 2001 and 2008. These correspond to the Dot-com recession and the global financial crisis, respectively — showing that our method not only detects rare dependence but also provides interpretable insights.
- **On local (in-)dependencies** (W2 & Q1)
We thank the reviewer for this thoughtful question. We apologize for the misleading wording "opposite" and will revise it accordingly. We intended to highlight a difference in focus — which we now realize is not necessarily exclusive: prior works aim to model precise conditional independence structures, while our goal is to detect rare dependence overlooked by existing tests.
We agree that our methods can deal with context-specific or local (in-)dependence problems. For example, Example 3.1 and Data Generation II & III in our experiments involve local dependence, and our method performs well in these cases. Moreover, the learned sample weights can help identify independent regions — low-weight samples often correspond to locally independent areas. We will clarify this point in the revision.
- (Q2) Indeed, one advantage of our method is that the learned importance weights provide a natural way to identify subgroups of data that contribute most to the rare dependence signal. As shown in our real-world dataset analysis and local-independence discussion above, these weights help uncover meaningful patterns and guide downstream interpretation, enhancing interpretability.
We agree that explicitly extracting high-weight subgroups could further extend the utility of our method. In particular, the learned weights can support fine-grained causal structure discovery by highlighting context-dependent relationships. Moreover, the structures recovered by our approach (e.g., the algorithm in Sec. 4) can provide valuable inputs for downstream tasks. Both aspects are relevant to applications discussed in [4,5,6], as suggested by the reviewer. We view this as a promising direction for future work.
- (Q3) Yes. Since kernel matrices can be constructed for discrete variables using Kronecker kernels [11], our method — which operates on these matrices — directly applies to the discrete case.
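To illustrate (a toy sketch, not code from the paper): with the Kronecker delta kernel, the standard biased HSIC estimator applies to discrete samples unchanged.

```python
import numpy as np

def delta_kernel(x):
    """Kronecker delta kernel for a discrete variable: K[i, j] = 1 iff x_i == x_j."""
    x = np.asarray(x).reshape(-1, 1)
    return (x == x.T).astype(float)

def hsic(K, L):
    """Biased empirical HSIC from two n-by-n kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

x = [0, 0, 1, 1, 2, 2]
print(hsic(delta_kernel(x), delta_kernel(x)))                   # identical variables: ~0.32
print(hsic(delta_kernel(x), delta_kernel([1, 0, 2, 0, 1, 2])))  # shuffled copy: ~0.08
```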
We sincerely thank you for your constructive feedback, the time you have dedicated, and your recognition of our work. We especially appreciate the valuable suggestions regarding future directions, which we find highly inspiring and will consider in our subsequent research. Thank you.
[7] A Systematic Review: The Influence of Social Media on Depression, Anxiety and Psychological Distress in Adolescents.
[8] Opioid-induced Hyperalgesia: A Qualitative Systematic Review.
[9] Frequency-Dependent Local Interactions and Low-Energy Effective Models from Electronic Structure Calculations.
[10] Time Spent on Social Network Sites and Psychological Well-Being.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the rebuttal. I have read the author response and my concerns are now well-addressed.
Specifically:
- As acknowledged by the authors, the paper will benefit from better motivation and how the rare dependencies arise in real-world scenarios. I noticed that the reviewer 6TwN shares the same concern. Please include the relevant discussions on the motivation and local independences and references which would further strengthen the paper.
- Thanks for the experiments on the additional real-world dataset.
- Thanks for acknowledging the implications and the importance of identifying subgroups of the data where the rare dependencies arise. It's nice to see that the proposed method does naturally provide this interpretability through the learned weights. Please include the discussions in the revised version.
Accordingly, I increased my score from 3 to 4.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for taking the time to read our rebuttal and for the recognition of our work. We are glad that our responses addressed your concerns. We will incorporate the suggested revisions into the revised version of the paper. Thank you again for the constructive suggestions, and we truly appreciate your feedback. | Summary: This paper proposes the use of adaptive sample weighting to detect rare dependencies in data. The key idea is to incorporate weights for data points by formulating an optimization problem that maximizes the reweighted HSIC along with regularization terms. Asymptotic hypothesis test guarantees for the resulting reweighted HSIC test are provided through direct applications of the classical theory of V-estimators and U-estimators.
The estimation of sample weights is analyzed as a statistical learning problem (empirical risk minimization) using simple arguments from empirical process theory, leading to non-asymptotic uniform convergence guarantees.
Finally, this idea is extended to conditional HSIC, and, building on this, a new version of the PC algorithm is designed that utilizes the reweighted conditional HSIC to infer conditional independence. In the experiments, the strong performance of the new PC algorithm is demonstrated on both synthetic data and a real-world dataset from flow cytometry (Sachs et al., 2005).
Claims And Evidence: Yes, the theoretical claims are indeed classical, and the proofs are provided. The experiments are also discussed in detail.
Methods And Evaluation Criteria: Yes, test power and Type I error are reported for the experiments, which is common in the causality literature.
Theoretical Claims: Yes, I skimmed briefly, and they seem to be correct.
However, one important note should be mentioned in the paper: the uniform bound in Theorem 3.7 is not designed for the optimization problem (9) but rather for the version of the empirical risk that does not include any regularization terms. With regularization, the term $B$ in line 1080 is not negative, and as a result, the entire conversion of the problem into uniform convergence bounds breaks down.
Experimental Designs Or Analyses: Yes, they seem to be valid.
Supplementary Material: Yes, I skimmed the proofs briefly.
Relation To Broader Scientific Literature: The authors design a new independence test, leading to a new class of constraint-based causal discovery algorithms, which may have broader impacts in applications such as biology or the human sciences.
Essential References Not Discussed: Yes, indeed, the idea of adaptive sample reweighting for causality is not novel and has been proposed before (and is not cited in this manuscript), particularly in:
Zhang, A., Liu, F., Ma, W., Cai, Z., Wang, X., & Chua, T. S. Boosting Causal Discovery via Adaptive Sample Reweighting. In The Eleventh International Conference on Learning Representations.
In that work, adaptive sample reweighting was formulated as an optimization problem in combination with score-based causal learning algorithms. In contrast, this paper considers a constraint-based approach (PC). However, in general, this prior work significantly undermines the novelty of this submission.
Other Strengths And Weaknesses: Strengths:
- The paper is well-written and all aspects of the theory are analyzed in depth (despite following classical results and not being especially novel), in particular the asymptotics of hypothesis testing, the causal discovery guarantees of the resulting PC algorithm up to the Markov blanket, and the uniform convergence results for estimation of the sample weights.
- A comprehensive set of synthetic and real-world experiments are provided.
Weakness:
- The idea is not novel and similar ideas have been proposed before (Zhang, An, et al. 2023).
Because the paper is complete (in terms of theory and experiments), I am leaning toward acceptance.
Other Comments Or Suggestions: Besides the missing reference, I don't have any other suggestions.
Questions For Authors: The paper is well-written, and I don’t have any questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's constructive comments and helpful feedback. Please see below for our response. All figures and tables are available at https://tinyurl.com/mwafx6kh.
- **(Weakness & Reference)**
Thank you for your comments and for pointing out the work by Zhang et al. (2023), which we will cite in our revised version. **Our work and Zhang et al. (2023) originate from different motivations and aim to address distinct problems.** Our research is motivated by the specific challenge of detecting rare dependence, a topic that remains unexplored to the best of our knowledge. Rare dependence is widespread across many real-world domains, as confirmed by a large body of literature in psychology [1], medicine [2], physics [3], sociology [4], etc. Our focus is on **detecting rare dependencies** that are otherwise missed by existing independence tests. The proposed RHSIC and RKCIT are statistical tools for independence and conditional independence testing designed to recover rare dependencies. We emphasize that discovering causal relations in the presence of rare dependence is one important application of our methods, not the ultimate goal. Our method is not restricted to causal discovery.
On the other hand, ReScore (Zhang et al., 2023) aims to optimize the structure recovery of score-based causal discovery methods with fewer spurious edges and more robustness to heterogeneous data. They propose an effective model-agnostic framework of reweighting samples to boost differentiable score-based causal discovery methods. The core idea of ReScore is to identify and upweight less-fitted samples, as these samples provide additional insight into uncovering the true causal edges—compared to those easily fitted through spurious associations. Although RD-PC, which employs our proposed RHSIC and RKCIT with correction rules, is also a sample reweighting-based method for causal discovery, its objective is fundamentally different from that of ReScore. ReScore is designed to mitigate the influence of spurious edges by focusing on less-fitted samples, whereas RD-PC aims to recover true causal edges that are easily overlooked or erroneously removed due to rare dependence. Exploring the connection between rare dependence and less-fitted samples may offer an interesting direction for future research.
We appreciate the opportunity to clarify this and will update the manuscript accordingly.
- **(Theoretical claims)**
Thank you for your insightful comments. We agree that the uniform bound in Theorem 3.7 applies to the empirical risk without regularization terms, rather than to the optimization problem (9). We acknowledge that this discrepancy exists and appreciate the opportunity to address it. We should have discussed this in detail in the paper. A gap remains between the theoretical aspect and the current algorithmic implementation. We have included a discussion in our updated manuscript. In practice, however, when analyzing data, we have to introduce a trade-off by employing the normalized RHSIC with two penalties in (9): the first term to control the smoothness of the $\hat{\beta}$ function, and the second to constrain the deviation of $\hat{\beta}$ from one to select as many samples as possible. These two penalties allow us to balance theoretical rigor with practical performance.
We appreciate the reviewer for raising this issue. To address this concern more thoroughly, we will revise the manuscript by adding an ablation study in the Experiment Section to quantify the regularization’s effect and discuss this theoretical-practical tradeoff explicitly.
We are encouraged that the reviewer recognizes our work as "well-written" and our theoretical analysis as "in depth." We also appreciate the reviewer’s acknowledgment of our comprehensive synthetic and real-world experiments.
Besides, we would like to add that our contribution goes beyond the classical results that the reviewer mentioned; we identify a new problem setting, rare dependence in statistical testing, and establish a general framework to solve this problem. In real-world scenarios, such rare dependencies often arise due to noise or imbalanced data distribution, making them difficult to detect using standard tests. To this end, we hope this work provides a foundation for various potential practical extensions, such as interpretable subgroup discovery, robust causal inference, and scientific analysis in domains like medicine, economics, and social science.
We sincerely appreciate your thoughtful feedback and recognition of our work. Thank you for your time and efforts.
[1] A Systematic Review: The Influence of Social Media on Depression, Anxiety and Psychological Distress in Adolescents.
[2] Opioid-induced Hyperalgesia: A Qualitative Systematic Review.
[3] Frequency-Dependent Local Interactions and Low-Energy Effective Models from Electronic Structure Calculations.
[4] Time Spent on Social Network Sites and Psychological Well-Being.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response and clarification, especially regarding Zhang et al. (2023). As I mentioned earlier, this is a complete and well-executed paper, though the extent of its contribution is not game-changing. For this reason, I am leaning toward acceptance, and I believe my score accurately reflects my evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your time, thoughtful comments, and engagement throughout the review process. We truly appreciate your recognition of the completeness and clarity of the paper, and we're glad our responses were helpful. | null | null | null | null | null | null |
Achieve Performatively Optimal Policy for Performative Reinforcement Learning | Reject | Summary: - **Proposed Algorithm:** This work introduces a zeroth-order performative policy gradient (0-PPG) algorithm that converges to the PO policy with polynomial computational complexity under mild conditions.
- **Key Theoretical Properties:**
- When the policy regularizer dominates the environmental shift, the value function exhibits a gradient dominance property, meaning any stationary point is a PO policy.
- Although the value function may have unbounded gradients, all sufficiently stationary points lie within a convex and compact policy subspace $\Pi_\Delta$, in which every action probability is bounded below by $\Delta > 0$, ensuring the gradient is both bounded and Lipschitz continuous.
Claims And Evidence: Evidence well supports the claims
Methods And Evaluation Criteria: Proposed methods make sense, and there are no experiments.
Theoretical Claims: I have not taken a close look at all proofs, but the takeaways, intuitions, and remarks after the theorems all make sense (at least to me)
Experimental Designs Or Analyses: There are no experiments.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper looks promising since it provides a theoretical analysis of PO convergence. However, the second concern that I have written down in [Questions For Authors] may challenge the novelty of this paper.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- I like how authors provide a key takeaway of Theorems (Like Remark after Theorem 1 or Implications after Theorem 2, and remark after Proposition 1). This makes the reading more comfortable.
Weakness:
- Please refer to the [Questions For Authors]
Other Comments Or Suggestions: Please refer to the [Questions For Authors]
Questions For Authors: My current score is approximately 2.5 (I cannot choose between 2 and 3, so I have currently set it at 2), primarily due to the following questions, especially 2 and 4. I am willing to increase my score if the following issues are addressed:
1. **Clarification on Existing Work:** Could the authors specify why previous research in performative RL has focused solely on the PS policy?
2. **Interpretation of Regularized Value Function:** In both the abstract and the remark following Theorem 1, the analysis suggests that when the policy regularizer dominates the environmental shift, the value function exhibits a gradient dominance property, which is intuitively appealing. However, I am concerned about the practical significance of the optimal policy derived from this regularized value function. Since the policy regularizer may impede convergence to the true optimal policy (thereby affecting generalization), if it dominates the environmental shift, does this imply that the optimal policy is biased towards a more uniform distribution? If so, this might render the primary contribution somewhat trivial.
3. **Insights from Theorem 3:** Could the authors elaborate on the key takeaways of Theorem 3? The Lipschitz continuity property appears to be a direct consequence of Assumptions 1 through 3. It would be helpful to understand how the upper bound is affected by the parameters $L$ and $l$.
4. **Experimental Validation:** Has the proposed approach been tested empirically? Given that the paper introduces several convergence theorems, including experimental results—perhaps on a simple environment like a grid world—would strengthen the manuscript by demonstrating the convergence behavior of the PO policy.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Clarification on Existing Work:** Could you specify why previous research in performative RL has focused solely on the PS policy?
**A:** Great question. There are two reasons. First, obtaining a performatively stable (PS) policy is more straightforward than obtaining a performatively optimal (PO) policy. To elaborate, since a PS policy $\pi_{PS}$ is only required to be optimal in its corresponding fixed environment $(p_{\pi_{PS}}, r_{\pi_{PS}})$, we can obtain a PS policy by repeated training, i.e., applying **traditional policy optimization methods** to a fixed environment. In contrast, a PO policy $\pi_{PO}$ is required to have a larger value in the environment $(p_{\pi_{PO}}, r_{\pi_{PO}})$ than the value of any policy $\pi$ in its own environment $(p_{\pi}, r_{\pi})$, so we cannot use **traditional policy optimization methods**. Second, the distance between a PS policy and a PO policy is $\mathcal{O}(\epsilon_p+\epsilon_r)$, where $\epsilon_p$ and $\epsilon_r$ are the Lipschitz constants of the Lipschitz continuous $p_{\pi}$ (transition kernel) and $r_{\pi}$ (reward), so PS approximates PO well in a slowly changing environment with small $\epsilon_p$ and $\epsilon_r$ (Mandal et al., 2023).
**Interpretation of Regularized Value Function:** I am concerned about the practical significance of the optimal policy from this regularized value function. Since the policy regularizer may impede convergence to the true optimal policy (thereby affecting generalization), if it dominates the environmental shift, does this imply that the optimal policy is biased towards a more uniform distribution? If so, this might render the primary contribution somewhat trivial.
**A:** Great question. The answer is partially yes. The optimal policy for the regularized objective is closer to the uniform policy than the optimal policy for the unregularized setting. However, we do not regard this as a bias, because the optimal policy for the entropy-regularized setting is an important target in its own right. To elaborate, entropy regularization has been demonstrated to make the policy robust against perturbations to the environment (transition kernel and reward), thereby improving generalization [1], and to encourage the agent to explore unknown environments, yielding a better exploration-exploitation trade-off (Mnih et al., 2016; Mankowitz et al., 2019; Cen et al., 2022; Chen and Huang, 2024). As our algorithm converges to this important target policy (the regularized optimal solution), we do not regard this as a bias.
[1] Eysenbach, Benjamin, and Sergey Levine. "Maximum Entropy RL (Provably) Solves Some Robust RL Problems." International Conference on Learning Representations (2022).
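As a concrete one-step illustration of this trade-off (a toy sketch of ours, not the paper's setting): in the bandit case, the entropy-regularized optimum is the softmax policy $\pi^*\propto\exp(r/\lambda)$, which interpolates between the greedy policy and the uniform policy as $\lambda$ grows.

```python
import numpy as np

def entropy_reg_optimal(r, lam):
    """Bandit case: argmax_pi <pi, r> + lam * H(pi) is softmax(r / lam)."""
    z = np.exp(r / lam)
    return z / z.sum()

r = np.array([1.0, 0.5, 0.0])
p_small = entropy_reg_optimal(r, 0.1)   # small lambda: nearly greedy
p_large = entropy_reg_optimal(r, 10.0)  # large lambda: nearly uniform
```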
**Insights from Theorem 3:** Could you elaborate on the key takeaways of Theorem 3? The Lipschitz continuity property appears to be a direct consequence of Assumptions 1 through 3. It would be helpful to understand how the upper bound is affected by the parameters $L$ and $\ell$.
**A:** The key takeaway of Theorem 3 is that the objective function $V _ {\lambda,\pi}^{\pi}$ is Lipschitz continuous and Lipschitz smooth in the domain $\pi\in\Pi_{\Delta}=\{\pi\in\Pi:\pi(a|s)\ge\Delta\}$. There may be a slight misunderstanding of Theorem 3. First, the Lipschitz property follows from Assumptions 1-2, but the proof is not straightforward. Second, the upper bounds for Lipschitz continuity and Lipschitz smoothness are proportional to $L_{\lambda}$ and $\ell_{\lambda}$ (not $L$ and $\ell$) respectively, defined by Eqs. (22) and (23) respectively. $L_{\lambda}$ and $\ell_{\lambda}$ depend on problem-related constants such as $|\mathcal{S}|$, $|\mathcal{A}|$, $\gamma$, $\lambda$, $\epsilon_p$, $\epsilon_r$, not on tunable parameters.
**Experimental Validation:** Has the proposed approach been tested empirically?
**A:** Good question. We compared our Algorithm 1 with the existing repeated training algorithm in a simulation environment with 5 states, 4 actions, discount factor $\gamma=0.95$, entropy regularizer coefficient $\lambda=0.5$, transition kernel $p_{\pi}(s'|s,a)=\frac{\pi(a|s)+\pi(a|s')+1}{\sum_{s''}[\pi(a|s)+\pi(a|s'')+1]}$, and reward $r_{\pi}(s,a)=\pi(a|s)$. We implement our Algorithm 1 for 400 iterations with $N=1000$, $\beta=0.01$, $\Delta=10^{-3}$, $\delta=10^{-4}$, and value functions evaluated by value iteration. The repeated training algorithm obtains the next policy $\pi_{t+1}$ by applying the natural policy gradient algorithm [1] with 100 steps and stepsize 0.01 to the entropy-regularized reinforcement learning with transition kernel $p_{\pi_t}$ and reward $r_{\pi_t}$. Both algorithms start from the uniform policy (i.e., $\pi(a|s)\equiv 1/4$). Our experimental results in the anonymous website https://docs.google.com/document/d/1bH3eEoGhfDwq1NBNW7_zjCSLvvmcUyDusaINivK5bdo/edit?tab=t.0 show that the existing repeated training algorithm gets stuck at the initial policy, which is performatively stable but not performatively optimal, while our Algorithm 1 converges to a much larger objective function value. | Summary: The paper studies the problem of performative reinforcement learning, where the choice of policy actively influences the dynamics in the environments (transitions) as well as the rewards.
The authors introduce the first algorithm which provably converges to the performatively optimal (not merely stable) policy under standard regularity conditions.
Claims And Evidence: Yes, all the claims are well supported.
Methods And Evaluation Criteria: The main contributions of the paper are theoretical. Their analysis makes sense.
Theoretical Claims: I did not.
Experimental Designs Or Analyses: NA
Supplementary Material: No
Relation To Broader Scientific Literature: The paper makes an excellent contribution to the growing area of performative prediction and performative reinforcement learning. To date, there was no known algorithm that could be shown to converge to the performatively optimal solution.
Their results mirror a similar story developed in the classical performative prediction literature over the last few years where initially people only knew of algorithms that would converge to stable points. Then, in 2021, Miller et al introduced the first set of conditions under which the performative risk was convex, and designed algorithms which converged to the performatively optimal solution.
This result completes a similar arc for the performative reinforcement learning setting which is substantially more complicated than that initially considered by Perdomo et al in their paper on performative prediction. This is a very nice result that will be of interest to the community. Here, gradient dominance is somehow the analogous structural condition to convexity in the standard setup.
Essential References Not Discussed: The relevant literature is appropriately cited. It might be nice to tell a bit of this story above around how their results contribute to the broader literature on performative prediction, but this is really up to the authors.
Other Strengths And Weaknesses: Convergence to optimality, not stability, is a real strength of the paper. The analysis is substantial and involved but the authors do a good job of providing intuition. I think the paper would be even better if they give a broader overview of performativity and spend a bit more time delving into the intuition for their proofs. For instance, readers may not be familiar with these kinds of gradient dominance conditions and a gentler review of why these conditions are useful and where they have been previously studied in the literature (e.g. LQR) could be very nice.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Essential References Not Discussed:** The relevant literature is appropriately cited. It might be nice to tell a bit of this story above around how their results contribute to the broader literature on performative prediction, but this is really up to the authors.
**A:** Thank you very much for telling the story showing our contribution to the broader literature on performative prediction. We are glad to add this story to our revision.
**Other Strengths And Weaknesses:** I think the paper would be even better if they give a broader overview of performativity and spend a bit more time delving into the intuition for their proofs. For instance, readers may not be familiar with these kinds of gradient dominance conditions and a gentler review of why these conditions are useful and where they have been previously studied in the literature (e.g. LQR) could be very nice.
**A:** Thanks for your suggestion. We will add a discussion of related works including those on performative prediction. We will stress that the major idea of both performative prediction and performative reinforcement learning is performativity which means the data distribution can be affected by the decision, as observed in many applications.
We will elaborate more on gradient dominance right after our Theorem 1 as you suggested. Specifically, when $\mu\ge 0$, our Theorem 1 implies the following gradient dominance result widely used in reinforcement learning [1,2].
$$f(\pi^*)-f(\pi)\le C_1\max _ {\pi'\in\Pi}\big\langle \nabla f(\pi),\pi'-\pi\big\rangle,\quad{\rm(G1)}$$
where we use the constant $C_1=D^{-1}>0$, the objective function $f(\pi)=V _ {\lambda,\pi}^{\pi}$, and the performatively optimal solution $\pi^*\in{\arg\max} _ {\pi}f(\pi)$. This further implies the following weaker gradient dominance result widely used in optimization [3,4] and linear quadratic regulator (LQR) [5,6].
$$f(\pi^*)-f(\pi)\le C_2||\nabla f(\pi)||^{\alpha},$$
where we use the constant $C_2=2D^{-1}>0$ (since $||\pi'-\pi||\le 2$ in Eq. (G1) above), and the power $\alpha=1$.
Both the gradient dominance conditions above are useful for global convergence to the optimal solution $\pi^*$, since under either of these conditions, $||\nabla f(\pi_t)||\to 0$ can imply $f(\pi_t)\to f(\pi^*)$.
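For completeness, the second condition follows from Eq. (G1) via the Cauchy-Schwarz inequality together with $||\pi'-\pi||\le 2$:

$$f(\pi^*)-f(\pi)\le C_1\max _ {\pi'\in\Pi}\big\langle \nabla f(\pi),\pi'-\pi\big\rangle\le C_1||\nabla f(\pi)||\max _ {\pi'\in\Pi}||\pi'-\pi||\le 2C_1||\nabla f(\pi)||=C_2||\nabla f(\pi)||^{\alpha}.$$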
[1] Agarwal, A., Kakade, S. M., Lee, J. D., \& Mahajan, G. (2021). On the theory of policy gradient methods: Optimality, approximation, and distribution shift. Journal of Machine Learning Research, 22(98), 1-76.
[2] Chen, Z., Wen, Y., Hu, Z., \& Huang, H. (2024). Robust Reinforcement Learning with General Utility. Advances in Neural Information Processing Systems, 37, 11290-11344.
[3] Masiha, S., Salehkaleybar, S., He, N., Kiyavash, N., \& Thiran, P. (2022). Stochastic second-order methods improve best-known sample complexity of SGD for gradient-dominated functions. Advances in Neural Information Processing Systems, 35, 10862-10875.
[4] Nesterov, Y., \& Polyak, B. T. (2006). Cubic regularization of Newton method and its global performance. Mathematical programming, 108(1), 177-205.
[5] Mohammadi, H., Zare, A., Soltanolkotabi, M., \& Jovanović, M. R. (2021). Convergence and sample complexity of gradient methods for the model-free linear–quadratic regulator problem. IEEE Transactions on Automatic Control, 67(5), 2435-2450.
[6] Ye, L., Mitra, A., \& Gupta, V. (2024, December). On the Convergence of Policy Gradient for Designing a Linear Quadratic Regulator by Leveraging a Proxy System. In 2024 IEEE 63rd Conference on Decision and Control (CDC) (pp. 6016-6021). IEEE. | Summary: This paper proposes an algorithm to compute performatively optimal policies, i.e. policies maximizing the expected sum of rewards in an MDP-like environment where the transition and reward functions are dependent on the policy that is executed. The algorithm consists in iteratively building an ascent direction from samples in the decision process and using this direction in the Frank-Wolfe algorithm to update the policy. Convergence is guaranteed, as the ascent direction is "valid" and the objective function is gradient dominated.
Claims And Evidence: The claim that it is possible to find the performatively optimal policy with the proposed algorithm is theoretically supported.
There are several other claims that are incorrect or inefficiently detailed:
1. Authors claim that there is no analytical form to the performative policy gradient [line 320 right column]. To my understanding this has not been shown, and, intuitively, it is unclear to me why there would be no analytical form of the gradient.
2. Authors claim in the abstract (and throughout the paper) that it is a "zeroth-order policy gradient method". This is insufficient to understand how the policy is effectively optimized and is misleading with respect to what zeroth-order, first-order, and policy gradient methods are. On the one hand, a zeroth-order method optimizes a function without computing gradients, solely by estimating the function. A first-order method, on the other hand, uses gradients. Policy-gradient methods fall into the second type of methods, as the point is to estimate the gradient of the return (and compute the gradient of the policy) to perform stochastic gradient ascent steps. If one were to use finite differences to compute an ascent direction to optimize the return, I am not sure it can still be considered a policy gradient method. The abstract should be clearer about how the ascent direction is computed and used to update the policy.
3. Authors highlight that the performatively optimal policy cannot be computed with previous algorithms from the literature. It nevertheless seems that the problem at hand is a particular case of some stochastic game where the objective is to compute policies against adversarial opponents, e.g. [1, 2, 3]. Does this part of the literature provide algorithms that would compute an optimal performative policy?
[1] Sessa, P. G., Bogunovic, I., Kamgarpour, M., & Krause, A. (2020). Learning to play sequential games versus unknown opponents. Advances in neural information processing systems, 33, 8971-8981.
[2] Ramponi, G., Metelli, A. M., Concetti, A., & Restelli, M. (2021). Learning in non-cooperative configurable markov decision processes. Advances in Neural Information Processing Systems, 34, 22808-22821.
[3] Jackson, M. T., Jiang, M., Parker-Holder, J., Vuorio, R., Lu, C., Farquhar, G., ... & Foerster, J. (2023). Discovering general reinforcement learning algorithms with adversarial environment design. Advances in Neural Information Processing Systems, 36, 79980-79998.
Methods And Evaluation Criteria: There is no evaluation of the final algorithm.
Theoretical Claims: Theoretical claims seem correct, but I haven't checked proofs in appendices.
Experimental Designs Or Analyses: There is no empirical evaluation, which is to me problematic. The paper should include experiments to validate the final algorithm, and compare to algorithms from the literature dealing with non-stationary or adversarial settings.
Supplementary Material: No.
Relation To Broader Scientific Literature: The contribution should be related to the literature dealing with non-stationary or adversarial settings. Do there exist algorithms that could be applied to compute performatively optimal policy?
Essential References Not Discussed: See previous remarks on non-stationary or adversarial RL.
Other Strengths And Weaknesses: Authors should formally define the n-step transition distribution in equation (2).
In section 2.1, when defining $\mathcal{P}$, the sum should be over $s'$ and not $s$ I believe.
I think equation (5) might be wrong, is it $r_{d'}$ or $r_d$? In (Mandal et al., 2023), they use the measure $d$ in their equation (3).
Authors should be mathematically clear about what a "valid approximation" is in Proposition 1.
Other Comments Or Suggestions: I would have clearly stated that distributions are represented by vectors at the beginning of section 2.1. In other words, sentence line 90 should come earlier for clarity.
Questions For Authors: Does convergence require the batch size $N$ to grow unbounded?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: **Claims And Evidence (1):** Why there would not be an analytical form to the gradient?
**A:** Good question. I later found that this gradient can be computed by the chain rule, but it involves the unknown $\nabla_{\pi}p_{\pi}(s'|s,a)$ and $\nabla_{\pi}r_{\pi}(s,a)$. We will revise this claim.
**Claims And Evidence (2):** The abstract should be clearer about how the ascent direction is computed and used to update the policy. Can we use the name "zeroth-order policy gradient method"?
**A:** Thanks for your suggestions. Our algorithm uses a Frank-Wolfe update to find the ascent direction, where the policy gradient is approximated by its zeroth-order estimate (this will be added to the revised abstract), so the name "zeroth-order policy gradient method" is valid, as it has also been used in [1,2]. We may also use "zeroth-order Frank-Wolfe algorithm" to reveal more optimization details.
[1] Wang, Z., et al. Policy evaluation in distributional LQR. In Learning for Dynamics and Control Conference 2023.
[2] Han, Y., Razaviyayn, M., \& Xu, R. Policy gradient finds global optimum of nearly linear-quadratic control systems. NeurIPS 2022 Workshop.
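To make the terminology under discussion concrete, below is a minimal sketch of a two-point (symmetric finite-difference) zeroth-order gradient estimator. The value oracle `V`, the smoothing radius `delta`, and all names are illustrative assumptions for exposition, not the authors' exact algorithm.

```python
import numpy as np

def zeroth_order_gradient(V, pi, delta=0.01, num_dirs=100, rng=None):
    """Two-point zeroth-order estimate of the gradient of V at pi.

    V:        black-box value oracle returning a scalar for a policy vector
    pi:       flat policy parameter vector
    delta:    finite-difference smoothing radius
    num_dirs: number of random probing directions
    """
    rng = np.random.default_rng(rng)
    d = pi.size
    grad = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform direction on the unit sphere
        # symmetric finite difference approximates the directional derivative
        grad += (V(pi + delta * u) - V(pi - delta * u)) / (2 * delta) * u
    # E[d * u * (u . g)] = g for u uniform on the sphere, hence the factor d
    return d * grad / num_dirs
```

In a Frank-Wolfe scheme, such an estimate would stand in for the true gradient when solving the linear subproblem over the feasible policy set, which matches the "zeroth-order Frank-Wolfe" naming suggested above.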
**Claims And Evidence (3):** It seems that the problem at hand is a particular case of some stochastic game where the objective to compute policies against adversarial opponents, e.g. [1-3]. Do their algorithms compute an optimal performative policy?
**A:** No, these adversarial settings are very different from our performative reinforcement learning problem, which has no adversarial environment.
**Experimental Designs Or Analyses:** The paper should include experiments.
**A:** Thanks for your suggestion. Due to limited space, see the experimental details in my final response to Reviewer uoEY. Our result in https://docs.google.com/document/d/1bH3eEoGhfDwq1NBNW7_zjCSLvvmcUyDusaINivK5bdo/edit?tab=t.0 shows that our Algorithm 1 outperforms the existing repeated training algorithm.
**Relation To Broader Scientific Literature:** The contribution should be related to the literature dealing with non-stationary or adversarial settings. Do there exist algorithms that could be applied to compute performatively optimal policy?
**A:** No, since to our knowledge, performative reinforcement learning is not a special case of any other problems.
We will add a discussion of related works, including those on non-stationary MDPs (e.g. [1,2]) that are weakly related to our work. To elaborate, during the training of performative reinforcement learning, the policy $\pi$ and thus the environment $(p_{\pi}, r_{\pi})$ change with the iterations. In a non-stationary MDP, the environment $(p_t, r_t)$ changes with the MDP time scale $t$, not the iteration.
[1] Chandak, Yash, et al. Optimizing for the future in non-stationary MDPs. ICML 2020.
[2] Chandak, Yash, et al. Towards safe policy improvement for non-stationary MDPs. NeurIPS 2020.
**Other Strengths And Weaknesses (1):** Authors should formally define the n-step transition distribution in Eq. (2).
**A:** Thanks for your suggestion. Since $s_{t+1}\sim p_{\pi}(\cdot|s_t,a_t)$, $a_t\sim\pi(\cdot|s_t)$ and $s_0\sim\rho$, the n-step transition can be computed below, which will be added to the revision.
$$\mathbb{P} _ {\pi,p,\rho}(s_n=s,a_n=a)=\sum_{s_0,...,s_{n-1}\in\mathcal{S}}\sum_{a_0,...,a_{n-1}\in\mathcal{A}}\rho(s_0)\pi(a|s)p_{\pi}(s|s_{n-1},a_{n-1})\pi(a_{n-1}|s_{n-1})\prod_{t=0}^{n-2}[\pi(a_t|s_t)p_{\pi}(s_{t+1}|s_t,a_t)].$$
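The summation above can be computed iteratively by propagating the state distribution forward, as in this small sketch for a finite MDP. The array shapes and function name are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def n_step_state_action_dist(rho, pi, p, n):
    """State-action distribution at step n for a finite MDP.

    rho: (S,) initial state distribution
    pi:  (S, A) policy, pi[s, a] = Pr(a | s)
    p:   (S, A, S) transition kernel, p[s, a, s'] = Pr(s' | s, a)
    Returns an (S, A) array with Pr(s_n = s, a_n = a).
    """
    d_s = rho.copy()
    for _ in range(n):
        # joint state-action distribution at the current step
        d_sa = d_s[:, None] * pi
        # marginalize over (s, a) to get the next state distribution
        d_s = np.einsum('sa,sat->t', d_sa, p)
    return d_s[:, None] * pi
```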
**Other Strengths And Weaknesses (2):** In Section 2.1, when defining $\mathcal{P}$, the sum should be over $s'$ and not $s$ I believe.
**A:** Thanks. We have corrected that.
**Other Strengths And Weaknesses (3):** I think equation (5) might be wrong, is it $r_{d'}$ or $r_d$? In (Mandal et al., 2023), they use the measure $d$ in their equation (3).
**A:** We use $r_{d'}$ for two reasons. First, since performatively stable policy is defined as $\pi_S\in{\arg\max} _ {\pi'}V _ {\pi_S}^{\pi'}(\rho)$, their Eq. (3) that defines the corresponding performatively stable occupancy measure $d_S$ should have used $r_{d_S}$, corresponding to our Eq. (5) with $d'=d_S$. Second, their Eq. (5) about their repeated training algorithm corresponds to our Eq. (5) with $d'=d_t$ at iteration $t$.
**Other Strengths And Weaknesses (4):** Authors should be mathematically clear about what a "valid approximation" is in Proposition 1.
**A:** Thanks for your suggestion. "Valid approximation" means the $\pi+\delta u_i$ and $\pi-\delta u_i$ in Eq. (26) are valid policies, i.e., $\pi'(a|s)\ge0$ and $\sum_a\pi'(a|s)=1$ for $\pi'\in\{\pi\pm\delta u_i\}$. We will add that explanation to the revision.
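The validity condition stated above amounts to checking that each perturbed policy is row-stochastic; a minimal sketch (function name is illustrative):

```python
import numpy as np

def is_valid_policy(pi, tol=1e-9):
    """Check that pi is a valid policy matrix (S, A):
    entries nonnegative and each row sums to 1."""
    nonneg = np.all(pi >= -tol)
    normalized = np.allclose(pi.sum(axis=1), 1.0)
    return bool(nonneg and normalized)
```

Under this check, a perturbation $\delta u_i$ is a "valid approximation" direction exactly when both `pi + delta * u` and `pi - delta * u` pass.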
**Other Comments Or Suggestions:** Sentence line 90 should be moved to the beginning of section 2.1 for clarity.
**A:** Thanks. We have done that.
**Questions For Authors:** Does convergence require the batch size to grow unbounded?
**A:** No. Usually we fix $\epsilon,\eta$, so the batch size $N=O[\epsilon^{-2}\log(\eta^{-1}\epsilon^{-1})]$ is also fixed.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I would advise updating the paper so that these elements are made clear.
I think that the paper is still incomplete without an experimental validation, which I cannot review solely based on the elements you provided in the response to reviewers.
---
Reply to Comment 1.1.1:
Comment: Did you see our experimental results in
https://docs.google.com/document/d/1bH3eEoGhfDwq1NBNW7_zjCSLvvmcUyDusaINivK5bdo/edit?tab=t.0
The experimental details are in our final response to the Reviewer uoEY. We **have added the above experimental details and results to our paper, but ICML 2025 does not allow us to upload the updated paper.**
In addition, how do you think about our responses to your other concerns?
Thanks.
Authors | null | null | null | null | null | null | null | null |
Automatically Identify and Rectify: Robust Deep Contrastive Multi-view Clustering in Noisy Scenarios | Accept (spotlight poster) | Summary: In this paper, the authors propose a novel multi-view clustering method AIRMVC for noisy scenarios. They formulate the noisy identification as the anomaly problem. Besides, a noise-robust contrastive loss is designed to enhance the model performance. Experiments on six datasets show the effectiveness of the proposed method.
Claims And Evidence: The motivation of AIRMVC is clearly articulated and supported by experimental validation. Additionally, extensive supplementary experiments are provided in the appendix to further reinforce the findings.
Methods And Evaluation Criteria: The authors investigate the problem of multi-view clustering in noisy scenarios, a challenge widely encountered in real-world applications. The proposed method is well-aligned with the stated motivation and is designed to enhance model robustness under noisy conditions, offering practical insights for real-world implementations.
Theoretical Claims: The authors provide a theoretical proof for noise-robust contrastive learning for supporting the rationale behind this approach.
Experimental Designs Or Analyses: The authors conducted extensive experiments, with most baseline methods being from 2024, effectively demonstrating the efficacy of the proposed approach.
Supplementary Material: The authors supplemented the main text experiments in the appendix, providing relevant mathematical proofs and additional details on the experimental setup.
Relation To Broader Scientific Literature: The paper presents a comprehensive and well-rounded literature review.
Essential References Not Discussed: The related work is well-presented and the literature review is thorough.
Other Strengths And Weaknesses: Strengths:
1.The transformation of noise identification into an anomaly detection problem is an intriguing approach.
2.This paper conducts extensive experiments comparing the proposed method with 2024 state-of-the-art approaches, and the comprehensive experimental results validate its effectiveness.
3.Theoretical analysis demonstrates the robustness of the proposed contrastive learning mechanism in noisy environments.
Weaknesses:
1.In the noise identification module, both the projector and classifier are designed. Are these components shared across multi-views, or do they operate independently? The authors should provide a clear explanation regarding this aspect.
2.The authors have not discussed the limitations of AIRMVC or outlined potential directions for future research.
3.Although the authors conducted explanatory experiments in Figure 3 to support their motivation, they only utilized the relatively small-scale BBCSport dataset. It is recommended to perform experiments on larger datasets to further validate the motivation’s effectiveness.
4.The authors should release the source code to enhance the reproducibility of the study.
Other Comments Or Suggestions: 1.The notation II in Equation (10) should be explicitly defined for clarity.
2.On page 7, line 364, there is a typographical error in the quotation marks for "w/o D&R&Con."
Questions For Authors: Please see Weaknesses and suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Explanation for projector and classifier:** Thanks. We perform feature mapping and transformation in the latent space using a projector, and the sample predictions are obtained through a classifier. The projector and classifier are shared across different views. We will include the corresponding description in the final version.
**Limitations & Future directions of AIRMVC:** Thanks. In AIRMVC, we made an initial attempt to identify and rectify noise in an unsupervised setting and designed a robust contrastive learning method to further enhance the robustness of the model. However, the correction process relies heavily on the accuracy of the predicted distribution, which is the primary limitation of AIRMVC. In the future, improving the accuracy of the predicted distribution and exploring other reliable supervisory signals will be promising research directions.
**Large-scale motivation experiments:** Thanks. Following your suggestions, we conducted experiments on the Caltech101 and STL10 datasets with a 10% noise ratio. The experimental results are presented in Tab.1 and Tab.2. From these results, we observe the same conclusions as those described in the submitted version, presented as follows:
1) Simply merging noisy multi-view data results in the most degraded clustering performance. This is primarily due to the absence of a noise rectification mechanism, which causes the negative effects of noisy views to compound each other. Moreover, the fusion process intensifies the influence of noise, leading to a scenario where multi-view clustering performs even worse than using a single view alone.
2) In comparison to directly correcting noise based on the first view, our proposed AIRMVC demonstrates superior performance. This advantage arises because direct correction from a single view tends to enforce uniformity across views, potentially suppressing essential complementary information. In contrast, our noise detection and rectification strategy effectively removes noisy samples from each view while preserving beneficial cross-view diversity, thereby enhancing the overall clustering performance.
Tab.1 Motivation experiments on STL10 dataset.
| Metric | Ours | Directly Rectify | Single View | Noisy data |
|:------:|:------:|:----------------:|:-----------:|:----------:|
| ACC | 28.81 | 26.41 | 22.22 | 15.05 |
| NMI | 25.04 | 24.25 | 19.04 | 10.78 |
| PUR | 29.01 | 26.80 | 23.18 | 13.78 |
Tab.2 Motivation experiments on Caltech101 dataset.
| Metric | Ours | Directly Rectify | Single View | Noisy data |
|:------:|:------:|:----------------:|:-----------:|:----------:|
| ACC | 21.45 | 18.62 | 15.26 | 11.25 |
| NMI | 37.16 | 30.82 | 22.29 | 18.61 |
| PUR | 34.69 | 29.52 | 20.18 | 16.28 |
**Code:** Thanks. Following your suggestion, we will release the code in the final version.
**Notation & Typos:** Thanks. We will add the notations of Eq.10 and correct the typos in page 7. Furthermore, we will review the entire paper to enhance the overall presentation.
---
Rebuttal Comment 1.1:
Comment: Thank for rebuttal from the authors. My concerns and confusions are well-addressed and thus I would like to increase my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thanks for your increasing the score. We greatly appreciate the time and effort you have dedicated to reviewing our work!
Sincerely,
The Authors | Summary: This paper addresses the challenge of noisy data in multi-view clustering by proposing a method called AIRMVC. Specifically, AIRMVC first formulates noise identification and employs a Gaussian Mixture Model (GMM) to achieve this. It then introduces a hybrid rectification strategy with an interpolation mechanism to mitigate the adverse effects of noisy data. The paper validates the effectiveness of AIRMVC on six multi-view clustering datasets.
Claims And Evidence: Not all claims are fully supported. For instance, the paper asserts that no prior work has developed dedicated frameworks for identifying and rectifying noisy data. However, MVCAN (Xu et al., 2024) appears to be an earlier attempt at addressing this issue but is not appropriately acknowledged.
Methods And Evaluation Criteria: While the method is designed to address the noisy view problem, the chosen datasets and experimental settings are not sufficiently justified.
Theoretical Claims: No
Experimental Designs Or Analyses: Not all experiments are well-designed. Firstly, the datasets used are relatively small (fewer than 13,000 samples), which limits the evaluation of the method’s scalability and generalizability. Secondly, the evaluations are conducted with hand-crafted noise rather than real-world noise, potentially affecting the practical applicability of the method.
Supplementary Material: Yes, I have reviewed the supplementary material, including notations, related works, and additional performance comparisons
Relation To Broader Scientific Literature: The paper designs a method for handling the noisy view issue in the multi-view clustering task.
Essential References Not Discussed: Although related works are cited, the paper lacks a sufficient discussion on MVCAN (Xu et al., 2024), which may lead to an overstatement of its contributions.
Other Strengths And Weaknesses: Strengths:
The proposed method achieves state-of-the-art performance on the six selected datasets.
Weaknesses:
The novelty of the paper is questionable, as it overclaims its contribution to handling noisy views. MVCAN (Xu et al., 2024) may already have laid the groundwork for this problem.
The experimental evaluation is insufficient, relying on small-scale datasets and artificially introduced noise instead of real-world noisy data.
I would consider improving my rating if the authors could give more clarifications or experiment results to address my concerns.
Other Comments Or Suggestions: I look forward to seeing the method evaluated on large-scale datasets with real-world noise for a more comprehensive assessment of its effectiveness.
Questions For Authors: Please see the weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Additional experiments:** Thanks. Following your suggestions, with NVIDIA A6000 GPU we conduct experiments on CIFAR10 dataset, which contains 60,000 samples, 4 views and 10 classes. Besides, YouTube is a comprehensive video platform. We extract facial images from videos as a real-world data source. These multi-view facial images may include low-quality samples, which are treated as noise in our analysis. The Youtube dataset comprises 38,654 samples, 4 views and 10 classes. Detailed statistic information of the datasets is demonstrated in Tab.1. From the results shown in Tab.2 and Tab.3, we conclude that AIRMVC could achieve reliable performance on both large-scale dataset and real-world dataset, demonstrating its generalization capability.
Tab.1 Statistic information of the datasets.
| Datasets | Class | Sample | View |
|:--------:|:-----:|:------:|:----:|
| Youtube | 10 | 38,654 | 4 |
| CIFAR10 | 10 | 60,000 | 4 |
Tab.2 Experiment on YouTube dataset. OOM denotes out-of-memory during training process.
| Metric | CANDY | RMCNC | TGM-MVC | SCE-MVC | MVCAN | DIVIDE | Ours |
|:------:|:------:|:------:|:-------:|:-------:|:-----:|:------:|:------:|
| ACC | 62.86 | 53.05 | 58.26 | 60.54 | OOM | 60.16 | 66.23 |
| NMI | 70.06 | 65.27 | 55.91 | 64.22 | OOM | 65.38 | 70.94 |
| PUR | 70.20 | 63.81 | 60.12 | 65.54 | OOM | 63.01 | 75.10 |
Tab.3 Experiment on CIFAR dataset.
| Noisy Rate | 0.1 | 0.1 | 0.1 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5 | 0.5 | 0.7 | 0.7 | 0.7 |
|:------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Metric | ACC | NMI | PUR | ACC | NMI | PUR | ACC | NMI | PUR | ACC | NMI | PUR |
| CANDY | 20.16 | 12.35 | 21.21 | 18.25 | 11.82 | 18.01 | 16.04 | 9.54 | 16.05 | 14.17 | 9.06 | 14.64 |
| RMCNC | 19.25 | 11.52 | 20.58 | 18.26 | 10.64 | 19.25 | 16.45 | 8.68 | 16.25 | 15.05 | 8.14 | 14.99 |
| TGM-MVC | 17.82 | 10.02 | 18.99 | 15.29 | 7.91 | 14.25 | 13.42 | 6.04 | 13.57 | 11.05 | 5.71 | 12.52 |
| SCE-MVC | 18.25 | 10.55 | 19.54 | 18.02 | 10.00 | 18.57 | 15.15 | 8.02 | 16.05 | 14.23 | 7.64 | 13.16 |
| MVCAN | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| DIVIDE | 20.57 | 11.26 | 21.27 | 18.69 | 10.06 | 19.95 | 16.84 | 8.32 | 17.62 | 14.05 | 6.05 | 14.85 |
| Ours | 22.62 | 13.71 | 23.34 | 21.67 | 13.24 | 22.52 | 20.08 | 12.49 | 20.83 | 17.68 | 10.25 | 18.26 |
**Discussion with MVCAN:** Thanks. We discuss AIRMVC with MVCAN from three key perspectives:
1) Optimization Strategy: MVCAN adopts a two-level iterative optimization framework, consisting of T-level and R-level optimization to refine the network. In contrast, AIRMVC focuses on noise detection and correction using a Gaussian Mixture Model (GMM) and directly optimizes the network.
2) Soft Label Acquisition: MVCAN employs a parameter-decoupled model to obtain view-specific representations and soft labels, mitigating the influence of noisy views. AIRMVC leverages a GMM trained with a shared projector and classifier to generate soft labels.
3) Module Design: MVCAN incorporates unshared parameters, distinct clustering optimization functions, and a two-level iterative optimization approach. In comparison, AIRMVC introduces a dedicated noise detection and correction mechanism, along with a noise-robust contrastive learning framework to enhance model robustness.
**Novelty of AIRMVC:** Thanks. The novelty of AIRMVC mainly contains the following perspectives.
1) Leveraging GMM, we reformulate the noise identification as an anomaly identification problem and propose a hybrid rectification strategy to automatically correct the noisy data.
2) We design a noise-robust contrastive mechanism to generate more reliable representations. Theoretically, we have demonstrated that the features generated by this mechanism are more beneficial for downstream tasks.
3) Extensive experiments on different benchmark datasets to verify the effectiveness and robustness of AIRMVC.
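To illustrate point 1), the following is a minimal numpy-only sketch of the general idea of fitting a two-component GMM to a per-sample anomaly score (e.g., a reconstruction loss) and treating the higher-mean component as noisy. The EM details and the choice of score here are assumptions for exposition, not AIRMVC's exact procedure.

```python
import numpy as np

def fit_two_component_gmm(scores, iters=100):
    """Fit a two-component 1-D Gaussian mixture via EM and return each
    sample's posterior probability of belonging to the higher-mean
    (i.e., anomalous/noisy) component."""
    x = np.asarray(scores, dtype=float)
    mu = np.array([x.min(), x.max()])      # spread-out initialization
    var = np.full(2, x.var() + 1e-8)
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current Gaussian parameters
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
    noisy = int(np.argmax(mu))
    return r[:, noisy]
```

Samples with a high posterior under the anomalous component would then be routed to a rectification step rather than used as-is.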
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. I appreciate the additional experiments on large-scale and real-world noisy datasets, which address my second major concern. Furthermore, the detailed discussion on MVCAN highlights the novelty of the proposed method. I would like to raise my rating and maintain a positive stance on the paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for increasing the score and acknowledging our approach. We sincerely appreciate the time and effort you dedicated to reviewing our work. Based on your suggestions, we will make further improvements in the final version.
Best,
The Authors | Summary: The paper considers the problem of multi-view clustering in the presence of noise. In particular, a new approach is proposed that aims to detect noisy samples, characterized as outliers, and to rectify them based on the assumption that the first view is noise-free. In addition, the construction of the pairs in the contrastive loss is improved by taking into consideration the soft clustering labels.
## Update after rebuttal
After the additional clarifications provided by the authors, I have decided to increase my rating. While it is still based on relatively strong assumptions, which should be thoroughly discussed in a limitation section if the paper were accepted, it could serve as an initial exploration of this setting.
Claims And Evidence: Yes
Methods And Evaluation Criteria: While the setup is mostly reasonable, the type of noise added to the samples is not described. Based on the introduction and problem definition, the noise considered in this work is due to the image being corrupted, while previous referenced work mostly considers noise in the sense of alignment (are the two views representing the same object). However, how this noise has been added and how the X% of data in the experiments has been corrupted is unclear.
Related to this point, the proposed approach builds on the strong assumption of having one clean view for the rectification step. While the authors state that this is done following previous work with references provided, these prior works, to the reviewer's knowledge, make a different assumption, where some data is known to be aligned and some misaligned, and they do not assume that there is one completely uncorrupted view. What is the effect if this assumption is not valid?
In summary, additional clarifications on this setup are needed to ensure that the evaluation is fair.
Theoretical Claims: The paper includes a theoretical interpretation of the noise-robust loss in Theorem 4.1, which appears to be correct.
Experimental Designs Or Analyses: As mentioned above, the experimental design is somewhat unclear when it comes to the addition of noise and clarifications are required. Beyond this, the experimental design appears sound. However, it would be beneficial to also report the clean performance for reference, despite the focus being mostly on the noisy setting.
Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: Within the extended multi-view clustering literature, there has been some work on designing more robust approaches. However, these have mostly been focusing on the design of approaches that are able to handle partial view alignment or incomplete views. While there are certain approaches that aim to address robustness to noise (such as Xu et al, CVPR 2024), it is a less explored domain and the paper contributes a new approach toward it.
Essential References Not Discussed: Overall, the reviewer believes that prior work is cited adequately, but believes that the paper would benefit from a clearer discussion of what noise problems the different baselines address, as the baselines are designed for different types of noise.
Other Strengths And Weaknesses: Overall, the paper addresses the interesting problem of corrupted views in deep multi-view clustering; it is mostly well-written and presents a set of relevant ablation studies to highlight the necessity of the different components. However, the presentation of the problem, as well as how it relates to previous works on robustness in the multi-view space (which generally focus on another type of noise), should be improved. In addition, it is based on a key assumption (the first view is noise-free) and the effect of this assumption should be discussed, as it appears to differ from the assumptions in prior works and thus benefits the proposed approach.
Other Comments Or Suggestions: The presentation of the problem formulation seems to have been moved out of Sec. 3, while it still is mentioned in Line 139 (can be removed).
In line 304, "sub-optimal" should maybe be "runner-up" or "second best".
For Table 4, state explicitly that this is the 10% noise scenario.
Questions For Authors: Please elaborate the experimental setup and comment on the effect of the assumption on the first view.
The work further leverages another assumption, which is the presence of balanced clusters in 3.1. Does this overly bias the model to clustering settings where there are balanced classes?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Explanation for adding noise:** Thanks. Unlike noise stemming from misalignment, we simulate noisy scenarios by injecting standard Gaussian noise into the original views, excluding the first view. Specifically, we generate random Gaussian noise with the same shape as the view and inject it into the original views at a ratio of x%. The parameter x% scales the generated Gaussian noise, thereby simulating different levels of noise contamination.
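A minimal sketch of the injection procedure as described, under one plausible reading in which x% scales the noise magnitude and the first view is kept clean (function name and signature are illustrative, not the authors' code):

```python
import numpy as np

def inject_gaussian_noise(views, ratio, rng=None):
    """Inject scaled standard Gaussian noise into all views except the first.

    views: list of (N, D_v) arrays, one per view
    ratio: scale applied to the noise (e.g. 0.1 for the 10% setting)
    The first view is left untouched, matching the paper's assumption
    of an ideal reference view.
    """
    rng = np.random.default_rng(rng)
    noisy = [views[0].copy()]
    for x in views[1:]:
        noisy.append(x + ratio * rng.standard_normal(x.shape))
    return noisy
```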
**Experiments on clean data:** Thanks. Following your suggestion, we conduct experiments on clean data with six datasets. The results are shown in Tab.1. Due to character limitations, more results can be found in Tab.1 at https://anonymous.4open.science/r/Res-11B2. From the results, we find that AIRMVC achieves promising performance in both clean and noisy scenarios.
Tab.1 Clean data performance
|Datasets|UCI-digit|-|-|WebKB|-|-|STL10|-|-|
|-|-|-|-|-|-|-|-|-|-|
|Metric|ACC|NMI|PUR|ACC|NMI|PUR|ACC|NMI|PUR|
|CANDY|85.45|77.99|85.45|35.15|10.55|34.74|28.15|22.68|28.15|
|RMCNC|40.51|23.16|35.68|79.05|21.84|79.99|23.05|15.28|24.64|
|TGM-MVC|64.35|65.76|69.40|79.64|16.61|79.64|28.18|20.86|28.51|
|SCE-MVC|84.55|76.48|86.15|78.65|19.04|77.54|28.64|24.59|29.05|
|DIVIDE|89.25|81.52|89.45|69.61|20.20|78.12|29.68|23.63|28.95|
|Ours|94.55|90.10|94.55|80.65|21.54|80.65|30.26|24.88|30.95|
**Explanation for assumption:** Thanks. We provide an explanation from three perspectives.
1) Unlike noisy data alignment, AIRMVC extends the definition of noise by considering the presence of noise within views. Since we propose a method for detecting and correcting noise, an "ideal" view is required as a reference standard. In the unsupervised multi-view clustering scenario, there is no available label information. Therefore, we assume that the first view serves as an ideal view, acting as pseudo-supervision to correct noise in the other views.
2) We illustrate this assumption with a real-world multi-view scenario, i.e., the ideal view supplements and corrects the other views. For example, consider a case where the first view consists of high-resolution images, while the second view consists of low-resolution images. In the super-resolution field, it is common to use high-resolution images (the ideal view) to supplement information and guide the learning of low-resolution images (the other view), e.g., 2019-ICCV-Guided Super-Resolution as Pixel-to-Pixel Transformation and 2021-CVPR-Robust Reference-based Super-Resolution via C2-Matching. Similarly, we select one view as the reference ("ideal") view to supplement and correct the other views.
3) Previous works have regarded data partially align as noisy. During the model's testing phase, they use an alignment strategy to align the $v-1$ views to the first view, thereby fusing multi-view feature for clustering, e.g., CANDY (line 53 of https://github.com/XLearning-SCU/2024-NeurIPS-CANDY/blob/main/model.py) and RMCNC (line 236 of https://github.com/sunyuan-cs/2024-TKDE-RMCNC/blob/main/RMCNC_main/sure_inference.py). This alignment operation implies that these papers consider the first view as an ideal view. Therefore, although the scenario settings may differ, to maintain generality, we also treat the first view as an ideal view.
**Explanation for baselines:** Thanks. Previous studies consider partially aligned data as noisy. In our work, we extend the definition of noise and explore a more common noisy scenario, where noise exists within individual views. To date, MVCAN is the only work that explores the issue of noisy views, leaving no other methods available for direct comparison. MVCAN incorporated comparisons with numerous contrastive learning-based methods. Following this setup of MVCAN, we evaluated the performance of various algorithms under our proposed noisy setting. To further validate the effectiveness of our approach, we included the latest multi-view clustering methods from 2024 in Tables 1 and 2 of our submitted version. Moreover, we selected a substantial number of contrastive learning-based methods because contrastive learning can enhance both the model's robustness and discriminative capability. Therefore, in the absence of directly comparable methods, we select contrastive learning-based methods to demonstrate the effectiveness of our method.
**Explanation for balanced clusters:** Thanks. Cluster balance is a widely adopted default assumption in clustering problems, and we follow this common assumption as well. Additionally, to further verify the cluster balance of samples, we conduct statistical analyses on the datasets used in AIRMVC. The results indicate that the sample classes in the utilized datasets are nearly balanced. Due to space limitations, detailed results can be found in Tab.2 in https://anonymous.4open.science/r/Res-11B2.
**Typos & Presentation:** Thanks. Following your suggestion, we will correct the typos and further improve the presentation.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for these clarifications and for providing the additional results. Could the authors clarify why the performance reported for the benchmark methods on the clean data seems to be significantly lower than that reported in the original publications (i.e., Candy, Divide, and SCE-MVC)? Additionally, what is the intuition behind AIRMVC performing better than the baselines when no noise is present?
While I certainly agree that it will be useful to follow the assumption of having one “ideal” view as a reference, simplifying the task. This assumption is more of a limitation in this case compared to the setup in prior work as you generally are aware if you have data or not, while the presence of noise in the data is more subtle. In addition, not having the view removes all the information in the view, while adding noise only degrades it. While I do not necessarily think that this is a major problem, I believe it would be a limitation worth discussing, potentially pointing to future work.
---
Reply to Comment 1.1.1:
Comment: **Explanation for experimental results:** Thanks for your comment. From the publicly available code of CANDY (line 11 of https://github.com/XLearning-SCU/2024-NeurIPS-CANDY/blob/main/dataset_loader.py) and DIVIDE (line 11 of https://github.com/XLearning-SCU/2024-AAAI-DIVIDE/blob/main/dataset_loader.py), it is evident that the datasets they used contain only two data views. In contrast, the datasets we employed, i.e., Caltech101 and Reuters, consist of five views. Therefore, the datasets used in our experiments are not the same. We directly report the performance obtained by reproducing their original code with our multi-view datasets, which accounts for the observed differences.
Regarding SCE-MVC (https://openreview.net/pdf?id=xoc4QOvbDs), we used different clustering metrics, i.e., ACC, NMI, and PUR for AIRMVC, whereas SCE-MVC employs ACC, NMI, and ARI. Since the authors of SCE-MVC have not released their code, we reproduced their results based on the descriptions in their paper, which introduced some discrepancies. The experimental results demonstrate that AIRMVC achieves promising performance in the clean setting, rather than necessarily achieving SOTA performance, which aligns with our previous response.
**Explanation for clean performance:** Thanks for your comment. From our reported results, AIRMVC demonstrates only promising performance in clean scenarios. Moreover, it does not achieve SOTA performance on some datasets. We further analyze the reasons behind its guaranteed performance. Compared with other modules, we design a contrastive learning mechanism to enhance the model's discriminative ability. Specifically, we employ a high-confidence threshold to improve the quality of positive and negative sample pairs in contrastive learning. Furthermore, we provide a concise theoretical analysis to justify the design of our contrastive learning mechanism.
**Core idea of AIRMVC:** The core idea of AIRMVC is to explore the noisy problem in unsupervised multi-view scenarios. The experimental results in the submitted version demonstrate the effectiveness of AIRMVC in noisy scenarios. Although AIRMVC may not achieve SOTA performance across all datasets in the clean scenario, its promising performance could demonstrate its generalizability.
**Future work:** Thanks for your comment. Noisy views are a prevalent challenge in real-world multi-view scenarios. However, existing research in MVC has largely overlooked this issue, and there remains a lack of standardized methodologies for simulating noisy datasets. In AIRMVC, we provide an **initial exploration** of the noisy view problem in an unsupervised setting. We are delighted that our method of using an "ideal view" as a reference has received your recognition. Identifying a suitable reference view in an unsupervised scenario and designing more realistic noisy view simulation strategies are promising directions for future research. We fully agree that this is a worthwhile topic of discussion, and following your insightful suggestions, we will continue to explore this problem in greater depth.
According to this year's ICML policy, we are not permitted to engage in multiple rounds of discussion. Please trust that we have carefully considered and made every effort to address the concerns you raised. We kindly hope our response addresses your concerns. We greatly appreciate the time and effort you have dedicated to reviewing our work!
Claims And Evidence: In the submitted version of the paper, the motivation for handling noise is clearly defined and illustrated in Figure 1. Additionally, the authors conduct experiments to verify that the presence of noise adversely affects multi-view clustering performance. The submitted version effectively clarifies the research problem.
Methods And Evaluation Criteria: In this paper, the authors conduct comprehensive experiments on six widely used benchmark datasets. The experimental results demonstrate that the proposed method effectively mitigates the impact of noise on clustering performance.
Theoretical Claims: In Appendix A.2, the authors provide a mathematical proof, which theoretically supports the proposed method and enhances the credibility of the study.
Experimental Designs Or Analyses: In this paper, the authors conducted extensive experiments, including comparative analyses under different noise ratios, comprehensive ablation studies, and sensitivity analysis experiments. Additionally, the methods compared in Tables 1 and 2 are all from 2024, ensuring a fair and up-to-date evaluation.
Supplementary Material: The supplementary materials include related work, experimental results, hyperparameter tables, and more. The comprehensive supplementary materials provide the support for the findings presented in the paper.
Relation To Broader Scientific Literature: Compared to previous studies, this paper proposes a more effective approach to handling noisy data. The experimental results further validate this conclusion.
Essential References Not Discussed: The comparison algorithms in the paper are primarily from 2024, incorporating the latest research methods.
Other Strengths And Weaknesses: S:
I. The paper investigates novel methods to mitigate the impact of noise on models, which is a practical area of research.
II. From the submitted version, it is evident that the authors provide theoretical analysis and conduct extensive experiments.
III. The proposed method is clearly described, making it easy to follow.
W:
I. The authors have conducted detailed experimental validation; however, there is a lack of validation regarding the time and space consumption of the proposed method. I recommend that the authors add experiments to address this.
II. Figure 2 presents the overall framework of the paper. In the upper part of view2, does the dark blue color represent noise? I suggest adding definitions and descriptions of the different colored data in the legend.
III. The authors divided the experimental section into four parts, with the fourth part being the sensitivity analysis of the parameters. This part is placed in Appendix A.3.3, but the appendix is labeled as RQ3 instead of RQ4, which needs to be corrected.
Other Comments Or Suggestions: I. The paper contains a large number of formulas, and the vast majority of definitions and explanations are in accordance with the standards. However, in Equation 8 on page 4, the formula is too large and extends beyond the page. It needs to be adjusted.
II. It is recommended to add more experimental details for the visualization experiments in Section 5.4.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Experiments of time and space cost:** Thanks. Following your suggestion, we conducted time and space complexity experiments on the six datasets used, with a 10% noise ratio. Specifically, we measure the training time per epoch for all baselines, using seconds as the evaluation metric. The space cost experiments are conducted on an NVIDIA A6000 GPU, measured in gigabytes (GB). The results are presented in Tab.1 and Tab.2. From these results, we observe that the time and space costs of AIRMVC remain within an acceptable range. In summary, AIRMVC demonstrates promising clustering performance while maintaining a reasonable computational cost.
Tab.1 Time cost for AIRMVC.
| Methods | | BBCSports | WebKB | Reuters | UCI-Digit | Caltech101 | STL10 | Avg. |
|:------------:|:------------:|:---------:|:-------:|:-------:|:---------:|:----------:|:-------:|:------:|
| CANDY | NeurIPS 2024 | 0.0657 | 0.1206 | 0.2861 | 0.3325 | 3.2500 | 3.6200 | 1.2792 |
| RMCNC | TKDE 2024 | 0.1536 | 0.5148 | 0.4785 | 0.8962 | 3.9800 | 5.6800 | 1.9505 |
| TGM-MVC | ACM MM 2024 | 0.1206 | 0.2546 | 0.5931 | 0.6752 | 4.4200 | 6.2805 | 2.0573 |
| SCE-MVC | NeurIPS 2024 | 0.1521 | 0.2675 | 0.6428 | 0.6028 | 4.0255 | 6.0865 | 1.9629 |
| MVCAN | CVPR 2024 | 0.1525 | 0.2756 | 0.4429 | 0.7636 | 4.3580 | 6.6210 | 2.1023 |
| DIVIDE | AAAI 2024 | 0.0795 | 0.1568 | 0.3524 | 0.3326 | 3.1350 | 3.6248 | 1.2802 |
| AIRMVC | Ours | 0.0825 | 0.1486 | 0.3058 | 0.3390 | 3.0800 | 3.5200 | 1.2460 |
Tab.2 Space cost for AIRMVC.
| Methods | | BBCSports | WebKB | Reuters | UCI-Digit | Caltech101 | STL10 | Avg. |
|:------------:|:------------:|:---------:|:-----:|:-------:|:---------:|:----------:|:-----:|:----:|
| CANDY | NeurIPS 2024 | 1.91 | 1.79 | 2.11 | 2.21 | 2.36 | 3.03 | 2.24 |
| RMCNC | TKDE 2024 | 1.88 | 2.55 | 1.96 | 2.16 | 2.46 | 2.99 | 2.33 |
| TGM-MVC | ACM MM 2024 | 1.47 | 1.64 | 2.06 | 2.20 | 2.54 | 2.35 | 2.04 |
| SCE-MVC | NeurIPS 2024 | 1.57 | 1.72 | 2.30 | 2.26 | 2.70 | 2.45 | 2.17 |
| MVCAN | CVPR 2024 | 1.56 | 1.57 | 1.66 | 1.28 | 1.44 | 1.47 | 1.50 |
| DIVIDE | AAAI 2024 | 2.02 | 1.78 | 2.05 | 2.19 | 2.36 | 2.97 | 2.23 |
| AIRMVC | Ours | 1.60 | 1.63 | 1.71 | 1.34 | 1.55 | 1.48 | 1.55 |
**Explanation for symbol in Fig.2:** Thanks. In Fig. 2(a), (b), and (c), the dark blue color represents noisy data. Following your suggestion, we will provide a more detailed explanation in the final version.
**Typos & Format:** Thanks. Following your suggestion, we will revise RQ3 and Eq.8 in the final version and review similar issues to enhance the overall presentation.
**Details for visualization experiments:** Thanks. We visualized the latent space features extracted by the encoder from the UCI-Digit dataset using the t-SNE algorithm. The visualization was performed every 20 epochs for the first 200 epochs. The experiments were conducted on an NVIDIA A6000 platform. We will provide additional descriptions in the future. | null | null | null | null | null | null |
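For concreteness, a minimal sketch of this kind of t-SNE visualization step (our own illustration with hypothetical feature shapes, not the authors' code): embed the encoder's latent features into 2-D for plotting, repeating every 20 epochs.

```python
# Illustrative sketch (not the authors' code): t-SNE embedding of encoder
# features. Shapes and parameters are hypothetical stand-ins.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))   # stand-in for encoder latent features

# perplexity must be smaller than the number of samples
emb = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(features)
print(emb.shape)                        # 2-D coordinates, one row per sample
```

In the described experiments this embedding would be recomputed on the UCI-Digit features every 20 epochs and scatter-plotted, colored by cluster assignment.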
A Scalable Solver for 2p0s Differential Games with One-Sided Payoff Information and Continuous Actions, States, and Time | Reject | Summary: This paper investigates how to solve 2-player zero-sum EFGs with continuous state and action sets, when one of the players has perfect information. They also justify the result with experiments.
Claims And Evidence: It is not clear to me since the proof sketch of the main result Theorem 4.1 is very short and confusing. Moreover, I did not find the definition of $I+1$-atomic in the paper.
Methods And Evaluation Criteria: The Hexner's games seem to be an appropriate benchmark game for this paper.
Theoretical Claims: I think the proof sketch in the main paper is too short to convince me of its correctness.
Experimental Designs Or Analyses: I checked section 6 and the results are reasonable.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper extends the previous results on discrete two-player zero-sum EFGs to EFGs with continuous state and action sets. However, the algorithm is restricted to EFGs with one of the players having perfect information.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: It would be helpful to evaluate the algorithm in more benchmark games.
Other Comments Or Suggestions: Aligning the notations in the paper with EFG literature, such as MMD [2], would be helpful.
Questions For Authors: In assumption A.5, it says both players need to know the Nash equilibrium. I'm wondering if this is a typo since the paper's motivation is to solve the Nash equilibrium to my understanding.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **[Q3.1]** It is not clear to me since the proof sketch of the main result Theorem 4.1 is very short and confusing. Moreover, I did not find the definition of I+1-atomic in the paper.
**[A3.1]** Here is an extended explanation of Thm. 4.1 which we prove in detail in App. B (and more explanation in [**[A1.2]** for R1](https://openreview.net/forum?id=iDnwpbn20h&noteId=ipvrpvuKt6)). We first explain in terms of the primal game from where P1 finds his strategy. The key idea is that once we reformulate the primal game (let's call it $G_0$) so that P2 plays best responses to P1’s actions to be taken, we can show that the value of this reformulated game ($G_1$) at any belief state $p$ can be written as the convex envelope of an alternative value for a game where both players only play pure strategies ($G_2$). $G_2$ (called the non-revealing game in the paper) is complete-information since its minimax objective is defined as the expected payoff over types, and its value existence is guaranteed by Isaacs' condition. Since a convex envelope on an $I$-dimensional simplex is defined by at most $I$ vertices, P1's strategy in $G_1$ is I-atomic, i.e., the probability distribution of the mixed strategy is concentrated at $I$ actions in $\mathcal{U}$. This idea is explained in “A visual example” on page 4 and visualized in Fig. 2. The same applies to the dual game for P2 with a small difference: Since the dual game uses a dual state $\hat{p} \in \mathbb{R}^I$ instead of the belief state $p \in \Delta(I)$, P2’s strategy becomes $I+1$-atomic. We are **more than willing** to address further questions from the reviewer regarding Thm 4.1.
**[Q3.2]** The paper extends the previous results on discrete two-player zero-sum EFGs to EFGs with continuous state and action sets. However, the algorithm is restricted to EFGs with one of the players having perfect information.
**[A3.2]** This is correct. We chose this scope because (1) there is a strong demand for computing strategies for 2p0s differential games in real-world applications (e.g., missile defense) that is not being met by SOTA EFG solvers due to their lack of scalability for continuous action spaces [1-4]; and (2) there is currently no algorithm for computing Nash of 2p0s differential games with incomplete information on **both sides** [5; Cardaliaguet, personal communication, Feb 13, 2025]. Even within the scope of one-sided incomplete-information games, there are few studies on explaining the atomic structure of their Nash or the scalability of computing Nash for this type of games, which are the gaps we attempt to fill with this paper. From a practical perspective, we argue that modeling the information as one-sided and deriving a conservative strategy is necessary for the defender (the uninformed player P2) in any risk-sensitive settings, e.g., missile defense.
**[Q3.3]** Aligning the notations in the paper with EFG literature such as MMD would be helpful.
**[A3.3]** We used a different notation to facilitate the discussion about challenges with continuous actions/time and their solutions. That said, there is indeed a connection between behavioral-form MMD and DS-GDA which we used in this paper to solve the reformulated games following Thm. 4.1: While MMD constrains its policy updates with respect to the previous iterate and the magnet, DS-GDA constrains with respect to the moving average. Theoretically, MMD has convergence guarantees only when the objective function is convex-concave (i.e., for sequence-form game formulation) and when the Nash is interior [6]. DS-GDA, on the other hand, has guaranteed convergence even when the objective is nonconvex-nonconcave (i.e., behavioral-form formulation) and when Nash is on the boundary [7].
**[Q3.4]** In A.5, it says both players know the Nash equilibrium. Is this a typo since the paper's motivation is to solve the Nash equilibrium to my understanding?
**[A3.4]** Assuming that both players know the Nash equilibrium does not indicate that we (who study the game) know the Nash equilibrium. In fact, this assumption, i.e., both players know and thus play their Nash strategy, is standard in game theory settings.
[1] Martin, C., T. Sandholm. "Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks." arXiv:2211.15936
[2] Martin, C., T. Sandholm. "Joint-perturbation simultaneous pseudo-gradient." arXiv:2408.09306
[3] Brown, N., et al. "Deep counterfactual regret minimization." ICML 2019
[4] Tammelin, O.. "Solving large imperfect information games using CFR+." arXiv:1407.5042
[5] Cardaliaguet, P. "Differential games with asymmetric information." SIAM journal on Control and Optimization 46.3 (2007): 816-838
[6] Sokota, S., et al. "A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games." ICLR 2023
[7] Zheng, T., et al. "Universal gradient descent ascent method for nonconvex-nonconcave minimax optimization." NeurIPS 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! Could you elaborate on how you used assumption A5 in the paper and provide some references that also assume it?
---
Reply to Comment 1.1.1:
Comment: Certainly, let us reiterate the assumption.
**A5**: Both players have full knowledge about (1) $f$ (dynamics), (2) $l, g$ (payoffs), (3) $p_0$ (prior public belief), (4) and the Nash equilibrium. (5) Control inputs, and (6) states are fully observable and we assume (7) perfect recall.
**How we used A5**
Together with A1-4, A5 (except (4)) forms the sufficient conditions for the presented game to have a value, and therefore a Nash equilibrium (NE) [1]. In general, (4) is required for players in IIEFGs to update their beliefs about each other (e.g., to update P1’s belief about P2’s private cards in a Poker game, P1 needs to know P2’s NE). We do acknowledge that this assumption is stronger than necessary in games with one-sided information: while the uninformed player (P2) needs to know the NE of the informed player (P1), P1 does not need to know P2’s NE since P2 has no private info (this is first discussed in [2]).
**References that assumed A5**
- **From the side of EFGs**: An important set of IIEFG studies assume A5 (e.g., [3-6]). Taking Poker as an example, game dynamics and payoffs are pre-defined; the true payoff type is defined by the private cards and therefore $p_0$ can be pre-calculated; all player actions and states (chips and public cards) are observable; and all previous actions of the game are assumed to be memorized by players. We do note that there is a broader set of IIEFGs and POMGs that assume partial observability, in which case some dynamically changing system states, rather than payoff types, are private, and thus (6) is not assumed [4, 7]. Our paper does not consider partial observability.
- **From the side of differential games**: Almost all optimal control and differential game papers assume A5 (see description of Thm 1.3 of [9], page 11 line 4 of [11], and [10]), although the majority of them (refs in [12]-[14] for example) assume complete information, in which case (3) does not apply.
[1]Cardaliaguet, Pierre. "Differential games with asymmetric information." SIAM journal on Control and Optimization 46.3 (2007): 816-838.
[2]De Meyer, Bernard. "Repeated games, duality and the central limit theorem." Mathematics of Operations Research 21, no. 1 (1996): 237-251.
[3] Zinkevich, Martin, et al. "Regret minimization in games with incomplete information." Advances in neural information processing systems 20 (2007).
[4] Schmid, Martin, et al. "Student of Games: A unified learning algorithm for both perfect and imperfect information games." arXiv preprint arXiv:2112.03178 (2021).
[5] Perolat, Julien, et al. "From poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization." International Conference on Machine Learning. PMLR, 2021.
[6] Hennes, Daniel, et al. "Neural replicator dynamics: Multiagent learning via hedging policy gradients." Proceedings of the 19th international conference on autonomous agents and multiagent systems. 2020.
[7] Brown, Noam, et al. "Combining deep reinforcement learning and search for imperfect-information games." Advances in neural information processing systems 33 (2020): 17057-17069.
[8] Berkovitz, Leonard D. "Two person zero sum differential games: An overview." The Theory and Application of Differential Games: Proceedings of the NATO Advanced Study Institute held at the University of Warwick, Coventry, England, 27 August–6 September, 1974. Dordrecht: Springer Netherlands, 1975.
[9] Cardaliaguet, Pierre. "Information issues in differential game theory." ESAIM: Proceedings. Vol. 35. EDP Sciences, 2012.
[10] MICHAEL, D. DIFFERENTIAL GAMES: A CRITICAL VIEW.
[11] Asher, Robert Bernard. DIFFERENTIAL GAMES WITH SYSTEM UNCERTAINTY AND IMPERFECT INFORMATION. Oklahoma State University, 1974.
[12] Bansal, Somil, Mo Chen, Sylvia Herbert, and Claire J. Tomlin. "Hamilton-jacobi reachability: A brief overview and recent advances." In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 2242-2253. IEEE, 2017.
[13] Bansal, Somil, and Claire J. Tomlin. "Deepreach: A deep learning approach to high-dimensional reachability." In 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 1817-1824. IEEE, 2021.
[14] Gammoudi, Nidhal, and Hasnaa Zidani. "A differential game control problem with state constraints." Mathematical Control and Related Fields (2022). | Summary: This work tackles an imperfect-information extensive-form games (IIEFGs) often struggle with continuous action and state spaces and continuous time. This paper addresses the scalability challenges for 2p0s game with one-sided information. By showing an atomicity property and using the same the authors show that the complexity of computing the Nash equilibrium for these games is not related to the
action space size; and they show a multi-grid approach of solving faster.
Claims And Evidence: The approach is convincing, but could be written more clearly. See other Strengths And Weaknesses.
Methods And Evaluation Criteria: Yes, they do.
Theoretical Claims: Not checked in the supplementary, all proofs are in supplementary which is not very helpful. See other Strengths And Weaknesses.
Experimental Designs Or Analyses: Experiments are fine.
Supplementary Material: Sorry, did not get time to go through all the supplementary material.
Relation To Broader Scientific Literature: The contributions as claimed are important.
Essential References Not Discussed: Seems fine.
Other Strengths And Weaknesses: This work is not in my area, particularly the numerical solution of PDE.
From what I gather, inspired by structural results from prior work, the authors study computational issues in implementing these.
On line 250, second column what is d_u - where is this defined? And, this claim made: "computational complexity of these problems are no longer related to the size of the action spaces" - please describe in detail, mainly the proof sketch after the minmax formulation was not clear to me. It might also help to explain FAS, especially for readers like me who are not familiar with it (some past results could be compressed)
Other Comments Or Suggestions: N/A
Questions For Authors: Respond to points in other Strengths And Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **[Q2.0]** From what I gather, inspired by structural results from prior work, the authors study computational issues in implementing these.
**[A2.0]** Thank you. We would like to highlight that to the authors’ best knowledge, this is the first paper that concretely explained the atomic nature of Nash equilibrium for 2p0s differential games with one-sided information. Therefore we humbly claim both theoretical and computational contributions.
**[Q2.1]** On line 250, second column what is d_u?
**[A2.1]** $d_u$ is the dimension of action $u$.
**[Q2.2]** "computational complexity of these problems are no longer related to the size of the action spaces" - please describe in detail, mainly the proof sketch after the minmax formulation was not clear to me.
**[A2.2]** Please refer to our response [**[A3.1]** to **[Q3.1]** raised by R3 (ZFnz)](https://openreview.net/forum?id=iDnwpbn20h&noteId=UhBmjpGVXh)
**[Q2.3]** It might also help to explain FAS, especially for readers like me who are not familiar with it (some past results could be compressed)
**[A2.3]** We will describe FAS in the revision to convey the following main idea:
Multigrid methods were first developed to accelerate linear solvers for
$Ax = b$. Let the solution approximation be $y$, error be $e = x - y$, and residual be $r = b - Ay$. Starting from an initial $y$, a multigrid V-cycle transfers $r$ from the fine grid to the coarse grid, where $e$ is computed from $Ae = r$ to resolve low-frequency residual and interpolated back to the fine grid to refine $y$. FAS is an extension of this idea to accelerate nonlinear solvers for $f(x)=b$, where $r$ is resolved by solving a nonlinear problem "fully" to obtain an exact solution for $x$ on the coarse grid. We refer readers to [3] for a comprehensive review of FAS.
For both linear and nonlinear problems, multigrid (and FAS) leverages multiple mesh levels to resolve residuals associated with different error frequencies [1]. While FAS broadens the scope of multigrid to nonlinear systems, its practical utility depends on provable convergence guarantees. These guarantees typically include showing that each iteration of the multigrid cycle (e.g., a V-cycle) contracts the error by a factor $\gamma$ (the convergence rate) that is **independent of the mesh size**. Mathematically, this is expressed as $ ||f(y_{k+1}) - x|| \le \gamma ||f(y_k) - x|| $, where $y_k$ is the approximation at the $k$-th iteration, and $x$ is the true solution. The existence of a contraction factor $0 < \gamma < 1$, independent of the mesh size, is established in [7] for linear PDEs and in [8] for certain classes of nonlinear elliptic PDEs. Although a theoretical convergence rate for Hamilton-Jacobi-Isaacs (HJI) PDEs has not yet been established, [4] provides empirical evidence indicating that the convergence rate of FAS in an optimal control problem (governed by an Hamilton-Jacobi-Bellman equation) is bounded by 0.1. To the best of our knowledge, no analogous results are currently available for HJI PDEs.
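To make the FAS cycle concrete, here is a minimal two-grid sketch of our own (illustrative only, not the paper's solver) for a toy nonlinear elliptic problem $-u'' + u^3 = f$ on $(0,1)$ with zero Dirichlet boundary conditions; the grid sizes, smoother, and transfer operators are our own simple choices.

```python
# Two-grid FAS sketch (illustrative, not the paper's solver) for -u'' + u^3 = f.
import numpy as np

def A(u, h):
    """Nonlinear operator (-u'' + u^3) at interior nodes; boundaries held at 0."""
    Au = np.zeros_like(u)
    Au[1:-1] = (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2 + u[1:-1]**3
    return Au

def smooth(u, f, h, sweeps):
    """Nonlinear Gauss-Seidel: one Newton step per unknown, lexicographic order."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            r = (-u[i-1] + 2 * u[i] - u[i+1]) / h**2 + u[i]**3 - f[i]
            u[i] -= r / (2 / h**2 + 3 * u[i]**2)
    return u

def restrict(v):
    """Full-weighting restriction from a grid with 2m intervals to m intervals."""
    vc = np.zeros((len(v) - 1) // 2 + 1)
    vc[1:-1] = 0.25 * v[1:-2:2] + 0.5 * v[2:-1:2] + 0.25 * v[3::2]
    return vc

def fas_two_grid(u, f, h, pre=3, post=3, coarse_sweeps=500):
    u = smooth(u, f, h, pre)
    r = f - A(u, h)                      # fine-grid residual
    uc0 = u[::2].copy()                  # inject current iterate to coarse grid
    fc = A(uc0, 2 * h) + restrict(r)     # FAS equation: A_H(u_H) = A_H(R u) + R r
    uc = smooth(uc0.copy(), fc, 2 * h, coarse_sweeps)  # "fully" solve coarse problem
    e = uc - uc0                         # coarse-grid error approximation
    pe = np.zeros_like(u)                # linear interpolation back to fine grid
    pe[::2] = e
    pe[1::2] = 0.5 * (e[:-1] + e[1:])
    return smooth(u + pe, f, h, post)

n, h = 64, 1.0 / 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3  # exact solution sin(pi x)
u = np.zeros(n + 1)
res0 = np.linalg.norm(f[1:-1] - A(u, h)[1:-1])
for _ in range(5):
    u = fas_two_grid(u, f, h)
res = np.linalg.norm(f[1:-1] - A(u, h)[1:-1])
print(f"residual: {res0:.2e} -> {res:.2e}")
```

The line computing `fc` is the defining FAS step: the coarse problem is solved for the full approximation `uc`, and only the correction `uc - uc0` is interpolated back, which is what allows the scheme to handle nonlinear operators directly.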
[1] Olson, Luke. Multigrid Methods. https://lukeo.cs.illinois.edu/files/2015_Ol_encmg.pdf.
[2] Vegt van der, Jaap. Introduction to Multigrid Methods. https://faculty.ustc.edu.cn/_tsf/00/2A/FjIjI3fAJjqa.pdf.
[3] Henson, Van. "Multigrid methods nonlinear problems: an overview." Computational imaging 5016 (2003): 36-48.
[4] Han, Dong, and Justin WL Wan. "Multigrid Methods for Second Order Hamilton--Jacobi--Bellman and Hamilton--Jacobi--Bellman--Isaacs Equations." SIAM Journal on Scientific Computing 35.5 (2013): S323-S344.
[5] Akian, Marianne, and Sylvie Detournay. "Multigrid methods for two‐player zero‐sum stochastic games." Numerical Linear Algebra with Applications 19.2 (2012): 313-342.
[6]Akian, Marianne, P. Séquier, and Agnks Sulem. "A finite horizon multidimensional portfolio selection problem with singular transactions." Proceedings of 1995 34th IEEE Conference on Decision and Control. Vol. 3. IEEE, 1995.
[7] Braess, D., and W. Hackbusch. “A New Convergence Proof for the Multigrid Method Including the V-Cycle.” SIAM Journal on Numerical Analysis, vol. 20, no. 5, 1983, pp. 967–75. JSTOR, http://www.jstor.org/stable/2157109. Accessed 31 Mar. 2025.
[8] Reusken, A. (1988). Convergence of the multilevel Full Approximation Scheme including the V-cycle. Numerische Mathematik, 53(6), 663–686. doi:10.1007/bf01397135. | Summary: This paper focuses on two-player zero-sum differential games with continuous actions, states, and time. The author first proves that equilibria of these games can be computed by solving a dynamic programming problem with a discrete-time approximation. It is then shown that the complexity of this dynamic programming is independent of the size of the action space. Finally, a multigrid approach is proposed, and its performance is empirically evaluated through experiments.
Claims And Evidence: The claims are supported by theoretical results and numerical experiments.
Methods And Evaluation Criteria: Leveraging the fact that the computational complexity of solving the dynamic programming is independent of the action space size is intriguing. However, the benefit of the multigrid approach in addressing the fine time-discretization issue is not fully clear. The paper would benefit from a more detailed explanation of why the multigrid method is able to resolve this issue.
Theoretical Claims: I am particularly interested in Theorem 4.1. After reviewing its proof in Appendix B, I am still unclear on why the optimal solution $v^{\ast}$ in the RHS of Eq. (23) is $I$-atomic. Since this claim appears to be a key element of Theorem 4.1, I would like to know more about this.
Furthermore, I am not sure what Theorem 3.1 intuitively indicates. While the author provided some intuition behind the theorem (in line 190-192), it would be better to provide a more detailed explanation.
Experimental Designs Or Analyses: The paper presents numerical experiments on Hexner's game to validate the proposed solver and compares its performance with existing state-of-the-art solvers. However, there seems to be a discrepancy regarding the reported computational times. In Section 6.2, it is stated that "The wall-time costs for game solving are 17 hours using CAMS," while Figure 3(a) indicates much lower wall times. It would be helpful if the author could clarify what the y-axis in Figure 3(a) represents.
Supplementary Material: The supplementary material primarily contains technical proofs and implementation details. I did not check all of the proofs in the supplementary material.
Relation To Broader Scientific Literature: Utilizing the structure of differential games to compute equilibria is a novel contribution.
Essential References Not Discussed: While I am not deeply familiar with differential games, I wonder if there are other studies on games with continuous action spaces beyond (Martin & Sandholm, 2023; 2024).
Other Strengths And Weaknesses: I don't have anything in particular to comment here.
Other Comments Or Suggestions: In Section 2, it is stated that “Since any 2p0s IIEFG with finite action sets has a normal-form formulation, a unique Nash equilibrium always exists in the space of mixed strategies.” However, two-player zero-sum games can have multiple Nash equilibria in terms of strategy profiles, although the value of the game is unique. What did you mean by “unique” in this context?
Questions For Authors: I don't have anything in particular to comment here.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **[Q1.1]** The benefit of multigrid in addressing the fine time-discretization issue is not fully clear.
**[A1.1]** Please refer to our response [**[A2.3]** to **[Q2.3]** raised by R2 (tRrx)](https://openreview.net/forum?id=iDnwpbn20h&noteId=vlkPNnyOPK)
**[Q1.2]** Why is the optimal solution in the RHS of Eq. 23 I-atomic?
**[A1.2]** Expanding on App. B, below we show that for fixed $(t_0,x_0)$, $V(\cdot) := V_\tau(t_0,x_0,\cdot)$ as defined in Eq. 23 is the convexification of $a(\cdot)$ on $\Delta(I)$. Since convexification on $\Delta(I)$ requires at most $I$ vertices, the optimal solution is I-atomic.
To recall, Eq. 23: $V_\tau(t_0,x_0,p) = \min_\nu \int_{\Delta(I)} a(p')\nu(dp') \quad \text{s.t.}~\int_{\Delta(I)} p'\nu(dp') = p$,
where $a(p'): \Delta(I) \rightarrow \mathbb{R}$ and $\nu$ is a probability measure defined on $\Delta(I)$.
(1) We show $V$ is convex on $\Delta(I)$: Let probability measures $\nu^i$ for $i \in \{1,2\}$ be the solution to Eq. 23 for $p^i$. For any $\theta \in [0,1]$ and $p^\theta = \theta p^1 + (1-\theta)p^2$, the mixture $\nu^\theta := \theta \nu^1 + (1-\theta) \nu^2$ satisfies $\int_{\Delta(I)} p'\nu(dp') = \theta \int_{\Delta(I)} p'\nu^1(dp') + (1-\theta) \int_{\Delta(I)} p'\nu^2(dp') = p^\theta$ and is a feasible solution to Eq. 23 for $p^\theta$. Therefore $$V(p^\theta) \leq \int_{\Delta(I)} a(p')\nu^\theta(dp') = \theta V(p^1) + (1-\theta) V(p^2).$$
(2) We show $V(p) \leq a(p)$ for any $p \in \Delta(I)$: The Dirac measure $\delta_p$ concentrated at $p$ satisfies $\int_{\Delta(I)} p'\delta_p(dp') = p$. Thus $\delta_p$ is a feasible solution to Eq. 23 and by definition $V(p) \leq \int_{\Delta(I)} a(p')\delta_p(dp') = a(p)$.
(3) We show $V$ is the largest convex minorant of $a$: Let $h$ be any convex function on $\Delta(I)$ such that $h(p)\leq a(p)$ for all $p$. Given $p \in \Delta(I)$, for any probability measure $\nu$ that satisfies $\int_{\Delta(I)} p'\nu(dp') = p$, we have
$$h(p) = h(\int_{\Delta(I)} p'\nu(dp')) \leq \int_{\Delta(I)} h(p')\nu(dp') \leq \int_{\Delta(I)} a(p')\nu(dp').$$ Since this inequality holds for arbitrary $\nu$, including the optimal ones that define $V(p)$ through Eq. 23, it follows that $h(p) \leq V(p)$. With (1)-(3), $V(\cdot)$ in Eq. 23 is the convexification of $a(\cdot)$.
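As a numerical sanity check of this atomicity argument (our own illustration, not the authors' code): once $\Delta(I)$ is discretized, the moment-constrained minimization in Eq. 23 becomes a linear program, and a basic LP solution has at most as many nonzero weights as equality constraints. For $I = 2$, parametrizing $\Delta(2)$ by $p \in [0,1]$, the optimal measure is supported on at most two atoms.

```python
# Illustration (not the authors' code): the minimizing measure in Eq. 23 is
# atomic. Discretize the belief simplex, pick a nonconvex stand-in for the
# non-revealing value a(.), and solve min sum_i a(p_i) w_i subject to
# sum_i w_i = 1 and sum_i p_i w_i = p. Two equality constraints => a basic
# (simplex) solution has at most two nonzero weights, i.e., is 2-atomic.
import numpy as np
from scipy.optimize import linprog

grid = np.linspace(0.0, 1.0, 201)        # discretized 1-D slice of Delta(2)
a = np.cos(3 * np.pi * grid) + grid      # an arbitrary nonconvex a(.)
p = 0.4                                  # prior belief to be matched in mean

A_eq = np.vstack([np.ones_like(grid), grid])
res = linprog(c=a, A_eq=A_eq, b_eq=[1.0, p], bounds=(0, None), method="highs-ds")
atoms = np.flatnonzero(res.x > 1e-9)

print("V(p) =", res.fun)                 # convex envelope of a(.) evaluated at p
print("number of atoms:", len(atoms))    # at most 2 = I
```

The optimal value `res.fun` lies on the convex envelope of `a` and is never above `a(p)`, matching step (2) of the proof, while the support size matches the $I$-atomic claim.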
**[Q1.3]** Intuitive explanation of Thm. 3.1
**[A1.3]** Thm. 3.1 states that the Nash of the reformulated games are $\epsilon$-Nash for the original games, where $\epsilon \rightarrow 0$ as the time interval $\tau \rightarrow 0$. Let us focus on the primal game and explain the two inequalities involved:
(1) $V \leq \max J$: Here $V$ is the value of the original game where players move simultaneously, $\max J$ the value of the reformulated game where P1 moves before P2. (1) holds because P1 has a disadvantage in the reformulation.
(2) $\max J \leq V + M\tau$ says that this disadvantage is bounded by $M\tau$. This uses the fact that during any time interval, the value gap between P2 playing best response vs. any alternative strategy (including the Nash of the original game) is bounded by $\tau^2$ (Eq. 18), thanks to the continuity and boundedness assumptions on the dynamics and payoffs.
**[Q1.4]** A discrepancy in the reported computational times…what does the y-axis in Fig. 3(a) represent?
**[A1.4]** Fig. 3(a) shows wall times for solving the *1-stage* Hexner’s game; the “17-hour” figure refers to the *4-stage* game. We will clarify this in the revision.
**[Q1.5]** Other studies on games with continuous action spaces beyond Martin & Sandholm?
**[A1.5]** Other studies on continuous action games, including those referenced in Martin & Sandholm (2023; 2024), are largely limited to one-shot Bayesian games [1] and auction games [2], which fundamentally differ from differential games with incomplete information. Other algorithms, such as [3], discretize the action space, making them impractical for high-dimensional settings. We chose Martin & Sandholm (2023; 2024) as representative algorithms due to their demonstrated success in solving a continuous action imperfect-information extensive-form game (a continuous action variant of Goofspiel). To the best of our knowledge, SOTA methods for solving incomplete/imperfect-information zero-sum extensive-form games with continuous actions are yet to be established.
**[Q6]** What did you mean by “unique” in this context?
**[A6]** The reviewer is correct: The value is unique, not the strategy.
[1] Zun Li and Michael P Wellman. Evolution strategies for approximate solution of bayesian games. In AAAI Conference on Artificial Intelligence (AAAI), 2021.
[2] Martin Bichler, Maximilian Fichtl, Stefan Heidekruger, Nils Kohring, and Paul Sutterer. Learning equilibria in symmetric auction games using artificial neural networks. Nature Machine Intelligence, 2021.
[3] Martin Bichler, Max Fichtl, and Matthias Oberlechner. Computing Bayes–Nash equilibrium strategies in auction games via simultaneous online dual averaging. Operations Research, 2023a. | null | null | null | null | null | null | null | null |
Test-Time Learning for Large Language Models | Accept (poster) | Summary: The paper introduces Test-Time Learning (TTL) paradigm for Large Language Models (LLMs), termed TLM, designed to dynamically adapt LLMs to target domains using only unlabeled test data during inference. The authors propose three key components: (1) an input perplexity minimization objective, based on empirical and theoretical evidence that reducing input perplexity enhances autoregressive predictions; (2) a Sample Efficient Learning Strategy, which prioritizes high-perplexity samples for efficient model updates; and (3) the use of Low-Rank Adaptation (LoRA) to mitigate catastrophic forgetting and enable lightweight updates. The paper also introduces the AdaptEval benchmark, comprising DomainBench, InstructionBench, and ReasoningBench, to evaluate TTL across diverse tasks. Experimental results claim at least a 20% performance improvement over baseline LLMs on domain knowledge adaptation, with further gains in instruction-following and reasoning tasks, validated on models like Llama3.1-8B-Instruct and Qwen2.5-7B-Instruct.
Claims And Evidence: The claims are generally supported by clear and convincing evidence, including empirical observations, theoretical justifications, and experimental results.
The core claim—that minimizing input perplexity improves LLM performance—is backed by Observation 1 (Fig. 1b).
The claim that high-perplexity samples are more informative is supported by Observation 2 (Fig. 1c).
Experimental results (Tables 2, 3, 4) consistently show TLM outperforming baselines (e.g., Tent, EATA) and original LLMs, with a reported 20%+ improvement on DomainBench.
Methods And Evaluation Criteria: The proposed TLM method—combining perplexity minimization, sample-efficient learning, and LoRA—makes sense for adapting LLMs to distribution shifts without labeled data, aligning with the problem of real-world deployment challenges.
The use of perplexity as an unsupervised objective leverages LLMs’ autoregressive nature, which is a conceptually sound departure from entropy-based TTA methods ill-suited for LLMs.
Theoretical Claims: I checked the correctness of the theoretical justification for input perplexity minimization (Sec. 4.1, Eqns. 4-7).
The argument rests on two assumptions: (1) the autoregressive property of LLMs and (2) shared parameter influence on input and output perplexity.
Experimental Designs Or Analyses: I reviewed the experimental designs in Sec. 5 and Supp. D. The comparison experiments (Tables 2, 3) against baselines (Tent, EATA) and original LLMs are well-designed, using diverse models and datasets to test robustness.
One experimental question is how the proposed method compares with other adaptation approaches such as fine-tuning and RAG.
Supplementary Material: I reviewed the supplementary material in full, including related work (Sec. A), AdaptEval details (Sec. B), experiment settings (Sec. C), additional results (Sec. D), and discussions (Sec. E). Sec. B provides valuable dataset descriptions and examples (Tables 7-9), enhancing benchmark transparency.
Relation To Broader Scientific Literature: The paper builds on prior work in test-time methods for classification. The perplexity minimization objective extends autoregressive modeling principles, shifting them to test-time adaptation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: strength:
1. The paper is well-structured, with clear figures (e.g., Fig. 1) and a logical flow from theory to experiments.
2. The focus on unlabeled test-time adaptation and the AdaptEval benchmark tackle real-world LLM limitations.
weakness:
1. Reproducibility: Missing details (e.g., random seeds, dataset splits) hinder replication.
Other Comments Or Suggestions: 1. Consider moving supplementary metrics (e.g., BERTScore) to the main paper for a richer evaluation.
2. Expand the discussion of trade-offs in the online setting (e.g., adaptation quality vs. efficiency).
Questions For Authors: I did not quite understand Eq. 4: why is minimizing the perplexity of the input, $\mathcal{P}(x; \Theta)$, equivalent to maximizing the input generation probability $P(x; \Theta)$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are deeply grateful for your thoughtful and encouraging feedback. Your recognition of our motivation and the thoroughness of our experiments is truly inspiring. Our detailed responses are as follows:
>Q1. One experiment question is that how does your methods compared with other adapting methods such as fine tuning and rag.
**A1.** Thank you for the insightful question. Our work focuses on TTL, which adapts LLMs using only *unlabeled test data* during inference. In our experiments, we thus chose to compare our method with other TTA methods, which share a similar problem setting. We would like to **clarify further the rationale for not including fine-tuning and RAG-based methods in our experimental comparison, along with the potential advantages of our method**:
* **Supervision and Data Dependency.** Fine-tuning requires labeled data, which is costly and often impractical in deployment. RAG depends on a well-maintained retrieval corpus, which may not generalize well across domains. In contrast, **our method uses only test data and minimizes perplexity to adapt efficiently without labels or external knowledge**.
* **Experimental Design Rationale.** Our experimental design is deliberately aligned with the TTL setting, where no labels or external knowledge sources are available at deployment. Therefore, we select baselines (e.g., Tent, EATA) that also operate in this setting. **Although fine-tuning and RAG are powerful in their respective contexts, they fall outside the scope of our setting due to their dependency on supervised data or retrieval infrastructure.**
>Q2. Reproducibility: Missing details (e.g., random seeds, dataset splits) hinder replication.
**A2.** We appreciate your feedback on ensuring reproducibility. Below we clarify the missing details and emphasize the existing descriptions in our paper:
* **Random Seeds.** Our experiments use a fixed random seed of **42** for PyTorch/NumPy initialization.
* **Dataset Splits for AdaptEval Benchmark.** The proposed AdaptEval benchmark is specifically designed for TTL. Unlike traditional benchmarks, **AdaptEval contains only test-domain data without predefined train/val/test splits**, as TTL methods adapt solely during inference. Full dataset construction protocols are provided in Supp. B.
* **Additional Reproducibility Guarantees.** **1)** All hyperparameters (learning rates, batch sizes, adaptation steps) are detailed in Sec. 5.1. **2)** Hardware specifications (e.g., GPU types) and library versions (PyTorch) are detailed in Supp. C.2. **3)** Code, pre-trained models, and AdaptEval data will be publicly released upon acceptance.
>Q3. Consider moving supplementary metrics (e.g., BERTScore) to the main paper for a richer evaluation.
**A3.** We appreciate the feedback and agree that incorporating supplementary metrics like BERTScore into the main paper will strengthen the evaluation's comprehensiveness.
>Q4. Expand the discussion of trade-offs in the online setting (e.g., adaptation quality vs. efficiency).
**A4.** Thanks for your valuable comments. We conduct additional experiments in the **online setting** to evaluate the effect of batch size under streaming-like scenarios (see Table C). **With bs=1, the model updates after each test sample, simulating a true streaming setting. While this increases update steps, our method remains effective.** Using bs=100 reduces update frequency while slightly improving performance. This indicates that accumulating more samples before updating provides more reliable learning signals and thus enhances adaptation. In short, while more frequent updates offer immediate adaptation, adopting as large a batch size as possible leads to more effective and efficient adaptation in the online setting.
Table C. Experimental results in the **Online setting** on DomainBench.
| Methods|Average R-Lsum|#Backwards|
|-|-|-|
|Ours (bs=100)|0.2040|1514|
|Ours (bs=1)|0.1917|2541|
>Q5. I did not quite understand the Eq 4, why Minimizing the perplexity to the input $\mathcal{P}(x;\Theta)$ is equivalent to maximizing the input generation probability $P(x;\Theta)$.
**A5.** Thank you for your insightful question. The equivalence between minimizing input perplexity and maximizing the input generation probability is a fundamental property of perplexity in probabilistic models. We clarify this relationship as follows. Perplexity $\mathcal{P}(x;\Theta)$ for a sequence $x$ is defined as the exponential of the average negative log-likelihood, so Eq. (2) can be written as $\mathcal{P}(x;\Theta)=e^{-\frac{1}{N}\log P(x;\Theta)}$, where $N$ is the number of tokens in the sequence. Since $e^{-z/N}$ is strictly decreasing in $z$, maximizing $\log P(x;\Theta)$ (equivalently, $P(x;\Theta)$) directly corresponds to minimizing $\mathcal{P}(x;\Theta)$.
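The monotone relationship between sequence log-probability and perplexity can be illustrated numerically (an editorial sketch with hypothetical token log-probabilities, not the paper's implementation):

```python
import math

def perplexity(token_log_probs):
    """PPL(x) = exp(-(1/N) * log P(x)), where log P(x) sums token log-probs."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# Two hypothetical 3-token inputs; the second has higher sequence probability.
low = [math.log(0.1)] * 3    # log P(x) = 3 log 0.1
high = [math.log(0.5)] * 3   # log P(x) = 3 log 0.5 > 3 log 0.1

# Since exp(-z/N) is strictly decreasing in z, higher log P(x) means lower PPL.
print(perplexity(low), perplexity(high))  # ~10.0 vs ~2.0
```

Raising the sequence probability from $0.1^3$ to $0.5^3$ drops the perplexity from 10 to 2, exactly the equivalence stated in A5.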
****
We sincerely hope our clarifications above have addressed your concerns. | Summary: This paper proposes a novel Test-Time Learning (TTL) method that assigns weights to different samples based on input perplexity and employs LoRA for model adaptation. Experimental results demonstrate that, compared with existing TTL methods, the proposed approach achieves superior performance across multiple datasets.
Claims And Evidence: The majority of claims presented in the paper are adequately supported by experimental results. However, regarding the claim of **"Reducing Output Perplexity through Input Perplexity Minimization,"** the theoretical justification provided is somewhat unclear. While the autoregressive property indeed implies predicting subsequent tokens based on previous ones, the input content is inherently given (fixed), whereas the output tokens are generated sequentially. Therefore, it is not theoretically evident that minimizing input perplexity directly implies a corresponding reduction in output perplexity.
Methods And Evaluation Criteria: Overall, the methods and evaluation criteria employed in this work are reliable and sound. Nevertheless, the choice of the threshold $P_0$ for the perplexity-based weighting scheme warrants further investigation. Currently, the authors have empirically tested various values of $P_0$ only on DomainBench datasets, without assessing its generalizability to reasoning-oriented tasks. Introducing a more generalizable or theoretically justified criterion for selecting $P_0$ would further strengthen the methodology.
Theoretical Claims: No theoretical part in this paper.
Experimental Designs Or Analyses: The experimental design is reasonably thorough and comprehensive. However, the selection of models is relatively limited. Including models such as LLaMA-2-13B, which might be somewhat outdated, is less convincing. It would strengthen the work to evaluate newer and larger-scale models from multiple model families to demonstrate broader applicability.
Supplementary Material: The supplementary material clearly provides basic information about various benchmarks used in the study, as well as additional evaluation metrics for the experiments presented in the main text.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature on large language models and their reasoning capabilities. It addresses the issue of Test-Time Learning.
Essential References Not Discussed: There are no essential references missing from the paper.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: the baselines used for comparison (Tent and EATA) are somewhat outdated, being methods from 2021 and 2022. Considering recent advances summarized in the survey *"A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts"*, are there any newer state-of-the-art test-time learning methods for a more comprehensive evaluation and stronger evidence of the proposed approach’s superiority?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the encouraging comments and detailed suggestions. Responses are below:
>Q1. Regarding the claim of "Reducing Output Perplexity through Input Perplexity Minimization," the theoretical justification provided is somewhat unclear.
**A1.** We appreciate your feedback. We clarify our motivation below:
* **Theoretical Motivation Clarification.** In our study, we observe the strong positive correlation between input and output PPL (see Fig.1b). Relying on this, we seek to reduce the output PPL by input PPL minimization.
* **Autoregressive Training Dynamics.** In autoregressive models, each token is generated using the internal representation of the input. Although the model parameters are updated during input PPL minimization, this process refines the representation $h(x;\Theta)$, which in turn informs the generation of $P(y|x;\Theta)$. In other words, since $P(y|x; \Theta)$ is generated directly from $h(x;\Theta)$, an improved representation of $x$ can lead to more accurate and confident next-token predictions, which is expected to reduce output PPL.
* **Empirical Validation.** We compare the changes in both input and output PPL between our proposed method and the original LLM. In Tab. A, **our method not only reduces input PPL but also leads to a consistent decrease in output PPL**.
Tab. A: Experimental results (metric is PPL) on DomainBench.
|| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|-|
|Input PPL|Llama3.1-8B|187.3|933.2|43.6|323.0|
||Ours|6.6|3.9|2.0|5.5|
|Output PPL|Llama3.1-8B|3242.9|205247.1|7.7|1208975.9|
||Ours| 2483.9|156232.3|6.6|242617.0|
>Q2. Need to empirically test various values of $P_0$ on reasoning-oriented tasks. Introducing a more generalizable or theoretically justified criterion for selecting $P_0$ would further strengthen the method.
**A2.** We would like to clarify the applicability of the threshold $\mathcal{P}_0$ in reasoning-oriented tasks, and outline potential strategies for its dynamic selection:
* **Generalization of $\mathcal{P}_0=e^3$ in reasoning-oriented tasks.** We conduct experiments with values of $\mathcal{P}_0=\\{e^2,e^3,e^4\\}$. In Tab. B, when $\mathcal{P}_0=e^3$, our method achieves the best performance on three datasets. These results indicate that $\mathcal{P}_0=e^3$ generalizes well to reasoning-oriented tasks.
* **Potential method for dynamic threshold selection.** To address the concern regarding the generalizability of the static threshold, one can adopt a dynamic threshold selection scheme. Instead of a fixed $\mathcal{P}_0$, we define $\mathcal{P}_0^{(t)}=\mu\_{\mathcal{P}}^{(t)} + \alpha \cdot \sigma\_{\mathcal{P}}^{(t)}$, where $\mu\_{\mathcal{P}}^{(t)}$ and $\sigma\_{\mathcal{P}}^{(t)}$ are the mean and standard deviation of test samples PPL at step $t$. This method automatically identifies high-surprisal samples critical for adaptation while excluding redundant low-PPL data.
Tab. B: Effects of different $\mathcal{P}_0$ under Llama3.1-8B on ReasoningBench.
|$\mathcal{P}_0$|$e^2$|$e^3$|$e^4$|
|-|-|-|-|
|GSM8K|0.8070|**0.8074**|0.8026|
|MetaMath|0.7002|**0.7006**|0.7006|
|Logiqa|0.4834|**0.4868**|0.4538|
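The dynamic threshold $\mathcal{P}_0^{(t)}=\mu_{\mathcal{P}}^{(t)} + \alpha \cdot \sigma_{\mathcal{P}}^{(t)}$ described in A2 could be sketched as follows (an illustrative implementation; the `alpha` values and the PPL numbers below are hypothetical):

```python
import statistics

def dynamic_threshold(ppls, alpha=1.0):
    """P0^(t) = mean + alpha * std of the test-sample perplexities at step t."""
    return statistics.mean(ppls) + alpha * statistics.pstdev(ppls)

def select_high_ppl(ppls, alpha=1.0):
    """Keep high-surprisal samples (PPL above the dynamic threshold)."""
    p0 = dynamic_threshold(ppls, alpha)
    return [i for i, p in enumerate(ppls) if p > p0]

# Hypothetical per-sample perplexities in one adaptation step:
ppls = [5.0, 8.0, 30.0, 7.0, 50.0]
print(select_high_ppl(ppls))             # alpha=1: only the clear outlier survives
print(select_high_ppl(ppls, alpha=0.5))  # smaller alpha admits more samples
```

Tuning `alpha` trades off between updating on only the most surprising samples and using a larger fraction of the stream.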
>Q3. There is a need to evaluate newer and larger-scale models from multiple model families.
**A3.** We strengthen our empirical validation by expanding experiments across model families and scales. Specifically,
* **Existing Multi-Family Model Coverage.** As shown in Tables 2-3, our experiments already encompass: **1) Llama Family:** 3B, 8B, and 13B parameter variants. **2) Qwen Family:** Latest Qwen2.5-7B-Instruct model.
* **New Model Family Added.** To further validate broader applicability, we conduct additional experiments with **Phi-4-14B**. In Tab. C, our method consistently outperforms existing baselines, demonstrating its effectiveness on larger-scale model adaptation tasks.
Tab. C: Experimental results on *DomainBench*.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|Phi-4-14B|0.2326|0.0997|0.1291|0.2517|
|Tent|0.0000|0.1178|0.1262|0.2206|
|EATA|0.0064|0.0143|0.1388|0.2200|
|Ours|**0.2421**|**0.1315**|**0.1393**|**0.2711**|
>Q4. Need to add SOTA TTA methods for a more comprehensive evaluation.
**A4.** We agree that evaluating against more recent TTA methods can provide a more comprehensive comparison and strengthen the empirical validation of our method. **We include COME [r1], a recently proposed TTA method accepted at ICLR 2025**, in our experimental comparison. In Tab. D, the results demonstrate that **our method consistently and significantly outperforms COME across all domains**.
Tab. D: Comparison between COME and our method on DomainBench.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|COME [r1]|0.0048|0.0039|0.0301|0.0328|
|Ours|**0.3212**|**0.1319**|**0.2372**|**0.3242**|
[r1] COME: Test-time Adaption by Conservatively Minimizing Entropy, ICLR 2025.
****
We sincerely hope our clarifications above have addressed your concerns. We would be grateful if you could kindly reconsider the evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I still believe that comparison with more recent TTA methods could improve the work, I appreciate the clarification and will keep my score (Weak Accept). | Summary: In this paper, the authors propose a new Test-Time Learning (TTL) approach called Test-Time Learning for LLMs (TLM) that uses unlabeled test data to address distribution shifts arising from specialized domains. They highlight three main contributions:
1. Self-Supervised Perplexity Minimization: The authors set a perplexity-minimization objective to adapt LLMs at test time through a self-supervised approach.
2. Sample-Efficient Learning Strategy: To achieve more efficient and effective parameter updates, they selectively utilize high-perplexity samples for training, thereby focusing on instances that provide the most useful information.
3. LoRA for Stable Adaptation: They integrate LoRA to mitigate catastrophic forgetting and ensure stable model adaptation during test-time learning.
Furthermore, the authors introduce a new benchmark called AdaptEval to comprehensively evaluate TLM. AdaptEval covers vertical domain knowledge, instruction-following tasks, and reasoning datasets, thereby assessing the utility of TLM under various real-world distribution shifts. Experimental results demonstrate that TLM substantially improves performance on data that are highly specialized or deviate from the training distribution compared to existing methods, making it a promising solution in real-world deployment scenarios where labeled data are scarce.
Claims And Evidence: Strengths:
- The paper convincingly explains why large-scale foundation models struggle to adapt to specialized domains or changing distributions, and provides a solid rationale for the necessity of TLM as a solution.
- Through both theoretical and empirical evidence on the relationship between perplexity and autoregressive models, the paper strongly justifies the efficacy of perplexity-based optimization. In addition, it clearly shows how LoRA helps mitigate catastrophic forgetting.
- By utilizing the variety of datasets in the AdaptEval benchmark—including domain knowledge, instruction-following tasks, and reasoning tasks—the paper demonstrates performance improvements of the proposed method in multiple contexts.
Weaknesses:
- Although the proposed method achieves generally positive experimental results, there are no concrete examples of actual outputs included (only input/output examples of the datasets are mentioned), making it less clear how the model performs in real usage scenarios.
- The paper compares its method mainly with Tent and EATA, yet there are many other recent TTA methods not discussed. Moreover, while earlier in the paper it references fine-tuning and RAG approaches, a direct comparison to these methods is missing, which is regrettable.
Methods And Evaluation Criteria: - AdaptEval covers a broad range of distribution shifts—domain knowledge, instruction tasks, and reasoning—so it appears to be well-suited for evaluating both the generality and performance of the proposed method.
Theoretical Claims: - The assumption that reducing the model’s input perplexity also lowers its output perplexity is highly convincing, especially given the autoregressive nature of the model and prior findings in language modeling.
Experimental Designs Or Analyses: Strengths:
- By conducting experiments on the AdaptEval benchmark—which spans a variety of tasks—this design is shown to possess strong robustness.
Weaknesses:
- Same as the second weakness in "Claims And Evidence"
Supplementary Material: Strengths:
- The main text references appendices that provide detailed information on dataset construction, implementation details, and evaluation metrics.
Weaknesses:
- Same as the first weakness in "Claims And Evidence"
Relation To Broader Scientific Literature: - Addressing distribution shifts in deployment environments is a crucial issue in the fields of NLP and deep learning, and this work reflects the latest trends, such as perplexity-based updates and the application of LoRA.
- AdaptEval appears likely to serve as a unified standard for subsequent research on TTA, TTT, and similar methods.
Essential References Not Discussed: - Same as the second weakness in "Claims And Evidence"
Other Strengths And Weaknesses: Strengths:
- Combining high-perplexity sample selection with LoRA to balance computational cost and adaptation performance is conceptually neat and practically beneficial.
Weaknesses:
- More detailed comparisons of time complexity and resource requirements would be welcome when handling different types of domains (e.g., conversational tasks, online streaming data).
Other Comments Or Suggestions: - Overall, this study is well-structured in assuming situations where obtaining high-quality labels is difficult in real operational environments, enabling models to adapt to available test data on the fly.
- It highlights the importance of test-time learning, particularly perplexity-based updates, in large language models, and underscores the practicality of LoRA for mitigating catastrophic forgetting.
- If possible, supplementing the paper with more detailed analyses on extreme cases and computational efficiency, along with comparisons to the latest literature, could further bolster its contributions.
Questions For Authors: Same as above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate your high level of encouragement for our work. Your recognition of "provides **a solid rationale for the necessity of TLM** as a solution", "the paper **strongly justifies the efficacy of perplexity-based optimization**", and "AdaptEval appears likely to **serve as a unified standard for subsequent research**" is deeply appreciated. Our detailed responses are below:
>Q1. Although the proposed method achieves generally positive experimental results, there are no concrete examples of actual outputs included (only input/output examples of the datasets are mentioned), making it less clear how the model performs in real usage scenarios.
**A1.** We sincerely thank you for highlighting the need for concrete examples to demonstrate practical efficacy. As shown in https://anonymous.4open.science/r/ICML1861, our method consistently outperforms SOTA methods across diverse scenarios. Our analysis is detailed below:
* **More accurate reasoning steps.** In reasoning tasks, our method produces logically sound and well-structured intermediate steps, enabling the model to reach correct conclusions more reliably.
* **Better coherence and fluency.** Compared to TTA methods, our method maintains output fluency and avoids the repetition issues commonly observed in models like Tent or EATA.
>Q2. Recent TTA methods not discussed. Moreover, a direct comparison to fine-tuning and RAG approaches is missing, which is regrettable.
**A2.** According to your suggestions, we further compare with a SOTA TTA method, COME (ICLR'25, [r1]). From the results in Table A, **our method consistently and significantly outperforms COME across all domains**. This further demonstrates the superiority of our method in adapting LLMs to new domains using only test data.
**Experimental Design Rationale with RAG and fine-tuning approaches.** Our experiments focus on TTL, which uses only unlabeled test data without relying on additional labels or retrieval resources. Fine-tuning typically depends on task-specific labeled data, while RAG assumes a well-maintained knowledge base that may not generalize across domains. In contrast, our method updates solely on unlabeled test data via perplexity minimization. Therefore, we compare with TTA methods that share similarly label-free, resource-free settings.
Table A: Comparison between COME and our method on *DomainBench*.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|COME [r1]|0.0048|0.0039|0.0301|0.0328|
| **TLM(Ours)** | **0.3212**|**0.1319**| **0.2372** |**0.3242** |
[r1] COME: Test-time Adaption by Conservatively Minimizing Entropy, ICLR 2025.
>Q3&Q4. More detailed comparisons of time complexity and resource requirements would be welcome when handling different types of domains (e.g., conversational tasks, online streaming data).
**A3&A4.** Thank you for the valuable comments. Following your suggestions, we add experiments under the **online streaming setting (batch size = 1)**, which more accurately reflects real-world inference conditions, such as conversational or sequential inputs. Detailed analyses are as follows:
* **Time Complexity.** In the TTL setting, total computation time $T_{\text{total}}$ consists of forward passes plus backward updates, i.e., $T_{\text{total}} = T_{\text{forward}} + T_{\text{backward}} \cdot N_{\text{selected}}$, where $N_{\text{selected}}$ denotes the number of samples used for backpropagation. Thanks to our Sample Efficient Learning Strategy, our method significantly reduces $N_{\text{selected}}$ compared to baselines. For instance, as shown in Table B, our method requires up to 45% fewer backward passes than EATA, resulting in lower overall computation time.
* **Resource Requirements.** As shown in Table B, we adopt LoRA for parameter updates, thereby limiting the trainable parameters to a small fraction of the full model. This design keeps memory usage and computational overhead low, especially when compared to full-parameter tuning. Consequently, our method enables practical adaptation in resource-constrained environments.
Table B. Experimental results of our proposed method in the **Online setting (batch size=1)**.
| Methods|Average R-Lsum|Trainable parameters |#Backwards|
|-|-|-|-|
|Llama3.1-8B|0.1720|-|-|
|Tent|0.1834|3.41M|5000|
|EATA|0.1809|3.41M|4634|
|Ours| **0.1917**|3.41M|**2541**|
****
We sincerely hope our clarifications above have addressed your concerns. We would be grateful if you could kindly reconsider the evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed and thoughtful responses. The additional comparison with COME, analyses of time complexity and resource usage, and demonstration of performance in an online streaming environment have addressed most of my major concerns. However, some issues have remained, which are similar to those raised by Reviewer BSk7:
* Lack of theoretical formulation for PPL minimization: While the effectiveness of PPL minimization presented in the paper may appear empirically plausible, it lacks a rigorous theoretical formulation, which undermines its persuasiveness.
* Insufficient distinction between the proposed TTL and TTA: The significant difference between the proposed TTL and TTA remains unclear.
* Need for experiments on tasks involving longer responses and i.i.d. conditions: Additional results are required for tasks with extended sequences and an i.i.d. setup.
Based on these points, I will maintain my current rating, but will positively reconsider upon seeing these concerns resolved.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your prompt response to our rebuttal. We hope to make the following further responses to your concerns and sincerely hope you would be satisfied.
>Q1. Lack of theoretical formulation for PPL minimization.
**A1.** Thanks for your suggestion. We provide additional theoretical analysis to justify the connection between input optimization and output PPL reduction.
* **Autoregressive Training Dynamics.** The standard next-token prediction objective makes model predictions inherently conditional on the quality of the preceding context. Thus, our TTL optimizes the model via input PPL minimization, which improves the context representation and is therefore also expected to reduce output PPL: a better representation of the input (the context for the next token) supports more accurate next-token predictions.
* **Gradient-Based Theoretical Analysis.** We formalize the intuition that question-conditioned updates benefit answer predictions under a key assumption. Let $\theta' = \theta - \eta \nabla_\theta (-\log P(q;\theta))$ denote the updated parameters after a single TTL step, where $q$ is the question. Using a first-order Taylor expansion:
$\log P_{\theta'}(a|q) \approx \log P_\theta(a|q) + \eta \underbrace{\left[ \nabla_\theta \log P(q;\theta) \right]^\top \nabla_\theta \log P_\theta(a|q)}_{\text{Cross-gradient term}} + \mathcal{O}(\eta^2)$, where $a$ is the answer to the question $q$. Our core assumption is that
$\langle \nabla_q, \nabla_a \rangle=\left[ \nabla_\theta \log P(q;\theta) \right]^\top \nabla_\theta \log P_\theta(a|q) \geq 0$ for question-answer pairs with strong semantic alignment. Under this condition, the cross-gradient term becomes non-negative, guaranteeing: $\log P_{\theta'}(a|q) \geq \log P_\theta(a|q)$ for small $\eta$.
* **Empirical Validation of Assumption:** We compute this gradient inner product using 100 batches of QA pairs from the Geography domain with Llama3.1-8B. Results show that 92% of batches satisfy the non-negativity condition, with an average $\langle \nabla_q, \nabla_a \rangle = +23.36$. This strongly supports our theoretical premise.
We will include this analysis in the revised manuscript, with expanded derivations and statistical details.
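The first-order argument above can be checked numerically with toy stand-ins; a minimal sketch where $g(\theta)$ plays the role of $\log P(q;\theta)$ and $f(\theta)$ plays the role of $\log P_\theta(a|q)$ (both quadratic functions are illustrative assumptions, not the actual LLM likelihoods):

```python
import numpy as np

# Toy stand-ins for the quantities in the derivation (illustrative only):
#   g(theta) ~ log P(q; theta),  f(theta) ~ log P(a | q; theta)
a, b = np.array([1.0, 2.0]), np.array([1.5, 2.5])
g = lambda t: -0.5 * np.sum((t - a) ** 2)
f = lambda t: -0.5 * np.sum((t - b) ** 2)
grad_g = lambda t: -(t - a)
grad_f = lambda t: -(t - b)

theta = np.zeros(2)
eta = 1e-3

# One TTL-style step: theta' = theta - eta * grad(-g) = theta + eta * grad_g
theta_new = theta + eta * grad_g(theta)

cross = grad_g(theta) @ grad_f(theta)   # the cross-gradient term
actual = f(theta_new) - f(theta)        # actual change in f
predicted = eta * cross                 # first-order Taylor prediction

assert cross > 0                        # gradients aligned at this theta
assert actual > 0                       # f increases, as the argument claims
assert abs(actual - predicted) < 1e-5   # O(eta^2) remainder is tiny
```

When the cross-gradient inner product is positive, one ascent step on $g$ also increases $f$, matching the Taylor-expansion claim up to an $\mathcal{O}(\eta^2)$ remainder.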
>Q2. Insufficient distinction between the proposed TTL and TTA.
**A2.** We respectfully further clarify the key distinctions between TTL and TTA:
* TTA focuses on unsupervised adaptation at the output level. This, however, easily suffers from error accumulation without reliable supervision and results in performance degradation. In contrast, **we leverage the auto-regressive nature of language processing tasks, and design a supervised objective for LLMs based on the model inputs to stably guide the model adaptation during testing**.
* Our empirical evaluations in Table 2 of the main paper also demonstrate that, while TTA methods improve model performance in certain domains, they degrade the performance in many others. **Similar results are observed in our ablation study from Tab. A.** In contrast, our TTL scheme demonstrates superior stability, and consistently improves performance across domains and across various models. We will make this clearer in the revised paper.
Tab. A: Comparison between entropy and PPL minimization on DomainBench.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|Llama3.1-8B|0.2450|0.0834|0.1265|0.2329|
|Entropy|0.0778|0.0067|0.0105|0.0372|
|PPL|**0.3190**|**0.1255**|**0.2326**|**0.3222**|
>Q3. Need for experiments on tasks involving longer responses and i.i.d. conditions.
**A3.** Here are the clarifications:
* **Our experiments already follow the i.i.d assumption, per your suggestion, and each test sample is drawn *Independently from an Identical Distribution***. Our setup is adopted from existing TTA methods, where the model adapts continuously to the target domain, allowing the model to capture domain-invariant features by learning from more test samples.
* We provide more challenging evaluations when the model is adapted and tested on a different domain in Tab. B. **The results underscore the effectiveness of TTL to learn domain-generalizable features**. We will make this clearer in the revision.
* As suggested, **we conducted experiments on longer response generation using LongWriter [r1]. Our method also shows a notable improvement on Llama3.1-8B (0.1637→0.1709)**. We will include an analysis of the longer response generation experiments in the revised manuscript.
Tab. B: Results on DomainBench.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|Llama3.1-8B|0.2450|0.0834|0.1265|0.2329|
|Ours (Train on the Geo./Agri./Med./Fin.)|0.3212|0.1319|0.2372|0.3242|
|Ours (Train on Geo.)|0.3212|0.1227|0.1740|0.3055|
[r1] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs, arxiv 2024.
We sincerely hope our clarifications above have addressed your concerns.
Best,
Authors | Summary: This paper proposes a test-time learning scheme for LLMs that minimizes input perplexity (instead of entropy in Test-Time Adaptation) for unlabeled data at test time for better domain adaptation. In addition, the authors draw an insight showing that high-perplexity inputs are more informative for optimization and thus propose to select those for test-time updates. Extensive empirical experiments suggest that the proposed method is generally effective.
Claims And Evidence: In Sec. 4.1, the authors claimed "This improved representation facilitates more accurate and confident next-token predictions, thereby reducing conditional output perplexity", which is not formally derived or supported. After the minimization of input perplexity, the model parameter $\Theta$ has changed, thus it is different from the real conditional output perplexity $\mathcal{P}(y | x; \Theta)$. So, the claim is not properly justified either theoretically or empirically.
Methods And Evaluation Criteria: Rouge scores and accuracy using extract match make sense. But, since the paper uses perplexity minimization for optimization, the perplexity itself can be one meaningful metric which is not presented in the paper.
Theoretical Claims: There are no proofs or formal theoretical claims in this paper.
Experimental Designs Or Analyses: Yes, I have checked Section 5 Experiments. The setting of model training or restoration is not specified well. For example, in a typical setting of Test-Time Training, the model will be reverted to its original weight after training of each test sample. However, this setting of model optimization is not described in the paper.
Supplementary Material: Yes, I have reviewed Appendix A-D.
Relation To Broader Scientific Literature: The finding that minimizing perplexity is performing better than minimizing cross-entropy is relevant to the test-time adaptation literature and test-time training literature.
Essential References Not Discussed: Yes. The key contribution is a method that minimizes input perplexity for test time training/adaptation, but there was prior work [R Sennrich'EACL 2012] that considered perplexity minimization for domain adaptation. This could harm the novelty of the paper.
[R Sennrich'EACL 2012] "Perplexity Minimization for Translation Model Domain Adaptation in Statistical Machine Translation"
Other Strengths And Weaknesses: ## Other Weaknesses
- The justification of batch setting is not discussed, as in real-world scenarios testing data often comes in a streaming fashion.
- Although the key claim of this paper is to minimize perplexity for test input instead of minimizing entropy as prior works, the perplexity is just a monotonic transformation of entropy itself, i.e., $Perplexity(P) = e^{H(P)}$. Thus, minimizing cross entropy directly minimizes perplexity. It is not well-understood what the real difference is and how perplexity benefits the test-time adaptation.
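The monotonicity point above can be illustrated in a few lines (the token-level cross-entropies below are made-up numbers):

```python
import math

# Hypothetical token-level cross-entropies for two inputs (invented values)
h_low = [0.5, 0.7, 0.6]
h_high = [1.5, 2.0, 1.8]

# Perplexity as exp of the mean token-level cross-entropy
ppl = lambda h: math.exp(sum(h) / len(h))

# exp is strictly increasing, so ranking by mean cross-entropy and
# ranking by perplexity always agree
mean_order = sum(h_low) / len(h_low) < sum(h_high) / len(h_high)
ppl_order = ppl(h_low) < ppl(h_high)
assert mean_order == ppl_order
```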
Other Comments Or Suggestions: - Line 161: the model “is” optimized to .. "is" should be added.
Questions For Authors: - What is the essential difference between TTA and TTL? Minimizing perplexity or entropy is almost the same.
- Can you show a comparison of Figure 1b, which presents an updated input and output perplexity after input perplexity minimization? This is helpful for understanding the effectiveness of the correlation after TTL.
- What does Figure 1c mean? Are low perplexity samples and high perplexity samples mixed together in the training or separate?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: >Q1. The claim "This improved representation ..., thereby reducing conditional output PPL" is not properly justified either theoretically or empirically.
**A1.** We thank the reviewer for this important observation. We agree that the original statement regarding conditional output PPL reduction requires more precise formulation and supporting evidence. Please allow us to clarify:
* **Causal Relationship Clarification.** We have revised the phrase "thereby reducing conditional output perplexity" to **"This improved representation facilitates more accurate and confident next-token predictions, which is expected to reduce output perplexity".**
* **Theoretical Motivation Clarification.** In our study, we observe a strong positive correlation between input and output PPL (see Fig.1b). Relying on this, we seek to reduce the output PPL by input PPL minimization.
* **Empirical Validation.** We compare changes in both input and output PPL between our method and the original LLM in Tab. A. **Our method reduces input PPL and consistently decreases output PPL.**
Tab. A: PPL results on DomainBench.
|| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|-|
|Input PPL|Llama3.1-8B|187.3|933.2|43.6|323.0|
||Ours|6.6|3.9|2.0|5.5|
|Output PPL|Llama3.1-8B|3242.9|205247.1|7.7|1208976.9|
||Ours| 2483.9|156232.3|6.6|242617.0|
>Q2. Add PPL as metric.
**A2.** As suggested, we include PPL as a metric in Tab. B. **Our method outperforms existing methods in terms of PPL**.
Tab. B: PPL results on DomainBench.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|Llama3.1-8B|3242.9|205247.1|7.7|1208976.9|
|EATA|4692.1|628196.9|2937.1|11444422.0|
|Ours|**2483.9**|**156232.3**|**6.6**|**242617.0**|
>Q3. The training/restoration setting is not clear.
**A3.** In our method, the model weights are not restored to their original state after processing each test sample. Specifically,
* **Offline Setting.** All test data are processed at once, and the model is updated using all available test samples before any testing.
* **Online Setting.** Test samples are processed sequentially, with the model updated after each test sample or mini-batch, and the model parameters are never reset.
>Q4. Differences of the proposed method from [R Sennrich].
**A4.** Both works use PPL minimization, and our method differs from [R Sennrich] as:
* **Problem Setting.** We focus on **adapting LLMs at test time using unlabeled test data** to handle distribution shifts. In contrast, [R Sennrich] assumes access to **labeled source-domain training data** and optimizes translation models from a static source corpus.
* **Optimization Objectives.** We adopt **input PPL minimization** as the optimization objective, enabling efficient self-adaptation of LLMs to target domains during testing. Instead, [R Sennrich] optimizes **output PPL** using labeled data.
>Q5. Justification of batch setting needed.
**A5.** We evaluate batch size in the **online setting** (bs=1 vs. bs=100) under streaming-like scenarios. **With bs=1, the model updates after each test sample. Results show an increased number of update steps (1514→2541), while our TTL remains effective (0.2040→0.1917).**
>Q6&Q7. Differences between TTA and TTL? Also, PPL and entropy?
**A6&A7.** Both TTA and TTL adapt models without labeled data, but they differ in key aspects:
* **Objective Function.** Most TTA methods minimize the output entropy $H(P(y|x))$. TTL minimizes the PPL of the input sequence $\mathcal{P}(x)=e^{\frac{1}{T}\sum_{t=1}^{T} H_{CE}^{(t)}}$, where $H_{CE}^{(t)}$ is the token-level cross-entropy.
* **Task-Specific Adaptation.** TTA seeks to improve output confidence but cannot optimize the input representation quality that is essential for LLMs. In contrast, TTL minimizes input PPL to refine the internal representation of the input sequence to encourage more accurate predictions.
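The input-PPL objective stated above can be sketched as a stand-alone computation (the token probabilities here are invented for illustration):

```python
import numpy as np

def sequence_ppl(token_probs):
    """Perplexity from per-token probabilities P(x_t | x_<t), following the
    formula PPL(x) = exp((1/T) * sum_t H_CE^(t)), with
    H_CE^(t) = -log P(x_t | x_<t)."""
    h = -np.log(np.asarray(token_probs, dtype=float))
    return float(np.exp(h.mean()))

# Uniformly 0.5-probability tokens give PPL exactly 2
assert abs(sequence_ppl([0.5, 0.5, 0.5]) - 2.0) < 1e-9
# More surprising (lower-probability) tokens give higher PPL
assert sequence_ppl([0.1, 0.1, 0.1]) > sequence_ppl([0.9, 0.9, 0.9])
```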
>Q8. Can you show a comparison of Fig.1b, which presents an updated input and output PPL after input PPL minimization?
**A8.** Fig. 1(b) shows **a strong positive correlation between input and output PPL** on DomainBench using Llama3.1-8B with varying degrees of training. We compare the changes in both input and output PPL of our method with the original LLM in Tab. A, **our method reduces input PPL and consistently decreases output PPL**.
>Q9. Fig.1c meanings and Training Protocol.
**A9.** Clarifications are below:
* Fig. 1c shows how the selection of different proportions of test samples (X-axis: top/bottom p% by PPL) would impact the final performance for TTL (Y-axis: evaluated on the full test set).
* **Training Protocol.** We pre-sort test samples by PPL, select a given proportion, **randomly shuffle the subset**, and perform TTL. The adapted model is then evaluated on all test data.
****
We sincerely hope our clarifications above have addressed your concerns. We would be grateful if you could kindly reconsider the evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response, but most of my concerns are not well-addressed. I think my current score is fair.
- The clarification does not make the theoretical formulation clearer or convincing. Although empirically it may hold, the conditional output PPL reduction needs a more rigorous formulation to make the claim solid.
- I suggest that the authors add PPL results for more challenging tasks that have longer responses like ReasoningBench.
- If the model weights are not restored after each test sample, some continuity of the test sequence or the distribution should be assumed. However, practically, the test samples are often i.i.d. which means learning on one test sample (minimizing input perplexity for one specific test sample) does not provide any guarantee that it will benefit other test samples. The model is often restored in almost all test-time training settings.
- I still couldn’t see sufficient distinction/novelty from the proposed TTL compared with TTA. I think ablations regarding how entropy and perplexity minimization differ are significant for understanding the effectiveness of this work.
---
Reply to Comment 1.1.1:
Comment: >Q1. The clarification does not make the theoretical formulation clearer or convincing.
**A1.** Thanks for your suggestions. We provide additional theoretical analysis to justify the connection between input optimization and output PPL reduction.
* **Autoregressive Training Dynamics.** The standard next-token prediction objective makes model predictions inherently conditional on the quality of the preceding context. Thus, our TTL optimizes the model by input PPL minimization, which is expected to improve output PPL through better context prediction. In other words, an improved representation of the input (which also serves as the context for the next token) enables more accurate next-token predictions. As a result, the TTL objective is also expected to reduce output PPL.
* **Gradient-Based Theoretical Analysis.** We formalize the intuition that question-conditioned updates benefit answer predictions under a key assumption. Let $\theta' = \theta - \eta \nabla_\theta (-\log P(q;\theta))$ denote the updated parameters after a single TTL step, where $q$ is the question. Using a first-order Taylor expansion:
$\log P_{\theta'}(a|q) \approx \log P_\theta(a|q) + \eta \underbrace{\left[ \nabla_\theta \log P(q;\theta) \right]^\top \nabla_\theta \log P_\theta(a|q)}_{\text{Cross-gradient term}} + \mathcal{O}(\eta^2)$, where $a$ is the answer to the question $q$. Our core assumption is that
$\langle \nabla_q, \nabla_a \rangle=\left[ \nabla_\theta \log P(q;\theta) \right]^\top \nabla_\theta \log P_\theta(a|q) \geq 0$ for question-answer pairs with strong semantic alignment. Under this condition, the cross-gradient term becomes non-negative, guaranteeing: $\log P_{\theta'}(a|q) \geq \log P_\theta(a|q)$ for small $\eta$.
* **Empirical Validation of Assumption:** We compute this gradient inner product using 100 batches of QA pairs from the Geography domain with Llama3.1-8B. Results show that 92% of batches satisfy the non-negativity condition, with an average $\langle \nabla_q, \nabla_a \rangle = +23.36$. This strongly supports our theoretical premise.
We will include this analysis in the revised manuscript, with expanded derivations and statistical details.
>Q2. Need to add PPL results for the tasks that have longer responses like ReasoningBench.
**A2.** As suggested, we further include the results on ReasoningBench in terms of PPL in Tab. A. **Our method still outperforms existing methods in terms of PPL**.
Tab. A: PPL results on ReasoningBench.
| Methods|GSM8K|MetaMath|Logiqa|
|-|-|-|-|
|Llama3.1-8B|4.3|2.2|20.2|
|Tent|5.5|2.8|204092.9|
|EATA|8.6|409.8|5643358262027.4|
|Ours|**4.0**|**2.1**|**10.9**|
>Q3. The test samples are often i.i.d. which means learning on one test sample does not provide any guarantee that it will benefit other test samples.
**A3.** Here are the clarifications:
* **Our experiments already follow the i.i.d assumption, per your suggestion, and each test sample is drawn *Independently from an Identical Distribution***. Our setup is adopted from existing TTA methods, where the model adapts continuously to the target domain, allowing the model to capture domain-invariant features by learning from more test samples.
* We agree that continuous learning may encounter issues like overfitting, but frequent restoration may conversely suffer from underfitting. We believe each setting has its own considerations and concerns, and this debate is beyond the scope of our paper.
* We provide more challenging evaluations when the model is adapted and tested on a different domain in Tab. B. **The results show the effectiveness of TTL to learn domain-generalizable features**. We will make this clearer in the revision.
Tab. B: Results on DomainBench.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|Llama3.1-8B|0.2450|0.0834|0.1265|0.2329|
|Ours (Train on the Geo./Agri./Med./Fin.)|0.3212|0.1319|0.2372|0.3242|
|Ours (Train on Geo.)|0.3212|0.1227|0.1740|0.3055|
>Q4. Need to discuss distinction/novelty from TTL compared with TTA, add ablations about entropy and perplexity minimization.
**A4.** We further clarify the key distinctions between TTL and TTA:
* TTA focuses on **unsupervised** adaptation at the output level. This, however, easily suffers from error accumulation without reliable supervision and results in performance degradation. In contrast, we use the auto-regressive nature of language processing tasks and design a **supervised** objective for LLMs based on inputs to stably guide the adaptation during testing.
* TTA methods improve performance in certain domains but **degrade** the performance in many others (see Tab. 2). **Similar results are observed in ablation study from Tab. C.** In contrast, TTL shows superior stability and **consistently improves** performance across domains and across various models.
Tab. C: Entropy and PPL minimization on DomainBench.
| Methods|Geo.|Agri.|Med.|Fin.|
|-|-|-|-|-|
|Llama3.1-8B|0.2450|0.0834|0.1265|0.2329|
|Entropy|0.0778|0.0067|0.0105|0.0372|
|PPL|**0.3190**|**0.1255**|**0.2326**|**0.3222**| | null | null | null | null | null | null |
Tractable Transformers for Flexible Conditional Generation | Accept (poster) | Summary: This paper introduces Tracformer, a Transformer-based generative model designed for flexible and robust conditional generation tasks. Tracformer incorporates a sparse multi-scope attention mechanism to capture both local and global contextual information efficiently. Empirical results demonstrate that Tracformer outperforms existing NAR models like SEDD and MDLM in both conditional and unconditional generation tasks, showcasing its potential as a scalable and versatile generative model.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Fair
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1) The motivation for the study is clearly introduced. Non-autoregressive (NAR) models have shown superior performance in unconditional generation compared to autoregressive (AR) models of similar sizes. However, a significant challenge with NAR models is their difficulty in generalizing to conditional probability queries that were not seen during training. In response to this issue, the authors propose the Tractable Transformer (Tracformer), a NAR model designed specifically for flexible conditional generation tasks.
2) The experimental results demonstrate that the proposed Tracformer model outperforms several established models, including BERT, BART, SEDD, and MDLM, in terms of conditional generation performance.
Weaknesses:
1) Some abbreviations are repeatedly defined throughout the paper, which leads to unnecessary redundancy. For example, terms like non-autoregressive (NAR) and feed-forward neural network (FFN) are defined multiple times.
2) The contributions of the paper would be more clearly understood if they were summarized at the end of the Introduction section. This would help provide the reader with a clearer context and set expectations for the rest of the paper.
3) In the experimental section, the authors compare their proposed Tracformer model only with BERT and BART. However, more recent and powerful models, such as GPT, should be included in the comparisons presented in Figures 5 and 6.
4) The results for zero-shot unconditional perplexity in Table 4 show that Tracformer outperforms the other baselines only on the 1BW dataset, whereas MDLM demonstrates superior performance on three datasets.
5) The Background section could be moved to the appendix. Sequence modeling and Transformer models are well-established concepts and are common knowledge for researchers and readers familiar with this field. Additionally, more detailed experiments and comparisons with recent models, such as GPT, should be conducted and presented in the experimental section. A deeper analysis comparing the proposed model against more state-of-the-art architectures would provide a clearer picture of its strengths and weaknesses.
Other Comments Or Suggestions: None
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and for recognizing the potential of Tracformers in conditional generation tasks.
> Some abbreviations are repeatedly defined throughout the paper, which leads to unnecessary redundancy.
We appreciate the reviewer’s feedback and will revise the paper to ensure that abbreviations are defined only once.
> The contributions of the paper would be more clearly understood if they were summarized at the end of the Introduction section.
We thank the reviewer for the suggestion. In summary, our contributions are two-fold: (i) we identified the problem that existing NAR models suffer from severe performance degradation in conditional generation, despite having strong unconditional generation performance; (ii) we propose Tracformer, a novel Transformer-based architecture specially designed to improve conditional generation and generalize to conditional queries unseen during training. In the next version of the paper, we will add this summary at the end of the Introduction section to enhance clarity.
> In the experimental section, the authors compare their proposed Tracformer model only with BERT and BART. However, more recent and powerful models, such as GPT, should be included in the comparisons presented in Figures 5 and 6.
While autoregressive models like GPT are indeed very powerful, they cannot be used in the contextual AR (CAR) and arbitrary-context (AC) generation experiments in Figures 5 and 6. This is because GPT models can only condition on prefix prompts, whereas our experiments require conditioning on context that is scattered throughout the sequence.
In our experiments, we included GPT-2 as a baseline in Table 4, as it has a similar number of parameters compared to our model and other baselines. In this comparison, all models were trained on the WebText dataset (or its open-sourced version) and evaluated for their zero-shot unconditional perplexity across multiple datasets.
> The results for zero-shot unconditional perplexity in Table 4 show that Tracformer outperforms the other baselines on only the 1BW dataset.
We acknowledge that the zero-shot unconditional perplexity of Tracformer is less favorable compared to MDLM. However, the primary focus of this paper is to highlight the limitation of existing NAR models, including MDLM, which achieve strong unconditional generation performance but struggle with conditional generation. While Tracformer exhibits slightly worse unconditional generation performance, it significantly outperforms MDLM in multiple conditional generation tasks (as shown in Section 6.2). Since conditional generation is often more critical for downstream applications, we believe this aspect is more important.
Another potential factor influencing the results is that the Tracformer model used in our experiments has fewer parameters (109M vs. 169M) and was trained on fewer tokens (295B vs. 524B). This is summarized in Table 3. In future work, we plan to scale up Tracformers and explore different training objectives, such as the diffusion objective used by MDLM, to further evaluate their scalability and versatility.
> The Background section could be moved to the appendix.
We thank the reviewer for the helpful suggestion. In the next version of the paper, we will move most of the background of sequence modeling and Transformers to the appendix, retaining only the essential parts needed to introduce basic notations.
> …more detailed experiments and comparisons with recent models, such as GPT, should be conducted and presented in the experimental section. A deeper analysis comparing the proposed model against more state-of-the-art architectures would provide a clearer picture of its strengths and weaknesses.
We thank the reviewer for the suggestion. In Section 6.2, we compared Tracformer against state-of-the-art autoregressive models (e.g., GPT-2) as well as non-autoregressive models (e.g., SEDD and MDLM). For unconditional generation tasks, we included both autoregressive and non-autoregressive baselines. For conditional generation tasks, only non-autoregressive models were included since autoregressive models cannot condition on suffix contexts, making a direct comparison infeasible. In the next version of the paper, we will add the above discussion to the experiment section. | Summary: This paper explores why non-autoregressive (NAR) generative models often underperform in conditional tasks, despite strong unconditional performance. The authors introduce Tractable Transformers (TracFormer), which factorize conditional queries to handle partial inputs flexibly while leveraging both local and global context. Their experiments show that TracFormer achieves robust conditional generation, surpassing diffusion- and autoregressive-based baselines.
## update after rebuttal
The authors have provided a thorough rebuttal and addressed most of my concerns. I continue to believe that my original score was an accurate evaluation.
Claims And Evidence: Yes, the observations are clearly articulated, and the proposed methodology to address them is both well-founded and convincingly supported by experimental results.
Methods And Evaluation Criteria: The methodology clearly outlines how it aims to address the problem, and it is easy to understand how the proposed approach tackles the observed issues. Additionally, the evaluation was conducted using fair benchmarks in comparison with existing methods.
Theoretical Claims: I have reviewed Appendix G and did not identify any major issues.
Experimental Designs Or Analyses: The necessary experiments were well-designed, and the results were appropriately analyzed.
Supplementary Material: I may have overlooked some minor details, but I have reviewed most of the content necessary for understanding the main text and did not encounter any issues.
Relation To Broader Scientific Literature: Recently, there has been a trend in LLMs towards using simple transformers with tokenization, which has led to a significant reduction in architecture search even at smaller scales. In this regard, I find the scalable proposal in the paper to be impactful, although scaling in practice presents its own challenges.
Essential References Not Discussed: There are no comments on this matter.
Other Strengths And Weaknesses: The paper addresses a clearly defined and significant problem, with observations that are straightforward. The proposed methodology is well explained, making it easy to understand how it tackles the problem. Moreover, the work is supported by well-designed experiments and convincing results, and the analysis effectively addresses any potential concerns.
However, a few clarifications were needed, and the details are provided in the "Questions For Authors" section.
Other Comments Or Suggestions: At line 630 in Appendix A, are you referring to Decoder instead of Encoder?
Questions For Authors: 1. In Figure 4(a), is the token identical to x₀ from the prefix encoder input understood from a teacher forcing perspective?
2. In the original Transformer decoder, self-attention and cross-attention are both present. In TracFormer, is it correct to understand that the decoder combines the self-attention causal mask and cross-attention into a single cross-attention operation? Was this design choice naturally adopted from a modeling perspective, or did it stem from a separate insight by the authors?
3. On line 242, the paper states, “Attention masks are used in cross-attention layers to ensure each decoder embedding only depends on desired input variables.” However, since equation (5) appears to merely apply a stride, I do not fully understand how it guarantees that the variable scope does not include X_C. Could you clarify this?
4. The proposed method employs two encoders (a prefix encoder and a suffix encoder). What is the fundamental advantage of using these two uni-directional encoders over a bidirectional encoder? Is this choice made solely for modeling purposes, or does it reflect the authors’ specific intuition?
5. In Figure 5, if the mask ratio is 1.0, does this correspond to the unconditional perplexity? If so, should we interpret that unconditional perplexity is worse than conditional perplexity?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and for acknowledging that the paper addresses a clearly defined and significant problem.
> At line 630 in Appendix A, are you referring to Decoder instead of Encoder?
We thank the reviewer for pointing out this typo, and the paragraph title should be “Sparse Attention Masks of the Decoder”. We will fix the typo in the next version of the paper.
> In Figure 4(a), is the token identical to x₀ from the prefix encoder input understood from a teacher forcing perspective?
Yes, the target tokens are identical to the input. However, as described in Section 4.3 (Equations (5) and (6)), we set the cross-attention masks from the decoder to the encoder such that the decoder will not receive information about $x_0$ from the prefix encoder when predicting $X_0$. This is done by not attending to any token in the prefix encoder.
> In TracFormer, is it correct to understand that the decoder combines the self-attention causal mask and cross-attention into a single cross-attention operation?
There is no self-attention in the decoder of Tracformers, and the model predicts a token only using information from the encoder(s) through the cross-attention operations. This design choice (not including self-attention layers in the decoder) was made mainly for (inference-time) efficiency considerations, as the decoder can be fully parallelized when predicting multiple tokens.
> However, since equation (5) appears to merely apply a stride, I do not fully understand how it guarantees that the variable scope does not include X_C. Could you clarify this?
As shown in Figure 4, in the case of arbitrary-context (AC) generation, the masked tokens (variables not in $X_C$) are replaced by the mask token (<MASK>) so the model will not observe these tokens. In the case of contextual AR (CAR) generation, the condition $t' < t$ in Equation (5) is used to guarantee that the model only observes previous tokens from the prefix encoder. In the next version of the paper, we will add more discussion to clarify how we guarantee the model does not observe tokens outside of $X_C$.
Additionally, we agree with the reviewer that there are other ways to design the decoder masks to ensure the model does not observe variables that it should not during training. We plan to explore other design choices in future work.
> The proposed method employs two encoders (a prefix encoder and a suffix encoder). What is the fundamental advantage of using these two uni-directional encoders over a bidirectional encoder?
This design choice is made mainly because it aligns well with the contextual AR (CAR) generation paradigm, where the prefix encoder always encodes the full prefix context while the suffix encoder represents the given suffix tokens. It would be very interesting to see which (two unidirectional encoders versus one bidirectional encoder) performs better in the arbitrary-context (AC) generation paradigm, and we plan to explore it in future work. We will discuss this in the conclusion section in the next version of the paper.
> In Figure 5, if the mask ratio is 1.0, does this correspond to the unconditional perplexity? If so, should we interpret that unconditional perplexity is worse than conditional perplexity?
In Figure 5, we only vary the mask ratio from 0.1 to 0.9 in increments of 0.1. Therefore, the right-most conditional perplexity values correspond to a mask ratio of 0.9. In general, as the mask ratio increases, the conditional perplexity worsens because the model has access to less contextual information. | Summary: This paper proposes a novel model architectural modification of Transformers and demonstrate that the proposed model outperforms baselines on non-autoregressive (NAR) conditional generation tasks, especially when the mask pattern at inference is different from the mask pattern at training.
Claims And Evidence: Most claims are supported by clear and convincing evidence.
One somewhat inaccurate claim is that at the end of Section 6.2, below Table 4, the authors claim that “Tracformer remains highly competitive, achieving results comparable to or better than larger models.”
But the *unconditional* generation perplexity (Table 4) of the proposed model does not compare favorably with strong baselines such as GPT-2, SEDD, and MDLM.
I understand that the proposed model is smaller, but more experiments (e.g. slightly scaling it up to match baseline model sizes) may be needed to more convincingly demonstrate its capability of unconditional generation.
This limitation does not undermine the authors' claim regarding *conditional* generation tasks.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.
Theoretical Claims: This paper does not claim to include theory as its primary contributions.
Experimental Designs Or Analyses: I checked all experimental designs and analyses in the main paper. They are sound and valid.
Supplementary Material: I checked some important details in the appendix but did not carefully verify each claim.
Relation To Broader Scientific Literature: This paper contributes to improving the generalizability of non-autoregressive (NAR) conditional language generation tasks, and compares favorably against SOTA discrete diffusion language models under OOD masking patterns.
Essential References Not Discussed: References are sufficiently discussed.
Other Strengths And Weaknesses: Strengths
1. originality: the multi-scale attention applied to the NAR context for better generalizability is novel. (Similarly designed sparse attention may have been used for AR Transformers to improve efficiency in long-context settings.)
2. clarity: the techniques and the results are clearly described.
Weaknesses
1. significance: the tasks, datasets, and evaluation metrics used in this paper mostly correspond to generic language modeling, emphasizing fluency.
- This paper does not demonstrate that the proposed model outperforms baselines under some task-specific metrics (e.g. BLEU for translation, ROUGE for summarization, accuracy for reasoning, etc).
- These specialized tasks are commonly benchmarked for non-autoregressive language modeling papers. They provide important additional information since in these tasks, fluency is not the only important capability.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. In Figure 2:
- The pre-trained SEDD was only able to handle sequences of length 1024. How did you calculate the log likelihood of shorter sequences?
- Does SEDD output the exact model-predicted likelihood, or its lower bound (ELBO)?
2. In Table 1: What does a mask range of [0.25, 0.75] mean? Does it mean that out of the context window of 1024, the positions from 256 to 768 are masked, while other positions are visible?
3. I would like to learn from the authors about their insights on my feedback shared in the above sections of this review form.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and for recognizing the novelty and clarity of our work.
> I understand that the proposed model is smaller, but more experiments (e.g. slightly scaling it up to match baseline model sizes) may be needed to more convincingly demonstrate its capability of unconditional generation.
We thank the reviewer for the suggestion. As pointed out by the reviewer, we would like to highlight that although the unconditional generation performance compares less favorably with the baselines, our model achieves significantly better conditional generation performance. We will adjust the descriptions and claims about the unconditional generation performance of Tracformers versus the baseline models.
We agree with the reviewer that it would be valuable to conduct further experiments by scaling up Tracformer to assess how its performance improves with model size. Additionally, as discussed in the conclusion, another promising direction is exploring how Tracformer can be combined with recent advancements in training objectives for discrete diffusion models, such as diffusion language models. We plan to investigate these directions in future work.
> the tasks, datasets, and evaluation metrics used in this paper mostly correspond to generic language modeling, emphasizing fluency.
We thank the reviewer for the suggestion on extending the evaluation metrics. We focus mainly on metrics that reflect generative modeling performance (conditional log-likelihood/perplexity) and fluency (MAUVE, BERT score) to study the fundamental problem of NAR models' poor conditional generation performance. Although fluency alone may not be sufficient to fully characterize generation quality, it provides insights into how well the generated text aligns with natural language distributions. Perplexity, on the other hand, serves as a general measure of the model’s generative modeling performance.
To comprehensively evaluate Tracformers, we evaluated the BLEU-4 scores of Tracformer and the baseline model BART on the WikiText-103 dataset. The model was evaluated across six different masks. The evaluation setup follows Table 1. As indicated by the results, Tracformer outperforms BART consistently. We will include the results in the next version of the paper.
| Mask ranges | Tracformer | BART |
| ------------------------------- | ---------- | ----- |
| [0.25,0.75] | **0.524** | 0.513 |
| [0.5,1.0] | **0.540** | 0.519 |
| [0.1,0.4] & [0.6,0.9] | **0.419** | 0.405 |
| [0,0.4] & [0.5,0.8] | **0.339** | 0.325 |
| [0,0.25] & [0.75,1.0] | **0.536** | 0.523 |
| [0,0.1] & [0.2,0.5] & [0.7,1.0] | **0.337** | 0.322 |
> The pre-trained SEDD was only able to handle sequences of length 1024. How did you calculate the log likelihood of shorter sequences?
While SEDD was trained with sequences of length 1024, as described in the original paper (e.g., in their Section 5.3.2), it can handle shorter sequences. Specifically, when generating a sequence of length N, we only provide the first N tokens to the Transformer model. The authors of SEDD provided code to handle shorter sequences in their official GitHub repository, and we used their implementation. Note that all other models were also trained with sequences of length 1024, so the evaluation protocol is consistent across all models.
> Does SEDD output the exact model-predicted likelihood, or its lower bound (ELBO)?
As described in their paper, SEDD, like other discrete diffusion models, can only compute the Evidence Lower Bound (ELBO) rather than the exact model-predicted likelihood. This is a common limitation in diffusion models since computing the exact likelihood is intractable.
> In Table 1: What does a mask range of [0.25, 0.75] mean? Does it mean that out of the context window of 1024, the positions from 256 to 768 are masked, while other positions are visible?
Yes, the range of [0.25, 0.75] specifies the proportion of the context window that is masked. In the case of a context window of 1024 tokens, this means the positions from 256 to 768 are masked.
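As a concrete illustration of this convention (our own sketch, not code from the paper), the fractional mask range maps to absolute token positions as follows:

```python
# mask range [0.25, 0.75] over a 1024-token context window:
# fractions of the window are converted to absolute positions
seq_len = 1024
lo, hi = 0.25, 0.75
masked = range(int(lo * seq_len), int(hi * seq_len))

assert (masked.start, masked.stop) == (256, 768)  # positions 256..768 masked
assert len(masked) == 512  # i.e., half of the context window
```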
> I would like to learn from the authors about their insights on my feedback shared in the above sections of this review form.
We thank the reviewer for their valuable comments, which we found very helpful in improving the quality of our paper. For example, as suggested by the reviewer, we added new metrics to further strengthen the experiment section. We also fully agree that text generation quality can be assessed from multiple perspectives.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal!
Re: “SEDD, like other discrete diffusion models, can only compute the Evidence Lower Bound (ELBO) rather than the exact model-predicted likelihood.”
In light of that, some evaluation numbers might be slightly misleading: for example, Figure 2 claims to use the log-likelihoods predicted by SEDD, but as the authors stated in the rebuttal, SEDD can only compute the ELBO. Does it mean that the numbers reported in Figure 2 are in fact the ELBO numbers, instead of the actual model-predicted log likelihood? Furthermore, just because two query orders lead to different ELBOs does not necessarily mean that these two query orders will cause the model to predict different likelihoods. Of course, for SEDD, the exact likelihood is unavailable, but I think the authors should clearly note (in applicable parts of the paper writeup) the conceptual gap between using a lower bound (ELBO) and using the exact model-predicted likelihood in their reported numbers.
The proposed updates make sense. I encourage the authors to incorporate those updates in the paper draft.
I think my rating of “4: Accept” remains accurate.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their additional feedback.
> Does it mean that the numbers reported in Figure 2 are in fact the ELBO numbers, instead of the actual model-predicted log likelihood?
The numbers in Figure 2 represent log-likelihoods but under constrained ordering schemes for token generation. For example, in Figure 2(a), we evaluate likelihoods using two ordering schemes—forward and reverse. We agree with the reviewer that these likelihoods do not directly represent the overall likelihood or ELBO of SEDD, as computing the ELBO requires sampling or enumerating various unmasking strategies and weighing the resulting likelihoods for each strategy.
The illustrative experiment is not intended to analyze the ELBO or the overall likelihood of the SEDD model but rather to demonstrate how generative performance is influenced by the mask/unmask strategy. This further leads to the “query generalization” problem discussed in the last two paragraphs of Section 3.
We will include the above discussion and note the conceptual gap between ELBO and likelihood in the next version of the paper.
> I think the authors should clearly note (in applicable parts of the paper writeup) the conceptual gap between using a lower bound (ELBO) and using the exact model-predicted likelihood in their reported numbers
In the next version of the paper, we will clarify that we only report ELBO numbers instead of log-likelihoods for discrete diffusion models such as SEDD and MDLM since computing likelihood is intractable. We will also incorporate other suggestions into the next version of the paper, as detailed in the rebuttal. | Summary: This paper is motivated by the fact that Non Auto-regressive (NAR) generative models do not work very well for conditional generation. The paper proposes Tracformers, a transformer-based architecture robust for conditional generation in the more difficult NAR setting: this is done through using *multiple context levels*, in order to force the model being able to use only local subset of the context when training for conditional generation, which authors call *query generalization*. After describing their proposed architecture, they describe two settings, Contextual AR generation and Arbitrary Context (AC) generation, for which they describe the instantiation of the model and the loss. Experiments compare, in both settings, a model built with Tracformers to basic (BERT and BART) and a scaled-up version to recent (diffusion-based) generative NAR models, using conditional complexity on Wiki103, 1B and Lambada.
Claims And Evidence: The paper claims that *Tracformer’s multi-scope attention mechanism and specialized encoder-decoder design enable robust conditional generation performance*, which is verified by the experiments. However, as pointed out in the conclusion, the larger model proposed is the size of a GPT-2 model, and the question remains whether the reduction of interactions in self-attention in the Tracformer layer will significantly restrict the model's expressiveness at larger scales (and in particular, for longer encoded sequences).
Methods And Evaluation Criteria: The method relies on an encoder-decoder architecture, where:
- The encoder uses increasing context length with the layers (exponential growth), with a maximal number of attended positions
- The decoder works through cross-attention only, making independent predictions at different positions; it attends first to encoded representations built on a large scope and then moves towards local information.
The description of the architecture relies on masks to restrict attention to the desired scope.
Depending on the setting (CAR implies auto-regressive generation with partial knowledge of the future, while AC is completely NAR) and the available input context, different mask schemes and losses are implemented.
Evaluation is made mainly on conditional perplexity of the generated sequences, as well as infilling performance with MAUVE and Bert score.
Theoretical Claims: No theoretical claims are made in the paper.
Experimental Designs Or Analyses: - The experiments compare the Tracformer with the main types of models corresponding to AR and CAR settings; and then, to state-of-the-art NAR generative models.
- Models are compared across various mask ratios.
- The analysis is extensive, and further ablation studies are provided in the supplementary material.
Supplementary Material: The supplementary material provided is extensive; I have reviewed sections D and E.
Relation To Broader Scientific Literature: This paper's contribution is a new architecture for NAR modeling, and could be used with dedicated NAR models, such as discrete diffusion models.
Essential References Not Discussed: I have no knowledge of any essential reference missing.
Other Strengths And Weaknesses: - This paper proposes a promising idea which should be further explored.
- While this paper is generally well written, Section 4 remains difficult to follow.
Other Comments Or Suggestions: - The abstract is quite unclear in retrospect; terms that are well defined later (conditional probability query) appear obscure.
- L238: while this remark may only be here to provide intuition for the design of the model, it would be better if it was backed by a reference.
- Footnote 1 is unclear: I don't see how it matches Equation 3.
Questions For Authors: I have no question for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and for recognizing Tracformers as a promising architecture for addressing the challenges NAR models face in conditional generation.
> The abstract is quite unclear in retrospect; terms that are well defined later (conditional probability query) appear obscure.
We appreciate the reviewer’s suggestion. In the next version of the paper, we will clarify the term “conditional probability query” directly in the abstract, referring to the set of tokens/variables provided during NAR generation.
> L238: while this remark may only be here to provide intuition for the design of the model, it would be better if it was backed by a reference.
While the main goal of this sentence is to give intuition, it can indeed be supported by references such as [1], which highlights the effectiveness of a top-down hierarchical scheme in long text generation. We will add the reference and further discuss it in the next version of the paper.
> Footnote 1 is unclear: I don't see how it matches Equation 3.
We thank the reviewer for pointing out the typo in Footnote 1. According to the definition in Equation 3, we have that $\phi^{l-1}_{t-2^{l-1}}$ = {$t’ : t’ \geq 1, 0 \leq t - 2^{l-1} - t’ < 2^{l-1}$}
and
$\phi^{l-1}_{t}$ = {$t’ : t’ \geq 1, 0 \leq t - t’ < 2^{l-1}$}.
Taking the union of the two sets leads to
$\phi^{l}_{t}$ = {$t’ : t’ \geq 1, 0 \leq t - t’ < 2^{l}$}.
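The union identity can also be checked numerically; a quick sketch (our own illustration, with `scope(l, t)` standing in for $\phi^l_t$):

```python
def scope(l, t):
    # phi^l_t = {t' : t' >= 1, 0 <= t - t' < 2**l}
    return {tp for tp in range(1, max(t, 0) + 1) if 0 <= t - tp < 2 ** l}

# verify phi^{l-1}_{t - 2^{l-1}} union phi^{l-1}_t == phi^l_t
for l in range(1, 6):
    for t in range(1, 50):
        assert scope(l - 1, t - 2 ** (l - 1)) | scope(l - 1, t) == scope(l, t)
```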
We will fix it in the next version of the paper.
> While this paper is generally well written, Section 4 stays difficult to follow.
In our humble opinion, the main reason Section 4 is more challenging to follow than the other sections is the indexing of the variable scopes for different tokens, which is necessary to rigorously define Tracformer. To improve clarity, we will add a figure with one example Tracformer layer where each feature embedding is labeled with the corresponding variable scope. This figure will then be frequently referenced throughout the model description.
[1] Wan, Kaiyang, Honglin Mu, Rui Hao, Haoran Luo, Tianle Gu, and Xiuying Chen. "A Cognitive Writing Perspective for Constrained Long-Form Text Generation." arXiv preprint arXiv:2502.12568 (2025). | null | null | null | null | null | null |
Primal-Dual Neural Algorithmic Reasoning | Accept (spotlight poster) | Summary: This work proposed a framework PDNAR, that lies within the neural algorithmic reasoning framework, uses a bipartite MPNN to simulate the primal dual algorithm on solving minimum hitting set problem and its extensions.
Claims And Evidence: They provided theoretical proof that MPNN can simulate the algorithm.
They provided experimental results that PDNAR outperforms the baselines in all the experiments.
Methods And Evaluation Criteria: The problem definition is solid, the proposed method makes sense and the benchmark datasets (both real world and synthetic) are good.
Theoretical Claims: I check the proof in the appendix.
The assumption of MPNN is that it uses MLPs. But there exist some functions like ln or fractional that cannot directly be simulated by MLPs.
Experimental Designs Or Analyses: I checked all the experiments.
Supplementary Material: I read the code from the anonymous url without running it.
Relation To Broader Scientific Literature: The work focused on MHS problem and its extensions, which are some essential families of CO problems.
The work also established the effectiveness of bipartite representation and MPNN approach for the primal dual algorithm.
Essential References Not Discussed: Not that I know, they discussed wide related work in the appendix as well.
Other Strengths And Weaknesses: Strength:
- Overall the paper is well written and easy to follow.
- The novelty is significant, and the experiment results are pretty strong.
Weakness:
- The algorithm 1 is hard to read without explanation.
Other Comments Or Suggestions: Some notations are messy; for example, the symbols $E, A$ are overloaded without clarification.
Questions For Authors: - You remove the variables in the bipartite graph by masking them out, but how? By assigning 0s? Does it influence batchnorm or something?
- Could you explain more what is the NAR baseline and how is it essentially different from PDNAR? Because PDNAR is also NAR.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your support and highlighting the strengths of our paper, which we summarize below.
- Paper: **well-written** and **easy to follow**
- Problem definition: **solid**
- Novelty: **significant**
- Benchmark datasets: **good**
- Empirical results: **strong**
We address your questions in the following:
> I check the proof in the appendix. The assumption of MPNN is that it uses MLPs. But there exist some functions like ln or fractional that cannot directly be simulated by MLPs.
Thanks for checking our proof in the appendix! As shown in Line 767, the logarithm is hard-coded as a feature transformation, and thus is not to be simulated by MLPs. The logarithm is used so that the fractional/division can be achieved by subtraction followed by the ELU activation function. This avoids simulating division with MLPs.
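The underlying identity is simply that division becomes subtraction in log space; a minimal numeric illustration (our own sketch, which uses `exp` to recover the ratio, whereas the proof works with the ELU activation instead):

```python
import math

a, b = 6.0, 3.0
# division as subtraction in log space; the log features are hard-coded
# inputs, so the MLP never has to simulate division itself
ratio = math.exp(math.log(a) - math.log(b))

assert abs(ratio - a / b) < 1e-9
```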
> You remove the variables in the bipartite graph by masking them out, but how? By assigning 0s? Does it influence batchnorm or something?
When removing a variable, we remove all its edges in the bipartite graph, so it does not effectively participate in future message-passing steps. The edges are removed by applying a true/false mask. Empirically, we do not apply BatchNorm.
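A toy version of this edge-masking step might look as follows (illustrative only; the array names and shapes are our own, not the paper's code):

```python
import numpy as np

# messages[c, v] is the message from constraint c to variable v
n_cons, n_vars, d = 3, 4, 8
messages = np.random.randn(n_cons, n_vars, d)
edge_mask = np.ones((n_cons, n_vars), dtype=bool)

# "remove" variable 2 by dropping all of its edges via a boolean mask,
# so it no longer participates in message passing
edge_mask[:, 2] = False
masked = np.where(edge_mask[..., None], messages, 0.0)

assert np.all(masked[:, 2] == 0.0)
```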
> Could you explain more what is the NAR baseline and how is it essentially different from PDNAR? Because PDNAR is also NAR.
The processor of PDNAR was specially designed to align with the primal-dual approximation algorithm (Section 4.1). For the NAR baseline, we use the conventional choice of MPNN with max aggregation (e.g. [1]). Furthermore, the NAR baseline does not use the primal-dual bipartite formulation, and therefore, is only trained on the primal but not the dual signal. Results align with the previous findings that multi-task learning helps in NAR (e.g. [1]). We will edit “Baselines” in Section 5.1 to highlight the differences and architectural details for the NAR baseline.
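A minimal sketch of one message-passing step with max aggregation, the conventional choice for such a baseline (our own toy example, not the baseline's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
h = rng.normal(size=(n, d))                                # node features
adj = (rng.random((n, n)) < 0.5) | np.eye(n, dtype=bool)   # adjacency + self-loops

# each node takes an elementwise max over its neighbors' features
msgs = np.where(adj[:, :, None], h[None, :, :], -np.inf)
agg = msgs.max(axis=1)

assert agg.shape == (n, d) and np.isfinite(agg).all()
```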
> The algorithm 1 is hard to read without explanation. Some notations are messy.
Thank you for pointing out the areas that we can improve for the readability! We will enhance the explanation of the algorithm in Section 3.3 and make the pseudocode more intuitive to understand. We will also include a notation table and ensure the notational consistency in our revised paper.
We thank the reviewer for your valuable feedback. We hope these answer your questions, and please let us know if you require further clarification. Thank you very much!
[1] Neural Execution of Graph Algorithms, Veličković et al, ICLR 2020. | Summary: The authors present a neural architecture adopting the primal-dual framework, as studied in algorithm design especially for the approximation of NP-hard problems. Using the minimum hitting set as the primary case study, the authors prove the proposed architecture satisfies the requirement of algorithmic alignment, i.e., it can replicate the intermediate states produced by the actual algorithm as described in Algorithm 1. Importantly, the authors propose a training strategy enabling the neural architecture to learn from small instances with known solutions. Beyond replicating the algorithm, the resulting neural solver can surpass the quality of the solvers used to train it, and the embeddings computed by the trained neural architecture can have other benefits. A comprehensive suit of experiments adequately support those claims, establishing the efficacy of the proposed approach. Further valuable discussion is presented in the appendices, e.g., more discussion of related work, as well as a summary of limitations.
Claims And Evidence: - A key claim is algorithmic alignment, which is supported by a theoretical proof in Appendix B.
- Empirical results further demonstrate the efficacy of the approach and proposed architecture.
Methods And Evaluation Criteria: The contributions are supported by a comprehensive set of evaluations of synthetic, OOD, and real-world datasets, along with a comparison to commercial solvers.
Theoretical Claims: Only glanced at the proofs in Appendix B. The presented strong induction seems to make sense. It's a bit difficult to follow given its length, mirroring the steps of the algorithm. It would help to recall the pseudocode, highlighting which part is being mirrored at each proof step.
One remark: the last line mentioned $\Theta_{\text{dual}}$ while I expected $\mathcal{M}_\Theta$.
Experimental Designs Or Analyses: Each section seemed to use reasonable datasets, e.g., known graph families, datasets used in prior studies, or comparing to established solvers.
Supplementary Material: Checked Appendix B and Appendix F.
Relation To Broader Scientific Literature: The authors adequately discuss related works in algorithm theory, neural reasoning, neural combinatorics optimization, and linear programming.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: I was hoping the authors would do an ablation study of the uniform update rule.
Other Comments Or Suggestions: I see Appendix F mentions possible strengthening using ``other advanced techniques'' without specifics. It would have helped to even hint at 1-2 such techniques.
Questions For Authors: No further questions at this time.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your time carefully **reading our main paper and appendices** and for **strongly supporting** our paper! Thank you for highlighting our experiments and discussion in the appendix as comprehensive and helpful.
We answer your questions in the following:
> I was hoping the authors would do an ablation study of the uniform update rule.
We conducted an ablation by comparing the model with and without the uniform update rule on the MHS (minimum hitting set) problem. Recall that the primal-dual approximation algorithm for MHS requires the uniform update rule. Results in Table R1 show that the uniform update rule design in our model helps to improve the model performance due to its better alignment with the algorithm.
Table R1. Ablation of the uniform update rule on MHS.
| Uniform update rule | 16 (1x) | 32 (2x) | 64 (4x) | 128 (8x) | 256 (16x) | 512 (32x) | 1024 (64x) |
|----------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| No | 0.996 ± 0.003 | 0.983 ± 0.010 | 0.999 ± 0.013 | 0.987 ± 0.013 | 0.977 ± 0.005 | 1.009 ± 0.013 | 1.060 ± 0.028 |
| Yes | 0.990 ± 0.003 | 0.981 ± 0.003 | 0.983 ± 0.006 | 0.965 ± 0.005 | 0.979 ± 0.005 | 1.004 ± 0.013 | 1.043 ± 0.022 |
> I see Appendix F mentions possible strengthening using ``other advanced techniques'' without specifics. It would have helped to even hint at 1-2 such techniques.
Yes, similar to the uniform update rule, there are other advanced techniques to enhance the basic primal-dual framework to design approximation algorithms. One example is selectively choosing the number of dual variables to increase at each timestep, strengthening scalability. Another example is specialized handling of dual variables with different types, which is required for the uncapacitated facility location problem. These techniques can be incorporated into our framework as extensions, which we leave for future work. We will add more specifics and examples about these techniques in our revised paper.
> Only glanced at the proofs in Appendix B... It would help to recall the pseudocode, highlighting which part is being mirrored at each proof step.
Thanks for reading our proof and providing helpful suggestions on its readability! We will include a side-by-side reference to the algorithm’s pseudocode in Appendix B. And thank you for pointing out the typo. We will fix it in our revision as well.
We are grateful for your support and hope we have addressed your questions. Please let us know if you require further clarification. Thank you very much! | Summary: This paper presents a general NAR framework based on the primal-dual paradigm, aiming to solve NP-hard problems that traditional NAR methods struggle with by mimicking approximation algorithms. The authors provide a detailed model description, theoretical justifications, and empirical validation on three NP-hard graph tasks. Overall, I find the paper novel and well-written. However, I still have concerns regarding certain details. If the authors can address these issues, I would be willing to reconsider my score.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes, all parts
Relation To Broader Scientific Literature: This paper extends the traditional NAR to NP-hard problems
Essential References Not Discussed: yes
Other Strengths And Weaknesses: Strengths:
1. This paper extends traditional NAR to NP-hard problems, which I find meaningful and valuable.
2. The paper provides solid theoretical support for its approach.
3. The writing is clear, and the paper is well-structured and well-organized.
Weaknesses:
1. The proposed method aims to directly learn approximation algorithms for NP-hard tasks. However, the authors only discuss three NP-hard problems related to graphs, which may limit the generality of their approach.
2. I am curious about how this method performs on problems that are not NP-hard. If it can effectively approximate both NP and non-NP problems, it would make the approach even more compelling.
3. For NP problems, should the authors consider multiple approximation algorithms rather than a single one?
4. I remain concerned about the construction of effective training and testing sets, as obtaining reliable ground truth for NP-hard problems is inherently challenging.
Other Comments Or Suggestions: see weakness
Questions For Authors: see weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our paper, which we summarize below.
- Paper: **novel** and **well-written**
- Motivation: **meaningful** and **valuable**
- Theoretical support: **solid**
- Writing: **clear**, **well-structured and well-organized**
We address your questions in the following:
> The proposed method aims to directly learn approximation algorithms for NP-hard tasks. The authors only discuss three NP-hard problems related to graphs, which may limit the generality of their approach.
We politely point out that while Minimum Vertex Cover is a graph problem, both Minimum Set Cover and Minimum Hitting Set are not graph problems. Furthermore, Minimum Hitting Set is a general formulation of a wide range of graph and non-graph problems. What we propose is a bipartite graph representation that “turns” these problems into graphs using primal and dual variables. This is based on the fact that all linear programs have a dual formulation.
> I am curious about how this method performs on problems that are not NP-hard. If it can effectively approximate both NP and non-NP problems, it would make the approach even more compelling.
Thank you for highlighting that our primal-dual framework applies to both NP-hard and polynomial-time solvable problems. While this is true, we emphasize that our main contribution lies in the NP-hard domain. In NAR, NP-hard tasks were previously underexplored due to the lack of optimal solutions and the complexity of the problems. Therefore, we focus on extending NAR beyond exact algorithms for tractable problems and into the space of approximation algorithms for NP-hard tasks. Furthermore, the goals are different. In tractable problems, the aim is to simulate an algorithm that *already* produces optimal solutions. In our case, the objective is to outperform the approximation algorithm itself. To do this, we leverage optimal solutions from integer programming solvers, which is only meaningful in the NP-hard setting. As a result, we evaluate performance based on solution quality (weight ratio), rather than accuracy as commonly used in prior NAR work on tractable problems. We elaborate on this point in the next response.
> I remain concerned about the construction of effective training and testing sets, as obtaining reliable ground truth for NP-hard problems is inherently challenging.
We clarify that only *training* requires obtaining ground-truth labels. Since we train on small problem instances on the scale of 16, these are fast to obtain with integer programming solvers like HiGHS and Gurobi. Then, we *test* against the approximation algorithm on larger problems. As shown in the caption of Table 1, the metric we use is the *model-to-algorithm weight ratio*: the sum of weights from the model solution divided by the sum of weights from the algorithm solution ($w_{model}/w_{algo}$). Therefore, a weight ratio < 1 means the model produces higher-quality solutions than the algorithm. In summary, we show that training on these small optimal instances enables the model to outperform the algorithm on larger problems (up to scale of 1024).
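For concreteness, the metric can be sketched in a few lines of Python (illustrative only; `model_cover`, `algo_cover`, and `weights` are hypothetical names, not the paper's code):

```python
# Hedged sketch (ours, illustrative) of the model-to-algorithm weight ratio
# used in Table 1; the names below are hypothetical, not the paper's API.

def weight_ratio(model_cover, algo_cover, weights):
    """Total weight of the model's solution divided by the total weight of
    the approximation algorithm's solution; a ratio < 1 means the model
    found a lighter (better) solution."""
    w_model = sum(weights[v] for v in model_cover)
    w_algo = sum(weights[v] for v in algo_cover)
    return w_model / w_algo

# Toy minimum-vertex-cover instance: the model picks a lighter cover.
weights = {0: 1.0, 1: 3.0, 2: 2.0}
print(weight_ratio({0, 2}, {1, 2}, weights))  # 3.0 / 5.0 = 0.6
```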
> For NP problems, should the authors consider multiple approximation algorithms rather than a single one?
Thank you for suggesting this! Yes, training an NAR model on multiple algorithms simultaneously has been proven to be effective [1, 2]. Therefore, we hypothesize that our models can also enjoy multi-task learning benefits by training on multiple approximation algorithms simultaneously. However, the multi-task training setup requires more intricate designs, such as the selection of approximation algorithm pairs/sets and architectural design. We believe this is outside the scope of the current paper, but it points to a meaningful direction for future work: how multi-task learning transfers from exact algorithms to approximation algorithms for NAR. We will add these points to Future Work in the revised paper.
We sincerely thank the reviewer for your valuable feedback and highlighting interesting future directions. We hope we answered your questions, and please don’t hesitate if you require further clarification. Thank you very much!
[1] A Generalist Neural Algorithmic Learner, Ibarz et al, LoG 2022.
[2] How to transfer algorithmic reasoning knowledge to learn new algorithms? Xhonneux et al., NeurIPS 2021. | Summary: The authors propose Primal-Dual Neural Algorithmic Reasoning (PDNAR), for training neural networks to simulate classical approximation algorithms. The core idea is to leverage primal-dual paradigm, by representing primal and dual variables as a bipartite graph and parameterzing by GNN. Optimal solutions from small problem instances are incorporated as training signals, enabling the network to surpass the performance of the original algorithms.
The authors demonstrate that their method outperforms existing baselines across NP-hard problems like vertex cover, set cover, and hitting set.
Code is also shared.
Claims And Evidence: 1. Generalization to larger graphs: results are shown in Table 1; trained on smaller instances, inference on larger ones yields better solution quality.
2. Generalize to graphs from OOD families: Table 2. OOD samples were used for evaluation.
3. Application to warm-starting large-scale commercial solvers, such as Gurobi: Table 4; better initialization leads to lower running time.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Not checked in detail.
Experimental Designs Or Analyses: Yes. The evaluation seems correct.
Supplementary Material: Code
Relation To Broader Scientific Literature: PDNAR directly trains GNNs to simulate and improve upon approximation algorithms for NP-hard tasks.
PDNAR goes beyond simply replicating algorithms by incorporating optimal solutions from small problem instances as training signals. This helps them outperform existing method.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths
1. Ablations are done with and without optimal-solution integration ("No optm"); without it, solution quality drops.
2. Application is shown -> Initialization Gurobi helps in reducing running time.
3. Code is shared.
Other Comments Or Suggestions: Check questions for authors.
Questions For Authors: 1. Are there any failure cases? What kind of scenarios should use this approach? Where does it fail?
2. What if the optimal solution is not easy to obtain? How does the method fare when high-quality but non-optimal solutions are available? I believe that if this helps, it is an advantage in cases where obtaining the optimum even for small instances is hard. Are there any such cases? How would the algorithm fare? This is an optional experiment, but a small discussion (even without experiments) on this should improve the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time reading our paper and providing valuable feedback!
We appreciate your **positive** acknowledgement of our **empirical design**, **results**, and **application**.
We address your questions in the following:
> Are there any failure cases? What kind of scenarios should use this approach? Where does it fail?
We note that the primal-dual framework is a core method for many algorithmic problems, so our proposed framework is general. On the other hand, our current architecture design is based on the hitting set. While hitting set can be reformulated as many other problems (e.g. vertex cover, set cover, etc), problems that are not directly formulated as the hitting set may require further adjustment. For example, the uncapacitated facility location problem has two types of dual variables that require different handling, which our current model does not adapt to. We believe these can be interesting future directions to extend our model to an even broader set of algorithms. We will add more elaboration and examples in the revised paper.
> What if optimal solution is not easy to obtain? Are there any such cases? How would the algorithm fare.
We only train on very small instances on the scale of 16, so obtaining optimal solutions is not a problem, especially with advanced integer programming solvers like HiGHS and Gurobi. Furthermore, while optimal solutions give our model an advantage in surpassing the approximation algorithm, we included an ablation to train with the algorithmic steps only. This setup corresponds to “No optm” in Table 1. We can see that in MVC, “No optm” outperforms traditional NAR, and has better generalization ability than both NAR and TripletMPNN (the latter is much more computationally expensive). This highlights the strength of our model, which aligns closely with the primal-dual framework and leverages both primal and dual training signals. We will add a discussion of these points in Section 5.1.
Thank you so much for the questions! We hope these answers address them. Please let us know if you require further clarification!
---
Rebuttal Comment 1.1:
Comment: Thanks a lot.
I increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for reading our rebuttal and increasing the score! | null | null | null | null | null | null |
Byzantine-Resilient Federated Alternating Gradient Descent and Minimization for Partly-Decoupled Low Rank Matrix Learning | Accept (poster) | Summary: This paper presents a Byzantine-resilient federated low-rank matrix completion algorithm, which aims to recover the decomposition of the ground truth matrix $X^\star=UB$ from its entrywise measurements. The proposed algorithm works by minimizing over $U$ and $B$ alternately in the federated learning setting. The authors also incorporate the Krum and GM methods to make their algorithm Byzantine-resilient. The proposed algorithms can be extended to solve low-rank columnwise sensing and phase retrieval.
Claims And Evidence: All the claims made in the submission are supported by convincing evidence.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I only have time to check the proof of Theorem 2.2 and it seems correct to me.
Experimental Designs Or Analyses: The experimental section in the paper is really weak.
First of all, the experiment is only done on synthetic data generated from Gaussian distribution. As mentioned by the authors in the introduction "These (federated low rank matrix learning) find important applications in many different modern ML and medical imaging domains – recommender system design, multi-task representation learning for few shot learning, federated sketching, accelerated dynamic MRI and Fourier ptychography.", the authors should at least evaluate their proposed algorithm using one of the above applications mentioned above.
Second, the experimental results for LRCS and LRPR are not shown.
Supplementary Material: I only review the part related to the proof of Theorem 2.2.
Relation To Broader Scientific Literature: The paper presents a Byzantine-resilient federated low-rank matrix completion algorithm.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: The writing of this paper needs to be improved, as there are many typos and grammatical mistakes in the paper. For example, the second sentence of the introduction is missing a period (.) at the end; "the this value" should be "this value" in line 4 in algorithm 1; and many more.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to express our sincerest gratitude for your diligent study of our work and the thoughtful reviews. We will now answer your questions and concerns.
* We have strengthened the experiments by adding results for GM and for the LRCS problem. We have also evaluated our algorithm on real-world MovieLens 1M dataset (Harper & Konstan, 2015). The dataset contains $1,000,209$ user ratings for about $3,900$ movies from $6,040$ users. An anonymous link to the simulation results PDF is provided for the reviewer. [Link](https://send.vis.ee/download/08042171bf7ca9bd/#_zNz4g5BylyRmIrCCb_YqA) **(Please Note: The link will expire after 10 downloads/2.5 days)**. These will be included in the final version. We observe that our algorithm, Byz-AltGDmin-LRMC, converges on the federated MovieLens 1M dataset even with $40\\%$ Byzantine nodes ($L_{byz} = 8$, $L = 20$). We also observe that CWMedian converges in this setting due to the large number of samples ($n\tilde{q} p$). We view applying the method to other real-world datasets for LRCS, and LRPR problem as a promising direction for future work.
* We will improve the structure and readability in the final version. We will also fix all typos and grammar errors. | Summary: This paper considers multiple versions of the low-rank matrix completion problem under different observation models, in a federated learning scenario. In this scenario, the columns of the observed matrix are distributed among multiple clients. The paper proposes Byzantine-resilient algorithms that may use two different similarity-based mechanisms and provably converge to the true model. Some simulation experiments are presented that discuss the speed of convergence under different attacks.
Claims And Evidence: The claims regarding convergence are verified both theoretically and by simulated experiments. However the experiments are limited.
Methods And Evaluation Criteria: The methods make sense for the problem of interest. The experiments indeed support the methods, but they are introduced only in a simulated scenario and the results in a more realistic problem are still unknown.
Theoretical Claims: I did not check the details of the proof, but the overview presented in the main body of the paper makes me confident that the proof is correct.
Experimental Designs Or Analyses: The experiments are based on simulation which makes sense for the claims, but does not motivate the importance of the problem under study.
Supplementary Material: I did not check the supplement carefully. It only contains the proof.
Relation To Broader Scientific Literature: I believe that the problem is of general interest and can be used in many applications. Federated learning problems and security are the main topics of broad interest in this work.
Essential References Not Discussed: I am not aware of any reference.
Other Strengths And Weaknesses: The paper discusses the low-rank matrix factorization and completion, which can have application in various scenarios. There is a rigorous mathematical study regarding Byzantine resilience and convergence of the algorithm.
Weakness: There is a similarity to the previous work (Singh and Vaswani 2024), and I am still unsure how much this paper improves on its results. The application of this problem is also not well justified. The experiments are only in a simulated scenario.
Other Comments Or Suggestions: I suggest the authors consider proofreading. The text is not understandable at some points, but it is generally easy to read. In particular, the notation for columns and rows of a matrix is not introduced.
Questions For Authors: I do not have any major question, but two minor ones:
1) I am still unsure what are the main differences between this work and (Singh and Vaswani 2024). It is helpful if the authors can list the differences clearly.
2) In Theorem 2.2 is there any assumption on the attacker or is this result for any arbitrary sequence of messages sent by the attackers?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to express our sincerest gratitude for your diligent study of our work and the thoughtful reviews. We will now answer your questions and concerns.
* We have strengthened the experiments by evaluating our algorithm on real-world MovieLens 1M dataset (Harper & Konstan, 2015). The dataset contains $1,000,209$ user ratings for about $3,900$ movies from $6,040$ users. An anonymous link to the simulation results PDF is provided for the reviewer. [Link](https://send.vis.ee/download/08042171bf7ca9bd/#_zNz4g5BylyRmIrCCb_YqA) **(Please Note: The link will expire after 10 downloads/2.5 days)**. These will be included in the final version. We observe that our algorithm, Byz-AltGDmin-LRMC, converges on the federated MovieLens 1M dataset even with $40\\%$ Byzantine nodes ($L_{byz} = 8$, $L = 20$). We also observe that CWMedian converges in this setting due to the large number of samples ($n\tilde{q} p$). We view applying the method to other real-world datasets for LRCS, and LRPR problem as a promising direction for future work.
* **Novelty, Key Differences, and Difficulties Compared to (Singh & Vaswani, 2024)** We respectfully note that this point has already been addressed in detail in the paper (see “Contributions and Novelty” subsection on Page 2). We reiterate it here for clarity:
1) **Heterogeneous gradients:** Our setting involves heterogeneous data across nodes (vertical federation), making gradients different
at each node. (Singh & Vaswani, 2024) assumes a homogeneous setting (horizontal federation), which is much easier to handle. **Definitions of terminologies: horizontal vs. vertical settings.** The data matrix $\mathbf{Y}$ has size $n \times q$. Horizontal federation means each node $\ell\in[L]$ sees a subset of $n/L$ rows of $\mathbf{Y}$ but all $q$ columns. This partitions the data by examples: each node has different samples but all features. This is usually a homogeneous setting (Kairouz et al., 2021). Vertical federation instead partitions the data by features: each node sees a subset of $q/L$ columns but all $n$ rows (samples), resulting in a heterogeneous setting.
2) **Incoherence of** $\mathbf{U}^*$: (Singh & Vaswani, 2024) studied the LRCS problem, which does not require incoherence of $\mathbf{U}^*$. In LRMC, we need to ensure incoherence of $\mathbf{U}$ at every iteration. This is hard because $\mathbf{U}$ is updated using possibly non-incoherent gradients from GM or Krum. To handle this, we introduce a **filtering step**. See Algorithm 1, Lines 22–27 and 9–12, Lemma 3.4, Fact 3.5 (for the GD step), and Lemma 3.2 item 3 (for initialization).
3) **Guarantees for Krum:** (Singh & Vaswani, 2024) only provides theoretical results for GM, which cannot
be computed exactly. The theory and practical implementation differ. In contrast, we provide non-asymptotic guarantees
for both Krum and GM.
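For concreteness, the horizontal vs. vertical partitioning from point 1 can be sketched as follows (an illustrative toy with hypothetical shapes, not our implementation):

```python
# Toy illustration (ours) of the two data-partitioning schemes described
# above; the dimensions are hypothetical, not the paper's settings.
n, q, L = 4, 6, 2
Y = [[i * q + j for j in range(q)] for i in range(n)]  # n x q data matrix

# Horizontal federation: node l holds a subset of the n rows (its own
# samples) but all q columns (features) -- the homogeneous setting.
horizontal = [Y[l * n // L:(l + 1) * n // L] for l in range(L)]

# Vertical federation: node l holds a subset of the q columns (its own
# features/columns) but all n rows -- the heterogeneous setting studied here.
vertical = [[row[l * q // L:(l + 1) * q // L] for row in Y] for l in range(L)]

assert len(horizontal[0]) == n // L and len(horizontal[0][0]) == q
assert len(vertical[0]) == n and len(vertical[0][0]) == q // L
```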
* There is no assumption on the attacker. We mention in the Introduction section Page 1 *“that the adversarial nodes have
complete knowledge of the data at every node and of the exact algorithm (and all its parameters) implemented by every
node, including center; and all the adversarial nodes can collude to use this information to design the worst possible attacks”*
so adversarial nodes can send any arbitrary sequence of messages.
* We will improve the structure and overall readability in the final version. | Summary: This paper proposes provably secure, sample- and communication-efficient federated alternating-minimization-based algorithms, Krum-AltGDmin and GM-AltGDmin, for low-rank matrix completion (LRMC) problems that are resilient to Byzantine attacks. They then extend their analysis to show how a simple modification of the assumptions on the right singular vectors' coherence (Assumption 4 instead of Assumption 2) and of the steps of the federated algorithm can also solve partly-decoupled vertically federated LR matrix learning problems, including LR column-wise sensing (LRCS) and its phaseless generalization, LR phase retrieval (LRPR). They conclude with an experimental evaluation of their federated LRMC algorithms against the Additive $\mathcal{N}(1, 1)$ attack and the Reverse Gradient Attack (Figure 1) and under data heterogeneity (Figure 2).
Claims And Evidence: The claims made in the paper are backed up with experimental evaluation using the Additive $\mathcal{N}(1, 1)$ attack and the Reverse Gradient Attack (Figure 1) and under data heterogeneity (Figure 2). For the first set of experiments (Figure 1), the authors observe that Krum converges faster than CWMedian and that CWMedian fails for the Reverse Gradient Attack, which is a harder attack; this clearly demonstrates that Krum is more effective. The experimental evaluation is limited, and there is no comparison of the Krum-AltGDmin and GM-AltGDmin algorithms against other federated algorithms for LRMC.
Methods And Evaluation Criteria: While simulation datasets are sensible to use in this setting, it would be good to see more matrix completion datasets that are used in real life, like the MovieLens datasets: https://grouplens.org/datasets/movielens/.
Theoretical Claims: I could not check the correctness of the proofs.
Experimental Designs Or Analyses: The experiments are limited and few. They serve to show that Krum is more effective than CWMedian and that data heterogeneity matters. However, there is no evaluation of the Krum-AltGDmin and GM-AltGDmin algorithms for low-rank matrix completion (LRMC) problems.
Supplementary Material: I didn't review the supplementary material.
Relation To Broader Scientific Literature: The Low-Rank Matrix Completion (LRMC) problem is well studied in the centralized (non-federated) setting. Early work includes the convex relaxation developed by (Candes & Recht, 2008), which was very slow, and the Alternating Minimization (AltMin) algorithm with a spectral initialization by (Netrapalli, Jain, & Sanghavi, 2013). The authors make extensive use of the results in (Abbasi & Vaswani, 2024) to prove their results. The work by (Singh & Vaswani, 2024b) is most closely related to this work, but it deals with the much easier setting of horizontally federated LRCS, where the data, and hence the node gradients, are homogeneous.
Essential References Not Discussed: It would be good to add these works to the discussion:
1. Abbasi, Ahmed Ali, Shana Moothedath, and Namrata Vaswani. "Fast federated low rank matrix completion." 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2023.
2. Dadras, Ali, Sebastian U. Stich, and Alp Yurtsever. "Personalized Federated Learning via Low-Rank Matrix Factorization." OPT 2024: Optimization for Machine Learning. 2024.
3. He, Xuechao, Qing Ling, and Tianyi Chen. "Byzantine-robust stochastic gradient descent for distributed low-rank matrix completion." 2019 IEEE Data Science Workshop (DSW). IEEE, 2019.
Other Strengths And Weaknesses: The main strength of this paper is the detailed theoretical analysis of the security of the Krum-AltGDmin and GM-AltGDmin algorithms for low-rank matrix completion (LRMC) problems. The main weakness is the lack of experimental evaluation of the proposed algorithms. The reason could be the difficulty of implementation: the authors state that Krum is an easy-to-compute estimator but requires $nrL^2$ time, that GM can only be approximated, and that the only algorithm with a useful guarantee for it is too complex to implement even for that algorithm's authors.
Other Comments Or Suggestions: None
Questions For Authors: 1. Have you implemented the Krum-AltGDmin and GM-AltGDmin algorithms for low-rank matrix completion (LRMC) problems? If not, what are the practical difficulties with implementation?
2. Have you tried to solve two other federated LR problems – LRCS and LRPR with your proposed algorithms?
3. You mentioned that the Weiszfeld algorithm used in practice is also practically faster than Krum. Is there a citation for this? How does it compare to your proposed algorithms?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to express our sincerest gratitude for your diligent study of our work and the thoughtful reviews. We will now answer your questions and concerns.
* We have strengthened the experiments by adding results for GM and for the LRCS problem. We have now also evaluated our algorithm on the MovieLens dataset. An anonymous link to the simulation results PDF is provided for the reviewer. [Link](https://send.vis.ee/download/08042171bf7ca9bd/#_zNz4g5BylyRmIrCCb_YqA) **(Please Note: The link will expire after 10 downloads/2.5 days)**. These will be included in the final version.
* Theoretically, GM-AltGDmin's sample complexity is similar to Krum-AltGDmin's, so it also converges in the experiments. However, GM can only be computed approximately, while Krum is exact. Our work provides, to the best of our knowledge, the first non-asymptotic guarantee for Krum, which is simple, intuitive, and easy to compute (Blanchard et al., 2017); see Lemma 3.9.
*We thank the reviewer for the helpful observation* that the Weiszfeld algorithm used in practice is not guaranteed to be faster than Krum. This holds both in theory and in experiments; nothing concrete can be said about which one is faster in general. We now explain the **practical difficulties with implementing GM**:
1) In practice, Weiszfeld’s algorithm (Weiszfeld, 1937; Beck & Sabach, 2015) is used to approximate GM. Weiszfeld’s algorithm (Beck & Sabach, 2015, Theorem 5.1) is known to converge, but the number of iterations is not specified. In theory, GM can be faster when using (Cohen et al., 2016, Algorithm 1). However, that algorithm is complex and to our best knowledge has no known experimental results.
2) As shown in Table 1 of the paper, Krum has a compute cost of $nr^2L^2\log(1/\epsilon)$, and GM has a compute cost of $nr^2L\log^3(L/\epsilon_{approx})\log(1/\epsilon)$ when using (Cohen et al., 2016, Algorithm 1). To see any speedup from GM in simulations, one would need to use (Cohen et al., 2016, Algorithm 1) with a large $L$. This also requires a large $q$ to maintain accuracy. As a result, the total simulation time becomes much higher.
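For completeness, the Krum selection rule discussed above can be sketched in a few lines (an illustration of the rule of Blanchard et al., 2017; not our actual implementation):

```python
# Minimal sketch (ours, illustrative) of the Krum rule of Blanchard et al.
# (2017); this is not the paper's implementation.

def krum(grads, n_byz):
    """Return the gradient whose summed squared distance to its
    L - n_byz - 2 nearest neighbours is smallest."""
    L = len(grads)
    k = L - n_byz - 2  # number of closest neighbours used in the score

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    scores = []
    for i, g in enumerate(grads):
        d = sorted(sq_dist(g, h) for j, h in enumerate(grads) if j != i)
        scores.append(sum(d[:k]))
    return grads[min(range(L), key=scores.__getitem__)]

# Five nodes, one Byzantine node sending a wild gradient: Krum never
# selects the outlier, since its neighbour distances are huge.
grads = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.2], [100.0, -50.0]]
assert krum(grads, n_byz=1) != [100.0, -50.0]
```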
* We will include and compare the related works as mentioned by the reviewer in the final version of our paper. We iterate it here: (Abbasi, Moothedath, & Vaswani, 2023) show experimental results for federated LRMC using AltGDmin, but without theoretical guarantees. (He, Ling, & Chen, 2019) show experimental results for Byzantine-resilient LRMC, again with no theoretical guarantees. (Dadras, Stich, & Yurtsever, 2024) provides results for personalized federated learning using the Burer-Monteiro
factorization method.
* While LRMC has been widely studied in the centralized setting (Candes & Recht, 2008; Jain, Netrapalli, & Sanghavi, 2013; Fazel, 2002; Keshavan, Montanari, & Oh, 2010), to the best of our knowledge, there is no existing work on provably accurate federated LRMC, except (Abbasi & Vaswani, 2024). Some related works exist in distributed (Mackey, Talwalkar, & Jordan, 2015; Teflioudi, Makari, & Gemulla, 2012) and decentralized (Ling, Xu, Yin, & Wen, 2012; A.-Y. Lin & Ling, 2015; Mardani, Mateos, & Giannakis, 2013) settings, but they are not directly comparable to our federated setup.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications about the Wieszfeld's algorithm and the compute costs of Krum and GM! It might be good to include some experimental comparison to Wieszfeld's algorithm as it can be implemented as an approximation of GM. If previous theory and experiments show that it cannot be better than Krum as you say, then it might not be meaningful to add. Glad that you will add the suggested references. I will increase my score to 4.
---
Reply to Comment 1.1.1:
Comment: We express our gratitude to reviewer **uCLY** for the thoughtful review and for acknowledging the contribution of our work.
We have also addressed all comments from **RNhd, 2BbT,** and **FvfR**.
In particular, for the concerns regarding the experiments section, we have added the following comparisons:
1) GM for LRMC (our plots now compare GM, Krum, and coordinate-wise median (CWMedian)).
2) All three methods for the LRCS problem.
3) We have also evaluated all methods on the real-world MovieLens dataset.
We have included clear and intuitive explanations of both the novelty of our proposed algorithm and its guarantees in the paper.
We are happy to respond to any further questions they have. | Summary: This paper mainly focuses on the low-rank matrix completion problem, under the setting of federated (centralized) learning with Byzantine attacks. The authors propose an algorithm designed to be resilient to the attacks by employing robust aggregators such as Krum or the Geometric Median. Theoretical analysis and simulations are provided to demonstrate the convergence of the algorithm. Additionally, the approach is extended to other low-rank learning problems.
Claims And Evidence: The claims made in this work would benefit from stronger support—both through more intuitive explanations to aid understanding and through more thorough discussions and comparisons with existing literature to enhance credibility.
Methods And Evaluation Criteria: The experiment description is unclear:
- What is the purpose of Figure 1? Does the algorithm benefit from the robust aggregator Krum, or from the algorithm framework? And how do the two curves compare with GM?
- In Figure 2, how do the authors change the heterogeneity? There is no description in the Supplementary material on the data generating process.
Theoretical Claims: I went over the proofs, which inherits a lot from the literature (Singh & Vaswani, 2024b). The soundness of the results are good, though the originality should be highlighted.
Experimental Designs Or Analyses: Details of the experimental settings in the supplementary material will be appreciated. Otherwise, it is too vague to evaluate the simulations.
Supplementary Material: I did not go into details of each line of the proofs.
Relation To Broader Scientific Literature: The key contributions of the paper root in the theoretical results, which as the authors point out, originates from Singh & Vaswani (2024b) a lot. Instead of sketching each step of the proofs in details, it could be better to emphasize the main difficulties or differences between your work and the previous one.
Essential References Not Discussed: There should be more references in the discussion of application in the first paragraph of introduction. On the other hand, literature of related works, especially in Byzantine algorithms that are not specific to low-rank problems, are cited too much without recent updates.
Other Strengths And Weaknesses: The theoretical derivations are presented in thorough detail; however, the paper's weak structural organization and lack of a clear logical roadmap significantly hinder readability.
Other Comments Or Suggestions: 1. Please properly use citation format, including \citet, \citep, etc.
2. The writing is readable but logically weak. Hope the writing can be refined.
3. In the last paragraph of Section 1, there is a typo "X=UB, B,are updated...". Not sure what is the correct statement.
4. The notation $x_k^*$ in Section 1 is not clearly defined.
5. There should be a title for Assumption 2, similar to "Bounded heterogeneity" in Assumption 3. For example, stating that Assumption 2 is the so-called incoherence.
Questions For Authors: 1. Could you provide explanations or definitions for terminologies, e.g., horizontal v.s. vertical settings?
2. Could you emphasize what are important features that limit the analysis to two aggregators (GM and Krum)?
3. Although you assume that the fraction of Byzantine nodes is <= 0.4. How would the results rely on the fraction? In particular, for sanity check I would like to see that fraction = 0 recovers the literature results on this problem, and fraction going to the upper bound (L/2-1) gives explosive error bound.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to express our sincerest gratitude for your diligent study of our work and the thoughtful reviews. We will now answer your questions and concerns.
* Figure 1 of our paper shows that our Algorithm 1, Byz-AltGDmin-LRMC, converges for the federated LRMC problem even when $40\\%$ of the nodes are Byzantine. It also shows that coordinate-wise median (CWMedian) does not converge. As we explain in the paper, CWMedian needs a much higher sample complexity (it depends on the gradient dimension); see Table 1. This is also evident from Figure 1.
* The purpose of the paper is to show that (i) the framework provably works, and (ii) we can prove results for both Krum and GM. Theoretically, Krum can be slower than GM when GM is approximated using (Cohen, Lee et al., 2016, Algorithm 1) and the number of nodes $L$ is very large.
* We have now added GM and will include it in the final version. We have provided the discussion for it in our response to
Reviewer **uCLY**. An anonymous link to the simulation results PDF is provided for the reviewer. [Link](https://send.vis.ee/download/08042171bf7ca9bd/#_zNz4g5BylyRmIrCCb_YqA) **(Please Note: The link will expire after 10 downloads/2.5 days)**
* As noted in the "Observations from Fig. 2: Heterogeneity" subsection on Page 8, the heterogeneity question is already addressed. We briefly reiterate for clarity: we first generate $\mathbf{B}^*$ as a random Gaussian matrix. Each $\mathbf{B}^*_\ell$ is formed by selecting a subset of columns $k \in S_{\ell}$ from $\mathbf{B}^*$. Then, with $L = 10$ total nodes, we multiply $5$ randomly chosen $\mathbf{B}^*_\ell$ by a factor $G_B$ to introduce heterogeneity across the datasets; see Assumption 3 (Bounded heterogeneity).
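For concreteness, this generation process can be sketched as follows (illustrative toy code; the dimensions and seed are hypothetical, not the exact experiment settings):

```python
import random

# Illustrative sketch (ours) of the heterogeneous data generation described
# above; the dimensions and seed are hypothetical.
random.seed(0)
r, q, L, G_B = 4, 100, 10, 5.0
cols_per_node = q // L

# B* is an r x q random Gaussian matrix; its columns are partitioned
# across the L nodes (column subsets S_l).
B_star = [[random.gauss(0, 1) for _ in range(q)] for _ in range(r)]
blocks = [[row[l * cols_per_node:(l + 1) * cols_per_node] for row in B_star]
          for l in range(L)]

# Multiply 5 randomly chosen node blocks by G_B to induce bounded
# heterogeneity (Assumption 3).
for l in random.sample(range(L), 5):
    blocks[l] = [[G_B * x for x in row] for row in blocks[l]]

assert len(blocks) == L and all(len(b[0]) == cols_per_node for b in blocks)
```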
* **Novelty, Key Differences, and Difficulties Compared to (Singh & Vaswani, 2024b)** We respectfully note that this point has already been addressed in detail in the paper (see “Contributions and Novelty” subsection on Page 2). We reiterate it here for clarity:
1) **Heterogeneous gradients:** Our setting involves heterogeneous data across nodes (vertical federation), making gradients different
at each node. (Singh & Vaswani, 2024b) assumes a homogeneous setting (horizontal federation) , which is much easier to handle. **Definitions for terminologies, horizontal v.s. vertical settings?** The data matrix is $\mathbf{Y}$ of size $n \times q$. Horizontal federation means each node $\ell\in[L]$ sees a subset of $n/L$ rows of $\mathbf{Y}$ but all $q$ columns. This is like partitioning data by examples, each node has different samples but all features. This is usually a homogeneous setting (Kairouz et al., 2021). Whereas Vertical federation partitions data by features: each node sees a subset of $q/L$ columns but all rows (samples), resulting in a heterogeneous setting.
2) **Incoherence of** $\mathbf{U}^*$: (Singh & Vaswani, 2024b) studied the LRCS problem, which does not require incoherence of $\mathbf{U}^*$. In LRMC, we need to ensure incoherence of $\mathbf{U}$ at every iteration. This is hard because $\mathbf{U}$ is updated using possibly non-incoherent gradients from GM or Krum. To handle this, we introduce a **filtering step**. See Algorithm 1, Lines 22–27 and 9–12, Lemma 3.4, Fact 3.5 (for the GD step), and Lemma 3.2 item 3 (for initialization).
3) **Guarantees for Krum:** (Singh & Vaswani, 2024b) only provides theoretical results for GM, which cannot
be computed exactly. The theory and practical implementation differ. In contrast, we provide non-asymptotic guarantees
for both Krum and GM.
* We focus only on GM and Krum because the other two common aggregators used in the literature, CWMedian and Trimmed Mean, have very high sample complexity, which makes the results impractical. The sample complexity for
CWMedian (Yin et al., 2018, Theorem 1) and for Trimmed Mean (Yin et al., 2018, Theorem 4) depends on the gradient
dimension.
* We fixed the fraction of Byzantine nodes to a specific value for simplicity. For the general case, we restate Theorem 2.2 with the following changes. If $L_{byz}<\frac{L}{2} - 1$ and $n\tilde{q} p\geq CC_{Kr}\tilde{\kappa}^{10} \mu^2 \tilde{q} r^2\log \tilde{q} \log (1/\epsilon)$, then with high probability, $SD_F(U^*, U_{T}) \leq \max(\epsilon, 14C_{Kr}\tilde{\kappa}^2 G_B)$, where $C_{Kr}=2+\frac{1-\tau-2/L}{1-2\tau-2/L}$; for $\tau=\frac{L_{byz}}{L}<0.4$, $C_{Kr}\lessapprox 5$. If $\tau\rightarrow \frac{1}{2} - \frac{1}{L}$, then $C_{Kr}\rightarrow\infty$ and the bound blows up. For $\tau = 0$, our sample complexity matches the Byzantine-free result (Abbasi & Vaswani, 2024, Theorem 2.1). It should be noted that whenever we use a robust aggregator (GM or Krum), the estimation accuracy is only as good as that of a single good node, so the effective sample size is $\tilde{q}$ and the heterogeneity factor $G_B$ appears in the $SD_F(U^*, U_{T})$ bound.
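For concreteness, the two robust aggregators discussed above can be sketched as follows. This is only an illustrative NumPy sketch (standard Krum scoring and a Weiszfeld-iteration approximation of the geometric median), not our exact implementation; `n_byz` denotes the assumed number of Byzantine nodes.

```python
import numpy as np

def krum(grads, n_byz):
    """Krum: return the gradient with the smallest sum of squared distances
    to its L - n_byz - 2 nearest neighbours."""
    L = len(grads)
    k = L - n_byz - 2  # number of neighbours scored per candidate
    dists = np.array([[np.sum((g - h) ** 2) for h in grads] for g in grads])
    # Sort each row; skip index 0 (distance to itself), sum the k nearest.
    scores = [np.sort(row)[1:k + 1].sum() for row in dists]
    return grads[int(np.argmin(scores))]

def geometric_median(grads, iters=100, eps=1e-8):
    """Approximate geometric median (GM) via Weiszfeld iterations."""
    pts = np.stack(grads)
    z = pts.mean(axis=0)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.linalg.norm(pts - z, axis=1), eps)
        z = (w[:, None] * pts).sum(axis=0) / w.sum()
    return z
```

With $L=10$ nodes and 2 Byzantine ones sending wildly corrupted gradients, both aggregators return a point close to the honest cluster, illustrating why the estimation accuracy is governed by a single good node rather than the full pool.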
___
We will address all minor comments, add the necessary references, and improve the document's readability in the final version of the paper. | null | null | null | null | null | null |
Low-Rank Thinning | Accept (poster) | Summary: This paper proposes Low-Rank Thinning, a method for selecting representative data points using sub-Gaussian thinning with improved efficiency. By leveraging low-rank structures, it enhances dataset summarization capability. Theoretical guarantees and empirical results demonstrate its ability to reduce computation while preserving data quality.
## update after rebuttal
My concerns were addressed by the authors; I have raised my rating to 4.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: Yes. I checked the proofs in Part C of the appendix and they look correct.
Experimental Designs Or Analyses: The paper presents experiments evaluating Low-Rank Thinning on transformer attention, stochastic gradient training, and distribution testing. Theoretical guarantees are supported by empirical results. Although the experiments show gains in computational efficiency and accuracy, additional validation on other task apart from image classification would enhance the claims of generalizability and practical relevance, e.g., semantic segmentation, object detection.
Supplementary Material: Yes. I checked all parts of the supplementary material.
Relation To Broader Scientific Literature: This paper extends sub-Gaussian thinning to broader distributions and kernels, overcoming limitations in prior dataset summarization methods. By leveraging low-rank structures, it enhances attention approximation, stochastic optimization, and distribution testing with improved efficiency and theoretical guarantees.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths:
1. The proposed Low-Rank Thinning framework improves dataset summarization across diverse aspects, including transformer attention, stochastic gradient optimization, and two-sample testing, which can benefit the community.
2. The paper provides strong theoretical guarantees, extending sub-Gaussian thinning to more general distributions and kernels while addressing limitations in prior work.
3. Experimental results demonstrate that Low-Rank Thinning reduces computational costs while maintaining high accuracy, making it practical for large-scale machine learning tasks.
Weaknesses:
1. The experiments focus on constrained settings but lack extensive evaluation on other tasks, e.g., object detection, which may hinder the evaluation of the generalizability.
2. The method assumes a low-rank structure in the data, which may not always be present in real-world scenarios and can decrease its effectiveness for high-dimensional or complex datasets.
3. In Table 3, experiments are only conducted on T2T-ViT; more transformer architectures should be included, and their variants should also be considered, to test the cross-backbone generalizability of the proposed method.
4. The training details regarding the hyperparameters, e.g., learning rate and batch size, are not well described in either the main paper or the appendix.
Other Comments Or Suggestions: Please refer to the weaknesses
Questions For Authors: My questions are all derived from the weaknesses I mentioned.
1. The experiments are conducted in constrained settings and do not include tasks like object detection. How would the proposed method perform on more diverse applications, and could additional benchmarks improve the evaluation of its generalizability?
2. The method relies on a low-rank structure in data, which may not always hold in practical settings. How does its performance degrade when applied to high-dimensional or non-low-rank data, and are there strategies to mitigate this limitation?
3. Table 3 only includes experiments on T2T-ViT, without testing on other transformer architectures or their variants. Would extending the evaluation to different transformer backbones provide stronger evidence of the method’s cross-architecture generalizability?
4. The paper lacks detailed descriptions of training hyperparameters such as learning rate and batch size in both the main text and appendix. Can the authors provide these experimental details in the main text or appendix?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and constructive feedback and are delighted that the reviewer found our theoretical guarantees strong, our methods practical, and our applications diverse and of benefit to the community.
**A new application:** We presented three vignettes with three diverse applications (SGD training, two-sample testing, and attention approximation) to demonstrate the broad applicability and generalizability of the techniques, but we are happy to include an additional experiment with a different architecture and application as further evidence of generalizability. We have recreated the BigGAN image generation benchmarking experiment of Zandieh et al. where leading attention approximation methods are scored on computational expense and image generation quality. We used the experimental setup and released implementations of Zandieh et al. for all other methods. Our results table (https://imgur.com/a/F5Y7pfh) shows that Thinformer (g=2) yields better Frechet Inception Distance (FID) and Inception Scores (IS) than all of the alternatives while running significantly faster than exact, KDEformer, and Reformer. Performer runs faster still but at the expense of substantially worse FID and IS.
**Low-rankness:** For attention approximation, even if the matrices Q and K are full rank, we still obtain the theoretical and practical benefits advertised (since their rank is d < n). The KMS guarantees (4) and (5) of Thm. 1 also ensure that we obtain a high-quality KMS approximation even if the kernel matrix is full-rank.
For the other two applications, we have run two new analyses to test to what extent the relevant matrices are approximately low-rank in practice. In the CTT setting of Sec. 6.2, we find that the empirical inflation factor $R_k = O(\log^5(n))$ owing to approximate low-rankness: see https://imgur.com/a/hmAVQIU. In the LKH experiment setting of Sec. 5.2, we find that the stochastic gradient matrices $X_k$ have rapidly decaying singular values and a median $\epsilon_k^*$-rank of 10: see https://imgur.com/a/MyA7gR1.
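For reference, the approximate-rank statistic used in these analyses can be computed as follows. This is a minimal sketch under one common convention (count of singular values exceeding $\epsilon$, so that a rank-$r$ truncation achieves spectral-norm error at most $\epsilon$); the paper's exact $\epsilon$-rank definition may differ.

```python
import numpy as np

def eps_rank(A, eps):
    """Smallest r such that the best rank-r approximation of A has
    spectral-norm error at most eps: the number of singular values > eps."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > eps))
```

Applied to a matrix that is rank-3 up to small noise, this statistic recovers the effective rank 3 even though the matrix is numerically full-rank.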
Thank you for the suggestion regarding hyperparameters! We initially omitted hyperparameters when recreating experiments fully specified in prior work, but we will include those hyperparameters in the revision. These include Sec. 6.2 (learning rate = 5e-5; batch size = full sample), Sec. 5.2 (learning rate = 1e-2; batch size = 16), and Sec. 4.2 (learning rate = N/A (inference only); batch size = 64). | Summary: This paper focuses on the thinning problem and proposes a low-rank method to simplify datasets with a limited number of data points, based on the analysis of sub-Gaussian thinning. The proposed low-rank analysis generalizes sub-Gaussian thinning to any distribution and kernel, ensuring high-quality compression of datasets.
## Update After Rebuttal
Thanks for the rebuttal. I will keep the initial score but with low confidence.
Claims And Evidence: The claims in this paper are mainly supported by theoretical analysis.
Methods And Evaluation Criteria: Yes
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: A dedicated related work section is not available in this paper.
Other Strengths And Weaknesses: **Clarification: I don't have a background in thinning or kernel methods, so my assessment below comes with low confidence.**
## Strengths
1. The paper extends sub-Gaussian thinning beyond previous constraints, providing a comprehensive framework applicable to arbitrary kernels and distributions.
2. Based on the proposed method, the paper introduces Thinformer attention, a mechanism that provides a computationally efficient alternative to traditional self-attention while maintaining competitive accuracy. The Thinformer layer is 2x faster than baselines like KDEformer while achieving comparable accuracy.
3. In addition to the network architecture, the paper also covers SGD acceleration and shows significant convergence-speed improvements over random reshuffling.
## Weaknesses
My main concern is the poor presentation of the paper, which makes it less accessible to non-expert readers. For instance, the thinning problem is not clearly defined at the outset: the paper claims that *"This work is about thinning, finding a small set of representative points to accurately summarize a larger dataset"*. At first glance, the paper appears to address the Core Set problem, which also aims to represent a dataset with a small subset. However, it is actually a general algorithm and is not restricted to datasets.
In addition, the main topic changes frequently in this paper. In the methodology section, the focus shifts to the Thinformer module for fast attention approximation. Then, in Section 5, the discussion pivots again—this time to fast SGD optimization. These transitions between seemingly different topics make the paper somewhat difficult to follow and may lead to confusion about its contribution.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and constructive feedback and are glad that the reviewer found our framework “comprehensive” and our speed improvements “significant.”
We apologize for any inaccessibility in our presentation. We aimed to showcase the broad applicability of theory developed in this paper by explicitly deriving sub-Gaussian thinning solutions with state-of-the-art guarantees for three practical problems. We will improve the coherence of the presentation and the accessibility for non-expert readers by providing a more detailed overarching roadmap in the introduction (with guidelines on what will be included in each section), improving transitions between the different application sections, summarizing our main proof arguments, and providing more high-level discussion of theoretical and practical takeaways.
In addition, we will make the definition and broad scope of thinning clearer from the outset by highlighting that thinning flexibly allows one to summarize any collection of points, be they datapoints (as in our testing application), stochastic gradients (as in our SGD application), or key-value pairs (as in our attention application).
We will also clarify in the introduction that related work for the main results and applications are discussed separately in each section. | Summary: This paper introduces a novel theoretical analysis of "thinning algorithms" that adapt effectively to low-rank structures present in data. The authors develop a theoretical upper bound on the discrepancy metrics (Theorem 1) applicable to any kernel and any data distribution, thereby demonstrating that thinning algorithms with sub-Gaussian guarantees (per Definition 3) achieve improved quality metrics especially when the kernel or data matrix is approximately low-rank. Three significant applications are also discussed: (1) approximating dot-product attention in Transformers efficiently, (2) expediting stochastic gradient descent (SGD) training by optimized data reordering, and (3) conducting two-sample kernel maximum mean discrepancy (MMD) tests efficiently via Compress-Then-Test approach. The authors provide theoretical guarantees for these applications (Theorems 2, 3, and 4, respectively) and extensive empirical experiments validating the efficacy of the proposed thinning-based approaches.
Claims And Evidence: The paper makes four main claims: a general theorem providing high-probability upper bounds on the MMD and kernel max seminorm (Theorem 1), and three application-specific theorems demonstrating the utility of thinning-based approaches (Theorems 2, 3, and 4). Overall, these theoretical claims---particularly regarding improved complexity and performance bounds---are well-supported through rigorous proofs provided in the appendix and extensive empirical validation. However, the paper's presentation could be improved by succinctly summarizing the high-level ideas behind these proofs, especially for Theorem 1, in the main text. Currently, relegating all proofs to the appendix leaves readers without clear intuition on the core ideas underpinning the main results. Additionally, clearer explicit summaries and high-level discussions of theoretical takeaways, especially in Section 6.1 (e.g., how Algorithm 3's analysis compares with prior works), would significantly improve the readability and impact of the paper.
Methods And Evaluation Criteria: The methods proposed and their associated evaluation criteria are appropriate and practical for the applications considered. For the Transformer approximation task, benchmark experiments on ImageNet provide relevant and convincing empirical evidence of efficiency and accuracy. Similarly, the logistic regression experiment for accelerating SGD and the Higgs boson detection task for two-sample testing are both well-chosen and pertinent.
Theoretical Claims: I briefly checked Theorem 1 and its associated proofs in the appendix, which appear rigorous and sound. Nonetheless, I haven't checked the correctness of details in the proof.
Experimental Designs Or Analyses: The experimental designs---e.g., especially the Transformer approximation experiments (Section 4.2) and SGD training (Section 5.2)---seem well-executed to support and complement the theoretical claims.
Supplementary Material: I briefly reviewed the supplementary material, focusing primarily on verifying the proof of Theorem 1 and understanding the various thinning methods (e.g., KH-Compress, LKH, KT-Compress) whose descriptions were relegated to the appendix but referenced in the main text. The supplementary material adequately supports the main text, although clearer referencing and brief summarization of these methods earlier in the main text would improve readability.
Relation To Broader Scientific Literature: The authors clearly position their contributions within existing literature on kernel methods, coresets, and thinning algorithms, differentiating from recent related studies. The connections to practical applications in Transformers, SGD acceleration, and kernel two-sample tests are effectively highlighted.
Essential References Not Discussed: It seems essential references are properly cited and discussed.
Other Strengths And Weaknesses: Strengths:
- Originality in providing generally applicable theoretical guarantees for (sub-Gaussian) thinning algorithms that can effectively adapt to (approximate) low-rank structure.
- Broad applicability showcased through multiple practical applications.
- Solid and comprehensive theoretical guarantees, complemented by extensive empirical evidence.
Weaknesses:
- Although the contributions are substantial and the manuscript is generally clear, the presentation could be improved (see suggestions in "Other Comments or Suggestions" below).
Other Comments Or Suggestions: * It would significantly benefit readability and clarity if the authors provided a concise, high-level summary of key proofs, particularly for Theorem 1, in the main body of the text.
* Presentation could be improved by ensuring algorithm references are not prematurely made before formal definitions, especially those relegated to appendices.
* Explicitly summarizing the key theoretical results in Section 6.1 and clearly situating Algorithm 3 relative to prior methods would enhance readability and interpretability of the contributions.
* $\nu$ is used to denote the sub-Gaussian constant in Definition 3, but it was also used to denote a distribution in Definition 2. Please try to avoid overloading notation.
* In the sentence surrounding Eq. (7), please specify that Eq. (7) applies to Gaussian kernel for clarity.
* It seems Table 2 is redundant as the three-line display at the bottom right of page 4 contains the same information. Please consider removing one to use the saved space for other purposes.
Questions For Authors: 1. Could you comment on the tightness of the upper bounds provided in Theorem 1? Do you have any insights or conjectures regarding their sharpness? Additionally, if feasible, could you empirically evaluate the tightness of your theoretical upper bounds through a simple experiment with synthetic datasets?
2. The numerical experiments in Sections 4–6 demonstrate the practical effectiveness of thinning-based approaches, yet the connection to the theoretical results (Theorems 2–4) seems unclear (or indirect). Could you clarify whether and how these experiments validate the tightness or relevance of your theoretical guarantees? Could you also comment on the tightness of these theorems, and any room for further improvements, if any?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and constructive feedback and are delighted that the reviewer found our methods broadly applicable, our experiments extensive and convincing, and our contributions novel and clearly positioned within the literature.
**Summaries:** Following the reviewer’s advice, we will add a summary of core ideas underpinning the main results. For example, for (3), we decompose the sub-Gaussian error vector $K^{1/2}w$ for $w = p_{in}-p_{out}$ into a rank-r projection $V_r^\top K^{1/2}w\in R^r$ and a residual. We bound the residual error in terms of $\lambda_{r+1}$ and $\vert\vert w\vert\vert_2=1/n_{out}-1/n_{in}$ and bound $\vert\vert V_r^\top K^{1/2}w\vert\vert_2=\sup_{\vert\vert u\vert\vert_2\le1} u^\top V_r^\top K^{1/2}w$ with high probability using the sub-Gaussianity of $w$ and the union bound over a finite cover of the unit ball in $R^r$. Similarly, (4) follows from the sub-Gaussianity of $w$ and the union bound over $\pm e_i$ for each $i\in\mathcal{I}$. Finally, (5) follows from a more elaborate chaining argument that frames $(e_i^\top Kw)_{i\in\mathcal{I}}$ as a sub-Gaussian process and uses the Lipschitzness of $K$ to control its entropy integral.
We will also add summaries and high-level discussions of theoretical takeaways. For example, in Sec. 6.1, we will highlight that a standard way to summarize the discriminating power of a test is through its detectable separation rate, the smallest MMD separation between P and Q detectable with power at least $1-\beta$. Standard quadratic-time MMD tests have a minimax-optimal separation rate of order $1/\sqrt{\min(m,n)}$ (Kim & Schrab, 2023, Thm. 8), and our Thm. 4 shows that CTT matches this rate up to an inflation factor $R_k/2^g$ depending on the eigenvalue decay of the kernel. In other words, CTT provides minimax-optimal detectable separation in near-linear time whenever $\textup{rank}_{\epsilon}(K)$ is polylog(n) for $\epsilon=$polylog(n). As a practical concrete example, we derive the first power guarantees for deep kernel CTT in Cors. 3 and 4; these ensure that CTT can match the power of quadratic-time deep kernel testing in near-linear time. The original CTT analysis used the RKHS covering number bounds of Dwivedi & Mackey (DM) instead of kernel matrix eigenvalues to bound compression error, yielding less sharp results. For example, substituting DM bounds into our deep kernel analysis would yield a larger, order $\log^{3d/4+2}(n/\widetilde \beta)$ inflation factor that does not adapt to the intrinsic manifold dimension.
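For readers less familiar with the testing application, the quadratic-time statistic underlying this discussion can be sketched as follows. This is an illustrative unbiased Gaussian-kernel MMD$^2$ estimator, not the CTT algorithm itself, and the bandwidth here is an arbitrary placeholder.

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased quadratic-time estimate of MMD^2(P, Q) with a Gaussian kernel."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    # Zero the diagonals so each sum averages only distinct pairs (unbiasedness).
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2.0 * Kxy.mean()
```

Two samples from the same distribution yield an estimate near zero, while a mean-shifted sample yields a clearly larger value; compression-based tests like CTT aim to preserve this separation at near-linear cost.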
We will also add a brief summarization of the methods analyzed in the revision and follow the reviewer’s other suggestions to improve clarity.
**Tightness:** In the revision we will highlight that our Thm. 1 guarantee (5) with GS-Thin matches the minimax lower bound of Phillips & Tai (Thm. 3.1) for KMS coresets on $R^d$; that our Thm. 3 matches the minimax lower bound of Cha et al. (Thm. 4.5) for reordered SGD algorithms up to the epsilon rank parameter r; and that, by Kim & Schrab, 2023 (Thm. 8), the separation rate of our Thm. 4 is minimax optimal up to the inflation factor $R_k/2^g$ and that of Cors. 3 and 4 is minimax optimal up to log factors.
We will also clarify that the observed accelerated convergence rate of LKH-SGD over the standard slow SGD rate of RR provides a direct verification of the Thm. 3 guarantee, while the improved power-runtime trade-off of CTT over the standard Monte Carlo tradeoff of subsampling provides a direct verification of the Thm. 4 guarantee.
Following the reviewer’s advice, we have also conducted two new analyses to explicitly measure the approximate low-rankness of our matrices and thereby gauge the relevance of our guarantees. In the CTT setting of Sec. 6.2, we find that the empirical inflation factor $R_k = O(log^5(n))$ owing to approximate low-rankness: see https://imgur.com/a/hmAVQIU. In the LKH experiment setting of Sec. 5.2, we find that the stochastic gradient matrices $X_k$ have rapidly decaying singular values and a median $\epsilon_k^*$-rank of 10: see https://imgur.com/a/MyA7gR1. We will investigate whether another simple experiment could provide further insight into bound tightness.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response with clarifications, additional discussions/experimental results, and revision plans. I believe this paper would be a nice contribution to the ICML community with the promised revisions. With the proviso that the authors would make appropriate revisions, I am willing to happily increase my evaluation rating from 3 to 4. | Summary: This paper presents a new analysis of sub-Gaussian thinning algorithms based on a low-rank assumption. It provides theoretical guarantees that apply to many data distributions and kernel functions, in contrast to previous work with more limited applicability and poor dimension dependence. The key insight is that high-quality dataset summarization can be achieved whenever the kernel or data matrix is approximately low-rank, enabling efficient identification of representative points. The authors introduce practical sub-Gaussian thinning methods. They apply them in three key applications with good performance: approximating attention in transformers, accelerating stochastic gradient descent, and distinguishing distributions using deep kernels.
Claims And Evidence: The answer is a definite yes! The paper is very neatly organised, making the claims particularly clear and easy to follow. There is convincing evidence for all theoretical claims, with rigorous proofs in the appendix.
Methods And Evaluation Criteria: On top of the main theoretical results presented in section 3, the authors specify these in three quite diverse applications in the last three sections: approximating attention in transformers, accelerating stochastic gradient descent, and distinguishing distributions using deep kernels. Specific theoretical results are proven in each of them, and numerical experiments are proposed as well. These evaluation criteria make sense, and I find the wealth and diversity of application quite impressive. I'm not an expert in these applications, but found them interesting and informative.
Theoretical Claims: I set out to carefully check all of the proofs, as I am both familiar with and interested in sub-Gaussian properties, but did not manage to read every proof in full detail; I reviewed the main arguments through Appendix D with reasonable care. The parts I read were well-written and convincing.
Experimental Designs Or Analyses: As said earlier, I am not much familiar with the applications of section 4 to 6. I did not check the soundness of the experimental designs or analyses, but all seemed very reasonable to me. A quick look at the code and some of the Python notebooks within the suppl materials convinced me that the experiments were conducted very thoroughly.
Supplementary Material: As written above, I checked with reasonable care the parts until App D.
Relation To Broader Scientific Literature: See below.
Essential References Not Discussed: Relevant literature is covered to a good extent. I am not familiar enough with the thinning literature to make a definite judgement.
Other Strengths And Weaknesses: Strengths:
- a paper thoroughly written;
- very solid and comprehensive theoretical results: every proposed algorithm comes with its own approximation/convergence guarantees and sub-Gaussian parameter $\nu$;
- the code provided seems well presented and complete (from skimming through it);
- applications and experiments are well motivated and show the interest of the proposed thinning algorithms.
Weaknesses:
- I honestly do not see much to say as weaknesses. As my score and evaluation indicate, I am clearly championing the paper. I do feel, however, that the 8-page conference format makes the presentation somewhat compressed. A journal format with more space could have better served the exposition. That said, the authors make excellent use of content division between the main text versus the appendix.
Other Comments Or Suggestions: Just a few typos:
- last eqn page 12: what is meant by "surely"?
- the sentence starting with "Since" in App C.4 is missing its ending.
Questions For Authors: I don't think the response to this question would affect my score, but here it is anyway:
The authors partly characterize the performance of their thinning algorithms using sub-Gaussian parameters. These can in principle be optimized (i.e., minimized) to yield a so-called optimal (i.e., minimal) variance proxy $\nu_{\mathrm{opt}}^2$. For many standard distributions, this optimal variance proxy parameter is known. I was curious whether the sub-Gaussian parameters derived for the proposed thinning algorithms are optimal in this sense. Are there technical reasons preventing such a result? Establishing optimality, or proving lower bounds showing that the guarantees cannot be improved, would be a valuable addition and strengthen the theoretical contribution of the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for championing this submission and are delighted that the reviewer found the paper “clear and easy to follow,” the theoretical results “solid and comprehensive,” and the “wealth and diversity of application quite impressive.”
Thank you also for the interesting question concerning variance proxies. If we understand correctly, the question is whether for each thinning algorithm we have identified the smallest possible sub-Gaussian constant $\nu$. We have not yet had a chance to verify this, but one tractable lower-bounding strategy would be to compute (or lower-bound) each algorithm’s variance parameter $\sigma^2 = \sup_{u} E[u^\top K(p_{in} - p_{out})(p_{in} - p_{out})^\top K^\top u] / u^\top K u$ as a function of the sample size, $\delta$, and the kernel matrix. It seems at least plausible that the derived sub-Gaussian and variance parameters would match in the worst-case setting.
While we do not yet have a precise answer to this question, we can currently comment on the rate optimality of our sub-Gaussian constants. Specifically, Thm. 3.1 of Phillips & Tai implies that any thinning algorithm must incur at least $\Omega(\sqrt{d}/n_{out})$ KMS error for some dataset in $R^d$ and many common kernels. Meanwhile, our Prop. B.6 and Thm. 1 imply that GS-Thin has $\nu = O(1/n_{out})$ and hence KMS $O(\sqrt{d}/n_{out})$. Thus, GS-Thin has a minimax rate-optimal sub-Gaussian constant, and no thinning algorithm can have sub-Gaussian constant $\nu = o(1/n_{out})$.
Finally, thank you for flagging our typo. In the revision, we will replace "Since" with "Moreover" and "surely" with "with probability 1." | null | null | null | null | null | null |
Strengthen Out-of-Distribution Detection Capability with Progressive Self-Knowledge Distillation | Accept (poster) | Summary: To address the issue of suboptimal OOD detection performance during the later stages of training, this paper proposes the Progressive Self-Knowledge Distillation (PSKD) framework. PSKD strengthens OOD detection capability by leveraging self-provided uncertainty-embedded targets. PSKD is orthogonal to most existing methods and thus can be used to further enhance the effectiveness of other OOD detection methods.
## update after rebuttal
After reading the authors' responses, I decide to keep my original score.
Claims And Evidence: Yes, all claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, both proposed methods and evaluation criteria make sense for the problem and application.
Theoretical Claims: Yes, both proofs for the theoretical claims are correct.
Experimental Designs Or Analyses: Yes, the experimental design and analyses in this paper are solid, with a clear comparison to state-of-the-art methods.
Supplementary Material: The authors have not provided the special supplementary material. However, I have reviewed the attached Appendixes.
Relation To Broader Scientific Literature: Out-of-distribution (OOD) detection aims to ensure AI system reliability by rejecting inputs outside the training distribution. This paper proposes the Progressive Self-Knowledge Distillation (PSKD) framework to improve OOD detection performance and further enhance the effectiveness of other OOD detection methods.
Essential References Not Discussed: No, there are no essential references not discussed.
Other Strengths And Weaknesses: Strengths:
1) To address the issue of suboptimal OOD detection performance during the later stages of training, this paper proposes the Progressive Self-Knowledge Distillation (PSKD) framework. PSKD strengthens the OOD detection capability by leveraging self-provided uncertainty-embedded targets.
2) The authors conduct extensive experiments to verify the effectiveness of the proposed PSKD on both small-scale CIFAR and large-scale ImageNet, covering near-OOD scenarios with semantic shifts and far-OOD scenarios with further obvious covariance shifts.
3) The paper is generally a good paper with a clear central idea. The organization of the paper is quite good and it is easy to follow the topic and the proposed algorithms.
Weaknesses:
1) With regard to the comparison results, statistical tests are needed in the comparison results. The detailed description about statistical tests for comparisons of multiple algorithms on multiple datasets can be found from the following paper: Statistical comparisons of classifiers over multiple data sets.
2) In the current version, the authors use the words "approach", "method", “technique”, “framework” and "strategy" a little casually. I know, it is very difficult to distinguish these words thoroughly. At least, the author should try to unify the use of these words in a paper.
3) In the paper, the authors use AUROC to denote the area under the receiver operating characteristic curve. Why? To my knowledge, the area under the receiver operating characteristic curve is widely denoted as AUC. It is not necessary to change the well-known abbreviation.
Other Comments Or Suggestions: Please give the full names for all abbreviations when they occur for the first time. For example, UM/UMAP. I have not any other comments or suggestions.
Questions For Authors: 1) With regard to the comparison results, statistical tests are needed in the comparison results. The detailed description about statistical tests for comparisons of multiple algorithms on multiple datasets can be found from the following paper: Statistical comparisons of classifiers over multiple data sets.
2) In the current version, the authors use the words "approach", "method", “technique”, “framework” and "strategy" a little casually. I know, it is very difficult to distinguish these words thoroughly. At least, the author should try to unify the use of these words in a paper.
3) In the paper, the authors use AUROC to denote the area under the receiver operating characteristic curve. Why? To my knowledge, the area under the receiver operating characteristic curve is widely denoted as AUC. It is not necessary to change the well-known abbreviation.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and helpful feedback.
**Comment 1. Statistical Tests for Comparison:**
Thank you for your valuable suggestion. The discussion on statistical tests for result comparison will be included in the final version. Here, we report the statistical test results using AUC as the OOD detection metric on the CIFAR-10 benchmark. First, we rank the performance of all methods across multiple datasets, with the results recorded in Table 1. We then perform the Friedman test to determine whether there is a significant difference in the average rankings of the methods. The Friedman test yields a test statistic of 25.524 and a p-value of 0.00011 (which is less than the significance level of 0.05), indicating a statistically significant difference among the methods. To further identify specific methods with significant performance differences, we conduct a Nemenyi test, obtaining a Critical Difference (CD) value of 3.078, and present the comparative analysis of ranking differences between our PSKD and other methods in Table 2.
The statistical test results indicate that (1) compared to Energy, Energy+PSKD effectively enhances the model's OOD detection capability and demonstrates a significant performance improvement; and (2) the lack of a significant difference between our PSKD and Unleashing Mask/Unleashing Mask Adopts Pruning (UM/UMAP) can be attributed to our shared goal of restoring the model’s intrinsic OOD detection capability. The distinction lies in our method, which utilizes an uncertainty-embedded target to learn valuable atypical samples, whereas UM/UMAP directly discards them, resulting in a loss of ID generalization performance and limiting the model's OOD detection capability.
Table 1. Average ranking results of methods across multiple datasets.
||MSP|ODIN|Energy|Energy+UM|Energy+UMAP|Energy+PSKD(Ours)|
|-|-|-|-|-|-|-|
|Average Rank|4.67|5.67|4.67|2.67|1.83|**1.50**|
Table 2. Ranking differences between our PSKD method and other comparison methods, with an asterisk (*) indicating a significant difference, where the value exceeds the CD value of 3.078.
||MSP|ODIN|Energy|Energy+UM|Energy+UMAP|
|-|-|-|-|-|-|
|Ranking Difference|3.167*|4.167*|3.167*|1.167|0.333|
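The critical-difference computation described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact code: it assumes N = 6 OOD datasets (consistent with the reported CD of 3.078) and the standard Nemenyi critical value $q_{0.05} \approx 2.850$ for k = 6 methods; the Friedman statistic is recomputed from the rounded average ranks in Table 1, so it differs slightly from the reported 25.524.

```python
import math

# Average ranks of the k = 6 methods over N = 6 datasets (Table 1 above):
# MSP, ODIN, Energy, Energy+UM, Energy+UMAP, Energy+PSKD.
avg_ranks = [4.67, 5.67, 4.67, 2.67, 1.83, 1.50]
k, N = len(avg_ranks), 6  # N = 6 is an assumption consistent with CD = 3.078

# Friedman statistic from average ranks:
# chi2_F = 12N / (k(k+1)) * (sum_j R_j^2 - k(k+1)^2 / 4)
chi2_f = 12 * N / (k * (k + 1)) * (
    sum(r ** 2 for r in avg_ranks) - k * (k + 1) ** 2 / 4
)

# Nemenyi critical difference: CD = q_alpha * sqrt(k(k+1) / (6N)),
# with q_0.05 ~= 2.850 for k = 6 methods.
cd = 2.850 * math.sqrt(k * (k + 1) / (6 * N))

print(round(chi2_f, 2), round(cd, 3))  # chi2_F ~ 25.7 from rounded ranks; CD = 3.078
```

Two methods are then declared significantly different when their average-rank gap exceeds `cd`, which is how the asterisks in Table 2 above are assigned.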
**Comment 2. Clarity and Consistency:**
Thank you for pointing out these mistakes, and we will revise the manuscript accordingly. (1) The full name will be provided upon the first occurrence of any abbreviation to enhance clarity. (2) Special attention will be given to word choice to ensure consistency throughout the manuscript.
**Comment 3. Why Use AUROC Instead of AUC?**
Thank you for your correction. The abbreviation "AUROC" (Area Under the Receiver Operating Characteristic Curve) used in our paper follows the OpenOOD benchmark [1]. Indeed, the more commonly used abbreviation is "AUC". We will make this revision in the final version.
**Reference:**
[1] Yang, et al. OpenOOD: Benchmarking Generalized Out-of-Distribution Detection. NeurIPS 2022. | Summary: This paper proposes Progressive Self-Knowledge Distillation (PSKD), a framework that leverages self-distillation and dynamic teacher selection to enhance a model’s intrinsic OOD detection capability. PSKD uses pseudo-outlier samples generated through rotation, distortion, and Gaussian noise to iteratively refine the student model’s OOD performance by distilling knowledge from a dynamically updated teacher model. Extensive experiments demonstrate the superiority of the PSKD, including (1) main comparison with SOTA OOD detection baselines, (2) ablation study for adjustment strategy, weighting factor, and temperature scaling, (3) other deep analysis. The paper is well-written and easy to follow.
Claims And Evidence: Yes. The key claims made in the paper are well-supported by relevant methodological and experimental evidence. The authors thoroughly explain their approach, ensuring that each assertion is backed by rigorous theoretical analysis and empirical validation. The experimental results are comprehensive and demonstrate the effectiveness of the proposed method across different scenarios, further strengthening the credibility of the findings.
Methods And Evaluation Criteria: Yes. The method proposed by the author addresses OOD problems and effectively tackles related challenges in practical applications.
Theoretical Claims: N/A. The paper does not include any theoretical claims.
Experimental Designs Or Analyses: Yes. I have evaluated the rationality and effectiveness of the experimental design in the paper. The experiments are well-designed, and the author has provided the relevant code. Based on the experimental setup and results, I think relevant results are reasonable and convincing.
Supplementary Material: Yes. The supplementary materials have been reviewed. The supplementary materials provide the whole algorithm process, experimental setting details, and fine-grained results. Relevant materials and experimental results further demonstrate the superiority of the proposed method.
Relation To Broader Scientific Literature: OOD detection is a crucial area in machine learning. This paper addresses the limitation of directly forgetting atypical samples in previous studies.
Essential References Not Discussed: Key references are cited in this paper. The author provides a comprehensive and well-structured discussion of related work.
Other Strengths And Weaknesses: Strengths:
1. Novelty: The authors design a novel strategy to adaptively select a self-teacher model. PSKD integrates self-distillation with dynamic teacher selection, addressing the “forgetting” problem in traditional regularization methods.
2. Simplicity and Efficiency: Avoids complex architectures or additional data, relying solely on pseudo-outliers and parameter tuning.
3. Comprehensive Experiments: The authors provide comprehensive experiments to verify the superiority of the proposed method. Furthermore, the authors also report the ablation study, deeply analysis experimental results to further reveal the justifiability and effectiveness.
4. Clearly Writing: This paper is well-written and easy to follow. The logic is clear, and the structure is well-organized, allowing readers to grasp the main ideas efficiently.
Weaknesses:
1. Training Instability: Frequent teacher updates and adaptive $\lambda$ adjustments might lead to training instability in practice.
2. Computation Overhead: The PSKD requires frequent teacher updates and pseudo-outlier generation, potentially increasing training time.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Please analyze the training instability problem.
2. Please analyze the training computation overhead problem.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and helpful feedback.
**Comment 1. Analysis of Training Stability:**
Tables 6-11 in Appendix C of the original paper report the performance of PSKD across multiple independent training runs on various benchmarks. The results show that PSKD exhibits a standard deviation comparable to that of Vanilla training, suggesting that the introduction of PSKD has a negligible effect on the stability of the training process. Below, we provide an analysis of the effects of teacher update frequency and λ adjustment on training stability:
- **Teacher Update Frequency:** Table 1 examines the effect of teacher model update intervals on training stability. The results indicate that PSKD is robust to update frequency, with frequent updates having a limited effect on training stability. Moreover, appropriately increasing the update frequency may help improve OOD detection performance by providing more opportunities for the model to explore its optimal intrinsic OOD detection capability.
- **λ adjustments:** The purpose of adjusting λ is to minimize the influence of self-distillation in the early stages of training. This helps prevent the large bias introduced by the teacher model, which may not have been adequately trained, from destabilizing the training process. As training progresses, λ gradually increases to emphasize the objective of learning the uncertainty-embedded targets. As analyzed in Figure 5(c) of the original paper, failing to adjust λ dynamically may compromise training stability due to excessive interference from an undertrained teacher model.
Table 1. The impact of varying teacher selection intervals on training stability on CIFAR-10 benchmark. The OOD results are averaged over near- and far-OOD groups on the CIFAR-10 benchmark.
|Teacher Selection Interval|FPR95↓|AUROC↑|ID ACC↑|
|-|-|-|-|
|10 selections per 1 epoch|26.25 ± 1.46|93.11 ± 0.22|95.08 ± 0.22|
|5 selections per 1 epoch|**25.79 ± 2.17**|**93.29 ± 0.30**|**95.30 ± 0.07**|
|1 selection per 1 epoch (default)|26.08 ± 1.02|93.14 ± 0.21|95.14 ± 0.08|
|1 selection per 5 epochs|26.86 ± 0.91|92.96 ± 0.37|95.14 ± 0.17|
|Single selection throughout training (PSKD-S)|28.18 ± 3.29|92.82 ± 0.68|95.06 ± 0.05|
**Comment 2. Analysis of Computational Overhead:**
Table 2 presents the additional training overhead introduced by PSKD on both small-scale and large-scale datasets. The results indicate that: (1) the overhead of the teacher selection process accounts for only a small fraction of the total training cost, which is affordable. Notably, for large-scale datasets, the additional overhead of PSKD is even less significant due to the relatively higher ratio of training samples to validation samples. (2) The generation of pseudo-outliers is conducted only once at the beginning of training, and its computational cost is minimal compared to the overall training time.
Table 2. Overhead analysis introduced by PSKD. The training setup follows the OpenOOD benchmark [1], consisting of a total of 100 epochs for CIFAR-10 and 90 epochs for ImageNet-200. The results are averaged over five independent runs. Software and hardware configurations remain consistent with those detailed in Appendix B.1 of our paper.
|Dataset|One-Epoch Training Cost|One-Time Teacher Selection Cost|Total Training Cost|Pseudo-Outlier Preprocessing|
|-|-|-|-|-|
|CIFAR-10 (Small scale)|10.50 seconds|1.18 seconds|21.96 minutes|0.96 minutes|
|ImageNet-200 (Large scale)|252.30 seconds|8.72 seconds|397.49 minutes|1.42 minutes|
**Reference:**
[1] Yang, et al. OpenOOD: Benchmarking Generalized Out-of-Distribution Detection. NeurIPS 2022. | Summary: This paper concerns the out-of-distribution (OOD) detection task. Recent work shows that memorizing atypical samples during later stages of training can hurt OOD detection, while strategies for forgetting them show promising improvements. However, directly forgetting atypical samples sacrifices ID generalization and limits the model's OOD detection capability. To address this issue, this paper proposes Progressive Self-Knowledge Distillation (PSKD), which strengthens the OOD detection capability by leveraging self-provided uncertainty-embedded targets. Specifically, PSKD adaptively selects a self-teacher model from the training history using pseudo-outliers, facilitating the learning of uncertainty knowledge via multi-level distillation applied to features and responses. Moreover, PSKD is orthogonal to most existing methods and can be integrated as a plugin to collaborate with them. Extensive experiments including main comparison and ablation study verify the effectiveness of PSKD.
Claims And Evidence: Clear. The main claims have been supported from methodology and comprehensive experiments.
Methods And Evaluation Criteria: Make sense.
Theoretical Claims: This paper does not include theoretical analysis.
Experimental Designs Or Analyses: The experimental designs or analyses make sense and are validate
Supplementary Material: I have reviewed all the supplementary material. The author provides the complete algorithm flow and detailed experimental results in the appendix, enhancing the credibility of the paper.
Relation To Broader Scientific Literature: The key contributions of the paper are related to the OOD detection, which is an interesting research issue.
Essential References Not Discussed: No, the author provides extensive references on OOD detection.
Other Strengths And Weaknesses: Strengths:
Novelty: The proposed PSKD framework is innovative and can effectively solve the problem of overfitting atypical samples in the late training period, thereby improving the OOD detection capability.
Effectiveness: The experimental design was comprehensive, covering multiple datasets and baseline methods, verifying the effectiveness and generality of PSKD. The authors have provided code to enhance the reproducibility of the paper.
Writing and Paper Organization: The writing of the paper is clear, with well-structured arguments and a logical flow that enhances readability and comprehension. And the experimental results makes the paper more convincing.
Weaknesses:
Performance: Although PSKD performed well on multiple datasets, performance improvements on some specific datasets were not significant and may require further analysis for reasons.
Discussion: There is a lack of in-depth discussion on why PSKD can effectively improve OOD detection capability.
Other Comments Or Suggestions: It is suggested that the author add the analysis of PSKD performance differences on different datasets in the experimental part, especially those data sets with insignificant performance improvement.
Questions For Authors: The performance of PSKD varies greatly on different data sets. Are there some data set characteristics that affect the performance of PSKD? Can you further analyze the reasons for these differences?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and helpful feedback.
**Comment 1. Analysis of Performance Disparities:**
The performance improvement of PSKD is inherently dependent on the model's intrinsic OOD capability relative to the training data. Compared to the small-scale CIFAR dataset, the large-scale ImageNet dataset encompasses a larger and more complex semantic space. Under the same ResNet-18 architecture used in our paper, models trained on the more challenging ImageNet dataset tend to learn a more crowded feature space, increased class overlap, and unreliable decision boundaries. These factors contribute to the model's relatively weak intrinsic OOD detection capability [1], limiting PSKD's potential to restore the model's intrinsic OOD detection capability. Consequently, the performance improvements on ImageNet are less pronounced, while substantial gains are observed on the simpler CIFAR-10 benchmark. Although the improvements vary across datasets of different scales, PSKD consistently enhances the model's OOD detection performance, demonstrating its effectiveness.
**Comment 2. Insight Justification of PSKD:**
Our motivation stems from the observation that the model's OOD detection performs optimally at an intermediate stage of training rather than in the final well-trained state. We attribute this to the issue of label assignment in traditional supervised learning, where samples with varying levels of uncertainty are assigned a uniform, absolutely confident learning target (i.e., one-hot labels). This forces the model to learn overly confident predictions for atypical samples, leading to an overestimation of certainty. Consequently, the model becomes prone to overconfidence regarding OOD data, which hurts OOD detection.
To address this problem, our PSKD leverages the potential within the model's own learning process to generate soft targets that inherently capture sample-level uncertainty. Specifically, PSKD alleviates overfitting to atypical samples by learning from the uncertainty-embedded targets provided by the self-selected teacher model, thereby effectively enhancing the model's ability to perceive uncertainty. As a result, PSKD can effectively learn from atypical samples to restore the model's intrinsic OOD detection capability. Further empirical analysis and interpretative insights regarding PSKD can be found in Section 4.4 of the original paper.
**Reference:**
[1] Huang, et al. MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space. CVPR 2021. | Summary: This paper proposes Progressive Self-Knowledge Distillation (PSKD), a framework to enhance out-of-distribution (OOD) detection by leveraging uncertainty-embedded targets from a self-selected teacher model. The authors argue that models tend to memorize atypical samples during later training stages, harming OOD detection. PSKD dynamically selects a teacher model from training history using pseudo-outliers and applies multi-level distillation (feature and response) to learn uncertainty knowledge. Experiments on CIFAR and ImageNet benchmarks demonstrate improved OOD detection and in-distribution (ID) classification.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence including methodology, comprehensive experiments.
Methods And Evaluation Criteria: Yes, PSKD makes sense for the OOD detection problem.
Theoretical Claims: This paper does not include theoretical analysis.
Experimental Designs Or Analyses: Yes, the soundness/validity of experimental designs and analysis is checked.
Supplementary Material: Yes, I have reviewed the supplementary materials.
Relation To Broader Scientific Literature: Yes, the method proposed in the paper is highly inspiring to the field of OOD detection.
Essential References Not Discussed: No, the references of the paper are comprehensive.
Other Strengths And Weaknesses: Strengths:
1. Dynamic Teacher Selection: The AUROC-based criterion (Eq. 6) for selecting teachers is intuitive and aligns with OOD detection objectives.
2. Strong Empirical Validation: Comprehensive experiments across near- and far-OOD scenarios demonstrate PSKD’s superiority over existing methods, e.g., reducing FPR95 by 31.26% on CIFAR-100.
3. Practical Plug-and-Play Design: PSKD’s compatibility with existing OOD scoring methods (e.g., Energy, MSP) and training paradigms (e.g., OE) enhances its applicability.
4. Reproducibility: Open-sourced code and hyperparameter details (Appendix B) ensure transparency.
Weaknesses:
1. Pseudo-Outlier Dependency: Performance hinges on artificially generated pseudo-outliers (Table 5); robustness to realistic OOD data remains understudied.
2. Overlooked Architectural Variants: Evaluations are limited to ResNet-18; performance on transformers or attention-based models is unexplored.
Other Comments Or Suggestions: How does PSKD handle distribution shifts in high-dimensional or non-image data (e.g., text or audio)?
Questions For Authors: Could the teacher selection mechanism be sensitive to the quality of pseudo-outliers? How robust is PSKD to noisy validation data?
What is the computational overhead of the teacher selection process, especially for large-scale training?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and helpful feedback.
**Comment 1. Teacher Selection with Realistic Outlier:**
To ensure a fair comparison, we intentionally avoid using realistic OOD data as auxiliary information following the setting of [1]. To further explore this, we introduce realistic OOD data for teacher selection to evaluate the performance of PSKD, with the results recorded in Table 1. The results indicate that: (1) both realistic and pseudo-outlier data consistently lead to significant performance improvements; (2) realistic OOD data more accurately reflect the actual OOD distribution, thereby enhancing the robustness of teacher model selection and further improving performance.
Table 1. The impact of different sources of OOD data for teacher selection on ImageNet-200 benchmark. Following the settings of the OpenOOD benchmark [1] for the OOD validation set, OpenImage-O is adopted as the realistic OOD data. The OOD results are averaged over near- and far-OOD groups.
|Methods|FPR95↓|AUROC↑|ID ACC↑|
|-|-|-|-|
|Vanilla|47.55 ± 0.76|86.68 ± 0.12|86.37 ± 0.08|
|PSKD w/ Pseudo|44.38 ± 0.22|87.12 ± 0.15|**86.79 ± 0.25**|
|PSKD w/ Realistic|**43.93 ± 0.27**|**87.64 ± 0.18**|86.76 ± 0.16|
**Comment 2. Architectural Variants:** Table 2 analyzes the robustness of PSKD across different architectures, including the CNN-based ResNet-18 and the transformer-based Vision Transformer [2] (ViT-B/16). The results indicate that PSKD consistently improves the model's OOD detection performance across different architectures and demonstrates general applicability.
Table 2. The robustness of PSKD across different architectures on the ImageNet-200 benchmark. The Energy score is adopted for OOD scoring and the OOD results are averaged over near- and far-OOD groups.
|Model|Methods|FPR95↓|AUROC↑|ID ACC↑|
|-|-|-|-|-|
|ResNet-18|Vanilla|47.55 ± 0.76|86.68 ± 0.12|86.37 ± 0.08|
|ResNet-18|PSKD|**44.38 ± 0.22**|**87.12 ± 0.15**|**86.79 ± 0.25**|
|ViT-B/16|Vanilla|28.80 ± 0.41|93.61 ± 0.19|93.90 ± 0.10|
|ViT-B/16|PSKD|**27.79 ± 0.50**|**93.87 ± 0.24**|**94.01 ± 0.06**|
**Comment 3. Handling of Non-Image Data:**
A general and feasible way for handling non-image data (e.g., text, audio) is to induce distribution shifts by adding noise after converting the data into embedding vectors. To verify the effectiveness of this strategy, we conduct an OOD detection task in an audio classification setting based on the well-known Kinetics-Sound dataset [3]. The results presented in Table 3 provide empirical evidence that our PSKD is also suitable for other types of non-image data.
Table 3. The applicability of PSKD to audio data, with the Energy score used for OOD scoring.
|Methods|FPR95↓|AUROC↑|ID ACC↑|
|-|-|-|-|
|Vanilla|76.95 ± 1.64|69.67 ± 0.59|68.06 ± 0.60|
|PSKD|**72.93 ± 1.84**|**71.04 ± 0.48**|**68.33 ± 0.37**|
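The noise-based shift strategy described above can be sketched as follows. This is an illustrative, hypothetical implementation (the function name and parameters are ours, not the authors'): an ID sample is first mapped to an embedding vector, and a pseudo-outlier is produced by perturbing that vector with Gaussian noise.

```python
import random

def pseudo_outlier(embedding, sigma=0.5, seed=0):
    """Create a pseudo-OOD embedding by adding Gaussian noise to an ID embedding.

    `sigma` controls the severity of the induced distribution shift;
    a fixed seed makes the perturbation reproducible.
    """
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in embedding]

# Hypothetical embedding of a non-image (e.g., text or audio) ID sample.
id_embedding = [0.12, -0.40, 0.88, 0.05]
ood_embedding = pseudo_outlier(id_embedding)
```

In practice one would apply this to real encoder outputs (e.g., a text or audio backbone) and feed the perturbed embeddings to the teacher-selection step in place of the rotation/distortion pseudo-outliers used for images.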
**Comment 4. Sensitivity to the Quality of the Validation Set:**
The ablation analysis in Table 5 of the original paper examines the impact of pseudo-outlier quality on OOD detection performance. The results reveal the following: (1) even when using simple pseudo-outlier construction strategies (such as Gaussian noise or distortions), there is a notable improvement in OOD detection performance; (2) increasing the diversity of pseudo-outliers (e.g., by incorporating rotations, distortions, and noise) provides a marginal but further performance boost. Additionally, as shown in Table 1, high-quality, realistic OOD data results in a slight yet consistent improvement in performance. Overall, performance remains relatively stable across different data quality levels, with high-quality validation sets generally yielding the most significant performance gains.
**Comment 5. Computational Overhead of Teacher Selection:**
Table 4 records the overhead of teacher selection in PSKD on both small-scale and large-scale datasets. The results show that the overhead of the teacher selection process accounts for only a small fraction of the total training cost, which is affordable. Notably, for large-scale datasets, the additional overhead of PSKD is even less significant due to the relatively higher ratio of training samples to validation samples.
Table 4. Analysis of teacher selection overhead in PSKD. The results are averaged over five independent runs. Software and hardware configurations remain consistent with those detailed in Appendix B.1 of our paper.
|Dataset|One-Epoch Training Cost|One-Time Teacher Selection Cost|
|-|-|-|
|CIFAR-10 (Small scale)|10.50 seconds|1.18 seconds|
|ImageNet-200 (Large scale)|252.30 seconds|8.72 seconds|
**References:**
[1] Yang, et al. OpenOOD: Benchmarking Generalized Out-of-Distribution Detection. NeurIPS 2022.
[2] Dosovitskiy, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR 2021.
[3] Arandjelovic, et al. Look, listen and learn. ICCV 2017. | null | null | null | null | null | null |
Understanding Multimodal LLMs Under Distribution Shifts: An Information-Theoretic Approach | Accept (poster) | Summary: This paper introduces an information-theoretic framework to analyze and understand the performance of Multimodal Large Language Models (MLLMs) under distribution shifts, which occur when the evaluation data differs from the instruction tuning distribution. The authors propose the concept of Effective Mutual Information (EMI), a metric that quantifies the relevance between input queries and model responses. EMI provides a more robust and theoretically grounded alternative to existing empirical metrics like the win rate. The paper derives upper bounds for the EMI difference between in-distribution (ID) and out-of-distribution (OOD) data, linking it to visual and textual distributional discrepancies. Through extensive experiments across 61 real-world distribution shift scenarios, the authors validate their framework, demonstrating strong correlations between EMI and model performance. The results suggest that EMI can be used for more reliable and cost-effective evaluation of MLLMs, particularly in situations where traditional evaluation methods may be computationally expensive or less transparent.
Claims And Evidence: Based on the results in Figure 1, the authors claim that as the severity of the shift increases, the performance degradation becomes more significant. First, part of the text-shift result for LLaVA v1.5 is covered by the legend, which requires some alteration. Second, the text-shift result does not clearly verify the statement. According to the context, the x-axis is sorted by severity of shifts, but the win rate under text shift does not decrease monotonically. Also, LLaVA NeXT 7B performs better on Korean than on Arabic, yet LLaVA NeXT 13B shows the opposite.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The results of the Spearman correlation analysis and Kendall's tau analysis need further explanation; it would be beneficial to specify the meaning of the values for better understanding.
In Figure 3, all four model results are shown in one graph, making it difficult to discern the authors' implications. Also, from the figure, it seems that EMID is much smaller than the EMID upper bound; for example, in the synthetic-shift graph, EMID ranges from -0.02 to 0.10, but the upper bound ranges from 1.1 to 1.8. If not misunderstood, this shows that the upper bound is very loose and imposes little constraint on EMID.
Supplementary Material: The appendix includes further description about EMI, implementation detail, additional experiments and proofs of the theory.
Relation To Broader Scientific Literature: This paper introduces Effective Mutual Information (EMI) to evaluate MLLMs under distribution shifts, extending information-theoretic frameworks to multimodal models. It provides a formal method to quantify performance degradation, addressing gaps in out-of-distribution generalization and model robustness in real-world applications.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- Understanding MLLMs under distribution shifts seems a critical research problem.
- The paper is well constructed and easy to read.
Weaknesses
- In Figure 1, the authors claim that as shift severity increases, performance degradation worsens. However, part of the result in the text shift is obscured by the legend for LLaVA v1.5, requiring adjustment. Additionally, the text shift results do not fully support this claim, as the win rate does not decrease monotonically with shift severity. For instance, LLaVA NeXT 7B performs better in Korean than Arabic, while LLaVA NeXT 13B shows the opposite trend.
- The results from the Spearman correlation analysis and Kendall’s tau analysis require further clarification. It would be helpful to specify the meaning of the correlation values to aid in better understanding their implications.
- In Figure 3, combining all four models in one graph makes it hard to grasp the authors’ implications. Additionally, the EMID appears much smaller than its upper bound—e.g., in the synthetic shift graph, EMID ranges from -0.02 to 0.10, while the upper bound spans from 1.1 to 1.8. If not misunderstood, this suggests the upper bound is very loose and offers little constraint on EMID.
Other Comments Or Suggestions: No.
Questions For Authors: Please refer to the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > _A1. Legend issue in Fig. 1. and non-monotonic performance trend in text shifts_.
Thank you for pointing out the visualization issue—we will revise the legend in Fig. 1 to improve clarity and avoid confusion!
* Regarding the non-monotonic trend in win rate under text shifts, this behavior arises in part from the inherent stochasticity of win rate computation based on GPT-4 API evaluations, making it fundamentally difficult to observe strictly monotonic trends in practice [1]. Additionally, the x-axis in Fig. 1 is sorted by embedding space distances—computed using CLIP ViT for visual shifts and XLM-RoBERTa for text shifts—which may not always reflect the true degree of distributional shift.
* That said, we still observe a meaningful overall relationship between embedding distance and performance degradation, both in the 27 natural shifts presented in Fig. 1 and in the 34 synthetic shifts shown in Fig. 6 of the Appendix. By taking inspiration from this empirical analysis, we derive a much more rigorous framework, i.e., EMID upper bound (Theorem 4.5), to quantify the performance gap that consistently shows statistical significance across diverse settings.
> _A2. Clarification for the meaning of Spearman correlation and Kendall’s tau_.
* Spearman's $\rho$ and Kendall’s $\tau$ are both representative measures for monotonic relationships between two variables, where the former is preferred for detecting weak correlation and the latter is preferred to capture strong correlation in small sample sizes and is more robust to outliers with large sample sizes. Both of them are standard approaches to measure the correlation between LLM Judge score and other metrics [2], and are good for measuring the relation of two variables, even if they have different data types, e.g., discrete (win rate) versus continuous (EMI).
* Both correlation coefficients range from -1.0 (negative correlation) to 1.0 (positive correlation), where 0.0 indicates there is no monotonic relationship between the two variables. For Spearman's $\rho$, values of 0.2-0.4 denote weak correlation, 0.4-0.6 moderate correlation, and 0.6-0.8 and 0.8-1.0 strong and very strong correlation, respectively; Kendall's $\tau$ can be interpreted similarly after multiplying by 1.5, i.e., $\rho \approx 1.5\tau$, to compensate for its relatively smaller scale in practice.
* Our analysis in the paper (Table 2) indicates that the EMI consistently shows moderate or strong correlation with the LLM-judge evaluation metric, win rate, across different types of shifts and model architectures.
We will add this description in our future version of the manuscript. Thank you for the suggestion.
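To make the interpretation above concrete, here is a minimal plain-Python sketch of the two coefficients; the win-rate and EMI vectors are hypothetical illustrations (not the paper's data), and ties are assumed absent.

```python
# Minimal sketch of Spearman's rho (rank-difference formula) and Kendall's tau
# (concordant/discordant pair counting). Assumes no ties; values are hypothetical.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])  # rank 1 = smallest
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(x, y):
    n = len(x)
    s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
            for i in range(n) for j in range(i + 1, n))
    return 2 * s / (n * (n - 1))

win_rate = [0.62, 0.55, 0.48, 0.41, 0.35, 0.30]  # hypothetical win rates
emi      = [1.31, 1.22, 1.25, 1.02, 0.95, 0.88]  # hypothetical EMI estimates

print(spearman_rho(win_rate, emi))  # ~0.94, i.e., very strong correlation
print(kendall_tau(win_rate, emi))   # ~0.87
```

In practice one would use `scipy.stats.spearmanr` and `scipy.stats.kendalltau`, which also handle ties and report p-values; the hand-rolled version above only illustrates what the coefficients measure.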
> _A3. Intricate visualization of Figure 3 and the tightness of EMID upper bound_.
* On the left two panels in Figure 3, we presented the overall relationship between EMID and its upper bound, whereas **we distinguished four different models on the right two panels to show the model-dependent difference over the relationship**. We did so to show that Theorem 4.5 actually differentiates the model itself through the output entropy of each model $H(P_{\theta}(\cdot|x))$, i.e., LLaVA NeXT shows higher sensitivity to the shifts compared with LLaVA v1.5 implied by the larger slope.
* In this work, we do not claim the tightness of the derived upper bound. Moreover, verifying the tightness of the proposed bound can be affected by the choice of estimators for the MI and JSD terms during empirical validation. However, we would like to emphasize that **the bound shows consistent correlation with statistical significance across 61 cases of distribution shift over four different models, which means that our analytic bound of EMID effectively stands for performance degradation of MLLM.** We appreciate your valuable concern, and we will explore devising a much tighter bound in future work.
> Reference
1. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, Zheng et al. 2023
2. PROMETHEUS: INDUCING FINE-GRAINED EVALUATION CAPABILITY IN LANGUAGE MODELS, Kim et al. 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I have also read the comments from other reviewers. Most of my concerns have been adequately addressed. As a result, I would like to keep my score as '3', leading to acceptance.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer vm1D,
The authors would like to express their sincere gratitude to Reviewer vm1D for taking valuable time to read and provide helpful feedback that contributes to improving the quality of this work! We are happy to hear that most of the concerns were adequately addressed, and thanks for acknowledging our contribution.
Best regards,
The authors | Summary: This paper introduces Effective Mutual Information (EMI), a novel metric for quantifying the relevance between input queries and model responses in MLLMs. Unlike standard Mutual Information, EMI removes domain-dependent components, making it a more generalizable measure especially in OOD settings. The authors establish theoretical connections between EMI and win rate, providing an intuitive explanation of EMI’s effectiveness. Additionally, they derive an upper bound for the performance gap in OOD scenarios using EMID. Extensive experiments validate EMI’s utility, confirming its theoretical predictions and demonstrating its effectiveness across diverse distribution shift scenarios.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: above average
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
1. This paper provides a theoretical framework to analyze and understand the impact of distribution shifts on MLLM performance via EMI.
2. The intuition of this new metric EMI is explained via analogy with excess risk, effective robustness and win rate.
3. The empirical findings strongly support most of the theoretical conclusions.
Weaknesses:
1. I appreciate the theoretical part. However, the current theories do not address how modality fusion and modality interaction affect generalization, despite these techniques being commonly employed in MLLMs.
2. In Theorem 4.4, the paper assumes that $\epsilon$-representation capacity holds, which itself requires quantification, e.g., it is closely related to the model size, the number of training samples, and the input dimension. Given that this assumption fundamentally underpins the theoretical contributions presented, I suggest expanding the discussion to elucidate the quantitative interdependencies between $\epsilon$-representation capacity and these factors.
3. The paper does not demonstrate how to leverage EMI or the EMID upper bound to guide model optimization (e.g., designing robust training objectives or adaptation strategies). It only uses EMI as a post-hoc evaluation tool.
Other Comments Or Suggestions: n/a
Questions For Authors: 1. The paper states, “In Eq. (4), we show that the autoregressive objective for instruction tuning (Eq. (1)) effectively estimates the lower bound of MI when the model’s representation capacity is sufficiently high.” However, while the term $\delta$ can be omitted under this assumption, I suppose that $H(P_Y)$ could be significantly large, making Eq. (4) an inaccurate estimate of the MI lower bound.
2. The computation of EMI relies on pre-trained encoders (e.g., CLIP and XLM-R) for feature extraction, but the paper does not discuss the sensitivity of these encoders to domain shifts. For instance, CLIP may underperform on medical images, leading to distorted EMI estimates.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > _A1. Effect of modality fusion/interaction on generalization_.
* MLLMs commonly undergo a modality alignment phase during training, which may affect generalization, and it is known that modality fusion can reduce the sample complexity to improve generalization [1]!
* As noted in `line 250-254` in our paper, **$I(P_{\mathbf{X}Y})$ can be factorized into $0.5 I(P_{X_{v}Y}) + 0.5 I(P_{X_{t}Y}) + 0.5 I(P_{X_{t}Y|X_v}) + 0.5 I(P_{X_{v}Y|X_t})$ where the conditional MI terms $I(P_{X_{t}Y|X_v})$ and $I(P_{X_{v}Y|X_t})$ encapsulate modality interaction.**
* Based on this factorization, we can define per-modality EMI based on $I(P_{X_{v}Y})$ and $I(P_{X_{t}Y})$, and then derive a new upper bound that is constructed with $I(P_{X_{t}Y|X_v})$ and $I(P_{X_{v}Y|X_t})$ terms to capture the effect of modality interaction. We leave the explicit derivation for future work.
> _A2. Interdependencies between $\epsilon$-representation capacity and the model size, number of training samples, and input dimension_.
The $\epsilon$-representation capacity assumption captures the minimum achievable discrepancy between the true distribution $P_{Y|X}$ and the model's distribution $P_{\theta}(\cdot|X)$. Due to the expectation and $\min$ operator, it does not depend on the training sample size but is mainly influenced by model capability.
Specifically, as models become more expressive, e.g., by increasing model size [5] and leveraging advanced positional encoding [6], the MLLM approaches a universal approximator of sequence-to-sequence mappings [6,7]; as a result, the minimum expected discrepancy tends to decrease, leading to a smaller $\epsilon$.
We will elucidate this in the next version, thanks!
> _A3. How to leverage EMID upper bound to guide model optimization?_
While our primary focus is on presenting the theoretical framework to quantify the performance gap of MLLMs, we also showcase an application of the EMID upper bound in this rebuttal, a **regularization term for visual instruction tuning**.
Due to the space limit, we have included the setups and results in the `rebuttal to reviewer o1Fb, response A3`. **Please refer to that thread!** As shown in the tables, our instantiation of Theorem 4.5 can indeed be used to optimize model to improve robustness under shifts.
> _A4. Eq. (4) can be an inaccurate estimate of MI lower bound due to potentially large $H(P_Y)$_.
* In the Eq. (4): $I(P_{XY})\geq\mathbb{E}[\log P_{\theta}(y|x)]+H(P_Y)$, maximizing $\mathbb{E}[\log P_{\theta}(y|x)]$ through instruction tuning can be interpreted to learn a parameter $\theta$ that maximizes $I(P_{XY})-H(P_Y)$ rather than solely $I(P_{XY})$.
* **We do not claim that the log-likelihood term is a tight lower bound of MI but rather suggest that maximizing $\mathbb{E}[\log P_{\theta}(y|x)]$ can implicitly maximize the MI between input and model's response**. We will revise the paper to make this clear.
* To validate this, we reproduce the visual instruction tuning of LLaVA-v1.5-7B on a 10% subset of data, and show how the empirical estimate [2] of MI evolves during training.
|Step|$\hat{I}$|
|-|-|
|1|0.166|
|5|0.172|
|20|0.182|
|100|0.187|
|200|0.194|
|500|0.197|
* As shown above, visual instruction tuning can effectively maximize MI between input and model response.
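As a side illustration of the inequality in question (a toy $2\times 2$ joint distribution with made-up numbers, not the paper's data), one can check numerically that $\mathbb{E}[\log P_{\theta}(y|x)]+H(P_Y)$ lower-bounds $I(P_{XY})$ and is tight exactly when $P_\theta$ matches the true conditional:

```python
# Toy check of the Eq. (4)-style bound on a 2x2 joint distribution
# (illustrative numbers): I(P_XY) >= E[log P_theta(y|x)] + H(P_Y),
# with equality when P_theta equals the true conditional P_{Y|X}.
import math

P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}  # joint P(x, y)
Px = {x: sum(p for (xx, y), p in P.items() if xx == x) for x in (0, 1)}
Py = {y: sum(p for (x, yy), p in P.items() if yy == y) for y in (0, 1)}

mi = sum(p * math.log(p / (Px[x] * Py[y])) for (x, y), p in P.items())
H_y = -sum(p * math.log(p) for p in Py.values())

def bound(model):  # model[(y, x)] = P_theta(y | x)
    return sum(p * math.log(model[(y, x)]) for (x, y), p in P.items()) + H_y

true_cond = {(y, x): P[(x, y)] / Px[x] for (x, y) in P}
# A miscalibrated model: the true conditional blended with uniform noise.
blurred = {(y, x): 0.5 * true_cond[(y, x)] + 0.25 for (y, x) in true_cond}

print(mi, bound(true_cond), bound(blurred))  # bound is tight for the true conditional
```

The miscalibrated model yields a strictly smaller value, which is consistent with the authors' framing that maximizing the log-likelihood term implicitly pushes the MI lower bound up.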
> _A5. Encoder sensitivity analysis to domain shifts for EMI estimation._
* In **Table 4 and 5 of Appendix**, we already discussed two alternative choices of the encoders, e.g., [3], and showed that _our theoretical claims hold in practice consistently across varying encoders with statistical significance_.
* We further conduct encoder sensitivity analysis under domain shifts by replicating experiments on medical domains with a CLIP-ViT-B32 and XLM-RoBERTa encoders.
* Specifically, we use 200 samples of LLaVA-Med [4], get three splits of them based on embedding distance with COCO images, and translate English queries into six different languages used in the paper by using GPT-4o to induce 28 subsets of shifts to conduct correlation analysis for EMID and its upper bound.
|Model|Pearson $r$|$p$-val|
|-|-|-|
|LLaVA-v1.5-7B |0.93|0.00|
* We see the correlation between EMID and its upper bound estimates is very strong, even though the medical image and text are relatively minority instances compared with general object and text, implying that our theorem robustly holds even on the special domains that encoders may not excel at.
> Reference
1. A Theory of Multimodal Learning, Lu 2023
2. A Contrastive Log-ratio Upper Bound of Mutual Information, Cheng et al. 2020
3. Universal Embeddings with Multimodal Large Language Models, Jiang et al. 2024
4. LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day, Li et al. 2023
5. Scaling Laws for Neural Language Models, Kaplan et al. 2020
6. Your Transformer May Not be as Powerful as You Expect, Luo et al. 2022
7. Transformers are Universal In-context Learners, Furuya et al. 2024 | Summary: I struggled to understand the paper so take this with a grain of salt:
The authors suggest there is a risk involved in using multimodal models (language models conditioned on visual input) and suggest a way to measure this using a difference of mutual information between a model and the distribution of the training data (is this right?) – though it remains unclear to me how that distribution is obtained.
Claims And Evidence: The abstract talks about the risk of MLLMs and safe and reliable applications, and the proposed framework about quantifying risk under distribution shifts. It’s not clear what this risk is. Would it make sense to just call it the performance?
You talk about multimodal language models in general in the introduction, but it seems this is specifically about (bi-modal) vision and language models with vision only in the input. It could also be made clearer that the paper is only about textual output.
I also don’t see the reason for focusing on vision in the input and instruction tuning. It seems to me that the parts of the ideas I followed apply to sequence models in general. As such, the question is to what degree a conditional distribution (given some prefix) differs from the unconditional distribution.
Methods And Evaluation Criteria: I don’t understand how the various probability distributions are defined. I don't understand the definitions of the visual and textual shifts, e.g. the relation between $D(P_{X_v}|| Q_{X_v})$ and $D(P_{X_t}|| Q_{X_t})$. How are these defined exactly in terms of next token probabilities? What is $P_{X_v}$ and $P_{X_t}$? You only define KL with some arbitrary distribution P.
How are $P_X$ and $P_Y$ defined? Are these trained models? Are $P_X,P_Y,P_{\theta},P_{XY}$ all the same trained model differing only in prefixes? E.g. as used in equation 3. For all of these, I would expect to see exact definitions of the distributions in terms of the exact model configuration you say your method works for.
Theoretical Claims: I struggled to follow the definitions of the authors and so did not go through the proofs.
Experimental Designs Or Analyses: No, since I struggled to understand what is being done
Supplementary Material: No
Relation To Broader Scientific Literature: Unclear.
Essential References Not Discussed: Unclear
Other Strengths And Weaknesses: See other answers
Other Comments Or Suggestions: Some notes:
In the motivation with visual shifts, I would put the definitions of the variables before the shift examples, it was confusing.
Random variables: Could you define the domains of the random variables in line 93? How is (X_v, X_t) combined into a single sequence?
Line 101: instruction tuning has not been introduced.
Line 99: joint population → joint probability?
Equation 1, should this not be argmin?
Equation 2. Should this be P_{\theta} instead of \theta
Questions For Authors: I think the biggest problem with the paper is the writing and underspecified mathematics. The text is unclear throughout the abstract, introduction, and the motivation of what is being done. I don't find it clear what problem is being solved, nor what the exact configuration in which your method applies is.
More concretely, I don’t find the distributions and the distribution shifts well-defined making it very hard to follow any derivations or reasoning. Even if these (in your mind) are somewhat standard conditional factorizations, I want to see it spelled out, for every such $P_X, P_Y P_{XY}, P_{\theta}$, otherwise it’s very hard to be sure that what is written is correct. It was also unclear what is trained and what is not trained.
The authors define mutual information as a function of a single distribution with an implicit factorization instead of over two random variables. This did not make it easier to follow the writing. For instance, in equation 7 of the EMI (a main contribution), what is the definition of $I(P_X\otimes P_{\theta})$, the definition of $I$ is hardcoded for a distribution $P_{XY}$ based on the factorization given. If there is some marginalization used to define it this needs to be defined, or maybe stick to the mutual information between two random variables.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate sTSP's effort to read our paper and provide comments. Here is a notation table and our responses.
|Var.|Def.|
|-|-|
|$X_t=(X_{t,1},...,X_{t,L_t})$ where $X_{t,i}\in V$|a random variable (r.v.) of a text input sequence with length $L_t$ of tokens in vocabulary $V$|
|$X_v=(X_{v,1},...,X_{v,L_v})$ where $X_{v,i}\in \mathbb{R}^{D_v}$| a r.v. of a $D_v$-dimensional image embedding sequence with length $L_v$ of tokens produced by a frozen vision encoder|
|$\mathbf{X}=(X_v,X_t)$| a joint r.v. of an input query constructed with a tuple of $X_v$ and $X_t$|
|$Y=(Y_1,...,Y_L)$ where $Y_i\in V$|a r.v. of a text response with length $L$ of tokens|
|$P_{\mathbf{X}}=P(X_{v,1},...,X_{v,L_v},X_{t,1},...,X_{t,L_t})$|a probability distribution (p.d.) of an input|
|$P_{X_v}=P(X_{v,1},...,X_{v,L_v})$|a p.d. of a visual input|
|$P_{X_t}=P(X_{t,1},...,X_{t,L_t})$|a p.d. of a text input|
|$P_{Y}=P(Y_1,...,Y_L)$| a p.d. of a text response|
|$P_{Y \|\mathbf{X}}=P(Y_1,...,Y_L\|X_{v,1},...,X_{v,L_v},X_{t,1},...,X_{t,L_t})$|a conditional p.d. of a text response given input|
|$P_{\mathbf{X}Y}=P(X_{v,1},...,X_{v,L_v},X_{t,1},...,X_{t,L_t},Y_1,...,Y_L)$|a joint p.d. of input and response|
|$P_{\theta}(Y\|\mathbf{X})=P_{\theta}(Y_1,...,Y_L\|X_{v,1},...,X_{v,L_v},X_{t,1},...,X_{t,L_t})$|model's prediction p.d. for a response given input|
> What problem is being solved? What is the risk?
* As we noted in the introduction, MLLMs suffer from performance degradation when they encounter distribution shifts. We denote risk as this performance degradation.
* `Problem focus`: As noted in the abstract, introduction, and motivation sections, **our goal is to quantify the performance degradation of MLLMs under distribution shifts by presenting an information-theoretic framework**.
> It seems this is about (bi-modal) vision and language models with vision only in the input.
* As we noted in `L107-109`, MLLMs take multimodal input (both visual and text) to produce text output, not "vision only in the input".
* The term MLLM is commonly used in the literature to denote LLMs that receive a visual input as well as text [1,2], so we took this term by following the convention.
> Definition of $P_X,P_Y,P_{XY},P_{\theta}$.
* In `L92-103`, we put the definition of random variables and distributions. $P_X,P_Y,P_{XY}$ denote the probability distributions of the input $\mathbf{X}$, target response $Y$, and their joint $\mathbf{X}Y$, respectively.
* The $P_\theta$ is a model being trained.
> Definitions of $D(P_{X_v}||Q_{X_v})$ and $D(P_{X_t}||Q_{X_t})$. How are these defined in terms of next token probabilities? What is $P_{X_v}$ and $P_{X_t}$?
As noted in `L96-97`, $X_v$ and $X_t$ are the sequences of visual and text input tokens, so the $P_{X_v}$ and $P_{X_t}$ are the corresponding distributions. $D(P_{X_v}||Q_{X_v})$ and $D(P_{X_t}||Q_{X_t})$ are defined at the input, which should not be confused with the next token probabilities (at the output level).
> Put the definitions of variables before shift examples. Domains of the variables in L93? How is (X_v,X_t) combined into a sequence?
* We put the definition of all variables in `L92-103` before the motivation section.
* We concatenate the visual input tokens $X_v$ and textual input tokens $X_t$ into a single sequence, where the visual tokens are obtained by encoding an image using a vision encoder (CLIP-ViT), and then projecting them into the language embedding space. This follows the standard practice in MLLMs, where visual tokens are prepended to the text tokens to form a unified input sequence.
> Definition of MI and EMI
* Our main interest is to express the model performance gap across different distributions. Thus, instead of defining MI over individual random variables, we define MI as a function of their joint distribution, which is mathematically equivalent to the random-variable-based MI, as can be seen in Eq 3.
* The definition of EMI was explicitly introduced in Eq 6. Please refer to `L168-169`.
> L101: instruction tuning has not been introduced. L99: joint probability? Eq 1, should this not be argmin? Eq 2. Should this be P_{\theta} instead of \theta
* Instruction tuning was introduced from `L104`.
* In statistics, the population (distribution) [3] is used to denote a distribution of the entire collection of objects in contrast to a sampled distribution.
* In Eq 1, both min and argmin can be valid: the former reflects an objective-centric perspective whereas the latter reflects a parameter-centric one.
* In Eq 2, we use the first argument to denote the data distribution that the metric is computed on, and use the second argument to denote the model parameter to be evaluated. **We will use $P_{\theta}$ in the revised version.**
> Reference
1. A Fully Open, Vision-Centric Exploration of Multimodal LLMs, Tong et al. 2024
2. Exploring The Design Space for Multimodal LLMs with Mixture of Encoders, Shi et al. 2025
3. Sampling of Populations: Methods and Applications, Levy and Lemeshow 2013 | Summary: The paper proposes an information-theoretic framework to analyze the performance of multimodal large language models (MLLMs) under distribution shifts. It introduces Effective Mutual Information (EMI) to quantify the relevance between input queries and model responses. The authors also derive an upper bound for the EMI difference between in-distribution (ID) and out-of-distribution (OOD) data, connecting it to visual and textual distributional discrepancies. Extensive experiments on real benchmark datasets validate the theoretical insights.
Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The correctness of the theoretical claims, including Lemma 4.3, Theorem 4.4, Theorem 4.5, and Theorem 4.6, is supported by detailed proofs in the supplementary material. The derivations appear to be sound.
Experimental Designs Or Analyses: All checked
Supplementary Material: **Implementation Details** and **D.1. Proof for the relationship between EMI and preference model** are reviewed.
Relation To Broader Scientific Literature: The proposed information-theoretic framework for analyzing MLLMs under distribution shifts provided insights for better understanding and alleviating that issue.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - Strengths: The connection between EMI and win rate provides a practical and efficient alternative for model evaluation.
- Weaknesses: Evaluations are only made on LLaVA v1.5 and LLaVA NeXT. It could benefit from involving the SOTA and representative MLLMs like Qwen2.5-VL and InternVL2.5.
Other Comments Or Suggestions: N/A
Questions For Authors: - How do the assumptions made in the theoretical framework impact the applicability of the results to more complex real-world scenarios? Can these assumptions be relaxed in future work?
- The paper mentions the potential use of the upper bound of EMID as a regularizer during post-training or test-time adaptation. Can you provide more details on how this could be implemented and its potential impact on model robustness?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > _A1. Applicability on SOTA MLLMs._
Thanks for the suggestion! Following your comment, **we additionally conduct the full evaluation with`Qwen2.5-VL-7B-Instruct` and `InternVL2.5-7B`**.
Specifically, we first evaluate the official release of `Qwen2.5-VL-7B-Instruct` and `InternVL2.5-7B` models on 35 synthetic shifts, then compute EMI estimates over the model responses. We perform a correlation analysis between estimates of EMI difference (EMID) and its upper bound. **Consistent with our existing finding, we observe a strong correlation between EMID and its theoretical upper bound in both models**.
|Model|Pearson $r$|$p$-val|
|-|-|-|
|InternVL2.5-7B|0.67|0.00|
|Qwen2.5-VL-7B|0.81|0.00|
> _A2. How do assumptions impact the applicability of proposals to more complex cases?_
This is an insightful question.
First, **to claim the closeness between EMI and win rate (Thm 4.4), we assumed $\epsilon$-representation capacity of the MLLM**.
* $\epsilon$-representation capacity essentially reflects the model’s ability to approximate the target task’s conditional distribution, meaning that the model can approximate this distribution with a KLD no greater than $\epsilon$.
* The assumption mainly argues the strong expressive and approximation capabilities of MLLMs. **Given the strong expressive and approximation capabilities of recent large-scale MLLMs, this assumption is generally reasonable in practice.**
* Moreover, numerous efforts improve the diversity of an instruction tuning dataset and robustness of the visual encoder of MLLM [1,2], which makes the learned distribution robustly approximate conditional distributions encountered during evaluation.
* As we continually pursue enriching dataset construction and enhancing visual recognition of the encoder, $\epsilon$-representation capacity assumption becomes reasonable in more complex cases.
Second, **to claim the relationship between EMID and its upper bound in a simple case (Thm 4.5), we assumed consistent conditional distributions** over $X_v|X_t$, $X_t|X_v$, and $Y|X$.
* This assumption zeros out the discrepancy between conditional distributions. If the conditional distributions are quite different between ID and OOD in some real-world scenarios, this makes our upper bound underestimate the performance gap, e.g., EMID.
* However, we highlight that the strong correlations between EMID and this upper bound have been observed through 61 distribution shifts, implying the validity of our upper bound to quantify EMID.
* **Meanwhile, we also provide a bound for general cases to address non-consistent conditional distributions in Thm 4.6**. This general-case bound can also be empirically estimated using a procedure similar to that of Thm 4.5.
* Therefore, as mentioned in our manuscript (L302-303), we recommend choosing a proper bound based on the knowledge of the data-generating process for datasets.
> _A3. Details for practical implications of EMID upper bound_.
While we confined the scope of this project to _presenting the first theoretical framework to quantify MLLM's performance gap_, we further provide **a potential application of EMID upper bound, instruction tuning with regularization, for this rebuttal**.
Without loss of generality, we can treat the input sequence $X=(X_v,X_t)$ as a sequence of intermediate representation vectors of the MLLM, i.e., $Z=(Z_v,Z_t)$, and assume that $P_{\theta}(\cdot|\cdot)$ maps this representation to responses, i.e., $P_{\theta}:Z \rightarrow Y$. This induces a modified bound with the representation variable $Z$ rather than the raw data input $X$.
We instantiate this modified EMID bound in the setup below, where we set the 24th layer's hidden states as $Z$, adopt RJSD [3] as a differentiable estimator for JSD, and use the average empirical model output entropy for the entropy term. We provide evaluation results with LLaVA-v1.5-7B on in-distribution (ID) and visual (V), text (T), and joint (J) synthetic shifts.
`Regularization term for instruction tuning`: $\mathbb{E}[H(P_{\theta}(\cdot|z))] \cdot (D_{JS}^{0.5}(P_{Z_v}||N(0,I)) + D_{JS}^{0.5}(P_{Z_t}||N(0,I)))$
* One cannot access $Q_X$ during the training phase, so we alternatively enforce the distribution of the intermediate representation to be close to the standard Gaussian.
* We sampled 10% of the instruction dataset from LLaVA-v1.5, and trained the entire LLM and modality connector parameters of LLaVA-v1.5 with and without the regularization.
* As shown in the table, we confirm that the EMID can be leveraged as a regularizer during instruction tuning to pursue better robustness to distribution shifts.
|Method|ID|V Shift|T Shift|J Shift|
|-|-|-|-|-|
|Baseline|72.7|65.8|68.0|59.6|
|Baseline + Ours|72.7|**66.3**|**68.3**|**60.8**|
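As a minimal illustration of the $D_{JS}^{0.5}$ factor in the regularization term above: the rebuttal uses the differentiable RJSD estimator on continuous hidden states, whereas this discrete toy version only shows what the quantity computes; the histograms and the entropy value are hypothetical.

```python
# Toy sketch of the regularizer's D_JS^{0.5} terms on discrete histograms.
# All numbers are hypothetical; the actual method applies RJSD to hidden states.
import math

def kl(p, q):
    # KL divergence between discrete histograms (natural log)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_sqrt(p, q):
    # D_JS^{0.5}: square root of the Jensen-Shannon divergence
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

p_zv = [0.70, 0.20, 0.10]     # hypothetical histogram of visual representations
p_zt = [0.50, 0.30, 0.20]     # hypothetical histogram of text representations
ref  = [1 / 3, 1 / 3, 1 / 3]  # stand-in reference (discrete analogue of N(0, I))

avg_entropy = 1.2  # hypothetical estimate of E[H(P_theta(.|z))]
reg = avg_entropy * (js_sqrt(p_zv, ref) + js_sqrt(p_zt, ref))
print(reg)
```

Note that with natural logarithms the JS divergence is bounded by $\log 2$, so each $D_{JS}^{0.5}$ term lies in $[0, \sqrt{\log 2}]$; the entropy factor scales the penalty by how uncertain the model's outputs are.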
> Reference
1. Qwen2.5-VL Technical Report, Alibaba Group 2025
2. Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders, Shi et al. 2025
3. The Representation Jensen-Shannon Divergence, Hoyos-Osorio et al. 2024
---
Rebuttal Comment 1.1:
Comment: I appreciate the response from the authors, which addresses my concerns. Thus, I would like to increase my score to Accept.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to read our rebuttal and increasing the score. We appreciate your insightful comments and support! | null | null | null | null | null | null |
What Has a Foundation Model Found? Inductive Bias Reveals World Models | Accept (poster) | Summary: This paper proposes methods for evaluating if the predictions made by foundation models following fine-tuning on new tasks are compatible with a reference world model. The authors introduce two metrics; inductive bias towards respecting state and inductive bias towards distinguishing state, which aim to measure how well the foundation model's extrapolations align with a reference world model. They demonstrate their approach in the domains of classical mechanics (orbital bodies), lattice problems, and the game Othello, and give evidence that foundation models trained on next-token prediction fail to transfer to new tasks in a way that is consistent with the underlying world model, but instead rely on task-specific heuristics..
## update after rebuttal
Following the authors' rebuttal and the proposed clarifications of the paper, I am now leaning towards acceptance.
Claims And Evidence: The authors do present "a framework for testing foundation models by analyzing how they adapt to synthetic tasks that share mechanisms with their training domain". They give quite convincing evidence that the models they evaluate are not generalizing in a way that respects the underlying statespace.
One of the key claims—that foundation models learn heuristics rather than fundamental world models—seems to come mostly from symbolic regression on the predictions of a model, rather than applying the metrics that are the main results of the paper. It is not clear how robust this claim is from this single, and quite small, experiment.
Methods And Evaluation Criteria: The metric R-IB seems well motivated to me, though I struggled to understand the metric D-IB (see questions below)
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes, pre-training and fine-tuning settings are well-described. Baseline models (RNNs, LSTMs, Transformers, Mamba) are compared, and the novel metrics (R-IB, D-IB) are computed over multiple training seeds. Symbolic regression is used to validate whether models recover correct physical laws, which I am less certain about. The experiment is fairly small, and it's not immediately clear how robust the result is, or whether it really comes from the models learning the wrong physical laws.
Supplementary Material: Only section G in depth
Relation To Broader Scientific Literature: There is a good discussion of related work as far as I can see.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Overall I think the proposed method is compelling, though I'm not sure it quite does what the authors claim, i.e., establishes that a world model has been learned. The main weakness to my mind is the formalism section, which I found very unclear and had to read multiple times before realising (I think) what the authors are proposing. See notes below.
Other Comments Or Suggestions: I will increase my score if the Framework section is sufficiently improved, and if the language is toned down a bit re: the results showing that the foundation model has learned a world model (rather than, to what degree its predictions are consistent with a given world model).
Questions For Authors: Question 1
My current understanding of the formalism the authors propose is this, and my first question is if this accurately captures what they are proposing.
1. The authors define consistency in the following way. Assume a foundation model makes predictions using a world model, prediction = f(state = WM(input)), and that the WM is re-used across tasks (e.g. when we fine-tune the foundation model on some new task). Then, if two inputs map to the same state in the world model, they must give the same predictions across all tasks, where each task is to predict some function that depends only on the state (e.g. not any additional information in the input that is not captured in the state, such as how the state was reached).
2. They then invert this relationship, saying that if two inputs are treated the same across all tasks, then they are compatible with being the same states in a world model. This does not imply they are the same states in the world model, or even that there is a world model, just that the foundation models predictions are compatible with this being true.
3. Then, if we have a reference world model (such as the statespace of a dynamical system, or the board state of a game), we can map inputs (e.g. trajectories) to states on the 'correct' world model. Then, for the foundation model to be compatible with this correct world model, the predictions must be the same for all inputs x that correspond to the same state on the correct world model. If this is not true, then the world model implicit in the foundation model (if there is one) is to some extent incompatible with the correct or reference world model.
If this is correct, then the introduction of the framework could have been a lot clearer and easier to understand. It took me multiple reads to reach this understanding, and I am still not confident in it. But (if the above is accurate), the core ideas are not very complex, and would be better motivated by a simplified introduction.
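A minimal sketch of the consistency notion in points 1-3 (all names here are hypothetical toy constructions, not the paper's actual test):

```python
# Sketch of the consistency notion in points 1-3 above; phi and the two
# toy predictors are hypothetical, not from the paper under review.

def consistent_with_world_model(model_predict, phi, inputs, tasks):
    """False iff some task assigns different predictions to two inputs
    that the reference world model phi maps to the same state."""
    for task in tasks:
        for i, x1 in enumerate(inputs):
            for x2 in inputs[i + 1:]:
                if phi(x1) == phi(x2) and model_predict(task, x1) != model_predict(task, x2):
                    return False
    return True

phi = lambda x: sum(x) % 3                         # reference state of a sequence
state_based = lambda task, x: (phi(x) + task) % 2  # prediction = f(WM(input))
length_based = lambda task, x: (phi(x) + task + len(x)) % 2  # also uses the raw input

inputs = [(1, 2), (0, 0, 3), (1,), (2, 2)]  # (1, 2) and (0, 0, 3) share a state
assert consistent_with_world_model(state_based, phi, inputs, tasks=[0, 1])
assert not consistent_with_world_model(length_based, phi, inputs, tasks=[0, 1])
```

Point 2's inversion then reads: passing this check on all tasks makes the model *compatible* with `phi`; it does not imply `phi` is actually used internally.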
Question 2
It could be that the foundation model has learned the correct world model but its predictions cannot be described by prediction = f(state = WM(input)). For example, if prediction = f(state = WM(input), input), i.e. the prediction is not just based on the world state but also on some spurious information from the input (such as having a bias towards certain predictions depending on the input length). This would give a false negative (there is an accurate world model, but a poor process of drawing inferences from that model). The model prediction = f(state = WM(input), input) even makes sense if we assume the world model is encoded in the intermediate layers of the transformer, and the action of the residual stream. Does your result get around this in some way? It's fine if not, but it would be nice to A) have all the assumptions on how the foundation model generates predictions in a single paragraph, and B) give some examples of possible ways foundation models could generate predictions (as described here) that violate the assumptions (including false positives, where there is no world model, but the foundation model passes the eval).
Question 3
the motivation and intuition for D-IB is not clear when it is introduced. When you say about D-IB “ If a learning algorithm’s effective foundation is based on the world model, it should now extrapolate in predictable ways between inputs values in different state”, what precisely do you mean here by "in predictable ways"? What should we expect D-IB to be if there is a world model, and why? Are you saying something like, if the predictions are based on the (true) world model, then the correlation between predictions that map to the same state should be invariant between different synthetic tasks? Or that the correlation should be invariant when we vary the inputs x_i, x_j but they still map to fixed (but different) states?
Question 4
You talk about foundation models learning the correct underlying physical laws of a system, but focus primarily on the state space. Physical laws involve two kinds of object: kinematics (states) and dynamics, and the dynamics are typically the most important, as they allow future states to be predicted. For example in RL, world models capture both of these fundamental objects. In the Newtonian mechanics example, the most interesting part in my opinion was the symbolic regression on the actual predictions, precisely because you are seeing whether the model is learning the correct dynamics or not. Can you explain what R-IB and D-IB are telling you here, in simple terms? And am I right in thinking that the main claim, that the models are learning task-specific heuristics, comes from these symbolic regressions? If so, it feels like this should be a larger experiment, with a lot more detail on experimental design.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your careful and insightful review.
> _I will increase my score if the Framework section is sufficiently improved, and if the language is toned down a bit re: the results showing that the foundation model has learned a world model (rather than, to what degree its predictions are consistent with a given world model)._
We really like the way you've described the framework and we've modified our paper to match (see below).
We also agree with your language change: it's overstating to say “has learned a world model”; given our test, it's much more accurate to say “consistent with a given world model”, which is the language we'll use going forward.
Your comments were very constructive and have improved the paper. **We believe we've addressed your comments below; if we haven't, please let us know if you have any further questions.**
> _Question 1: ...My first question is if this accurately captures what they are proposing..._
We've rewritten the framework to incorporate your suggestions. We don't have space to include the full section here but the outline is:
- This paper is about measuring the degree to which a foundation model's predictions are consistent with a given world model.
- Note a mechanistic interpretation is difficult to measure.
- Instead we examine model predictions on synthetic tasks that obey the world model
- Consistency requires two properties:
1. R-IB: When two inputs map to the same state in the world model, do they have the same predictions for each task?
2. D-IB: When two inputs have the same predictions for each task, do they belong to the same state?
- R-IB and D-IB are different perspectives on consistency, analogous to type-I/type-II errors.
> _Question 2: It could be that the foundation model has learned the correct world model but its predictions cannot be described by prediction = f(state = WM(input))_
Great question. This is why we have _two_ metrics:
- If a model predicts f(state, input) != f(state, input') for same-state inputs, R-IB is low
- If a model predicts f(state, input) = f(state', input') for different-state inputs, D-IB is low
These metrics can be used independently (e.g. focusing only on R-IB). But there are many applications where we’d care if a model has extraneous information. For example, if a model carries extraneous information its predictions on small data may be subject to spurious correlations. Empirically we find that low D-IB scores suggest that models base their predictions not on true state but on next-token partitions (see Figure 2 and Table 8), hurting small-data performance.
> _Question 3: When you say about D-IB “If a learning algorithm’s effective foundation is based on the world model, it should now extrapolate in predictable ways between inputs values in different state”, what precisely do you mean?_
You've helped us identify an unfortunate typo. The sentence should read "...it should **not** extrapolate in predictable ways...". So if the model learns according to not only the effective foundation but other inputs, D-IB is lower. We've corrected this.
> _Question 4: Physical laws involve two kinds of object; kinematics (states) and dynamics..._
We want to make sure we're interpreting your question correctly:
- kinematics: mapping from inputs X to state space (i.e. given by phi)
- dynamics: how the state spaces evolve.
We agree the distinction is interesting. Our framework only addresses kinematics (e.g. state, not the laws that may govern dynamics on top of state). Therefore R-IB and D-IB tell us about kinematics. But the two are related; knowing kinematics often allows recovering dynamics from enough sequences.
We focus on states because even in physics there's value in studying kinematics independently from dynamics. Although orbital trajectories (dynamics) are fixed, many interesting functions depend on state (e.g. energy, angular momentum). Addressing dynamics explicitly is valuable future work.
> _[Does the result that] models are learning task-specific heuristics [come only] from these symbolic regressions?_
Symbolic regressions are one tool for showing that models learn task-specific heuristics (we've added more experimental details in the appendix), and it's a common method for validating physics models [1, 2]. But it's not the only tool; our appendix has additional experiments with other methods (Table 8) in non-physics domains. We've moved this to the main text.
We've also added new physics experiments to Table 8:
- D-IB for distinct states with same next tokens: 0.709
- D-IB for distinct states with different next tokens: 0.764
So poor D-IB can be partially attributed to models basing their predictions not on state but on next-token partitions. Figure 2 illustrates this point for Othello.
[1] Udrescu et al. "AI Feynman: a Physics-Inspired Method for Symbolic Regression." (2020).
[2] Cranmer et al. "Discovering Symbolic Models from Deep Learning with Inductive Biases." (2020).
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply, which has cleared up a lot of my confusion and if implemented would improve the clarity of the paper greatly. I have raised my score. | Summary: In this paper, the authors focus on the problem of understanding the generalization capabilities of foundational models. For example, can a foundational model truly develop an inductive bias towards Newtonian mechanics or the rules of a board game? The authors test this question, developing a framework for testing world models in addition to proposing metrics such as Inductive Bias towards Respecting State (R-IB) and inductive bias towards distinguishing state (D-IB). The test hypothesis in a number of domains - specifically orbital mechanics, the board game Othello, and Lattices. They find that while common world models such as transformers can excel at specific tasks, they fail to learn inductive biases and rely on data-specific, piecemeal heuristics.
## update after rebuttal
In line with the other reviewers' comments, I keep my assessment of accept.
Claims And Evidence: Yes. The authors clearly show quantitatively and qualitatively that world models struggle with learning general principles from data.
Methods And Evaluation Criteria: Yes. Even though the domains used for evaluation are very simple (two-body physics, a 2D board game), they allow for a deeper understanding of why foundational models fail to generalize. The qualitative failure cases (such as Figures 1 and 2) make the strongest case for the paper. The quantitative metrics such as R-IB and D-IB also support the argument even if they are somewhat abstracted away from the domains.
Theoretical Claims: The paper contains no proofs.
Experimental Designs Or Analyses: Experimental designs are sound even if the datasets used are simplistic and metrics abstract.
Supplementary Material: Appendix G. This is a very helpful example of what models are actually learning; it might be useful to incorporate this result in the main paper if possible.
Relation To Broader Scientific Literature: This paper contests the idea that foundational models are able to learn inductive biases in various domains and shows specifically how and why they fail. One of the domains (Othello) has been used in prior literature.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The proposed metrics (R-IB and D-IB), while needed for a comprehensive evaluation, are somewhat abstract and sometimes difficult to understand in the context of the various domains (Orbital physics, Othello). It would be helpful to ground the concepts for these metrics with domain-specific examples when describing them in section 2.
Other Comments Or Suggestions: None.
Questions For Authors: I have no questions for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive review of our paper. We're glad that you appreciated our paper and findings, and that our results gave you "a deeper understanding [of] why foundation models fail to generalize".
> _The proposed metrics (R-IB and D-IB), while needed for a comprehensive evaluation, are somewhat abstract and sometimes difficult to understand in the context of the various domains (Orbital physics, Othello). It would be helpful to ground the concepts for these metrics with domain-specific examples when describing them in section 2._
This is a great point. We've revised our paper to make the framework clearer, and we've added more domain-specific examples.
Specifically:
- A foundation model's predictions are consistent if two properties hold:
1. When two inputs map to the same state in the world model, do they have the same predictions for each synthetic task? This is measured by R-IB.
2. When two inputs have the same predictions for each synthetic task, do they belong to the same state? This is measured by D-IB.
- So R-IB and D-IB are different perspectives on measuring consistency, analogous to type-I and type-II errors in classification.
Here's an implementation example for the lattice problem:
- X is a sequence of directions ("L", "R", "stay")
- 5 states (1-5)
- phi(X) maps movement in X starting at state 1
- So phi("R", "R", "L")=2 (two states right, one state left)
- We create random synthetic datasets (X, Y) such that phi(x) = phi(x') implies y = y' for each (x,y), (x',y') pair.
- We fit a foundation model (e.g. by fine-tuning) on a very small subset of each dataset and make predictions for the held-out data points.
- R-IB: do two held-out points with the same state always have the same predictions across synthetic datasets?
- D-IB: do two held-out points with different states have unpredictable extrapolations across synthetic datasets? | Summary: This paper investigates whether foundation models trained via next-token prediction learn "world models". The paper uses synthetic tasks to measure whether such models are able to generalize from their training tasks to other tasks drawn from the same distribution. The evaluation is performed using two metrics to quantify how much the model learns inductive biases that _respect_ and _distinguish_ ground-truth states, according to an expert-defined history-to-state mapping. The paper includes experiments in predicting orbital mechanics, Othello board states, and discrete position on a number line, and presents evidence that despite good performance at next-token prediction, models appear not to use the expert-defined state representations.
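A runnable version of the lattice sketch above (the boundary behavior at states 1 and 5 is an assumption here; the paper may handle edges differently):

```python
# Lattice world model: phi maps a move sequence to a final state in 1..5,
# and each synthetic task assigns one random label per state, so
# phi(x) == phi(x') implies y == y' by construction.
import random

def phi(moves, n_states=5, start=1):
    """Map a sequence of 'L'/'R'/'stay' moves to the final lattice state.
    Assumption: moves past the boundary leave the state unchanged."""
    state = start
    for m in moves:
        if m == "R":
            state = min(n_states, state + 1)
        elif m == "L":
            state = max(1, state - 1)
    return state

assert phi(("R", "R", "L")) == 2  # two steps right, one left, from state 1

def synthetic_dataset(inputs, rng):
    """Random task consistent with the world model: one label per state."""
    label_of_state = {s: rng.randint(0, 1) for s in range(1, 6)}
    return [(x, label_of_state[phi(x)]) for x in inputs]

rng = random.Random(0)
inputs = [tuple(rng.choice(["L", "R", "stay"]) for _ in range(6)) for _ in range(20)]
data = synthetic_dataset(inputs, rng)
# Same state always implies the same label, as the bullet list requires:
assert all(y1 == y2 for x1, y1 in data for x2, y2 in data if phi(x1) == phi(x2))
```

Fitting the foundation model on a small subset of `data` and scoring its held-out predictions, repeated over many such datasets, then yields the R-IB and D-IB checks in the last two bullets.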
Claims And Evidence: Claims:
1. "Framework for testing foundation models by analyzing how they adapt to synthetic tasks that share mechanisms with their underlying training domain."
- Supported. The framework itself appears sound. The orbital mechanics dataset samples random initial conditions (masses, positions, and relative velocities), and shows that the model can predict the trajectories without correctly predicting force vectors. I believe this demonstrates that the framework can be applied for the intended purpose.
- Measuring the model's ability to predict sequences it was not trained on feels like an appropriate way to test generalization, and true generalization [seems to require](https://arxiv.org/abs/2402.10877) a (causal) world model. That said, I believe the linked paper makes that claim with respect to out-of-distribution data, whereas this paper assumes the generalization is still in-distribution.
2. "Metrics that measure a model's inductive bias toward known models of reality."
- I had trouble evaluating this. I found the inductive bias definitions for respecting and distinguishing state to be a bit hard to follow. In Section 2: Implementation, a toy example would have been helpful. My impression is that if the approach fails, it could be that the predictor model is underpowered, rather than due to a lack of world model.
- The defined metrics appear to rely heavily on whether two inputs share the same underlying state. It would be helpful to quantify what fraction of the dataset contains different inputs from the same state. I'm concerned that if there aren't many such examples, the metrics might behave poorly. For example, in Othello there is a combinatorial number of states, and after a certain number of steps, it is highly unlikely that the model will see the same state twice across two different trajectories.
3. "Models can excel at their training tasks yet fail to develop inductive biases toward the true mechanisms when being adapted to new tasks."
- Mostly supported: I already mentioned the orbital mechanics experiments, and the Othello experiments also showed that the model struggles to predict the correct board state but successfully predicts the set of next legal moves.
Methods And Evaluation Criteria: The definition of a "world model" as a mapping from histories to a particular ground-truth state representation feels problematic for answering the ultimate question of whether foundation models contain world models. There are many valid and useful state representations, and if the foundation model fails to match a particular expert-defined representation, it does not mean the foundation model lacks a world model altogether.
For example, [this project](https://www.neelnanda.io/mechanistic-interpretability/othello) showed that when evaluating whether Othello-GPT had a linear representation of the board state, it matters whether the representation is of the form "this cell is black" vs. "this cell has my color", since the model plays games as both colors.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See above.
Supplementary Material: I reviewed some of the additional tables (4, 5, 8) that were linked from the main text.
Relation To Broader Scientific Literature: It is an extremely important question whether foundation models learn world models. Foundation models are the dominant paradigm in AI right now, and much of our confidence in them is predicated on the assumption that their performance at sequence prediction implies that they have learned a world model.
This paper doesn't address the question fully. Instead it addresses the question of whether a particular foundation model has learned a particular world model. A satisfying answer to this narrower question would still be a useful step towards answering the broader question.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: I wonder how useful this method can be when there are multiple acceptable state representations. It would be helpful for the paper to analyze, for example, what happens in Othello under the two representations described above.
Other Comments Or Suggestions: N/A
Questions For Authors: Can you provide a more intuitive explanation of the inductive bias definitions?
Can you provide a toy example to go along with the Section 2: Implementation?
=====================
POST-REBUTTAL UPDATE
=====================
I found the comments by reviewer 86Gp and the subsequent response very helpful in understanding the paper's main claim. I agree that explanation could have been much clearer, and I'm happy the authors are taking steps to improve it.
In the rebuttal "Lattice problem" example, I am still unclear on what exactly $y$ is in the pair $(x,y)$. I suppose it's some predicted quantity that depends on the encoded state $\phi(x)$? I would encourage the authors to make this example even more concrete by saying what specific thing is being predicted, how it depends on the state, and maybe even provide some example predictions from which we can see what the D-IB and R-IB metrics would report.
I thank the authors for providing results for the two different Othello representations. Unfortunately I'm not sure what you mean by "poor IB metrics", and I don't understand what the takeaway is here. In regards to my original question of whether the method is sensitive to the choice of representation, it sounds like the answer is no. But then why are there two different correlation numbers for the different distance metrics? I would encourage the authors to take a bit more time to explain this.
Overall, I think the weakest aspect of the paper is the clarity of the presentation, and I urge the authors to do whatever they can to clarify the points that the other reviewers and I were confused about.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive review of our paper. We respond to your comments and describe new results below; to summarize the new results, we've added:
- New experiments showing robustness to predictor in R-IB and D-IB calculations
- A more intuitive explanation of our framework
- Clarification and new analysis of multiple representations in Othello
**We hope our comments have addressed your concerns. If not, please let us know if you have any more questions we can address in the follow-up.**
> _Can you provide a more intuitive explanation of the inductive bias definitions? Can you provide a toy example to go along with the Section 2: Implementation?_
We've revised our paper to make this more intuitive. Broadly:
- A foundation model's predictions are consistent if two properties hold:
1. When two inputs map to the same state in the world model, do they have the same predictions for each synthetic task? This is measured by R-IB.
2. When two inputs have the same predictions for each synthetic task, do they belong to the same state? This is measured by D-IB.
- So R-IB and D-IB are different perspectives on measuring consistency, analogous to type-I and type-II errors in classification.
Here's an implementation example for the lattice problem:
- X is a sequence of directions ("L", "R", "stay")
- 5 states (1-5)
- phi(X) maps movement in X starting at state 1
- So phi("R", "R", "L")=2 (two states right, one state left)
- We create random synthetic datasets (X, Y) such that phi(x) = phi(x') implies y = y' for each (x,y), (x',y') pair.
- We fit a foundation model (e.g. by fine-tuning) on a very small subset of each dataset and make predictions for the held-out data points.
- R-IB: do two held-out points with the same state always have the same predictions across synthetic datasets?
- D-IB: do two held-out points with different states have unpredictable extrapolations across synthetic datasets?
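To make the last two checks concrete, here is one assumed operationalization of the pairwise scores (the paper's exact estimators may differ; all names are illustrative): R-IB as the fraction of same-state held-out pairs with matching predictions, D-IB as the fraction of different-state pairs with non-matching predictions, pooled over synthetic datasets.

```python
# Illustrative pairwise R-IB / D-IB estimators (assumed operationalization).
from itertools import combinations

def rib_dib(preds_per_dataset, states):
    """preds_per_dataset: list of {input: prediction}; states: {input: state}."""
    same_agree = same_total = diff_disagree = diff_total = 0
    for preds in preds_per_dataset:
        for x1, x2 in combinations(preds, 2):
            if states[x1] == states[x2]:
                same_total += 1
                same_agree += preds[x1] == preds[x2]
            else:
                diff_total += 1
                diff_disagree += preds[x1] != preds[x2]
    return same_agree / same_total, diff_disagree / diff_total

states = {"RRL": 2, "RLR": 2, "RR": 3}
# A predictor that respects state but merges states 2 and 3 in one dataset:
datasets = [{"RRL": 0, "RLR": 0, "RR": 1}, {"RRL": 1, "RLR": 1, "RR": 1}]
r_ib, d_ib = rib_dib(datasets, states)
assert r_ib == 1.0  # same-state inputs always predicted alike
assert d_ib == 0.5  # different-state inputs distinguished only half the time
```

A model can therefore score a perfect R-IB while its D-IB reveals that distinct states are being collapsed.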
> _My impression is that if the approach fails, it could be that the predictor model is underpowered, rather than due to a lack of world model_
We can think of two ways to interpret this question: one is about the predictor that's part of the foundation model; the other is about the predictor used to calculate R-IB and D-IB.
For the first interpretation: We're interested in understanding a foundation model's extrapolative properties in small data regimes (e.g. few shot learning) since this is how they're typically used. Thus, our metrics measure consistency with the world model, not predictor accuracy, so they wouldn't penalize an underpowered predictor.
For the second interpretation: R-IB and D-IB are based on a predictor that only uses a single binary feature. Therefore the Bayes optimal solution is nonparametrically identifiable. [This table varies the number of data points used to form R-IB and D-IB](https://imgur.com/a/ppaDzrx); empirically we find little sensitivity, because the predictor converges to the optimal solution quickly.
> _I wonder how useful this method can be when there [are] multiple acceptable state representations. It would be helpful for the paper to analyze... what happens in Othello under the two representations described above._
This is an important question. Our framework depends on the state mapping, not how it's represented. The only important quantities for D-IB and R-IB are whether two inputs have the same or different state. This means the **metrics aren't sensitive to different ways state can be represented**, so the two Othello representations have identical metrics. In comparison, sensitivity to representation makes other metrics (e.g. probes) more fragile.
However we could still use different representations to help _analyze_ poor IB metrics. D-IB measures how similar the predictions for input pairs with distinct states are. We consider two distance metrics for distinct-state pairs: standard ("cell is black") vs. relative ("cell has my color") Hamming distances. We find:
- Corr(D-IB, standard distance) = 0.094
- Corr(D-IB, relative distance) = 0.001
Interestingly this indicates states with similar standard representations are more easily confused.
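For concreteness, the two distance notions can be sketched as follows (the cell encoding 0 = empty, 1 = black, 2 = white is an assumption; the paper's actual board encoding is not specified here):

```python
# Standard ("this cell is black") vs. relative ("this cell has my color")
# Hamming distances between Othello board states. Encoding is hypothetical:
# 0 = empty, 1 = black, 2 = white.

def standard_hamming(b1, b2):
    """Fraction of cells whose absolute color differs."""
    return sum(c1 != c2 for c1, c2 in zip(b1, b2)) / len(b1)

def to_relative(board, black_to_move):
    """Re-encode cells from the mover's perspective ('mine'/'yours')."""
    swap = {0: 0, 1: 2, 2: 1}
    return board if black_to_move else [swap[c] for c in board]

def relative_hamming(b1, black1, b2, black2):
    return standard_hamming(to_relative(b1, black1), to_relative(b2, black2))

# Two boards that differ in absolute colors can be identical to the mover:
b1, b2 = [1, 2, 0, 1], [2, 1, 0, 2]
assert standard_hamming(b1, b2) == 0.75
assert relative_hamming(b1, True, b2, False) == 0.0
```

Correlating D-IB for distinct-state pairs against each distance, as in the two bullets above, then distinguishes which representation the model's confusions track.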
> _I believe... this paper assumes the generalization is still in-distribution._
This is a good point. It's crucial to also test out-of-distribution (OOD). While our submission was unclear, we do test on OOD sequences -- the only restriction on synthetic tasks is that they be consistent with the world model, not the original sampling distribution. We've changed the writing to make this clear.
> _It is highly unlikely that the model will see the same state twice for two different trajectories [in Othello]._
This is absolutely right. Going back to your earlier point, it shows the importance of testing distribution shifts. In practice, we sample inputs in Othello so that we have enough same-state examples (3,889) to make accurate estimates. Appendix B has more information but we'll highlight it in the main text. | null | null | null | null | null | null | null | null |
Deep Electromagnetic Structure Design Under Limited Evaluation Budgets | Accept (poster) | Summary: The authors present Progressive Quadtree-based Search (PQS), a novel method for electromagnetic structure (EMS) design under limited evaluation budgets. PQS leverages a quadtree-based hierarchical representation to mitigate the curse of dimensionality inherent in conventional pixel-wise optimization. In addition, it introduces a Consistency-based Sample Selection (CSS) module that leverages model prediction consistency to dynamically allocate scarce evaluation resources. PQS is benchmarked on two real-world tasks, where it substantially outperforms baseline methods in both efficiency and robustness. Comprehensive ablation studies further validate the important role of the hierarchical representation and the adaptive sample selection mechanism.
Claims And Evidence: This work provides a successful demonstration of deep learning’s application in electromagnetic structure design. By directly addressing the often-overlooked challenge of high data collection costs in real-world scenarios, it shows the potential to shorten product design cycles in industrial settings, making it well-suited for practical, large-scale applications. The claims of the paper are well-supported by evidence from experiments.
1. Experiments on two real-world tasks demonstrate that PQS yields designs with enhanced performance, with limited evaluation budgets.
2. Ablation studies confirm the importance of the QSS and CSS strategy.
3. Robustness tests indicate that PQS consistently delivers reliable performance.
Methods And Evaluation Criteria: The paper presents effective methods for addressing EMS design challenges. Its innovative quadtree-based representation reduces the problem’s dimensionality, and the CSS mechanism ensures efficient utilization of computational resources under limited evaluation budgets. Additionally, the evaluation tasks (DualFSS and HGA) are highly relevant to real-world electromagnetic structure design applications. However, some detailed issues still remain.
1. Regarding the CSS mechanism, it relies on assessing the consistency of historical predictors, but at t=0 there is only one predictor available, so it is unclear how consistency is computed. The authors should provide additional details on this aspect.
2. Although the predictor itself is not the main contribution of the methods, I noticed that the authors consistently used ResNet-50 as the architecture. Therefore, it would be interesting to discuss whether using different predictor architectures could lead to significant differences in prediction accuracy.
3. As noted in the paper, there is currently a lack of publicly available real-world datasets and benchmarks in the EMS design field. Therefore, I recommend that the authors release the annotated EMS design data, as such a resource would greatly assist future researchers in improving and advancing related methodologies.
4. The stopping condition in Algorithm 1 is inconsistent with that in Figure 2. In the former, the algorithm stops once the simulation budget is exhausted, whereas in the latter, it stops when a satisfactory solution is reached. Furthermore, Algorithm 1 should output the optimal solution found by the search, not a satisfactory solution.
Theoretical Claims: This paper demonstrates strong practical applicability and innovation, especially with the introduction of the PQS and the CSS mechanism, both of which have been experimentally validated for their effectiveness. However, the theoretical background could be expanded further. For example, it would be beneficial to explore more thoroughly how much the quadtree-based representation reduces the search space size. What is the specific relationship between the search space size and the parameter $N_{max}$?
Experimental Designs Or Analyses: The paper compares the proposed method with various baseline approaches and supports its claims with comparative evaluations, robustness tests, and ablation studies, providing solid evidence of its effectiveness. Nevertheless, some improvements are possible:
1. While the authors note that simulation time represents the primary cost within the overall optimization, it would still be beneficial to provide a more detailed computational time including simulations and model-updating—especially since PQS continuously retrains its predictor.
2. Although the authors explain in the appendix why Bayesian optimization cannot be successfully applied, the paper lacks a more detailed analysis. Since Bayesian optimization is commonly used for optimizing expensive evaluations, I recommend that the authors include a more thorough discussion on this topic.
Supplementary Material: I reviewed the supplementary material, which provides additional details about the implementation, experimental setups, and further experimental results.
Relation To Broader Scientific Literature: The key contributions of this paper are well-grounded in the existing literature, addressing key gaps.
Essential References Not Discussed: The authors have done a thorough job of addressing the essential references relevant to their contributions.
1. The paper clearly discusses key methodologies in the field of electromagnetic structure design, including both surrogate-based optimization and generative approaches.
2. The authors have made meaningful comparisons with analogous fields, highlighting the unique challenges of this task.
Other Strengths And Weaknesses: 1. The images in the appendix are excessively large. Please adjust their sizes to improve readability.
2. The citation for the Genco baseline is missing the publication year.
Other Comments Or Suggestions: 1. Figure 1 can benefit from a longer caption.
2. In the "Depth-wise Importance Assignment" section, there appear to be repetitive sentences.
3. In the "Effectiveness of Consistency-based Sample Selection" section, "Top-M" should be corrected to "Top-K".
4. In the "Quadtree-based Representation" section, "so taht" should be corrected to "so that".
Questions For Authors: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: >Q1. The CSS mechanism relies on assessing the consistency of historical predictors. However, there is only one predictor available at t=0.
**A1.** Thank you for your insightful comment. We actually train multiple predictors at t=0 rather than relying on a single one. These predictors are then used to compute and assess consistency. We will clarify this detail in the revised manuscript.
>Q2. It would be interesting to discuss whether using different predictor architectures could lead to significant differences in prediction accuracy.
>
**A2.** We have conducted further experiments to validate this. Please refer to A2 in our response to Reviewer 63Rq for more details.
>Q3. I recommend that the authors release the annotated EMS design data.
**A3.** We will release the annotated EMS design dataset upon acceptance.
>Q4. The stopping condition in Algorithm 1 is inconsistent with that in Figure 2 ... Furthermore, Algorithm 1 should output the optimal solution found by the search, not a satisfactory solution.
**A4.** Thank you for your suggestion. We will correct this in our revised version.
>Q5. For example, it would be beneficial to explore more thoroughly how much the quadtree-based representation reduces the search space size. What is the specific relationship between the search space size and the parameter $N_{max}$?
**A5.** The quadtree representation is used to reduce the complexity of the design space. Each leaf node in the quadtree represents a binary decision (either 0 or 1) for its corresponding subregion. Thus, if the current set of leaf nodes is $L$, the total size of the design space is $2^{|L|}$. The parameter $N_{max}$ limits the maximum number of leaf nodes, i.e., $|L| \leq N_{max}$, so the upper bound of the search space is $2^{N_{max}}$. In other words, a smaller $N_{max}$ effectively reduces the search space size.
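As a rough illustration of this relationship (the helper functions below are hypothetical, not the paper's code): each quadtree subdivision replaces one leaf with four, and the binary decision at each leaf yields a design space of size $2^{|L|}$, capped at $2^{N_{max}}$.

```python
def subdivide(num_leaves: int) -> int:
    """Splitting one leaf into four quadrants adds a net of 3 leaves."""
    return num_leaves + 3

def design_space_size(num_leaves: int) -> int:
    """Each leaf holds a binary decision (0 or 1) for its subregion."""
    return 2 ** num_leaves

# Start from the root cell and apply a few subdivisions.
leaves = 1
for _ in range(5):
    leaves = subdivide(leaves)

print(leaves)                     # 16 leaves after 5 subdivisions
print(design_space_size(leaves))  # 2**16 = 65536 candidate layouts

# With a cap N_max on the number of leaves, the search space is at most
# 2**N_max, versus 2**(16*16) for a flat 16x16 binary grid.
N_max = 32
assert design_space_size(min(leaves, N_max)) <= 2 ** N_max
```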
>Q6. It would still be beneficial to provide a more detailed computational time including simulations and model-updating—especially since PQS continuously retrains its predictor.
**A6.** Below, we provide a detailed breakdown of the average computational time for simulation in **Table 2** and model-updating in **Table 3**. Specifically, we measured the model-updating time under both initial and final dataset conditions for each task:
* HGA: 300 samples → 1,000 samples.
* DualFSS: 500 samples → 1,000 samples.
All model updates were consistently performed for 200 epochs.
**Table 2:** Simulation cost per iteration (seconds).
|Task|Avg.Time|
|-|-|
|HGA|5588|
|DualFSS|5837|
**Table 3:** Model updating cost per iteration (seconds).
|Task|Updating Samples|Avg.Time±Std|
|-|-|-|
|HGA|300|18.8716±0.8278|
|HGA|1000|43.8452±0.2518|
|DualFSS|500|26.9258±0.5701|
|DualFSS|1000|44.5284±0.2120|
In both tasks, simulation is the dominant cost. Model updating remains lightweight and efficient throughout the search.
>Q7. Since Bayesian optimization is commonly used for optimizing expensive evaluations, I recommend that the authors include a more thorough discussion on this topic.
**A7.** We will include a more comprehensive discussion of Bayesian optimization in our revised manuscript. | Summary: This paper proposes a novel method for electromagnetic structure (EMS) design under limited computational budgets, called Progressive Quadtree-based Search (PQS). PQS employs a Quadtree-based Search Strategy (QSS) to progressively explore the high-dimensional EMS design space and incorporates a Consistency-based Sample Selection (CSS) mechanism to optimize search efficiency under constrained evaluation resources. The method is evaluated on two real-world engineering tasks: Dual-layer Frequency Selective Surface (DualFSS) and High-gain Antenna (HGA). Experimental results demonstrate that PQS can efficiently identify high-performance designs while reducing computational costs by 75%–85% compared to baseline methods, significantly shortening the product design cycle.
Claims And Evidence: * The paper employs Surrogate-GA, which is no longer representative of state-of-the-art (SOTA) surrogate-assisted evolutionary algorithms (SAEAs). A recent survey [1] indicates that SAEAs have become the dominant approach for high-dimensional expensive optimization problems. The paper should compare PQS with SOTA SAEAs rather than a 2020-era Surrogate-GA.
* The paper utilizes ResNet50 as the predictor but lacks details on training data sources and training methodology. The dataset size is limited to 300 samples, which may be insufficient for complex EMS tasks. A larger dataset should be used, and additional experiments should evaluate the impact of dataset size on performance.
* The observation that random sampling (RS) and Surrogate-RS outperform Surrogate-GA and Surrogate-GW is unexpected, as it contradicts typical evolutionary algorithm performance trends. The paper does not provide a satisfactory explanation for this result.
[1] M. Zhou, M. Cui, D. Xu, S. Zhu, Z. Zhao and A. Abusorrah, “Evolutionary Optimization Methods for High-Dimensional Expensive Problems: A Survey,” in IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 5, pp. 1092-1105, May 2024, doi: 10.1109/JAS.2024.124320.
Methods And Evaluation Criteria: * The dataset contains only 300 samples, which may hinder generalization. The study should explore the impact of dataset size by conducting experiments with larger datasets.
* The baseline methods do not include the most recent SAEA algorithms, affecting the fairness of the evaluation.
* Only ResNet50 is used as the predictor. The study should compare different classifiers to assess their impact on performance.
Theoretical Claims: The paper does not provide theoretical justification for why QSS is superior to other dimensionality reduction approaches, such as PCA or AutoML-based search space simplification.
Experimental Designs Or Analyses: * The paper relies on older methods (e.g., Surrogate-GA, Surrogate-RS), omitting SOTA SAEAs, which undermines the validity of the comparisons.
* The predictor is trained with only 300 samples, which may lead to poor generalization. The study should analyze the predictor’s accuracy across different dataset sizes.
* The superior performance of RS and Surrogate-RS over Surrogate-GA contradicts standard evolutionary optimization trends, and the paper does not provide a satisfactory explanation.
Supplementary Material: Important predictor training details are missing, particularly regarding ResNet50 training data sources, hyperparameter choices, and training iterations. This should be addressed.
Relation To Broader Scientific Literature: The paper references some relevant works but fails to cite the latest SAEA methods. Additionally, it does not cover recent advances in EMS optimization, such as deep reinforcement learning (DRL) or Bayesian optimization (BO).
Essential References Not Discussed: I do not have more suggestions.
Other Strengths And Weaknesses: * The paper does not compare with SOTA surrogate-assisted evolutionary algorithms.
* Only 300 samples are used, which may not be sufficient to train a robust predictor.
* The better performance of RS over Surrogate-GA is counterintuitive and requires further justification.
Other Comments Or Suggestions: * Expand the dataset size and analyze its impact on the predictor’s accuracy.
* Include SOTA SAEAs for fair comparisons.
* Explain the unexpected experimental results (why RS outperforms Surrogate-GA).
Questions For Authors: I have no more questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Q1. PQS should be compared with SOTA surrogate-assisted evolutionary algorithms (SAEAs).
**A1.** Thank you for your constructive suggestion. We compared our method with TS-DDEO[1] and SAHSO[2] in Tables 1-3. PQS outperforms them in the aggregated objective by margins of 3.14-3.60 on HGA and 9.63-10.99 on DualFSS, and achieves significantly higher robustness, with 15%-19% lower variance. This stems from PQS's ability to hierarchically explore the design space and adaptively allocate the tight evaluation budget via CSS. This efficiency could significantly benefit real-world applications where evaluation budgets are tight.
**Table 1:** Comparisons on High-gain Antenna (HGA).
|Method|Agg Obj↑|Obj1↑|Obj2↑|
|-|-|-|-|
|TS-DDEO[1]|0.5201|1.0441|0.5201|
|SAHSO[2]|0.0589|0.0589|2.2410|
|**PQS (Ours)**|**3.6595**|**3.6595**|**6.4820**|
**Table 2:** Comparisons on Dual-layer Frequency Selective Surface (DualFSS).
|Method|Agg Obj↑|Obj1↑|Obj2↑|
|-|-|-|-|
|TS-DDEO[1]|5.5627|5.5627|9.0000|
|SAHSO[2]|4.2066|4.2066|8.0576|
|**PQS (Ours)**|**15.1964**|**15.1964**|**31.0443**|
**Table 3:** Robustness comparison results on High-gain Antenna (HGA).
|Method|Agg Obj↑|Obj1↑|Obj2↑|
|-|-|-|-|
|TS-DDEO[1]|0.03±0.42|0.62±0.87|0.27±0.36|
|SAHSO[2]|-0.29±0.40|0.20±0.75|0.06±0.61|
|**PQS (Ours)**|**4.34**±**0.34**|**4.45**±**0.30**|**4.53**±**0.46**|
[1] A two-stage surrogate-assisted metaheuristic algorithm..., Soft Comput. 2023.
[2] A surrogate-assisted hybrid swarm optimization..., Swarm Evol. Comput. 2022.
>Q2. Only 300 samples may hinder generalization. Authors should examine dataset sizes and classifiers' impact on performance.
**A2.** We acknowledge the importance of dataset size and model choice. However, in EMS design, simulation is costly (e.g., 559-583 seconds per evaluation, which accumulates to days over a full search). To address this, we begin with a small initial dataset (300 samples) and spend the remaining evaluation budget iteratively during optimization. This ensures that computational resources are prioritized for optimization rather than data collection.
In Table 4, we evaluated Kendall's Tau (KTau) between predicted and ground-truth rankings across dataset sizes (300-1,100) on a fixed 5,800-sample test set (10 trials). We clarify two key observations:
* Expanding the dataset from 300 to 1,100 samples maintains weak KTau correlation (<0.3), showing larger datasets don't proportionally enhance accuracy in our task.
* All models perform poorly, implying that model capacity is not the limiting factor.
These observations suggest that while gathering more data can be helpful, it is not always cost-effective in EMS design. Our PQS is designed to achieve data-efficient optimization, attaining superior performance even with a limited budget.
**Table 4:** Impact of training sample size and predictor on model performance.
|Model|KTau @300 Samples|KTau @700 Samples|KTau @1100 Samples|
|-|-|-|-|
|GoogLeNet|0.2148±0.0088|0.2610±0.0084|0.2718±0.0071|
|ResNet50|0.2175±0.0065|0.2600±0.0126|0.2896±0.0084|
|ResNet101|0.2146±0.0135|0.2605±0.0092|0.2882±0.0065|
> Q3. The observation that RS and Surrogate-RS outperform Surrogate-GA and Surrogate-GW is unexpected.
**A3.** One possible factor is **each method's reliance on the predictor for solutions**. Specifically:
* Surrogate-GA and Surrogate-GW **rely on the predictor to guide the evolution of new populations**. With a limited training set (300 samples), the predictor's accuracy suffers, misleading GA and GW optimizers to suboptimal regions.
* By contrast, RS and Surrogate-RS avoid predictor dependence: they generate candidates via uniform random sampling. **This parallels our QSS, which also avoids heavy predictor reliance** through hierarchical random exploration.
>Q4. Why QSS is superior to other dimensionality reduction approaches, such as PCA or AutoML-based search space simplification?
**A4.** Our QSS excels in **search space reduction** and **dynamic adaptation**:
* **Explicit space shrinking**: Unlike PCA that projects the design space into a continuous latent space, our QSS preserves the discrete, grid-like nature of EMS through its quadtree-based hierarchical representation, ensuring clear shrinking.
* **Dynamic adaptation during search**: PCA or feature selection methods perform static, global dimensionality reduction. In contrast, our QSS dynamically adjusts the search granularity during optimization. Initially, it explores coarse global patterns and later focuses on locally promising regions, enabling efficient exploration under tight budgets.
>Q5. It does not cover recent advances such as deep reinforcement learning or Bayesian optimization.
**A5.** We will carefully discuss and cite these works in our revised paper.
>Q6. Important training details are missing.
**A6.** We clarify training details: Batch Size: 256, Epochs: 200, Cosine Annealing LR (lr=0.01, T_max=200, η_min=1e-6), Gradient Clipping (Global L2 norm ≤1.0/batch), Optimizer: Adam, resnet_lr=0.01, fc_lr=0.01. These will be added to the manuscript. | Summary: # I have no experience in this field so I am not capable of completing a sound review. I acknowledge that some of the comment are generated with the assistance of LLM, and some parts are left blank.
The paper proposes Progressive Quadtree-based Search (PQS) for electromagnetic structure (EMS) design under limited budgets, tackling high-dimensional spaces (e.g., 10^86 configurations) and costly simulations (660-42,780 seconds each). PQS uses a Quadtree-based Search Strategy (QSS) for hierarchical exploration and Consistency-based Sample Selection (CSS) to balance exploitation/exploration. Tested on DualFSS and HGA, PQS outperforms baselines (e.g., 15.20 dB vs. 7.28 dB for DualFSS) with 1000 simulations, cutting costs by 75-85% and saving 20-48 days. Contributions include PQS, QSS, and CSS for efficient EMS design.
Claims And Evidence: Claims (PQS outperforms baselines, QSS reduces dimensionality, CSS boosts efficiency, 75-85% cost reduction) are supported by experiments (Tables 3, 6; Figures 3, 4) and metrics (Agg Obj, Kendall’s tau). Time-saving estimate (20-48 days) lacks detailed derivation but aligns with simulation reductions. Evidence is clear and convincing overall.
Methods And Evaluation Criteria: N/A (not familiar)
Theoretical Claims: No formal proofs
Experimental Designs Or Analyses: Experiments (Section 5, Appendix D) compare PQS vs. baselines (1000 vs. 7000/4000 simulations) and ablate QSS/CSS (Tables 5, 6; Figures 3, 7).
Limited to two tasks, but the conclusions hold.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Not familiar with this field
Essential References Not Discussed: Not familiar with this field
Other Strengths And Weaknesses: Strengths: Original QSS+CSS combo, significant cost savings, clear presentation (e.g., Figure 2).
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and valuable feedback. We acknowledge your concerns and would like to clarify our motivations and technical contributions.
**1. Significance of the Task & Motivation**
Electromagnetic structure (EMS) design is fundamental to modern antenna/material development, yet faces two critical challenges:
+ **Vast Design Space**: Conventional grid-based representation creates exponentially growing search spaces (e.g., a 16×16 layout can contain over $10^{77}$ potential designs).
+ **Expensive Evaluations**: Each design must be evaluated via time-consuming electromagnetic simulations (often taking minutes to hours), making large-scale evaluations impractical for industrial timelines.
Existing approaches primarily revolve around two strategies:
* **Predictor-Based Methods:** Use predictors to approximate the costly electromagnetic simulator. However, such predictors need **large amounts of high-quality data** to achieve sufficient accuracy, which is infeasible when budgets are tight.
* **Generative Approaches:** Use generative models to produce promising designs directly. They also require **tens of thousands of labeled samples** for training.
In short, **our motivation** is to **reduce dependence on large evaluation budgets** by **introducing a more efficient search mechanism** that narrows down the design space in a structured manner.
**2. Methodological Innovation**
Our proposed **Progressive Quadtree-based Search (PQS)** addresses these challenges through:
* **Quadtree-based Search Strategy:**
* **Quadtree-based Representation:**
To reduce the dimensionality, we encode the layout in a **quadtree** that recursively subdivides the structure from a coarse resolution to finer details.
* **Progressive Tree Search:**
To focus the limited budget on the most impactful design decisions, we decompose the high-dimensional search into **a tree search from global pattern to local refinement**.
* **Consistency-based Sample Selection:**
Rather than requiring a highly accurate predictor, we adopt an adaptive sample selection. Specifically:
* **Ranking-based Reliability:**
Kendall's Tau is used to measure how consistently the predictors rank candidate designs across consecutive iterations, which reveals the model's reliability in guiding the search effectively.
* **Dynamic Exploitation & Exploration:**
Based on the measured consistency, our method dynamically allocates evaluations: high consistency leads to evaluating top-ranked designs, while low consistency triggers additional exploration to uncover overlooked candidates.
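A minimal sketch of this selection rule (pure-Python Kendall's Tau; the threshold and candidate scores below are illustrative choices, not the paper's values):

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Rank correlation between two score lists over the same candidates."""
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        sign = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

def allocate(prev_scores, curr_scores, tau_threshold=0.5):
    """High rank consistency -> exploit top-ranked designs; low -> explore."""
    tau = kendall_tau(prev_scores, curr_scores)
    return "exploit" if tau >= tau_threshold else "explore"

# Illustrative predictor scores from two consecutive iterations.
prev = [0.9, 0.7, 0.4, 0.2]
stable = [0.8, 0.6, 0.5, 0.1]    # same ordering as prev -> tau = 1.0
shuffled = [0.2, 0.9, 0.1, 0.7]  # ordering disagrees -> tau = 0.0

print(allocate(prev, stable))    # exploit: evaluate top-ranked designs
print(allocate(prev, shuffled))  # explore: spend budget on extra sampling
```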
**3. Practical Validation**
Our proposed approach has already been successfully validated in real-world engineering tasks:
* **High-Quality Outcomes:** Compared to baseline methods, PQS yields designs with approximately 70% greater improvement in DualFSS design objectives.
* **Cost Savings:** Achieved **75%-85% cost reduction** vs. generative approaches.
* **Robust Performance:** Repeated experiments showed that PQS exhibited **29%-92%** lower variance compared to baseline methods.
**4. Broader Impact to ICML Community**
By bridging the gap between theoretical optimization and industrial constraints, PQS provides:
* **A scalable framework** for high-dimensional and expensive structural design problems.
* **A low-budget solution** for EMS design with limited computational resources.
We will revise the manuscript to better emphasize these aspects and would be happy to provide supplementary materials detailing industrial implementation.
> Q1. Time-saving estimate (20-48 days) lacks detailed derivations.
**A1.** Thank you for your valuable feedback. We acknowledge that the original time-saving estimate (20–48 days) lacked detailed derivation. Upon re-evaluating the calculations, we identified an error in rounding and task-specific simulation reductions. Here is the correct derivation:
+ **DualFSS Task**: The average simulation time is 583.7 seconds, and PQS requires 3,000 fewer simulations than the baseline, so the total time saved is $\frac{583.7 \times 3,000}{3,600 \times 24} \approx 20.27$ days.
+ **HGA Task**: The average simulation time is 558.8 seconds, and PQS requires 6,000 fewer simulations than the baseline, so the total time saved is $\frac{558.8 \times 6,000}{3,600 \times 24} \approx 38.80$ days.
Thus, the refined time-saving range is **20.27–38.80 days**. We apologize for the oversight and have updated the manuscript to reflect this correction. Importantly, the revised values still robustly demonstrate that our method achieves significant efficiency gains compared to generative approaches. | null | null | null | null | null | null | null | null |
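As a sanity check, the arithmetic above can be reproduced directly (figures taken from the rebuttal):

```python
SECONDS_PER_DAY = 3600 * 24

def days_saved(avg_sim_seconds: float, fewer_simulations: int) -> float:
    """Total wall-clock time saved by running fewer simulations."""
    return avg_sim_seconds * fewer_simulations / SECONDS_PER_DAY

dualfss = days_saved(583.7, 3_000)  # ~20.27 days
hga = days_saved(558.8, 6_000)      # ~38.8 days

print(round(dualfss, 2), round(hga, 1))
```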
On Mitigating Affinity Bias through Bandits with Evolving Biased Feedback | Accept (poster) | Summary: The authors mathematically analyze the feedback loop created by affinity bias in hiring processes and propose strategies to mitigate its effects. They introduce affinity bandits, a model where biased feedback evolves based on the selection frequency of each arm. Their algorithm operates without prior knowledge of bias models or initial biases, and they establish a lower bound that applies in known and unknown bias scenarios.
They show that ignoring bias leads standard bandit models, such as UCB, to suffer linear regret, even in simple cases. Additionally, when some arms are chosen infrequently, obtaining an unbiased sample for them incurs high variance. Finally, using synthetic data, they evaluate their framework against standard bandit approaches like UCB, showcasing their algorithm's effectiveness in addressing unconscious bias.
- Updated the score +1 after the rebuttal.
Claims And Evidence: The theoretical claims are well supported with mathematical proofs, and authors supplement their theory with empirical evaluations using synthetic data.
I think the authors propose a neat evolving biased feedback model. However, I am uncertain if the chosen motivation is the right one for the algorithm/method design.
Methods And Evaluation Criteria: Yes and no. The theoretical claims are well backed with mathematical proofs and the empirical results support the claims. However, I am uncertain if the chosen motivation is the right one for the algorithm/method design.
Theoretical Claims: Yes, mainly in the main paper and briefly in the appendix
Experimental Designs Or Analyses: Yes, in the main paper and the appendix.
Supplementary Material: There is no zipped supplementary material but the authors attached an appendix to the main paper. I briefly read through the appendix.
Relation To Broader Scientific Literature: The proposed evolving bandits feedback model is interesting and could potentially be a worthwhile contribution to the community. However, for the use case proposed, I am uncertain if the proposed method is the best way to address the shortcomings in hiring.
Essential References Not Discussed: Given the motivation of the paper being on affinity bias, and hiring processes, some works on homophilous relationships (e.g., "*Identification of Homophily and Preferential Recruitment in Respondent-Driven Sampling*", "*Diversity through Homophily? The Paradox of How Increasing Similarities between Recruiters and Recruits Can Make an Organization More Diverse*", among others) and social networks (e.g. Stoica et al. *Seeding Network Influence in Biased Networks and the Benefits of Diversity*) could have added more context.
Other Strengths And Weaknesses: My main concern is with the motivation. While the authors present an interesting evolving biased feedback model, I am unsure whether the chosen motivation aligns well with the algorithm's design.
Integrating the feedback loop aspect is interesting, but the current formulation is somewhat limited, and the problem setup does not fully cater to it. For instance, assuming a static environment is restrictive since real-world conditions change and true values may evolve with the biased ones. Consider a scenario where Groups A and B start with the same skill levels. If Group A is repeatedly given genuine growth opportunities, its average skill level could increase over time. This compounding effect might eventually result in Group A significantly surpassing Group B, which, by contrast, may have been denied opportunities or even disadvantaged. The reverse is also true: a group's mean skill level and/or interest in a given industry could actually decline over time due to hiring patterns that continuously disadvantage the group. Thus, repeatedly favoring one group not only reinforces biased reward allocations but may also increase the group's true underlying value, and some hiring patterns might potentially reduce a group's true reward.
Furthermore, the approach of modeling groups as arms, each with a true and biased value, inherently embeds and reinforces biases. Even if this differentiation is based on skill level, confounding factors complicate such assumptions. No group is entirely homogeneous, even for "*objective*" measurable skills.
Other Comments Or Suggestions: The main paper is well-written.
Questions For Authors: The authors propose a clean evolving bandits algorithm that they proved to succeed in a biased setting. However, the motivation does not align well with the problem setup. If the authors could refine their motivation to better match the algorithm design, I would consider raising my score to 4/5.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback on our paper. We respond to the main points and questions below. Please let us know if you require any further clarifications. We hope that the reviewer will consider raising their score if we have sufficiently addressed their concerns.
> Given the motivation of the paper being on affinity bias, and hiring processes, some works on homophilous relationships (e.g., "Identification of Homophily and Preferential Recruitment in Respondent-Driven Sampling", "Diversity through Homophily? The Paradox of How Increasing Similarities between Recruiters and Recruits Can Make an Organization More Diverse", among others) and social networks (e.g. Stoica et al. Seeding Network Influence in Biased Networks and the Benefits of Diversity) could have added more context.
Thank you for the pointers to additional related works. We will update our related works section to discuss these works as well.
> Integrating the feedback loop aspect is interesting, but the current formulation is somewhat limited, and the problem setup does not fully cater to it. For instance, assuming a static environment is restrictive since real-world conditions change and true values may evolve with the biased ones. Consider a scenario where Groups A and B start with the same skill levels. If Group A is repeatedly given genuine growth opportunities, its average skill level could increase over time. This compounding effect might eventually result in Group A significantly surpassing Group B, which, by contrast, may have been denied opportunities or even disadvantaged. The reverse is also true, a group's mean skill level and or interest in a given industry could actually reduce over time due to hiring patterns that continuously disadvantage the group. Thus, repeatedly favoring one group not only reinforces biased reward allocations but may also increase the group's true underlying value, and some hiring patterns might potentially reduce a group's true reward.
This is an important point and we are glad you brought it up. As we understand your comment, you seem to be describing a problem setting which combines ours (where feedback changes adaptively with past decisions) with various time-varying bandit problems (where the underlying reward distributions are time-varying). In order to study this more challenging setting (which generalizes many already-difficult problems in non-stationary bandits), a necessary first step is to understand what challenges arise when the underlying reward distributions do not change. As our work shows, there are many challenges which arise (e.g., in developing new lower bounds to understand fundamental difficulties, and in designing algorithms). Thus, we believe that our work gives a framework upon which one could study more challenging problem settings, like the one you describe.
> Furthermore, the approach of modeling groups as arms, each with a true and biased value, inherently embeds and reinforces biases. Even if this differentiation is based on skill level, confounding factors complicate such assumptions. No group is entirely homogeneous, even for ''objective'' measurable skills.
Combined with your previous comment, those are very interesting points which should guide the creation of more nuanced models, which would be closer to reality. We are not aware of any existing work with the level of nuance you are proposing, or even with the level of nuance we have in this work. In this paper, with adding the additional detail of affinity feedback loops, we believe we are making a significant step towards your vision. We discuss the limitations of our work in this regard in the Impact Statement, lines 454-466, and we will add a sentence about this point.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ efforts in addressing the reviewers’ feedback. However, I still see a disconnect between the paper’s motivation and technical contributions, a concern also raised by M2kH and, to some extent, TDwV. That said, I agree with the other reviewers that the paper presents novel theoretical contributions. Based on this, I will give the paper a score of 4. | Summary: This paper examines how affinity bias influences feedback loops and impacts decision-making in multi-armed bandit problems. The novel formulation assumes biased reward values, where the bias toward an arm depends on the fraction of arms with the same set of trials. The authors establish a new lower bound that is a factor of $K$ larger than in standard MAB problems. Additionally, they present an elimination-style algorithm that nearly achieves this lower bound.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I checked the main body of the work.
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: # Contribution
1. **Affinity bandit model**: The authors introduce a valuable variant of non-stationary MAB called affinity bandits. This model addresses biased feedback within a challenging setting where conventional UCB/EXP3 algorithms perform poorly. Beyond job hiring, this model shows potential for broader applicability.
2. **Theoretical lower bound**: The paper provides a comprehensive lower bound for the affinity bandit problem. The additional $K$ factor effectively demonstrates the inherent challenge of this problem formulation.
3. **Near-optimal algorithm**: The authors develop an algorithm that nearly matches the established lower bound, completing the theoretical analysis.
Essential References Not Discussed: The authors should discuss the connection to rising bandits, another important category of non-stationary MAB problems. A more detailed comparison would strengthen the paper's positioning within the broader literature.
Other Strengths And Weaknesses: The paper represents a valuable contribution to the community by introducing a novel problem formulation with comprehensive theoretical analysis through both lower and upper bounds. While I have no major concerns about the work, I suggest the authors provide a high-level intuitive explanation of why their elimination algorithm is particularly effective for affinity bandits, which would help readers better understand the algorithm's design principles and why it succeeds where traditional UCB/EXP3 approaches fail.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback on our paper. We respond to the main points and questions below. Please let us know if you require any further clarifications.
> The authors should discuss the connection to rising bandits, another important category of non-stationary MAB problems. A more detailed comparison would strengthen the paper's positioning within the broader literature.
Thank you for the suggestion. We will add a reference in our related works.
> I suggest the authors provide a high-level intuitive explanation of why their elimination algorithm is particularly effective for affinity bandits, which would help readers better understand the algorithm's design principles and why it succeeds where traditional UCB/EXP3 approaches fail.
Note that we discussed the challenges that arise in our model, particularly for algorithms such as UCB and EXP3, in Section 3 (see e.g., Figures 3-4, also Figures 6-7 in the appendix). We aimed to give intuition for why our algorithm works in the second and third paragraphs of Section 4. The essential feature of our algorithm, when compared to, say, UCB, is that it plays a round-robin strategy to ensure that the ordering of arms according to their mean feedback at each round is (essentially) the same as the ordering of the arms according to their (unobserved) mean rewards. We can add additional clarifications on this point in the final version of our paper. | Summary: Motivated by affinity biases arising from many decision-making systems, including hiring, the authors *introduced* a stochastic bandit framework that could model the hiring process (the affinity bandits setting). In this model, we have $n$ rounds of hiring, each arm $\in [K]$ corresponds to a group, and the goal is to minimize pseudo-regret with respect to the true qualification of groups $X_{t,i}$, while the bandit algorithm only has bandit feedback access to $Y_{t,i}$. The feedback model in this paper (a general one with some plausible properties) is such that, upon selection of one arm, roughly speaking, the future perceived reward of the selected arm goes up and the perceived reward of the other arms goes down.
- they *empirically* showed that naively ignoring the feedback model, or applying a simple unbiasing operation to the feedback, cannot recover the optimal performance
- they *proved* an instance-dependent lower bound for this problem, showing that it is harder than the classical stochastic bandit problem
- they provided Algorithm 1, which for large enough $n$ (granted access to $n$ to select $m_r$ appropriately) *provably* achieves an instance-dependent regret bound almost matching the lower bound
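As background for readers, the elimination-style approach referenced in this summary can be sketched with a generic successive-elimination routine for standard stochastic bandits. This is an illustrative sketch of the algorithmic family only, not the paper's Algorithm 1, whose round lengths $m_r$ and biased-feedback handling differ:

```python
import numpy as np

def successive_elimination(means, T, seed=0):
    """Generic successive elimination: pull all surviving arms round-robin,
    then drop any arm whose upper confidence bound falls below the best
    lower confidence bound.  Returns the surviving arm indices."""
    rng = np.random.default_rng(seed)
    K = len(means)
    active = list(range(K))
    counts = np.zeros(K)
    sums = np.zeros(K)
    t = 0
    while t < T:
        for a in active:                      # round-robin over surviving arms
            if t >= T:
                break
            sums[a] += rng.binomial(1, means[a])
            counts[a] += 1
            t += 1
        mu_hat = sums[active] / counts[active]
        radius = np.sqrt(2 * np.log(max(t, 2)) / counts[active])
        # Keep an arm only if its UCB is at least the best LCB.
        keep = mu_hat + radius >= np.max(mu_hat - radius)
        active = [a for a, k in zip(active, keep) if k]
    return active

survivors = successive_elimination([0.9, 0.5, 0.1], T=20000)
```

With a large enough budget the routine eliminates the two suboptimal arms and keeps only arm 0; the round-robin sampling is what keeps the surviving arms' empirical statistics directly comparable at each elimination check.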
Claims And Evidence: Theoretical results: Yes
Feedback model: makes sense intuitively, but it is not clear whether it is a novel feedback model or one already known in the computer science literature or in other fields studying biases in perception.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I followed the high-level ideas in Section 3,4. I briefly looked at Section 5.
Experimental Designs Or Analyses: No issues.
Supplementary Material: No.
Relation To Broader Scientific Literature: The observed reward model is general and it is possible that some other future work uses similar feedback models for different applications. Additionally, it is valuable to extend the work to more realistic settings in which more information about each individual is revealed to the learner, in addition to the group identity (index of the selected arm).
Essential References Not Discussed: It is not clear whether the biased feedback model used in this paper (Assumption 2.1) is a new model of perception, or it is a well-known way to model human biases. (I acknowledge there are citations to papers related to bias. However, I do not know whether the feedback model is from those works or it is novel. Also, why do they choose exactly this feedback model among all possible models? In case there are other models of bias in perception in the literature.)
Other Strengths And Weaknesses: ## Strengths
- The setting is new and valuable. It is possible that some other real-world problems can be formulated this way.
- strong technical contributions (upper bound and lower bound)
- The technical sides are very nicely written
## Weakness
There are no strict weaknesses, but I want to give some comments about the modelling assumptions. Please let me know if you agree or disagree with these points.
- In this paper, the goal is to minimize regret with respect to picking the group with the best actual value for all rounds. Now, I am wondering if this goal is inherently problematic. Indeed, seeing all members of one group as a single arm can be problematic. Imagine a case where we have two groups: Group A consists of graduates from a top-rank university, and Group B of graduates from a medium-rank university (one of the examples used in this paper). Both groups have the same population size. Assume that 50 percent of group A are good candidates for the job, and 5 percent of group B are good candidates for the job. In this situation, a plausible hiring system would hire candidates from each group in proportion to how many good candidates it contains. In this setting, having people from group A hired all the time and not giving any chance to group B is problematic. However, by the objective of the paper, only picking arm 1 (from group A) has 0 regret. (Although I acknowledge your point in the impact statement.)
- As mentioned in the Impact statement, maybe a more refined and detailed model (contextual bandits) can better capture affinity-bias-type behavior in hiring than the affinity bandit framework introduced in this paper. Maybe within each group $A$ there is a subset $A' \subseteq A$ of good candidates. Indeed, in this example, ideally, if we hire 55 people, the least affinity-biased allocation is to hire 50 from $A' \subseteq A$ and 5 from $B' \subseteq B$, just because they are all good.
- If we consider each arm as a set of skills, then, as mentioned by the authors in the Impact statement, minimizing regret is plausible, although sometimes a candidate might be considered to belong to multiple arms, perhaps if their skill set is broad. Additionally, sometimes we want to hire people with different skill sets, as they can complement each other. Again, in this case, minimizing "regret" might not be the best policy.
Other Comments Or Suggestions: If you could include potential directions for future work, that would be greatly helpful for the broader research community, especially on how we could build more refined models that capture more details about the hiring process and affinity biases in real-world scenarios, different ways to define arms, and different possible extensions to the feedback model.
Questions For Authors: I am very interested to hear your thoughts on my points about modelling assumptions in section **Other Strengths and weaknesses**
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback on our paper. We respond to the main points and questions below. Please let us know if you require any further clarifications.
> It is not clear whether the biased feedback model used in this paper (Assumption 2.1) is a new model of perception, or it is a well-known way to model human biases. (I acknowledge there are citations to papers related to bias. However, I do not know whether the feedback model is from those works or it is novel. Also, why do they choose exactly this feedback model among all possible models? In case there are other models of bias in perception in the literature.)
We are not aware of prior works which use our Assumption 2.1 as a way of modelling human biases. Our goal in choosing the bias model in Assumption 2.1 is to develop a model which captures some of the essential features of affinity bias, which we outlined on lines 144-149. While understanding regret under more general bias models is an interesting direction for future work, we remark that one cannot hope to minimize regret under arbitrary feedback models which depend jointly on the rewards and fraction of times the arm was played previously. For an example, refer to our response to Reviewer TDwV.
> In this paper, the goal is to minimize regret with respect to picking the group with the best actual value for all rounds. Now, I am wondering if this goal is inherently problematic. Indeed, seeing all members of one group as a single arm can be problematic.
We agree that in some settings, modelling a group as a single arm can be problematic (indeed, we discuss this in our Impact Statement on lines 454-466). We do agree that studying contextual bandits for the hiring problem is a natural next step. However the current setting already comes with significant technical difficulties. We hope our work can serve as a stepping stone for future work studying more refined models, including contextual bandits. | Summary: This work studies a new setting called affinity bandits, which extends the non-stationary bandits setting to capture the affinity biases. They motivate the setting with a hiring feedback loop, where people tend to hire someone with similar features. They made assumptions of the feedback bias in Assumption 2.1 and proved the regret bounds for the elimination type of algorithm.
Claims And Evidence: See "Strengths And Weaknesses"
Methods And Evaluation Criteria: See "Strengths And Weaknesses"
Theoretical Claims: See "Strengths And Weaknesses"
Experimental Designs Or Analyses: See "Strengths And Weaknesses"
Supplementary Material: Reviewed Appendix A.
Relation To Broader Scientific Literature: See "Strengths And Weaknesses"
Essential References Not Discussed: See "Strengths And Weaknesses"
Other Strengths And Weaknesses: Strengths:
- This paper is well-written, and well-motivated and the theoretical results are presented in a clear way with good intuitive explanations.
- The problem setting is interesting and useful.
- The theoretical results and intuitive explanation are interesting and novel. The analysis of the elimination algorithm in the non-stationary bias environment can be a separate interest itself.
Weakness:
- The main concern is Assumption 2.1, specifically how the weights term in Eq. 3 is defined. The assumption is selected to make Eq. 5 bounded. But it is not clear how realistic this assumption is in general in the affinity bandits setting, and also in real-world applications (e.g., if we'd like to use it in hiring events). A more detailed explanation and discussion are needed.
- Given my concern about Assumption 2.1, it would be useful to add some empirical evaluation (ideally on real-world data) to confirm that this assumption and the elimination algorithm work in practice (and are consistent with the theoretical analysis).
Other Comments Or Suggestions: See "Strengths And Weaknesses"
Questions For Authors: - Can you provide either theoretical or empirical analysis/discussion of what happens if Assumption 2.1 does not hold?
- "Why is the problem difficult" in Section 3 explains why this setting is difficult for the UCB/EXP3 type of algorithm. Is there any related work in related work (fairness, non-stationary bandits) that would provide a better framework for this new setting and can be compared in terms of theoretical results?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback on our paper. We respond to the main points and questions below. Please let us know if you require any further clarifications. We hope that the reviewer will consider raising their score if we have sufficiently addressed their concerns.
> The main concern is Assumption 2.1, specifically how the weights term in Eq. 3 is defined. The assumption is selected to make Eq. 5 bounded. But it is not clear how realistic this assumption is in general in the affinity bandits setting, and also in real-world applications (e.g., if we'd like to use it in hiring events). A more detailed explanation and discussion are needed.
Our goal in Assumption 2.1 is to develop a model which captures the essence of three components of affinity bias, which we outline on lines 144-149. This model gives a tractable framework for studying some of the challenging aspects of sequential decision-making with biased feedback, from the perspective of both upper and lower bounds.
Understanding more general bias models in our setting would be an interesting direction for future work. However, we note that some generalizations of our bias model are intractable. For instance, consider feedback of the form $Y_t = \varphi(X_t, frac_t)$, where $X_t$ is the true reward associated with the arm $A_t$ pulled at time $t$, and $frac_t$ is the fraction of times arm $A_t$ was played before time $t$. In general, there does not exist a policy with non-trivial regret guarantees for this model (assuming an initially unknown bias model).
To see this, consider two bias models (indexed by $a$ and $b$): $\varphi_a(X_t, frac_t) = 1 - X_t$ and $\varphi_b(X_t, frac_t) = X_t$, and take the environment to be a two-armed Bernoulli bandit, where the means are either $\mu_1 = p, \mu_2 = 1-p$ or $\mu_1' = 1-p, \mu_2' = p$. Suppose that an algorithm achieves sublinear regret simultaneously for means $\mu$ and $\mu'$ under one bias model (say, $\varphi_a$). Then this immediately implies that the algorithm must suffer linear regret under the other bias model. Indeed, this follows, e.g., by coupling the decisions for the bias model in environment $\mu$ with the decisions of the algorithm for the alternate bias model in environment $\mu'$. Thus, we cannot hope to prove any nontrivial result without some structure on the bias function.
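To make this point concrete, here is a minimal simulation (our own illustrative sketch, not code from the paper) in which a mean-based greedy policy that succeeds under the identity feedback $\varphi_b$ is steered to the worst arm under the inverted feedback $\varphi_a$:

```python
import numpy as np

mu = [0.9, 0.1]  # true means: arm 0 is optimal

def run(bias, T=5000, warmup=100, seed=0):
    """Explore-then-greedy on *observed* feedback under a bias function."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(2)
    sums = np.zeros(2)
    optimal_pulls = 0
    for t in range(T):
        # Round-robin warmup, then greedily follow the observed means.
        a = t % 2 if t < warmup else int(np.argmax(sums / counts))
        x = rng.binomial(1, mu[a])   # true (unobserved) reward
        sums[a] += bias(x)           # the algorithm only sees biased feedback
        counts[a] += 1
        optimal_pulls += (a == 0)
    return optimal_pulls / T

frac_identity = run(lambda x: x)       # phi_b: feedback equals the reward
frac_inverted = run(lambda x: 1 - x)   # phi_a: feedback is flipped
```

Under the identity feedback the policy plays the optimal arm almost always, while under the inverted feedback it locks onto the worst arm, illustrating why some structural assumption on the bias (such as Assumption 2.1) is necessary.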
> Given my concern about Assumption 2.1, it would be useful to add some empirical evaluation (ideally on real-world data) to confirm that this assumption and the elimination algorithm work in practice (and are consistent with the theoretical analysis).
We note that we conducted empirical evaluations of our algorithm (and some other optimistic bandit algorithms) under various bias models in Appendix F (see in particular sections F.3-F.7 and Figures 8-12), in Bernoulli and/or Gaussian bandit environments which satisfy the assumptions in the paper.
While an empirical evaluation on real-world data would be nice, we are not aware of datasets which are readily available and suitable for our problem setting. Moreover, since the main focus and contribution of our paper was characterizing the fundamental difficulties that arise under a simple model for affinity bias, we leave a more thorough empirical evaluation on real-world data to future work.
> "Why is the problem difficult" in Section 3 explains why this setting is difficult for the UCB/EXP3 type of algorithm. Is there any related work in related work (fairness, non-stationary bandits) that would provide a better framework for this new setting and can be compared in terms of theoretical results?
We are not aware of related work which provides a better framework for our setting. Please refer to Appendix A for an extended discussion on some related models to ours. | null | null | null | null | null | null |
Human-Aligned Image Models Improve Visual Decoding from the Brain | Accept (poster) | Summary: The authors tackle the task of image decoding from brain activity (EEG and MEG). On this task, most of the existing papers map brain data to pretrained vision encoders such as CLIP and DINO. The authors propose to analyze the role of these vision encoders and show that aligning them with human perception boosts performance on image retrieval tasks. They conduct experiments by analyzing different types of neural architectures for mapping brain data to vision embeddings and provide a biological analysis showing that the representations learned by these human-aligned encoders have more meaningful spectral and temporal responses compared to non-aligned ones.
Claims And Evidence: The claims are clear.
Methods And Evaluation Criteria: The method makes sense and is well justified.
Theoretical Claims: No proofs
Experimental Designs Or Analyses: The experimental part is well designed but would benefit from additional experimental results to validate the claim that human-aligned models are better at decoding.
- Evaluation on more tasks. The authors only evaluate models on image retrieval. It would be interesting to see if the findings generalize to other vision tasks such as image generation and captioning.
- Evaluation on other brain modalities. The authors choose to focus on MEG and EEG data using the THINGS datasets. Since the fMRI modality is also available in the THINGS dataset and is commonly used for image decoding, it would be valuable to examine the impact of human-aligned encoders on fMRI data to determine if the findings also apply to this modality.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper focuses on image decoding from brain activity, a field that has seen significant advancements in recent years due to the emergence of large pretrained vision models. Most existing studies involve aligning brain representations with image encoders like CLIP, which has been shown not to align well with human perception. This paper demonstrates that using an image encoder better aligned with human similarity judgments yields better decoding performance on retrieval tasks.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper is well written and easy to follow. It presents interesting findings by showing that human-aligned image encoders enhance the performance of image retrieval from brain signals, opening up new avenues for future research.
Other Comments Or Suggestions: No
Questions For Authors: The results related to the generalization to other subjects in Section 5.3 are not very clear. Indeed, could you please elaborate on why the generalization works for CLIP and OpenCLIP but not for the DINO network?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer KJCA for their constructive review and insightful suggestions. We agree that evaluating the impact of human-aligned embeddings across a broader range of vision tasks would be highly valuable. In this work, we focused on image retrieval as a direct and interpretable proxy for brain-image alignment. Nonetheless, we see strong potential for extending this framework to other tasks that rely on shared embedding spaces. As part of our ongoing work, we are exploring image reconstruction and plan to investigate whether human-aligned embeddings can also enhance brain-to-text mappings.
Regarding your suggestion on incorporating fMRI data, we fully agree. In response, we have conducted additional experiments using the Natural Scenes Dataset (NSD) which has been commonly used in similar studies. The results—provided below—show a consistent improvement in retrieval performance when using human-aligned embeddings over unaligned ones, further supporting the generalizability of our findings.
| | CLIP | | DINOv2 | | DINO | | ENSEMBLE | | OpenCLIP | | SynCLR | |
|-------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| | HA | Base | HA | Base | HA | Base | HA | Base | HA | Base | HA | Base |
| top-1 | **45.0±4.3** | 23.8±2.4 | **45.3±3.6** | 22.5±2.1 | **49.1±3.7** | 45.9±3.7 | **54.6±4.3** | 48.1±4.0 | **49.5±4.3** | 35.4±3.8 | **58.2±4.6** | 48.2±4.3 |
| top-5 | **76.6±4.2** | 53.8±4.1 | **75.5±3.5** | 49.1±3.2 | **79.7±3.7** | 76.8±3.6 | **83.4±3.7** | 79.6±3.4 | **79.2±4.1** | 67.8±4.3 | **86.7±3.6** | 78.7±3.1 |
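For reference, the top-1/top-5 numbers above are standard top-$k$ retrieval accuracies; a minimal sketch of this metric (our own illustration with hypothetical variable names, not the evaluation code used for the table) is:

```python
import numpy as np

def topk_retrieval_accuracy(query_emb, gallery_emb, k=1):
    """Fraction of queries whose true match (the gallery item with the
    same index) appears among the k nearest gallery items by cosine
    similarity."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                              # (N_queries, N_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]     # indices of the k best matches
    hits = (topk == np.arange(len(q))[:, None]).any(axis=1)
    return hits.mean()
```

Here `query_emb` would hold the embeddings decoded from brain activity and `gallery_emb` the image embeddings of the candidate set.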
**Regarding your question:**
Thank you for raising this question. Interestingly, while the original (unaligned) performance of models like DINO is comparable to that of human-aligned CLIP and OpenCLIP in the cross-subject setting, human alignment improves performance only for the latter. One possible explanation is that CLIP and OpenCLIP embeddings, due to their language supervision, have more semantically structured representations that benefit from additional alignment with human similarity judgments. In contrast, DINO and SynCLR—trained with self-supervised visual objectives—may already be well-matched to the structure of EEG signals in a way that does not benefit further from human alignment in the cross-subject setting. This suggests that the effectiveness of human alignment in promoting generalization may depend on the underlying inductive biases of the image encoder. | Summary: This paper explores the problem of decoding visual images from the brain by replacing the visual encoder with a human-aligned visual encoder. The experiments demonstrate that the use of human-aligned visual encoder effectively improves the brain-image retrieval performance.
Claims And Evidence: See Other Strengths And Weaknesses.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: See Other Strengths And Weaknesses.
Experimental Designs Or Analyses: See Other Strengths And Weaknesses.
Supplementary Material: Yes
Relation To Broader Scientific Literature: See Other Strengths And Weaknesses.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- This paper is well-written and easy to follow.
- This paper conducts extensive experiments across various EEG and image encoders, demonstrating the effectiveness of the use of human-aligned image encoders.
- This paper reveals an interesting finding that human-aligned encoders demonstrate greater compatibility with brain visual decoding tasks.
Weaknesses
- This paper focuses only on image retrieval task. However, brain decoding has another important task, reconstructing images from brain signals, which is more challenging and important. Extending the experiments to include the reconstruction task would provide a more convincing demonstration of the method’s effectiveness.
- The novelty is limited. Although effective, the model's design and alignment strategy are quite simple, i.e., aligning EEG features with human-aligned models through a contrastive loss.
- The visualization results in Fig. 4 cannot fully support the biological interpretation in Section 5.5. According to Fig. 4(b), the PSD difference between human-aligned models and original models is trivial, making it difficult to conclude that “Models trained with original embeddings focused more on Delta, while human-aligned models emphasized Alpha, Beta, and Gamma”. Additionally, in Fig. 4(c), EEG features aligned using different methods appear to focus on nearly identical brain regions, which does not sufficiently explain why the human-aligned models perform better. Further visualization across more models may be necessary to provide stronger evidence for these claims.
Other Comments Or Suggestions: See Other Strengths And Weaknesses.
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer q6J2 for their thoughtful review. Please find our responses to your concerns and suggestions below.
**Regarding the image reconstruction task:**
Thank you for highlighting this important point. We agree that image reconstruction is a key aspect of brain decoding and would be an important addition to our retrieval results. Our current focus is on retrieval as it provides a direct measure of alignment between brain activity and image embeddings—our primary objective in this work. That said, we recognize the significance of reconstruction and are exploring it in our current work.
State-of-the-art reconstruction methods typically use pretrained diffusion models based on latents from the CLIP representation space and project brain activity to that using an alignment loss such as contrastive or MSE loss (Scotti et al. 2024, Li et al. 2024, Benchetrit et al. 2024). However, a challenge is that current generative models are trained on original CLIP embeddings, so using human-aligned embeddings would require retraining or fine-tuning the generative model. This task is broader than brain-image alignment and primarily tests the general utility of human-aligned embeddings as a latent space for generative diffusion models. We thought our setup provided a more direct and easier-to-evaluate scenario that allows us to isolate the effect of human alignment better, and that is why it was our focus.
An alternative, inspired by MindEye, decouples retrieval and reconstruction. In our case, we can maintain two separate representation spaces: brain signals are aligned with human-aligned embeddings for retrieval, and with the original CLIP embeddings for reconstruction. We are also experimenting with a two-stage pipeline: brain signals are first mapped to the human-aligned space (for retrieval) and then further projected into the original CLIP space for reconstruction. This framework is implemented and under active evaluation. Preliminary results are promising, but further experimentation is needed to draw firm conclusions.
We hope our ongoing work and future directions help address your concern regarding the broader impact of the study.
**Regarding the novelty:**
Thank you for the feedback. While the model design is intentionally simple, we believe the contribution lies in the insight that brain signals are more aligned with image embeddings that have been aligned with human similarity judgments. This alignment raises the interesting question of whether the representations of currently used models are suitable for brain signals. Also, it opens new doors to further investigate the connection between human-alignment and brain signal decoding. To our knowledge, this is the first work to systematically evaluate the impact of human-aligned representations on brain-based image retrieval across multiple modalities, models, and datasets. We hope this finding encourages future work to explore cognitively grounded embeddings in brain decoding.
**Regarding the biological interpretations:**
Thank you for this important observation. We agree that the visualizations should be interpreted with caution. As noted in the paper, we performed statistical tests to assess the significance of the differences and only reported effects that met the significance threshold. However, we acknowledge that these differences—while statistically supported—are subtle and should be viewed as suggestive evidence rather than definitive explanations.
Our intent was to provide preliminary insight into how human alignment might influence model attention across spectral and spatial domains. We agree that more extensive visualizations across additional models and datasets would help strengthen these interpretations. While additional visualizations for other models and participants are included in Appendix F2, we will revise the text to better highlight the limitations of these findings.
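For concreteness, relative EEG band power of the kind compared in Fig. 4(b) is typically computed along the following lines (an illustrative periodogram-based sketch with conventional band edges; the exact procedure in the paper may differ):

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

def band_powers(signal, fs):
    """Relative power per EEG frequency band from a plain periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    total = psd[freqs >= 1].sum()               # power above 1 Hz
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}
```

For example, a pure 10 Hz oscillation would place essentially all of its relative power in the Alpha band under this decomposition.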
---
Rebuttal Comment 1.1:
Comment: After reading author's response, I decide to raise my rating. Although the method design is quite simple, the finding that brain signals are more aligned with human-aligned visual embeddings is interesting. | Summary: This paper compares the image identification (decoding) performance of human-aligned image embedding models and their unaligned counterparts. The authors find that human-aligned models generally performed better than the unaligned models in EEG and MEG, after evaluating several different base image encoders and brain signal encoders. The authors also claim that the gradients of the EEG encoder more closely match human perception when using human-aligned image embeddings yields.
Claims And Evidence: 1. The authors claim that Dreamsim performs better than the two alignment alternatives (gLocal and Harmonization) and go on to speculate about the cause. But the image encoders used to measure improvement are not the same for each alignment method, so their results are not directly comparable. In Figure 3, it is also unclear which image encoders _do_ have direct comparisons between alignment methods (i.e., which encoders appear in both 3a and 3b).
Methods And Evaluation Criteria: The authors use retrieval to evaluate the quality of their decoder.
1. I believe some of the specifics about the retrieval process (like which images are made available to the decoder) are not stated.
2. Further, I believe a task like open domain reconstruction (rather than selecting from a known set of candidates) would be more interesting, albeit significantly more computationally expensive. I think this because the test set, unlike the training set, seems to contain 200 concepts each with a single unique image. This means that the model might not need to learn to (for example) differentiate between viewpoints of a single object -- simply differentiating something like texture or color may be sufficient.
Theoretical Claims: No theoretical claims were made in this paper.
Experimental Designs Or Analyses: The analyses seem sound -- in particular, the authors are explicit about which concepts are distinct to which split of the EEG and MEG datasets. I do have a concern about an overlap in concepts in the human alignment datasets and the brain datasets. (I ask about this in the Questions section as well.)
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This work shows that human-alignment efforts in machine learning may have deeper impacts in modeling (decoding) brain processes.
Essential References Not Discussed: Unless the authors are specifically discussing previous efforts to decode still images in Lines 154-161 (which I believe they should not), discussion of the Nishimoto et al. (2011) paper belongs in Section 3.1: https://www.sciencedirect.com/science/article/pii/S0960982211009377
Other Strengths And Weaknesses: Other strengths:
1. The authors are thorough and trained a wide combination of EEG encoders and image encoders.
Other Comments Or Suggestions: 1. In Figure 3a, consider reordering the bars to match the order of the models in Tables 1-3.
Questions For Authors: 1. Were any of the concepts in the EEG and MEG datasets also present in the human alignment datasets (e.g. NIGHTS)? In particular were they part of the test split of the brain datasets?
2. At test time, which concepts/images is the encoder given access to? Only the 200 images associated with the 200 test concepts, or also the images/concepts from the training set? (If I missed this in the paper, please point me to where it was discussed.)
3. In Table 1, the base performance of DINOv2 is substantially worse than DINO, though the gap narrows after human alignment. What could be the cause of this?
4. In Table 2, the Ensemble and DINO models show no improvement after HA. What makes cross-subject decoding so different from within-subject decoding to cause this?
5. In Table 2, S1 shows a catastrophic drop in performance for OpenCLIP after HA. What is going on there? It doesn't seem to just be a bad seed.
6. Around Line 408 in column 2, the authors speculate that the datasets for Harmonization emphasize features that "require careful inspection and attention", while the RSVP setup of the experiments emphasizes the recognition of more low-/mid-level features. This would be borne out by looking at the temporal gradients -- is a shift toward later timepoints actually seen in the data? (In the appendix I saw some gradient maps for gLocal but not Harmonization.)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer 9vzB for their careful review and constructive suggestions. We appreciate your thoughtful engagement with our work.
Your concern about the image encoders differing across alignment methods is valid. However, we would like to clarify that our comparisons are always made within the same image encoder, specifically, between its original embeddings and its human-aligned embeddings. Our goal is to assess whether human alignment improves brain-image alignment for a given encoder, not to compare across encoder architectures. The inclusion of multiple image encoders was intended to test the consistency of this effect across different visual representation models. Thank you also for suggesting the inclusion of Nishimoto et al., 2011. We agree it is a relevant and influential study, and we will incorporate it into the related work section of the revised paper. Below, we address your specific questions in detail.
**Question 1:**
Thank you for this important question. Based on Fu et al. (2023), the THINGS categories used in the THINGS-EEG and THINGS-MEG datasets are not directly included in the NIGHTS dataset. The Harmonization component of NIGHTS is based on ImageNet-derived datasets. As for gLocal, it uses odd-one-out triplets from the same THINGS database, so there is likely some concept overlap between its training data and the EEG/MEG test sets.
However, we would like to emphasize two points. First, despite potential concept overlap, gLocal-aligned models do not outperform unaligned models on EEG/MEG retrieval, suggesting that overlap alone does not explain performance differences. Second, these concepts are already likely present in the pretraining datasets of the original image encoders, which are large-scale vision models trained on broad and diverse datasets. Therefore, any overlap applies equally to both human-aligned and unaligned models and does not undermine the claims of the paper.
**Question 2:**
In the main results, the encoder only has access to the 200 test concepts and their associated images during retrieval. We will clarify this more explicitly in the revised text. For completeness, we also evaluated retrieval on a larger image database—including both training and test concepts—and reported those results in Appendix E. Notably, models trained with human-aligned embeddings continue to outperform the original models in this more challenging setting.
**Question 3:**
One possible explanation is that DINOv2 embeddings may capture different visual features or emphasize different inductive biases compared to DINO, leading to a less optimal alignment with EEG representations in the unaligned setting. After human alignment, however, both models are optimized toward human perceptual similarity, which reduces the gap by bringing their representations closer to behaviorally relevant dimensions. This suggests that human alignment can partially mitigate architectural differences by enforcing a shared similarity structure.
**Question 4:**
Cross-subject decoding introduces additional variability due to individual differences in neural representations, which may overshadow the benefits of human alignment. In this setting, the EEG encoder must generalize across subjects, making it more difficult to exploit the finer-grained structure introduced by human alignment.
**Question 5:**
Thank you for catching this. The reported value is a typo—the correct performance for OpenCLIP with HA on S1 is **11.5 ± 0.55**. As noted in the paper, all results are averaged over five random seeds to reduce sensitivity to a specific seed. We will correct this in the revision.
**Question 6:**
You are right; we only included temporal gradient maps for Dreamsim and gLocal in the original submission. Based on your suggestion, we computed the gradient distributions over time for models trained with Harmonization-aligned embeddings and will include them in Appendix F1 of the revised paper. As anticipated, we do not observe a consistent shift toward earlier time points compared to the unaligned models. This suggests that Harmonization-aligned features do not rely more heavily on early, low-level EEG responses, and the alignment may not strongly alter temporal emphasis in this context.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough responses. I've increased my score. | Summary: The paper explores the application of human-aligned image encoders to enhance the decoding of visual information from brain signals, specifically EEG and MEG data. The authors propose that image encoders fine-tuned to align with human perceptual similarity judgments improve the mapping of brain activity to visual stimuli compared to standard pre-trained encoders. The main findings indicate that integrating human-aligned image encoders, such as Dreamsim, into a brain-to-image decoding framework increases image retrieval accuracy. The study employs a contrastive learning approach, aligning brain signal embeddings with image embeddings in a shared latent space using the InfoNCE loss. Comprehensive experiments across various EEG architectures, image encoders, and brain imaging modalities demonstrate consistent performance improvements. The authors also provide biological insights, showing that human-aligned models focus on early visual processing features and frequency bands linked to visual perception.
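The contrastive objective mentioned in this summary (InfoNCE over paired brain and image embeddings in a shared latent space) can be sketched minimally as follows. This is a hypothetical numpy illustration, not the paper's actual training code; all names, shapes, and the noise level are invented:

```python
import numpy as np

def info_nce(brain, image, temperature=0.07):
    """Symmetric InfoNCE: matching (brain, image) pairs share a batch index;
    every other pair in the batch serves as a negative."""
    b = brain / np.linalg.norm(brain, axis=1, keepdims=True)
    v = image / np.linalg.norm(image, axis=1, keepdims=True)
    logits = (b @ v.T) / temperature  # (n, n); the diagonal holds positives

    def xent_diag(l):
        # mean cross-entropy of the diagonal under a row-wise softmax
        m = l.max(axis=1, keepdims=True)
        log_z = (m + np.log(np.exp(l - m).sum(axis=1, keepdims=True))).ravel()
        return float(np.mean(log_z - np.diag(l)))

    # average the brain->image and image->brain directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 32))
well_aligned = info_nce(img + 0.05 * rng.normal(size=(16, 32)), img)
unaligned = info_nce(rng.normal(size=(16, 32)), img)
# a well-aligned encoder yields a much lower loss than an unrelated one
```

In the paper's setting, the gradient of this loss would be back-propagated through the EEG/MEG encoder while the image embeddings come from a frozen (human-aligned or unaligned) image encoder.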
Claims And Evidence: The primary claim—that human-aligned image encoders improve visual decoding from brain signals—is supported by extensive empirical evidence. The authors present quantitative results, such as top-1 and top-5 retrieval accuracies, across multiple participants, image encoders, and EEG architectures, showing consistent improvements with human-aligned models. The 21% improvement over state-of-the-art (from 28% to 62% top-1 accuracy with NICE and Ensemble models) is convincingly demonstrated in Figure 1 and Table 1. Additional evidence from gradient analyses (Figures 4 and 5) supports the claim that these models capture perceptually relevant signal components, aligning with biological expectations of visual processing.
However, the broader claim that these models enhance "visual decoding" in general is less substantiated, as the evaluation focuses solely on image retrieval tasks without exploring other decoding paradigms, such as image reconstruction. This limitation weakens the generalizability of the claim beyond retrieval. The evidence is clear and convincing within the scope of retrieval, but the paper overstates its implications for brain decoding as a whole without broader task validation.
Methods And Evaluation Criteria: The methods—using pre-trained human-aligned image encoders (e.g., Dreamsim) and existing EEG/MEG encoders (e.g., NICE)—are appropriate for the retrieval task, leveraging established contrastive learning frameworks (InfoNCE loss) to align brain and image embeddings. The evaluation criteria, primarily top-1 and top-5 retrieval accuracy on the Things EEG2 and MEG datasets, are standard for retrieval tasks and align with prior work (e.g., Song et al., 2024). However, the reliance on a single EEG dataset (Things EEG2) and one MEG dataset limits the robustness of the findings across diverse brain signal recordings. While the datasets are well-suited for the task, the lack of variety in datasets and tasks (e.g., reconstruction) restricts the method's applicability.
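For reference, the top-1/top-5 retrieval metric discussed here can be computed roughly as below. This is a minimal numpy sketch with synthetic embeddings; the function name, shapes, and noise level are illustrative, not taken from the paper:

```python
import numpy as np

def topk_retrieval_accuracy(brain_emb, image_emb, k=1):
    """Fraction of brain embeddings whose true image (same row index)
    appears among the k most cosine-similar candidate images."""
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sims = b @ v.T                                   # (n_trials, n_images)
    order = np.argsort(-sims, axis=1)                # best match first
    ranks = np.argmax(order == np.arange(len(b))[:, None], axis=1)
    return float(np.mean(ranks < k))

rng = np.random.default_rng(0)
images = rng.normal(size=(200, 64))                  # 200 test-concept images
brains = images + 0.3 * rng.normal(size=(200, 64))   # noisy decoded embeddings
top1 = topk_retrieval_accuracy(brains, images, k=1)
top5 = topk_retrieval_accuracy(brains, images, k=5)
```

With a 200-image candidate set, as in the Things EEG2 test split, chance top-1 accuracy is 0.5%, which is why the reported gains are substantial.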
Theoretical Claims: The paper does not present formal theoretical proofs but proposes a hypothesis: human-aligned encoders improve decoding by better capturing perceptual attributes reflected in brain signals. This is supported empirically.
Experimental Designs Or Analyses: I reviewed the experimental designs in Sections 4 and 5, focusing on the retrieval tasks. The design is sound: training on averaged EEG/MEG repetitions, using a 90/10 train/validation split, and testing on unseen images with multiple seeds ensures reproducibility and reduces overfitting. The use of paired T-tests is statistically valid, with significance levels (p<0.05) appropriately reported. The architectures and main training loss used for alignment are mostly well reported, ensuring reproducibility. However, the method for creating a human-aligned model, though referenced, was not explained clearly in the paper.
Supplementary Material: I reviewed the appendices, specifically Appendix A (dataset details), Appendix B (implementation details), and Appendix E (additional retrieved image examples). These sections provide sufficient detail on preprocessing, hyperparameters, and qualitative results, enhancing transparency.
Relation To Broader Scientific Literature: The paper mainly builds upon the Dreamsim work (Fu et al., 2023) for its main results, demonstrating its effectiveness over classical vision models such as CLIP (Radford et al., 2021). In terms of decoding, the paper is related to "Decoding Natural Images from EEG for Object Recognition" (Song et al., 2024), and takes a step further by utilizing human-aligned image encoders for contrastive learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths: The paper’s strength lies in its thorough experimentation—spanning multiple encoders, modalities, and participants—demonstrating robustness within the retrieval task. The idea of leveraging human-aligned models is conceptually sound, addressing the semantic collapse in text-paired contrastive learning (e.g., CLIP), and the biological insights (Section 5.5) add depth. The clarity of presentation, with figures (e.g., Figure 5) and tables, enhances accessibility.
Weaknesses: Originality is limited, as the core methods (Dreamsim, NICE) are borrowed, and the contribution is an application rather than a novel algorithmic advance. The significance is tempered by the narrow focus on retrieval, omitting tasks like reconstruction, which are critical for general brain decoding (e.g., Scotti et al., 2023, used Stable Diffusion for reconstruction). The reliance on a single EEG dataset (Things EEG2) further limits generalizability. While the direction is promising, the work feels incremental, reinforcing the utility of human-aligned models rather than breaking new ground.
Other Comments Or Suggestions: N/A
Questions For Authors: Why was image reconstruction not explored alongside retrieval to support the broader claim of improved visual decoding? A response demonstrating feasibility or plans for reconstruction could strengthen the paper’s significance, potentially shifting my recommendation to weak accept. Without this, the claim feels overstated.
Can you justify the exclusive use of the Things EEG2 dataset and MEG dataset, given other image-paired neural datasets (e.g., Allen et al., 2022) could test robustness?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank reviewer ewMG for their thorough review and constructive feedback. Below, we provide our response to your questions as well as some clarifications on the points you raised.
**Regarding the image reconstruction task:**
Thank you for this thoughtful comment. We agree that visual decoding goes beyond retrieval, and incorporating reconstruction would strengthen our claims. We will revise the framing to more clearly reflect our focus on evaluating whether human-aligned image embeddings improve alignment with brain activity. Since retrieval directly reflects this alignment, it was our primary evaluation metric.
State-of-the-art reconstruction methods typically use pretrained diffusion models based on latents from the CLIP representation space and project brain activity into that space using an alignment loss such as a contrastive or MSE loss (Scotti et al., 2024; Li et al., 2024; Benchetrit et al., 2024). However, a challenge is that current generative models are trained on original CLIP embeddings, so using human-aligned embeddings would require retraining or fine-tuning the generative model. This task is broader than brain-image alignment and primarily tests the general utility of human-aligned embeddings as a latent space for generative diffusion models. We thought our setup provided a more direct and easier-to-evaluate scenario that better isolates the effect of human alignment, which is why it was our focus.
An alternative, inspired by MindEye, decouples retrieval and reconstruction. In our case, this means maintaining two separate representation spaces: brain signals are aligned with the human-aligned embeddings in one space (for retrieval) and with the original CLIP embeddings in the other (for reconstruction). We are also experimenting with a two-stage pipeline: brain signals are first mapped to the human-aligned space (for retrieval) and then further projected into the original CLIP space for reconstruction. This framework is implemented and under active evaluation. Preliminary results are promising, but further experimentation is needed to draw firm conclusions.
We hope our ongoing work and future directions help address your concern regarding the broader impact of the study.
**Regarding the use of other datasets:**
Thank you for this question. Our primary focus is on EEG, as decoding performance in this modality remains relatively low and presents clear opportunities for improvement. We included MEG due to its similar input structure, allowing us to use the same architectures and demonstrate consistent gains from aligning neural signals with human-aligned image embeddings.
We agree that including other modalities, such as fMRI, could further strengthen our claims. In response to your suggestion, we extended our experiments to the NSD dataset (Allen et al., 2022) and observed similar improvements in retrieval performance using human-aligned embeddings. These results will be included in the revision and reinforce the generalizability of our findings across neural recording modalities.
| | CLIP HA | CLIP Base | DINOv2 HA | DINOv2 Base | DINO HA | DINO Base | ENSEMBLE HA | ENSEMBLE Base | OpenCLIP HA | OpenCLIP Base | SynCLR HA | SynCLR Base |
|-------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| top-1 | **45.0±4.3** | 23.8±2.4 | **45.3±3.6** | 22.5±2.1 | **49.1±3.7** | 45.9±3.7 | **54.6±4.3** | 48.1±4.0 | **49.5±4.3** | 35.4±3.8 | **58.2±4.6** | 48.2±4.3 |
| top-5 | **76.6±4.2** | 53.8±4.1 | **75.5±3.5** | 49.1±3.2 | **79.7±3.7** | 76.8±3.6 | **83.4±3.7** | 79.6±3.4 | **79.2±4.1** | 67.8±4.3 | **86.7±3.6** | 78.7±3.1 | | null | null | null | null | null | null |
FloE: On-the-Fly MoE Inference on Memory-constrained GPU | Accept (poster) | Summary: Mixture-of-experts have become a popular way to scale up the transformer models these days, but the large model scale creates challenge when deploying the model under limited resources.This paper introduces FloE, an inference system designed for on-the-fly MoE inference on consumer-grade GPUs. FloE integrates quantization, offloading, prefetching, and optimized kernels to reduce memory overhead and latency with limited performance degradation. The authors evaluate their approach against multiple baselines.
Claims And Evidence: Most claims in this paper are either supported by evidence or well-known facts in this area.
Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-aligned with the problem at hand.
Theoretical Claims: The theoretical claim and proof in App. B look reasonable.
Experimental Designs Or Analyses: The experimental setup is well-structured and appropriately evaluates the proposed approach. However, one minor issue is the absence of Mixtral-GPU performance results in Section 3.2. While HQQ Int2 seems to reflect the same setting, the authors should clarify this explicitly.
Supplementary Material: I reviewed the supplementary materials but could not find code assets related to the optimized kernels discussed in Section 2.4. Providing these resources would enhance the clarity and reproducibility of this section. **I strongly encourage the authors to include such details during the rebuttal period.**
Relation To Broader Scientific Literature: The proposed method builds upon prior work in deploying MoE models on personal devices, offering improvements in efficiency and performance preservation.
Essential References Not Discussed: I'm not aware of any essential references that were not discussed.
Other Strengths And Weaknesses: The paper is well-written, but the absence of code assets limits reproducibility.
Other Comments Or Suggestions: Figures 6–8 use color schemes that are difficult to read. Additionally, Figure 8's caption contains a missing space.
Questions For Authors: - What is the latency of the predictor described in Section 2.3?
- Given that the proposed quantization method outperforms existing approaches, could it also be applied to dense models, particularly due to the similarities with the SwiGLU module?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer ZXAh,
We sincerely appreciate your recognition and valuable suggestions. Below, we summarize and respond to the **issues**, **suggestions**, and **questions** you raised.
**`[Minor Issue 1]` The absence of Mixtral-GPU performance results in Section 3.2. While HQQ Int2 seems to reflect the same setting, the authors should clarify this explicitly.**
**`[Response]`** We sincerely appreciate you pointing out the issue with our inconsistent terminology. Indeed, **HQQ-INT2 refers to the performance of Mixtral-GPU on downstream tasks**. We utilized HQQ to quantize the experts to INT2, enabling the entire Mixtral-8×7B model to fit within GPU memory. In future versions of the paper, we will ensure the consistency of the configurations for HQQ-INT2 and Mixtral-GPU by clearly stating their alignment.
**`[Suggestion 1]` I strongly encourage the authors to include code assets during the rebuttal period.**
**`[Response]`** We genuinely hope that our efforts can contribute to the community by facilitating the deployment of MoE models on consumer-grade GPUs and edge devices. We have provided the current version of **our code via an anonymous link** below **[Code Link of FloE](https://anonymous.4open.science/r/floe-5843)**, and we plan to open-source it after completing the adaptation for more models.
**`[Suggestion 2]` Figures 6–8 use color schemes that are difficult to read. Additionally, Figure 8's caption contains a missing space.**
**`[Response]`** For later versions of the paper, we will **refine the color schemes** and **captions for Figures 6-8**.
**`[Question 1]` What is the latency of the predictor described in Section 2.3?**
**`[Response]`** We measured the latency of the predictor over 500 forward passes and calculated the average values. The results show that the inter-expert sparse predictor has a latency of **0.11 milliseconds**, while the intra-expert predictor's latency is **0.27 milliseconds**. In comparison, the average execution time for a single FloE Transformer block is **5.74 milliseconds**. These latencies account for only **1.9%** and **4.7%** of the total execution time, respectively, indicating that they have almost no significant impact on the model's generation speed.
**`[Question 2]` Given that the proposed quantization method outperforms existing approaches, could it also be applied to dense models, particularly due to the similarities with the SwiGLU module?**
**`[Response]`** The evaluation methods for **quantization and sparsification sensitivity are equally applicable to dense models**. The **theoretical proof presented in Appendix B** is also **generalizable** and can be applied to dense models. The tables below show our preliminary results on the dense Mistral-7B model. Moving forward, we plan to further explore how to compress the size of MLPs in dense models to improve inference speed and reduce deployment costs. This will be one of the key directions for our future work.
Looking forward to hearing from you!
Best regards,
Authors
---
### `Code Link of FloE:`
https://anonymous.4open.science/r/floe-5843
---
### `Results Table:`
#### **Quantization Sensitivity of Mistral-7B**
| Quantization | INT8 | INT4 | INT3 | INT2 | INT1 |
|--------------|--------|--------|--------|---------|----------|
| **gate** | 5.252 | 5.271 | 5.350 | 5.839 | 1025.0 |
| **down** | 5.252 | 5.309 | 5.544 | 13.51 | 12022 |
| **up** | 5.552 | 5.264 | 5.324 | 5.753 | 301.28 |
#### **Sparsification Sensitivity of Mistral-7B**
| Sparsification | 50% | 60% | 70% | 80% | 90% |
|----------------|----------|----------|----------|-----------|-----------|
| **gate** | 6.91 | 8.67 | 18.76 | 2902 | 21857 |
| **down** | 5.29 | 5.35 | 5.503 | 5.940 | 7.92 |
| **up** | 5.46 | 5.74 | 6.404 | 8.581 | 20.47 |
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response, especially the open-sourcing effort. The response addressed my concern, and therefore I have increased my rating to 4.
---
Reply to Comment 1.1.1:
Comment: We are deeply grateful for your timely response and for recognizing our efforts. Your support is incredibly meaningful to us, and we truly appreciate your kind acknowledgment. Thank you so much for your encouragement—it means a lot! | Summary: The paper introduces an on-the-fly inference system (called FloE) for Mixture of Experts models on memory-constrained GPUs. The utilization of the limited GPU memory is optimized by a hybrid compression scheme, especially focused on intra-expert sparsity - while still utilizing inter-expert sparsity. When the memory required by all the experts exceeds the GPU memory, some of the experts parameters are offloaded to CPU memory, and then loaded onto the GPU when required. This shifts the LLM decoding bottleneck from memory-bound to I/O-bound, because of the bandwidth limitations of the PCIe bus.
A study of the activation distribution within experts shows that many activations are very close to zero. This led the authors to propose a magnitude-based activation sparsification strategy, in which activations close to zero are set to exactly zero, eliminating the corresponding weight computations (and therefore transfers) during inference.
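The magnitude-based strategy described above can be illustrated with a small numpy sketch of a SwiGLU-style expert. All names, shapes, and the percentile threshold here are invented for illustration; this is not the paper's implementation:

```python
import numpy as np

def silu(z):
    return z / (1.0 + np.exp(-z))

def sparse_expert_forward(x, w_gate, w_up, w_down, sparsity=0.9):
    """Zero the `sparsity` fraction of hidden channels whose gate activations
    are closest to zero; the matching w_up columns / w_down rows are then
    never computed (and would never need to be transferred)."""
    a = silu(x @ w_gate)                      # (hidden,) gate activations
    keep = np.abs(a) >= np.quantile(np.abs(a), sparsity)
    h = a[keep] * (x @ w_up[:, keep])         # compute survivors only
    return h @ w_down[keep, :]

rng = np.random.default_rng(0)
d, hidden = 32, 128
x = rng.normal(size=d)
w_gate = rng.normal(size=(d, hidden))
w_up = rng.normal(size=(d, hidden))
w_down = rng.normal(size=(hidden, d))

dense = (silu(x @ w_gate) * (x @ w_up)) @ w_down
sparse = sparse_expert_forward(x, w_gate, w_up, w_down, sparsity=0.9)
```

With `sparsity=0.0` the function reduces exactly to the dense expert; at high sparsity only the surviving columns/rows ever need to cross the PCIe bus, which is the I/O saving the paper targets.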
The experimental evaluation shows some of the advantages of FloE, and compares it with other schemes.
Claims And Evidence: Intra-expert sparsity can be exploited via compression techniques, resulting in significant speedups of MoE execution on consumer-grade hardware with limited memory. Empirical evaluation is presented.
Methods And Evaluation Criteria: The experimental evaluation and comparison with other schemes are reasonable.
Theoretical Claims: I did not check the Appendices.
Experimental Designs Or Analyses: Yes, I followed most of the experimental evaluation.
Figure 10 is the one I found most interesting, showing the performance. On some of the larger corpus tasks (e.g., MMLU), the performance goes down quite a bit compared to Mixtral-8x7B (60.1 and 65.4 vs. 69.5). If we accept degradation in performance, then it may be possible to use smaller models without any compression that are at the same performance level. Therefore, it may be good to compare not just other compression methods, but also non-compression methods on smaller models with the same performance.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper belongs to a line of work concerned with making large model perform inference efficiently on edge hardware, typically at single batch size; offloading and compression techniques are used.
Essential References Not Discussed: Not that I can suggest.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: No questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer bhnT:
Thank you for recognizing the importance of the research problem, as well as the soundness of our methodology and experiments. Below, we address the key concerns regarding **MMLU performance** and **comparisons with non-compressed baselines**:
**1. FloE achieves competitive performance on MMLU.**
As you noted, MMLU performance is relatively sensitive to sparsity, which is reasonable given its complexity as a large-scale, multi-task NLP benchmark covering 57 subjects. Similar **sparsity-sensitive trends** are observed in other challenging tasks such as GSM8K (math reasoning) and HumanEval (code generation), which **aligns with the observations in Sirius [1]**.
However, FloE demonstrates the **best performance retention** compared to other baselines (see Figure 10 of the original manuscript). Specifically, as shown in the table below, at **90% sparsity**, CATS almost completely fails on these three tasks, achieving an average score of only **16.2**, while FloE maintains a score of **43.8**. At **80% sparsity**, FloE retains over **93%** of the base model's capabilities.
**2. Non-compressed baselines exhibit limited performance.**
To address your concerns, we have used **Mistral-7B** and **Llama-3.2-3B** as baselines for non-compressed methods.
**(1) Clarification of Fair Evaluation Setup.**
To ensure that the activated parameter count of FloE aligns with that of similarly sized non-compressed models, we applied **INT8 quantization** to the attention layers of FloE at **90% sparsity**. This adjustment ensures that the memory footprint of FloE is comparable to that of Llama-3.2-3B in its **FP16** configuration. For FloE at **80% sparsity**, the memory usage remains consistent with the original setup and matches that of Mistral-7B. Furthermore, we have supplemented the experimental results for **MMLU** (5-shot), **GSM8K** (8-shot), and **HumanEval**, providing a comprehensive comparison between FloE and the baseline models.
**(2) Analysis of New Results.**
Not only on MMLU, but also on complex tasks such as GSM8K and HumanEval, various sparse methods face significant challenges, with performance tending to degrade more rapidly. FloE nonetheless demonstrates the best performance retention: at **90% sparsity**, CATS almost completely fails on these three tasks, with an average score of only **16.2**, whereas FloE maintains **43.8**; at **80% sparsity**, FloE retains over **93%** of the base model's capabilities.
Even smaller non-compressed models struggle to match the performance of large compressed models on complex reasoning tasks. On **MMLU**, **GSM8K**, and **HumanEval**, **FloE** at **90% sparsity** outperforms both **Mistral-7B** and **Llama-3.2-3B**. Moreover, **FloE** at **80% sparsity** preserves performance **closest** to the base model, demonstrating its clear advantage in these tasks. These results highlight that compressing large models can be more beneficial than relying on smaller non-compressed models, particularly for demanding reasoning tasks.
In summary, **FloE not only achieves superior performance retention under high sparsity but also surpasses smaller non-compressed models across multiple benchmarks**. This underscores the effectiveness of our approach in balancing model efficiency and task performance, especially in scenarios where computational resources are limited.
We believe these findings strongly reinforce FloE’s robustness and practicality, making it a **compelling choice** for deploying MoE-based LLMs in **resource-constrained environments**.
For users with limited resources who cannot deploy MoE models, we highly recommend FloE as an efficient and practical alternative, delivering high performance while significantly reducing computational costs.
Looking forward to hearing from you!
Best,
Authors
---
### `Results Table:`
| Model | GSM8K-8 shot (Acc) | Humaneval-0 shot (Pass@1) | MMLU-5 shot (Acc)| Average |
|---------------|--------------|-------------|-------------|---------------|
| Base Model | 58 | 33.5 | 69.5 | 53.67 |
| FLoE-80 | 51.7 | 32.3 | 65.4 | 49.8 |
| CATS-80 | 31.1 | 28.7 | 61.7 | 40.5 |
| FLoE-90 | 40.9 | 30.5 | 60.1 | 43.83 |
| CATS-90 | 2.42 | 8.54 | 37.7 | 16.22 |
| Mistral-7B | 39.4 | 29.2 | 62.5 | 43.7 |
| Llama-3.2-3B | 26.6 | 25.6 | 56.4 | 36.2 |
---
[1] Zhou, Y., Chen, Z., Xu, Z., Lin, V., & Chen, B. Sirius: Contextual Sparsity with Correction for Efficient LLMs. NeurIPS, 2024. | Summary: The paper introduces FloE which is an inference system to run MoE models on memory constrained GPUs. FloE includes various techniques (1) compression coupled with (2) dual predictors.
(1) The work suggests that there is internal sparsity within experts: activations near zero can be set to exactly zero during inference using magnitude-based sparsification. The work also suggests that quantizing the different projection matrices at various bit-widths can preserve perplexity well.
Also, the paper proposes some system optimizations such as development of efficient sparse kernels and compact asynchronous transfer.
This results in a significant speedup in running MoE models.
Claims And Evidence: It seems like the main claims are backed by data Figure 2, 3.
The ideas seem to make sense.
Methods And Evaluation Criteria: Evaluations seem to be reasonable.
Theoretical Claims: I was not able to scrutinize the theoretical proofs in the supplementary material.
Experimental Designs Or Analyses: More data may be helpful but overall it seems to be reasonable.
Supplementary Material: I only looked at some key results in the supplementary material.
Relation To Broader Scientific Literature: Common understanding in the field is that mixtures of experts are good for training high-performance models. However, they still incur high memory usage and may lead to low GPU utilization. The paper focuses on the first issue and claims that the large size of activated experts, which overburdens the limited PCIe bandwidth, is a challenge that limits the use of MoEs in latency-critical scenarios.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: + Ideas presented in this paper may help reduce the memory burden of running MoE based networks.
Other Comments Or Suggestions: Minor issue, but the paper is very difficult to read as it skips proper description of the terminologies used in the paper. Although it is understandable that it is challenging to put a lot of content into short 8 pages, I believe a little section to describe some background would be very helpful for readers.
Questions For Authors: The key question I have about the paper concerns GPU utilization with and without this work. Although a big memory bottleneck in MoE adoption is that "the large size of activated experts overburdens the limited PCIe bandwidth", another big contributor is the low GPU utilization that spikes up the cost of running services using MoE. I think it would be great to understand how GPU utilization is impacted.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer u26i,
We sincerely thank you for your thorough and thoughtful review of our submission. Your feedback is invaluable, and below we provide a concise response to the concerns you raised regarding background material and GPU utilization.
**`[Minor issue]` The paper is very difficult to read as it skips proper description of the terminologies used in the paper. Although it is understandable that it is challenging to put a lot of content into short 8 pages, I believe a little section to describe some background would be very helpful for readers.**
**`[Response]`** Thank you for your understanding regarding the difficulty of including rich content within the 8 pages. We agree that a more thorough background would be helpful for readers. As such, we have placed the related work on expert offloading and sparsity in LLMs in Appendix A, with a concise introduction and positioning provided in **Lines 105-108, Section 2, Page 2 of the main text**. We would be happy to **move Appendix A to Section 2** if given the opportunity to revise.
In **Appendix A**, we introduce **experts offloading** and **sparsity in LLMs**, two key areas critical to the efficient deployment of large language models. Experts offloading addresses the memory bottlenecks caused by the massive parameter counts in Mixture-of-Experts (MoE) models, with frameworks like Llama.cpp and DeepSpeed Inference transferring expert weights between VRAM and DRAM. However, limited PCIe bandwidth leads to communication delays, and existing prefetching strategies face tradeoffs in accuracy, latency, and scalability, especially under multi-expert activation. Sparsity in LLMs, on the other hand, minimizes computational and memory overhead through techniques like weight pruning and activation sparsity. While these methods show promise, they are often hindered by performance degradation, hardware inefficiency, or limited adaptability to modern architectures like SwiGLU.
Our work builds on these foundations by addressing the gaps in both areas. We propose an approach for on-the-fly inference in MoE models and explore contextual sparsity techniques tailored for modern architectures. By optimizing parameter utilization and reducing hardware bottlenecks, our solutions aim to **improve the efficiency, accuracy, and practicality of deploying MoE models in resource-constrained environments**.
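As a purely schematic illustration of the prefetching idea discussed above: real systems overlap PCIe transfers with GPU compute via CUDA streams and pinned memory, not Python threads, and every name in this toy sketch is invented:

```python
import threading
import queue
import time

def prefetching_inference(layers, fetch_time=0.01, compute_time=0.01):
    """Compute layer i while a background thread already fetches the expert
    weights for layer i+1, hiding (most of) the transfer latency."""
    fetched = queue.Queue(maxsize=1)   # one-layer lookahead buffer

    def fetcher():
        for layer in layers:
            time.sleep(fetch_time)     # stand-in for a DRAM -> VRAM copy
            fetched.put(f"weights[{layer}]")
        fetched.put(None)              # sentinel: no more layers

    threading.Thread(target=fetcher, daemon=True).start()
    outputs = []
    while (w := fetched.get()) is not None:
        time.sleep(compute_time)       # stand-in for the expert's matmuls
        outputs.append(w)
    return outputs

outs = prefetching_inference(range(8))
```

With perfect overlap, the wall-clock cost approaches one fetch plus the per-layer maximum of fetch and compute time, rather than their sum; mispredicted experts break this overlap, which is why accurate predictors matter.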
**`[Key Issue]` Key question I have about the paper is on the GPU utilization w/ and w/o this work. Although memory is a big bottleneck in MoE adoption is "the large size of activated experts overburdens the limited PCIe bandwidth", another big contributor is the low GPU utilization that spikes up the cost of running services using MoE. I think it would be great to understand how GPU utilization is impacted.**
**`[Response]`** **Two challenges stemming from distinct scenarios.** Your analysis of the bottlenecks in MoE deployment is insightful. Indeed, PCIe transmission bandwidth and GPU utilization represent two critical challenges that arise from **distinct scenarios and are driven by different underlying causes** in the current deployment of MoE models.
**The first challenge, which occurs in scenarios with limited GPU memory resources, compels the heterogeneous storage of model weights.** Consequently, substantial latency occurs during inference due to the transfer of parameters between DRAM and VRAM.
**The second challenge arises from an imbalance between computational load and memory usage in GPU-deployed MoE inference scenarios.** This frequently leads to suboptimal GPU utilization, especially when processing small-batch requests, leaving GPU cores underutilized.
**Our work primarily focuses on addressing the first challenge.** Our work reduces the latency of model inference caused by weight transfer while maximizing the preservation of model accuracy. This is achieved by designing an ingenious hybrid compression strategy and pairing it with a corresponding prediction mechanism. Though our solution does not directly resolve GPU utilization concerns, we would be happy to share additional insights into how our approach interacts with GPU performance. We tested on the ShareGPT corpus and measured the **GPU utilization rate at 63.2%** during 50 forward passes with 20 tokens each. **Given the relatively large matrix dimensions of MoE models compared to consumer-grade GPUs, we have observed that GPU utilization remains reasonably high in practice.**
**The research on the second challenge is orthogonal to our work.** The challenge of GPU utilization that you raised is indeed one of the major challenges in MoE deployment. It is also a key direction for our future research and exploration. We deeply appreciate your feedback and hope to continue advancing our understanding as we work toward optimizing MoE deployment.
We hope the above clarifications address your concerns and would greatly appreciate your further feedback.
Cheers,
Authors | Summary: The paper presents FloE, a system for on-the-fly inference of Mixture-of-Experts (MoE) models on memory-constrained GPUs. It addresses the challenge of high memory and I/O overhead in MoE inference by introducing a hybrid compression strategy, which combines contextual sparsification of gate and down projection matrices with ultra-low-bit quantization of up projection matrices. Additionally, FloE leverages learning-based expert sparsity predictors to reduce the overhead of expert offloading while maintaining inference efficiency. Experimental results show that FloE achieves a 9.3× parameter compression per expert, enables inference on a GPU with only 11GB VRAM, and delivers a 48.7× speedup over DeepSpeed-MII on an RTX 3090, with only 4.4%–7.6% accuracy degradation.
Claims And Evidence: The paper makes claims about compressing experts in MoE models while maintaining accuracy, demonstrating 4.4%–7.6% accuracy degradation through empirical results. However, this trade-off is non-trivial, as it remains unclear how compression affects routing behavior and knowledge retention within experts.
A key concern is the lack of theoretical justification for why the proposed hybrid compression strategy—contextual sparsification for gate/down projections and ultra-low-bit quantization for up projections—preserves model performance. While empirical results show competitive accuracy, the paper does not provide a formal analysis of how different compression techniques influence expert activation patterns, routing stability, or downstream generalization.
Without a deeper theoretical understanding, it is difficult to assess whether the observed accuracy retention is inherent to the method or merely specific to the evaluated models and tasks. This raises concerns about the generality of the approach across different MoE architectures (e.g., DeepSeek-MoE, Switch Transformers) and long-form inference stability, where routing mispredictions might accumulate.
To strengthen its claims, the paper would benefit from:
- A theoretical analysis of how expert sparsification and quantization affect routing distributions and model expressiveness.
- A deeper investigation into failure cases, such as whether certain compressed experts become underutilized or degrade faster than others.
- Additional studies on compression effects across different MoE models and more diverse tasks, to validate generalization beyond Mixtral-8×7B. For instance, how does DeepSeek-v2 or v3 behave in your system? Other open-source MoE models also include the Arctic model from SnowFlake.
Methods And Evaluation Criteria: The paper presents a strong evaluation setup, including multiple baselines (DeepSpeed-MII, Mixtral-Offloading, Fiddler), datasets, and diverse GPU configurations (RTX 3090, A100, H100, A6000). This comprehensive benchmarking makes the results more convincing. However, testing on more advanced MoE models (e.g., DeepSeek-MoE, Switch Transformers) would provide stronger evidence of FloE’s generalizability across architectures with different routing mechanisms.
A 7% accuracy loss is non-trivial, raising the question of whether model distillation could achieve a similar efficiency gain with lower accuracy degradation. Comparing FloE with distilled MoE models or smaller dense alternatives would clarify its trade-offs. I would not expect a real comparison here but I would need some justification that 4 -7 % accuracy loss is something we can expect better than distillation and make the proposed method valuable for the MoE model community (the proposed approach is indeed easier to be deployed since it does require the expensive distillation process, which I have been convinced). Additionally, the paper could discuss how much accuracy loss is acceptable in real applications, as the impact varies across tasks.
Theoretical Claims: Like I said above, strong theoretical understanding would significantly strengthen the soundness of the proposed approach.
Experimental Designs Or Analyses: Check my review above.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Yes, it is related to a broad scientific literature, since it aligns with key ML topics, such as MoE architecture, compression, sparsity and system designs.
Essential References Not Discussed: This paper did a good job of covering most key references in MoE inference systems.
Other Strengths And Weaknesses: See my review above.
Other Comments Or Suggestions: See my review above.
Questions For Authors: See my review above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer oREx,
Thanks for your careful review. We summarize and address your key concerns as follows:
**`[Suggestion 1]` A theoretical analysis of how expert sparsification and quantization affect routing distributions and model expressiveness.**
**`[Response]`** In **Appendix B**, we provide a **detailed theoretical proof explaining the sparsity sensitivity differences observed in models with SiLU activations**. Based on the distribution characteristics of the three matrices, we demonstrate why the output activations of the up projection exhibit lower sparsity sensitivity. Additionally, **in lines 190 - 201 (Section 2.2.2)**, we offer qualitative analysis regarding the quantization sensitivity differences.
The impact of compression methods on routing was evaluated by performing 500 forward passes on the ShareGPT corpus. We measured the shift in routing logits caused by compression and found the results to be consistent with downstream task performance. Our method induces significantly less distortion to the routing distribution compared to other baselines with the same sparsity ratio. This observation aligns with the theoretical proof provided in Appendix B, further validating our approach.
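As a sketch, the shift numbers in the results table could be computed along these lines, assuming they are mean per-token cosine similarities between router logits with and without compression (the exact metric is not spelled out above, and all names below are illustrative):

```python
import numpy as np

def routing_shift(logits_ref, logits_cmp):
    """Mean cosine similarity between per-token router logits.

    logits_*: (num_tokens, num_experts). Values near 1 mean the
    compressed model barely distorts the routing distribution.
    """
    num = np.sum(logits_ref * logits_cmp, axis=1)
    den = (np.linalg.norm(logits_ref, axis=1)
           * np.linalg.norm(logits_cmp, axis=1))
    return float(np.mean(num / den))

rng = np.random.default_rng(1)
ref = rng.standard_normal((500, 8))                   # e.g. 8 routed experts
noisy = ref + 0.05 * rng.standard_normal(ref.shape)   # mild compression noise
score = routing_shift(ref, noisy)
```

Under this reading, FloE's higher values at every sparsity level indicate routing logits that stay closer to the uncompressed model's than CATS's do.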
**`[Suggestion 2]` A deeper investigation into failure cases, such as whether certain compressed experts become underutilized or degrade faster than others.**
**`[Response]`** In our current compression scheme that applies a uniform sparsity ratio to all experts, we have supplemented additional tests measuring the L2 norm of output errors before and after expert compression to investigate failure cases (as shown in [expert sparsity loss figure](https://anonymous.4open.science/r/floe-5843/pdf/expert_loss_mse.png)). Our findings revealed that **compression of experts in the 30th and 31st layers induces the most pronounced performance degradation**. When preserving these critical layers as non-sparse, the model showed improvements of 1.3 on the **MMLU** benchmark and an average gain of 1.66 across six CSR tasks (**BoolQ, SciQ, ARC-C, ARC-E, QA** and **WG**).
These empirical results substantiate our hypothesis regarding layer-specific compression sensitivity. Accordingly, we plan to implement more fine-grained compression by selectively reducing the sparsity ratio for these critical experts, which can better preserve model performance while maintaining compression benefits.
**`[Suggestion 3]` Additional studies on compression effects across different MoE models and more diverse tasks, to validate generalization beyond Mixtral-8×7B. For instance, how does DeepSeek-v2 or v3 behave in your system? Other open-source MoE models also include the Arctic model from SnowFlake.**
**`[Response]`** In the **Appendix E and F**, we evaluate the sparsity sensitivity of **Deepseek V2** and **Phi 3.5**, as well as the quantization sensitivity of **Phi 3.5**, **Qwen 1.5A 2.7B**, and **Deepseek MoE**. **These results are consistent with the conclusions on Mixtral 8×7B, further validating our findings**. Furthermore, we have supplemented the experimental results for MMLU (5-shot), GSM8K (8-shot), and HumanEval, providing a comprehensive comparison between FloE and the baseline models. These results demonstrate the strong generalizability of our method across different models and downstream tasks.
**`[Experiment Issue]` Is the accuracy loss caused by FloE's compression sufficiently competitive compared to model distillation or direct deployment of smaller models?**
**`[Response]`** Comparing with smaller non-compressed models is indeed crucial. However, due to time and resource constraints, we were unable to distill smaller models and instead chose the more advanced **Llama3.2-3B** and **Mistral-7B** as baselines for comparison. More details refer to the response to `Reviewer bhnT`.
We evaluated performance on three complex downstream tasks: **MMLU**, **GSM8K**, and **HumanEval**, which pose significant challenges for both small distilled models and sparse methods. Our method not only exhibits the least performance degradation among sparse baselines but also outperforms smaller models of similar scale across tasks. While retaining only 93% of the base model's accuracy on these tasks, it still significantly surpasses other baselines (see the result table in the response to `Reviewer bhnT`).
Thus, we argue that appropriately compressing larger models is more competitive than directly deploying or distilling smaller models, especially in scenarios involving resource-constrained environments and complex reasoning tasks.
Cheers,
Authors
---
### `Results Table`:
#### **Router Logits Shift**
| Sparsity (%) | 50 | 60 | 70 | 80 | 90 |
|-----|----|---|---|---|----|
| FloE | 0.9941 | 0.9888 | 0.9794 | 0.9607 | 0.9212 |
| CATS | 0.9744 | 0.9626 | 0.9455 | 0.9181 | 0.8608 |
#### **Downstream Task Performance of Pin Higher Layer**
| Model| MMLU-5 shot (Acc)| CSR (Acc)|
|----|---|---|
| FloE-80 | 65.4| 68.77|
| FloE-80 pin 30 & 31 |66.7|70.43| | null | null | null | null | null | null |
Divide and Conquer: Learning Label Distribution with Subtasks | Accept (poster) | Summary: The paper proposes a new plug-in method for label distribution learning, based on auxiliary tasks derived from the original dataset label distribution. The subtasks are defined by a label mask optimized before the optimization procedure, encouraging diversity between the subtasks and alignment with the original label distribution.
UPDATE AFTER REBUTTAL:
I maintain my recommendation of acceptance.
Claims And Evidence: Claim 1: The optimization procedure (Eq. 1) can find subtasks which are informative and diverse.
This claim is verified using the metrics defined in Definition 4.2 and Definition 4.3, and the empirical evidence is supported on the different datasets, comparing against random baseline. One question: why do the authors define the diversity as in Definition 4.3, instead of the cosine similarity used in Equation 1?
Claim 2: The NSUM is the only normalization function which permits to reconstruct the original label distribution.
This claim is proved in Theorem 4.4, with ablations comparing it to a min-max normalization.
Claim 3: S-LDL leads to better performance
This claim is supported by empirical evidence in Section 6, with a sufficient number of baselines and metrics.
Methods And Evaluation Criteria: The datasets and evaluation metrics are borrowed from the label distribution literature.
Theoretical Claims: Some steps in Theorem 4.4 could be made more clear.
For example, the proof relies on the assumption that q is a normalization constant, but does not discuss what happens if this is not the case. Moreover, the steps following Equation 8 are not clear: what does it mean that \sum [p(d)]_{j} is "given"? Furthermore, there is implicitly a linearity assumption made to deduce that p(v) = v, hence more details on the derivation would be helpful.
Experimental Designs Or Analyses: Yes, the experimental section is quite comprehensive and covers a wide range of baselines
Supplementary Material: I read the appendix
Relation To Broader Scientific Literature: This paper proposes a plug-in that can be used on top of certain LDL methods.
Essential References Not Discussed: I am not familiar enough with the literature to discuss this.
Other Strengths And Weaknesses: The writing of the paper could be improved, for example the notation used is not always properly introduced (cf. Algorithm 1 where $j$ appears without proper definition, making it hard to understand the algorithm).
Furthermore, there is no clear intuition or theoretical explanation for why subtasks should help with the LDL primary task, and why diversity and information should be maximized when designing good subtasks.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Many thanks for your precious comments! We have provided point-by-point responses to your questions below.
**Comment 1:** Why do the authors define the diversity as in Definition 4.3, instead of the cosine similarity used in Equation 1?
**Response:** In **Definition 4.3**, we use $\bar{\boldsymbol{d}}$ to assign weights to each subtask pair, which can make the two subtask label spaces with different high-frequency labels considered more differentiated. This helps *evaluate* diverse subtask label distributions. In **Equation (1)**, we use cosine similarity instead for *optimization* purposes, since element-wise XOR operator is a discrete, non-continuous operation. We will clarify this distinction in the manuscript to improve readability.
**Comment 2:** Some steps in Theorem 4.4 could be made more clear. For example, the proof relies on the assumption that $q$ is a normalization constant, but does not discuss what happens if this is not the case. Moreover, the steps following Equation 8 are not clear: what does it mean that $\sum [p(d)]_{j}$ is "given"? Furthermore, there is implicitly a linearity assumption made to deduce that $p(v) = v$, hence more details on the derivation would be helpful.
**Response:** Below, we provide a detailed explanation of the points raised:
+ If $[q(\cdot)]_j$ is not a constant for all $j$, each element has its own scaling factor, which is atypical and makes it difficult to guarantee that the output lies on the probability simplex.
+ What we mean is that the term $\sum_{j=1}^{L} [p(\boldsymbol{d})]_{j}$ must be known *a priori* when solving for $d_k$ in Equation (8).
+ The choice $[p(\boldsymbol{v})]_j = v_j$ for all $j$ is the simplest nontrivial solution consistent with the problem constraints. We will explicitly call out the linearity assumption and justify why it is necessary (to avoid cross-dependencies and preserve normalization).
**Comment 3:** The writing of the paper could be improved, for example the notation used is not always properly introduced (cf. Algorithm 1 where $j$ appears without proper definition, making it hard to understand the algorithm).
**Response:** In this context, $j$ denotes the index of the label space. While we initially used a more compact pseudocode style for brevity, we acknowledge that this could lead to ambiguity. We will revise the algorithm to provide a more detailed and unambiguous formulation in the updated version of the paper.
Specifically, **Line 4** in **Algorithm 1** can be expanded into:
> $\mathcal{Y}^{(t)} \leftarrow \varnothing$;
>
> **for** $j=1$ **to** $L$ **do**
>
> **if** $M_{tj}=1$ **then**
>
> $\mathcal{Y}^{(t)} \leftarrow \mathcal{Y}^{(t)} \cup \lbrace y_j \rbrace$;
>
> **end if**
>
> **end for**
**Line 9** in **Algorithm 1** can be expanded into:
> $\boldsymbol{D}^{(t)} \leftarrow []$;
>
> **for** $j=1$ **to** $L$ **do**
>
> **if** $y_j \in \mathcal{Y}^{(t)}$ **then**
>
> $\boldsymbol{D}^{(t)} \leftarrow [ \boldsymbol{D}^{(t)} | \text{clip}(\boldsymbol{d}_{\bullet j}, \varepsilon, 1) ]$;
>
> **end if**
>
> **end for**
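In plain Python, the two expanded loops amount to masked column selection. A minimal sketch (the function name, variable names, and clip bound are illustrative):

```python
import numpy as np

def build_subtask(M, D, t, eps=1e-6):
    """Construct subtask t: its label index set and clipped distribution columns.

    M : (T, L) binary mask over labels; D : (N, L) label distribution matrix.
    """
    # Expanded Line 4: collect the labels selected by mask row t.
    label_idx = [j for j in range(M.shape[1]) if M[t, j] == 1]
    # Expanded Line 9: gather and clip the corresponding columns of D.
    D_t = np.clip(D[:, label_idx], eps, 1.0)
    return label_idx, D_t

M = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])
D = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.4, 0.3, 0.2, 0.1]])
labels, D0 = build_subtask(M, D, t=0)
```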
**Comment 4:** Furthermore, there is no clear intuition or theoretical explanation for why subtasks should help with the LDL primary task, and why diversity and information should be maximized when designing good subtasks.
**Response:** First, the reason *why subtasks help with the primary task* can be explained from the perspective of shared representation learning, which forces the model to learn more general and discriminative feature representations. These features may capture patterns that are beneficial to the primary task. Moreover, empirically speaking, simple techniques that are similar to our method (e.g., label powerset, dummy variable, label smoothing, etc.) have been demonstrated to achieve performance improvements. Second, the reason *why diversity and information should be maximized* is to ensure that each subtask contributes unique knowledge, avoiding trivial or overlapping learning signals. Also, diverse and informative subtasks can act as implicit regularization, preventing the model from relying too heavily on spurious label correlations in the primary task. | Summary: The authors introduce S-LDL, a novel label distribution learning (LDL) method that generates/utilizes label distribution subtasks. This method can seamlessly integrate with existing LDL methods without any prior/expert knowledge, and is suitable for some derived tasks. The paper also conducts analyses and experiments to demonstrate that S-LDL is effective and efficient.
## update after rebuttal
I have carefully read the rebuttal. The rebuttal provided a detailed explanation of the role of downstream tasks and clarified the three assumed conditions in Theorem 4.4, which addressed some of my concerns. Therefore, I maintain my vote for "accept".
Claims And Evidence: Yes. The analyses and experiments in this paper show that the additional beneficial data claimed by the authors can indeed be mined from subtasks and improve the performance of the LDL method.
Methods And Evaluation Criteria: Yes. The proposed methods and/or evaluation criteria make sense for the problem at hand.
Theoretical Claims: Yes. The correctness of the proofs for the theoretical claims is checked, and they appear to be correct without any significant issues.
Experimental Designs Or Analyses: Yes. The experimental designs and analyses are sound and valid. I have doubts about the analysis part. See the question below for details.
Supplementary Material: Yes. I reviewed the supplementary material, which includes the TensorFlow code for the methods. The code is basically consistent with the content discussed in the paper.
Relation To Broader Scientific Literature: The idea of using subtasks in the field of LDL is novel, and the proposed S-LDL is compatible with existing LDL methods. This paper provides new insights into how subtasks can be leveraged to refine label distribution predictions, contributing to the ongoing evolution of LDL methodologies.
Essential References Not Discussed: No. This paper adequately discusses related work.
Other Strengths And Weaknesses: Strengths:
1. This paper proposes an LDL method that generates/utilizes label distribution subtasks, and this idea seems novel.
2. There are some good properties of the proposed method: independence from expert/prior knowledge, compatibility with existing methods, and applicability to derived tasks, etc.
3. There are creative analyses of the proposed method, illustrating the effectiveness of the subtask label spaces/distributions.
4. There are sufficient experiments to demonstrate the performance of the proposed method, and the results seem promising.
Weaknesses:
1. There are limited performance improvements in some cases, particularly with the Yeast_ series datasets.
2. There are some non-intuitive parts of the analysis. See the questions below for details.
Other Comments Or Suggestions: The notation $[\cdot]$ is used to denote the set of natural numbers, but $[\cdot]_i$ is used to represent the element at the $i$-th position of a vector, which could be confusing for readers sometimes.
Questions For Authors: 1. Regarding Section 4.2, what benefits does the reconstructability of subtask label distribution bring?
2. Regarding Theorem 4.4, why there are three conditions? More explanations would be helpful.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for your precious comments! Responses to your concerns are as follows.
**Comment 1:** There are limited performance improvements in some cases, particularly with the Yeast_ series datasets.
**Response:** For LDL, even small changes in metrics can indicate significant performance improvements. This is because the values involved in the calculation are constrained by probability simplex, which limits the range of results and leads to small divergence. For example, on the $\mathtt{Yeast\\_diau}$ dataset, the K-L divergence between the ground truth and a uniform label distribution matrix (values are all $\frac{1}{L}$, where $L$ is the number of labels) is **0.0158**. This can be considered as the worst performance. DF-LDL, a strong baseline, achieves **0.0131**. To ensure the reliability of the experimental results, we repeat the experiments and provide the standard deviation of the results, and perform significance tests (pairwise *t*-test at 0.05 significance level) to confirm that the observed changes are statistically significant.
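To make the scale of these numbers concrete, here is a small illustrative sketch of the mean K-L computation against a uniform predictor (the toy Dirichlet data and concentration parameter below are made up, not the Yeast_diau distributions):

```python
import numpy as np

def mean_kl(D, Q):
    """Mean row-wise K-L divergence between two label distribution matrices."""
    return float(np.mean(np.sum(D * np.log(D / Q), axis=1)))

L = 7                                            # number of labels (illustrative)
rng = np.random.default_rng(0)
D = rng.dirichlet(np.full(L, 50.0), size=100)    # mildly peaked "ground truth"
U = np.full_like(D, 1.0 / L)                     # uninformative uniform predictor
worst = mean_kl(D, U)
```

Because every row is constrained to the probability simplex, even the uninformative uniform predictor incurs only a small mean K-L value, which is why absolute differences below 0.01 between methods can still be statistically meaningful.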
**Comment 2:** There are some non-intuitive parts of the analysis. The notation $[\cdot]$ is used to denote the set of natural numbers, but $[\cdot]_i$ is used to represent the element at the $i$-th position of a vector, which could be confusing for readers sometimes.
**Response:** We appreciate this observation. To avoid ambiguity, we will revise the notation by using parentheses for indexing (i.e., $(\cdot)_i$ for the $i$-th element of a vector) and reserve square brackets exclusively for set definitions.
**Comment 3:** Regarding Section 4.2, what benefits does the reconstructability of subtask label distribution bring?
**Response:** The purpose of **Section 4.2** is to demonstrate that, under certain conditions, the subtask label distributions can indeed reconstruct the primary task label distribution. In **Section 5**, we concatenate the subtask label distribution with the representation to serve as input to a label distribution estimator, simulating this reconstruction process. In essence, reconstructability guarantees that *subtasks are not just arbitrary auxiliary objectives but are structurally aligned with the primary LDL problem*. Therefore, it is crucial for the overall validity of the $\mathcal{S}$-LDL framework to prove that the subtask label distributions can yield the original distribution.
**Comment 4:** Regarding Theorem 4.4, why there are three conditions? More explanations would be helpful.
**Response:** Below, we provide explanation of the conditions:
+ $\mathcal{G}$ is connected: This ensures that the proportions between all description degrees of the primary label distribution are known. If $\mathcal{G}$ is not connected, the proportions of description degrees corresponding to labels on different connected components would remain unknown.
+ $\mathcal{G}$ covers all labels in the label space: The union of the subtask label spaces must encompass the entire primary label space. If any labels are not included in any subtask label space, the description degrees for those labels would be unknown. Hence, this is a necessary condition.
+ The corresponding description degrees of all cut vertices of $\mathcal{G}$ are not zero: When there are cut vertices in $\mathcal{G}$, the non-zero description degrees corresponding to all the cut vertices are necessary to keep the proportions known.
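A tiny numeric illustration of why these conditions suffice, assuming each subtask distribution is the sum-normalized restriction of the primary one (the labels and values below are made up): two subtasks sharing label 1 form a connected graph, and the shared nonzero description degree lets all pairwise proportions be chained together.

```python
import numpy as np

d = np.array([0.2, 0.3, 0.5])            # primary label distribution
s1 = d[[0, 1]] / d[[0, 1]].sum()         # subtask over labels {0, 1}
s2 = d[[1, 2]] / d[[1, 2]].sum()         # subtask over labels {1, 2}

# Label 1 connects the two subtasks (the graph is connected, covers all
# labels, and the shared "cut vertex" degree is nonzero, so we may divide
# by it). Fix label 1, propagate the known ratios, then normalize.
v = np.empty(3)
v[1] = 1.0
v[0] = v[1] * s1[0] / s1[1]
v[2] = v[1] * s2[1] / s2[0]
recon = v / v.sum()                      # recovers d exactly
```

If the shared label's description degree were zero, the ratio propagation step would divide by zero, which is exactly why the third condition is needed.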
---
Rebuttal Comment 1.1:
Comment: I have carefully read the rebuttal, and the response solved most of my concerns. In the third response, I still have a small question: Is there a relationship between the subtask and the main task? Are they in an independent and identically distributed (i.i.d.) relationship? Aside from this question, the other responses have addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback! The subtasks and the primary task are not independent but share structural correlations. However, they do not strictly follow an i.i.d. relationship, as the subtask construction process introduces dependencies. | Summary: This paper studies the Label distribution Learning (LDL) problem. It first claims the disadvantages of existing works, pointing out their contradiction between auxiliary tasks and the generalizability. To mitigate this issue, the authors propose a new method S-LDL, which generates subtask label distributions to help LDL. The proposed S-LDL consists of two simple key components: 1) generating subtasks without prior knowledge; 2) solving subtasks using off-the-shelf LDL algorithms. The experiments demonstrate the effectiveness of S-LDL on multiple datasets.
Claims And Evidence: Strengths:
- The proposed idea is derived from some multi-label learning (MLL) problems, but is also suitable for the LDL problem. This is a minimalist and interesting motivation for LDL.
Weaknesses:
- It is better to provide some evidence that existing methods cannot deal with the contradiction. Otherwise the motivation will be a bit unconvincing.
- The claim of "furnish additional supervised data" seems not rigorous and might cause misunderstanding, since you have not involved more external data. Maybe it is better to use "supervision" or "supervised information".
Methods And Evaluation Criteria: Strengths:
- The proposed method is minimalist and scalable. The authors have properly provided the algorithm procedure.
- The evaluation metrics are introduced in Section 6. And the detailed formulations are provided in the appendix.
Theoretical Claims: Weaknesses:
- Some related works in MLL with partitioning of the label space provided theoretical justification. However, there is no theoretical support for the proposed method.
Experimental Designs Or Analyses: Strengths:
- The experiments demonstrate the effectiveness of the proposed method on multiple datasets.
- The authors have provided proper ablation study and parameter sensitivity analysis.
Weaknesses:
- The performance gains on some datasets are marginal (less than 0.01). Is there any explanation?
Supplementary Material: Strengths:
- The authors have provided the source code for reproduction.
Relation To Broader Scientific Literature: Strengths:
- This paper is partially related to multi-label learning. The authors have provided proper explanations and discussions.
Essential References Not Discussed: Weaknesses:
- It seems that the references are a bit out-of-date. Most references are from or earlier than 2023. Is there more recent related work?
Other Strengths And Weaknesses: I have no extra concerns. Please see the above comments.
Other Comments Or Suggestions: Please see above.
Questions For Authors: Please see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Many thanks for your precious comments! We have provided point-by-point responses to your questions below.
**Comment 1:** It is better to provide some evidence that existing methods cannot deal with the contradiction.
**Response:** In **Section 1 & 2**, we have provided such evidence. On the one hand, we highlight how existing methods relying on prior/expert knowledge (e.g., [Chen et al., 2020; Wu et al., 2019; Yang et al., 2017a]) suffer from poor generalization, as they cannot adapt to new domains. On the other hand, we demonstrate that conventional LDL techniques use solely the primary task distribution (e.g., [Jia et al., 2019; 2023; Ren et al., 2019; Wen et al., 2023]) as supervised data. These comparisons directly show why existing solutions cannot resolve the contradiction, and the essential reason behind this is highly related to the design logic of these methods.
**Comment 2:** The claim of "furnish additional supervised data" seems not rigorous and might cause misunderstanding, since you have not involved more external data. Maybe it is better to use "supervision" or "supervised information".
**Response:** The claim is indeed not meant to imply external data but rather newly constructed data for supervision. We will revise the wording to clarify this in the manuscript.
**Comment 3:** Some related works in MLL with partitioning of the label space provided theoretical justification. However, there is no theoretical support for the proposed method.
**Response:** We sincerely appreciate the reviewer's insightful comment regarding theoretical justification. While prior MLL works with label space partitioning do offer theoretical analyses, our method provides distinctive theoretical support through the following aspects. In **Section 4.1**, we have discussed the validity of subtask construction, ensuring the subtasks remain meaningful for the primary LDL objective. In **Section 4.2**, we prove that the primary label distribution can be theoretically reconstructed from subtask distributions, demonstrating information preservation. Additionally, in **Section 4.3**, we have provided a detailed analysis of the time complexity of subtask construction, establishing its scalability and efficiency. While our theoretical framework differs from related works in MLL with partitioning of the label space, these three pillars collectively justify the rationality of our design.
**Comment 4:** The performance gains on some datasets are marginal (less than 0.01). Is there any explanation?
**Response:** For LDL, even small changes in metrics can indicate significant performance improvements. This is because the values involved in the calculation are constrained by probability simplex, which limits the range of results and leads to small divergence. For example, on the $\mathtt{Yeast\\_diau}$ dataset, the K-L divergence between the ground truth and a uniform label distribution matrix (values are all $\frac{1}{L}$, where $L$ is the number of labels) is **0.0158**. This can be considered as the worst performance. DF-LDL, a strong baseline, achieves **0.0131**. To ensure the reliability of the experimental results, we repeat the experiments and provide the standard deviation of the results, and perform significance tests (pairwise *t*-test at 0.05 significance level) to confirm that the observed changes are statistically significant.
**Comment 5:** It seems that the references are a bit out-of-date. Most references are from or earlier than 2023. Is there more recent related work?
**Response:** We appreciate the reviewer's suggestion. While there are indeed some recent works in the field of LDL [1, 2], they currently *lack official open-source implementations*, making their effectiveness and reproducibility unverified. For reliability and fairness in comparison, we chose to focus on well-established methods with publicly available implementations.
[1] Label Distribution Learning Based on Horizontal and Vertical Mining of Label Correlations. *TBD*, 2024.
[2] Exploiting Multi-Label Correlation in Label Distribution Learning. *IJCAI*, 2024. | Summary: This paper investigates the problem setting of Label Distribution Learning (LDL). In particular, the authors propose the concept of leveraging subtasks to facilitate learning.
## update after rebuttal
I reviewed the rebuttal and further responses. I am still not fully convinced. I did not have more time to check the motivations of the LDL setting, but it seems that other reviewers approve of this setting. I still have reservations, but the AC may wish to ignore my comments because I did not read the paper in full.
Claims And Evidence: See weakness and strength.
Methods And Evaluation Criteria: See weakness and strength.
Theoretical Claims: See weakness and strength.
Experimental Designs Or Analyses: See weakness and strength.
Supplementary Material: See weakness and strength.
Relation To Broader Scientific Literature: See weakness and strength.
Essential References Not Discussed: See weakness and strength.
Other Strengths And Weaknesses: The concept of Label Distribution Learning (LDL) feels quite unfamiliar to me. How does it differ from a typical classification network? What I mean is, even the most common classification networks can use softmax predictions as the label distribution output in this context.
Moreover, what form does the training set take? The authors only describe the samples as $\mathcal{X}$ without mentioning the labels. Does each sample come with a soft label? Even if that’s the case, it still strikes me as odd. As we know, the ERM essentially guarantees that, given a sufficient sample size, a classification network trained with cross-entropy loss will eventually produce softmax predictions that converge to the true posterior distribution $P(Y|X)$. In simpler terms, for the same sample point $X$, we would effectively be sampling multiple labels $Y$, which is indeed equivalent to training with soft labels.
Please first address the points mentioned above, after which I will proceed with my review.
Other Comments Or Suggestions: See weakness and strength.
Questions For Authors: See weakness and strength.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

Rebuttal 1:
Rebuttal: Many thanks for your comments! Responses to your questions are as follows.
**Comment 1:** How does it (LDL model) differ from a typical classification network?
**Response:** Although typical classification networks can formally produce outputs similar to label distribution learning (LDL) through Softmax, they differ significantly in essence, specifically manifested in the following aspects:
+ Learning Objectives: Typical classification networks with Softmax outputs aim to learn a single-peaked probability distribution, emphasizing the certainty that a sample belongs to a specific category (i.e., the "one true label"). Their limitation lies in the inability to express the varying degrees of association between a sample and multiple labels. In contrast, LDL seeks to learn a (possibly single-peaked or multi-peaked) label distribution describing the degree of association between a sample and all labels. It is primarily suited for scenarios where semantic overlap or granularity differences exist among labels. The core challenge of LDL does not lie in ensuring that outputs satisfy the probability simplex constraint, but rather in precisely fitting the description degrees.
+ Semantic Interpretation: The Softmax output ensures that probabilities sum to 1, but only the highest probability label is meaningful (winner-takes-all). The label distribution output, however, determines probability distribution based on the relative descriptive strength among labels, allowing all labels to retain interpretability simultaneously.
+ Underlying Philosophy: The classification model is decision-oriented, prioritizing clear classification boundaries. The LDL model is description-oriented, pursuing fine-grained quantification of label associations.
**Comment 2:** What form does the training set take? Does each sample come with a soft label?
**Response:** In short, each sample $\boldsymbol{x}$ comes with a label distribution $\boldsymbol{d}$, which is *not* a soft label. Soft labels are often introduced to improve the generalization of classification tasks, and they are still only related to a specific dominant class. Label distributions model ambiguity, where diversity is a property of the data itself. Therefore, label distributions may inherently follow a continuous or multi-peaked distribution (e.g., subjective human annotations). We give some rigorous statements about LDL in the problem definition in **Section 3.1**. These do not conflict with your observations based on ERM.
To clarify, our work follows a pure LDL paradigm, which is fundamentally distinct from traditional classification tasks. We believe this clarification adequately addresses the reviewer's concerns. We sincerely hope these issues will not negatively impact the overall rating of our contribution.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My point is: when the sample size is sufficiently large and the model capacity is appropriate, the probability output by a model trained with the commonly used cross-entropy loss will converge to the true posterior probability. This convergence is independent of whether the true posterior distribution is assumed to be unimodal or multimodal.
From my perspective, Label Distribution Learning (LDL) appears to be merely a rephrasing of the same concept. Simply put, if LDL assumes that the annotation of a sample is a label distribution—e.g., for a given sample $x$, its label is represented as $y = [0.3, 0.6, 0.1]$—then in the traditional setting, when the training dataset is sufficiently large, even with single-label supervision, we would expect multiple sampled labels $y$ for the same $x$. These sampled labels will naturally form a distribution similar to $[0.3, 0.6, 0.1]$.
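To make the point concrete, a small sketch (with the illustrative numbers from the example above) showing that repeated single-label sampling for the same $x$ recovers the label distribution empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
dist = np.array([0.3, 0.6, 0.1])  # the example label distribution for a fixed x

# Single-label supervision: sample one hard label per observed copy of x.
labels = rng.choice(3, size=100_000, p=dist)
empirical = np.bincount(labels, minlength=3) / labels.size
print(empirical)  # empirical frequencies approach [0.3, 0.6, 0.1]
```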
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful discussion. While the theoretical point holds universally (any ML paradigm can converge to the true distribution given infinite data and ideal conditions), *empirically* these conditions are rarely met in practice.
Here we take an example: your mention of single-label supervision corresponds exactly to problem-transformation (PT) methods that convert label distributions into single-label approximations (e.g., through repeated sampling). Geng (2016) demonstrates that these methods perform well only on artificial datasets but fail in practical applications. This occurs because PT methods fundamentally destroy the structure of the original label distribution: if the training data is single-labeled, the model will tend to ignore the underlying multimodality.
This example not only highlights that label distributions are *not* merely an alternative representation of traditional single-label sampling statistics, but also demonstrates the persistent gap between theory and empirical practice.
We share your enthusiasm for these fundamental questions. Therefore, in this paper, we deconstruct richer supervision information using label distribution subtasks to address empirically observed performance challenges. | null | null | null | null | null | null |
Counterfactual Voting Adjustment for Quality Assessment and Fairer Voting in Online Platforms with Helpfulness Evaluation
Accept (poster)
Summary: This paper introduces the Counterfactual Voting Adjustment (CVA) framework, designed to address biases in online voting systems that distort information quality assessment. Specifically, CVA targets position bias (where content appearing higher receives more votes) and herding bias (where visible prior votes influence subsequent evaluations).
The authors argue that traditional aggregated voting systems fail to reflect true content quality due to these biases. CVA adopts a causal inference approach, modeling the counterfactual scenario in which content is displayed at different positions with equalized prior vote counts. By leveraging voting trajectories, CVA corrects these biases and improves content ranking fairness.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: I did not check the supplementary material
Relation To Broader Scientific Literature: Yes
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is well-written and easy to follow, but some notation can be improved (see below).
The problem is well-motivated.
The paper validates CVA through multiple approaches—synthetic data, real-world StackExchange datasets, and GPT-4o evaluations—demonstrating consistent performance gains over existing methods.
Regarding weaknesses, CVA’s effectiveness relies on the assumption that confounding factors are adequately captured in the observed data. Any unobserved confounders could undermine its reliability. Moreover, while CVA's improved alignment with GPT-4o's judgments is promising, GPT-4o itself is subject to limitations and may not always reflect human expert evaluations.
Other Comments Or Suggestions: Right column, line 150: instead of j = 1,...,J_t−1, a better notation is j \in \{1,...,J_t−1\}; this comment applies at line 156 as well.
The authors could explain in more detail what $M_{i,j}^t$ denotes. In each round, do multiple users vote on issue $i$? Similarly, more explanation is needed regarding what $R_{i,j}^t$ denotes. An example would be useful.
I also believe that a more detailed impact statement would be useful for this work, since, if the proposed algorithm is applied in practice, it could have a social impact.
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive evaluation and recognition of our work’s strengths and its practical value. The reviewer raised several crucial questions, to which we respond below.
**a. Re “unobserved confounders”:**
Among various factors influencing helpfulness evaluation, this paper specifically targets social biases by distangling two primary biases: position and herding bias. These biases often lead to winner-take-all type of information cascades, overshadowing other potentially more valuable information. As shown by our experimental results, our framework can easily incorporate additional factors: any observable feature or additional biases like length-bias. Following the reviewer’s suggestion, our immediate future work will try to improve the framework’s robustness against unobserved confounders.
**b. Re “GPT-4o itself is subject to limitations”**
This issue is particularly relevant as more scientific papers adopt Large Language Models (LLMs) as their judge mechanism. Our study used GPT-4o for 1) sentiment analysis on user comments that are given to individual answers as well as 2) helpfulness evaluations as proxies of human evaluations. Given that sentiment analysis by LLMs shows near perfect accuracy, strong correlations with comment sentiment first show that our model can successfully mitigate position and herding biases.
Following the previous research by Lee et al. (2016), we employed the same metric, based on the observation that users frequently write comments to address and discuss positive or negative points behind each answer — content and nuances not apparent in numeric helpfulness scores alone. Next, prior to using GPT-4o for helpfulness scoring, we performed various preliminary experiments. Specifically, we chose the top-, middle- (median), and bottom-ranked answers from each question, then optimized prompts so that GPT-4o's evaluations became approximately consistent with the existing ranking of these three answers, whose quality was separately compared by human annotators. While complete agreement between GPT-4o's rankings and existing rankings is neither expected nor desirable due to inherent biases, we carefully designed our experiments to rigorously test the effectiveness of our framework.
**c. Re “potential social impact”:**
We acknowledge that our paper does not fully address potential societal impact from two angles. First, our findings highlight the risk of information monopoly due to the design of a helpfulness voting mechanism that seeks efficient wisdom-of-crowds, thus suppressing potentially more valuable information often contributed later by other users. Our findings urge that 1) human users should critically interpret online information and 2) system providers need to employ a causal framework to mitigate biases in social decision making processes. | Summary: This paper proposes Counterfactual Voting Adjustment (CVA), a causal inference framework to mitigate position bias (content visibility due to ranking) and herding bias (influence of prior votes) in online voting systems. By modeling counterfactual scenarios where votes are cast under neutral visibility and balanced prior signals, CVA disentangles true content quality from biases. Key contributions include a causal framework leveraging voting trajectories and backdoor adjustment, empirical validation on semi-synthetic and real-world StackExchange data, and cross-community analysis revealing varying bias patterns (e.g., technical communities exhibit stronger herding bias, while others show position bias dominance).
Experiments demonstrate CVA’s superiority over baselines (e.g., raw votes, Chinese Voting Process) using metrics like Kendall’s τ correlation and alignment with ground truth approximated via comment sentiment and GPT-4o evaluations. The framework enables platforms to rerank content for fairer quality estimation, enhancing user experience and information reliability. Integration of GPT-4o as a proxy for human evaluation highlights adaptability to modern AI tools, while insights from 120 StackExchange communities underscore its practical utility in diverse contexts.
Strengths: Novel causal design, robust validation, and actionable cross-community insights.
Weaknesses: Limited discussion on computational scalability and real-time deployment challenges. Overall, the work addresses critical biases in online systems with methodological rigor and empirical grounding.
Claims And Evidence: The claims made in the submission are largely supported by clear and convincing evidence, though minor limitations exist. Here are potential limitations:
1. Real-world experiments use comment sentiment and GPT-4o evaluations as proxies, not direct human judgments. Semi-synthetic data with known ground truth partially addresses this, but the absence of human validation remains a caveat.
2. The causal graph (Figure 2) assumes covariates X block all confounders, which depends on correct model specification. While standard in causal inference, sensitivity analyses or robustness checks could strengthen confidence.
3. Experiments are limited to StackExchange communities. The framework is theoretically adaptable (as noted in §7), but empirical validation on other platforms (e.g., product reviews) is absent.
Methods And Evaluation Criteria: The methods and evaluation criteria are well-suited to the problem, leveraging causal inference principles and rigorous experimentation. The use of semi-synthetic data with ground truth and real-world proxies (comment sentiment, GPT-4o) provides balanced validation. While limitations exist (e.g., proxy reliance, unmeasured confounders), the approach is methodologically sound and addresses the core challenge of bias mitigation in online voting systems. Future work could expand validation to other platforms and incorporate human evaluations to further solidify claims.
Theoretical Claims: 1. The paper’s theoretical claims—identifiability of content quality parameters, counterfactual invariance, and estimator consistency—require stronger formal grounding. While the proposed CVA framework aligns with causal principles (e.g., backdoor adjustment), key assumptions (e.g., linear position/herding bias models, no unobserved confounders) lack validation. For instance, the identifiability of quality parameters hinges on parametric assumptions like $\beta/(1+D)$ for position bias, which may not reflect real-world nonlinear dynamics (e.g., exponential decay with rank). Similarly, claims of counterfactual invariance and MLE consistency lack rigorous proofs or sensitivity analyses (e.g., robustness to model misspecification). A formal treatment using causal frameworks (do-calculus) and asymptotic theory (Van der Vaart regularity conditions) is needed to strengthen these arguments.
2. The empirical validation relies heavily on semi-synthetic data and GPT-4o as a proxy for ground truth, raising concerns about real-world applicability. While results show improved correlation metrics (e.g., Kendall’s $\tau$), the synthetic data generation process may oversimplify bias mechanisms (e.g., predefined herding rules). Furthermore, GPT-4o’s alignment with human judgment remains unverified, and confounding factors (e.g., content freshness, user expertise) are not controlled. For broader validity, the authors should test CVA on fully observational data with human-annotated quality labels and address unobserved confounders via sensitivity analyses.
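To make the contrast between the two decay shapes mentioned above concrete, a quick numerical sketch; the constants are arbitrary and chosen only for illustration, not taken from the paper:

```python
# Hypothetical comparison of position-bias decay shapes (arbitrary constants).
beta, gamma = 1.0, 0.5
ranks = list(range(8))
hyperbolic = [beta / (1 + d) for d in ranks]   # the assumed beta/(1+D) form
exponential = [gamma ** d for d in ranks]      # a plausible nonlinear alternative

for d, h, e in zip(ranks, hyperbolic, exponential):
    print(f"rank {d}: hyperbolic={h:.3f} exponential={e:.3f}")
```

The two forms agree at the top rank but diverge quickly at deeper ranks, which is why misspecifying the decay could matter for identifiability in practice.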
Experimental Designs Or Analyses: 1. The semi-synthetic validation introduces circularity by generating data using the same parametric bias models (e.g., the $\beta/(1+D)$ form assumed in CVA), potentially inflating performance. Reliance on GPT-4 as a quality proxy further weakens grounding, as its alignment with human judgment is unverified. To strengthen validity, the authors should test CVA on real-world benchmarks with human-annotated labels (e.g., StackExchange “accepted answers”) and replace semi-synthetic data with fully synthetic datasets incorporating nonlinear bias mechanisms.
2. Key causal baselines (e.g., inverse propensity weighting) are omitted, limiting claims of superiority over state-of-the-art methods. Additionally, unaddressed confounders (e.g., user reputation, content age) and overaggregated community-level results (e.g., masking failure modes in political forums) reduce interpretability. The authors should include advanced causal baselines, report statistical significance (e.g., confidence intervals), and disaggregate results by community type to clarify performance variations.
3. Insufficient details on hyperparameter tuning (e.g., regularization strength $\lambda$) and missing sensitivity analyses (e.g., Rosenbaum bounds for unobserved confounding) hinder reproducibility. A code/data release and robustness checks against alternative bias models (e.g., exponential decay for position effects) are critical for practical adoption.
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: 1. The paper builds on causal inference and recommendation systems by unifying position/herding biases into a structural framework, advancing beyond correlational models (e.g., Agarwal et al., 2019) and heuristic approaches (e.g., time-decay weighting). While parametric bias modeling (e.g., $\beta/(1+D)$) aligns with classical exposure literature, its causal formalization of identifiability and invariance lacks rigorous ties to foundational tools like do-calculus (Pearl, 2009) or modern sensitivity analyses (Cinelli et al., 2020). The use of GPT-4 as a quality proxy follows trends in LLM-driven annotation (Gilardi et al., 2023) but overlooks established validation methods (e.g., StackExchange’s expert labels). Though the integration of behavioral assumptions (e.g., modular bias components) is novel, gaps remain in connecting to robust causal estimation techniques (e.g., doubly robust estimators) and addressing regularization impacts on consistency claims.
2. The work extends causal ML by treating biases as structural parameters rather than confounders, bridging social dynamics (e.g., Muchnik et al., 2013) and identifiability theory. However, its reliance on semi-synthetic data risks circularity compared to real-world benchmarks (Gordon et al., 2019), and the omission of advanced baselines (e.g., inverse propensity weighting) limits practical relevance. Strengthening ties to invariance frameworks (Pfister et al., 2021) and adopting sensitivity analyses for unobserved confounders would better situate it within modern causal literature.
Essential References Not Discussed: The paper omits critical context for its methodology. First, it cites Kamalho & Rafiei (2023) to justify GPT-4’s use as a quality evaluator but overlooks Gilardi et al. (2023), which systematically validates LLM-as-a-judge alignment with human judgments—a gap weakening claims about label reliability. Second, while proposing causal adjustments for bias, the work fails to contrast its approach with established propensity-scoring methods (e.g., Agarwal et al., 2019, widely adopted in industry for position bias correction). Addressing these omissions would clarify how CVA advances beyond existing bias-mitigation frameworks.
Other Strengths And Weaknesses: The paper demonstrates notable originality through its innovative integration of causal inference frameworks (e.g., do-calculus) with behavioral dynamics like herding and position bias, proposing a modular hybrid model (CVA) that advances interdisciplinary methodology. Its practical relevance is evident in addressing real-world platform challenges (e.g., biased content ranking) while maintaining interpretability via disentangled quality and bias parameters. Methodologically, the semi-synthetic validation and sensitivity analyses enhance robustness, though reliance on LLM-generated labels introduces unquantified risks.
However, theoretical gaps weaken its foundation: the absence of formal identifiability proofs for causal parameters under unobserved confounding and insufficient discussion of regularization-induced biases (e.g., L2 penalty trade-offs). Empirically, the lack of validation on real-world data (e.g., Reddit/Stack Exchange voting histories) and incomplete baseline comparisons (e.g., doubly robust estimators) limit claims of superiority. Technical opacity—particularly in LLM annotation protocols and computational scalability—hinders reproducibility, while ethical implications of deploying LLM-dependent models remain unexplored.
To solidify contributions, the authors should: 1) strengthen theoretical grounding (e.g., identifiability proofs), 2) validate on authentic platform datasets with broader baselines, 3) disclose LLM prompt details and code, and 4) address societal risks (e.g., bias amplification). Addressing these would elevate the work from a promising interdisciplinary prototype to a rigorous, deployable solution.
Other Comments Or Suggestions: 1. Writing & Presentation: The term “quality” is used inconsistently. In Section 3.1, it is defined as “intrinsic content value”, but in Section 4.2, it conflates with “user-perceived relevance”. Clarify whether “quality” refers to objective merit, subjective preference, or platform-defined metrics (e.g., Reddit’s upvote/downvote rules). The transition from theory (Section 2) to methodology (Section 3) feels abrupt. Consider adding a brief subsection (e.g., “Motivation for Hybrid Modeling”) to explain why causal and behavioral modules are jointly necessary.
2. Technical Clarifications:
(1) The paper states that “regularization parameters were selected via cross-validation” (Section 4.1), but does not specify the search space (e.g., λ ∈ [0.1, 1.0]?) or validation metric (e.g., MSE, likelihood?). Include these details in the appendix.
(2) The model assumes “no unobserved confounders between position and clicks” (Section 3.2). However, real-world platforms often have hidden confounders (e.g., user fatigue, time-of-day effects). Acknowledge this limitation and discuss potential solutions (e.g., proxy variables).
(3) While GPT-4 is used to generate “quality” labels, no calibration steps (e.g., temperature scaling, human validation of samples) are mentioned. Add a paragraph on how LLM outputs were standardized (e.g., z-score normalization) to mitigate overconfidence.
3. Ethical & Societal Considerations: The reliance on GPT-4 for label generation risks inheriting its biases (e.g., gender stereotypes in text, Western-centric perspectives). A fairness audit (e.g., disaggregating performance by content topic or demographic proxies) is critical but missing. Training large hybrid models (CVA + LLM components) may incur high computational costs. Estimate the carbon footprint (e.g., using tools like CodeCarbon) and suggest efficiency optimizations.
Questions For Authors: 1. The paper applies do-calculus for causal estimation but lacks formal identifiability proofs under unobserved confounders (e.g., user demographics). How do you justify the absence of such proofs, and could instrumental variables or sensitivity analyses (e.g., Rosenbaum bounds) address potential biases? Additionally, L2 regularization may bias causal estimates—did you analyze this trade-off (e.g., bias-variance decomposition), and why prioritize it over doubly robust methods?
2. Experiments rely on semi-synthetic data with GPT-4 labels but omit validation against real-world benchmarks (e.g., Reddit moderator labels). If such data is unavailable, could quasi-experiments or robustness checks substitute? Further, the LLM annotation protocol lacks transparency: Share prompts, temperature settings, and calibration steps (e.g., human validation) to assess bias risks (e.g., verbosity preference inflating performance).
Ethical Review Concerns: No ethical issues
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive evaluation and recognition of our work’s strengths. Below we respond to the questions/comments:
**a. Regarding the ground truth (proxy reliance),**
i. For human judgments, because different communities discuss different topics, hiring a specific group of human experts for each community is non-trivial, and the huge volume of StackExchange questions increases the cost of acquiring human judgment data. We plan to tackle these difficulties and fill the gap between GPT-4o evaluation and human evaluation in our future work.
ii. For StackExchange’s expert labels, if you mean the ‘accepted answer’ labels, we didn’t leverage them mainly because an answer is accepted by the author of the question when he/she feels it is good enough, after which it stays at the top rank. However, it is not necessarily the best answer for all time.
iii. For details about how LLM is used, the prompt for an example question is provided in the Appendix I. We use temperature=0 and we will release our codes for parsing the LLM responses.
**b. Regarding the causal framework,**
i. For the causal identification assumption and robust/sensitivity analysis, we don’t allow unobservable confounders, and we will acknowledge in the paper that this assumption can be a limitation of our approach.
When hidden confounders are present, indeed one could develop alternative causal identification strategies like proxy variables to aid identification, or perform sensitivity analysis to unobserved confounding, assuming a certain level of unobserved confounding is present. However, these strategies like proxy variables need customized development and we leave them to future work.
This work also connects to invariant causal prediction; it constructs a latent information quality assessment that is invariant to changes in display rank and voting sequences. We will discuss these connections in the paper.
ii. For other key parametric modeling assumptions, it is indeed true that the model could be misspecified and may not fully reflect real-world nonlinear dynamics. In response, we have stress-tested our model under a series of different model misspecification scenarios (e.g., missing nonlinear terms). We found that, despite model misspecification, CVA outperforms the baseline method CVP in its accuracy of assessing information quality. We will include these results in the paper and discuss the potential issue of model misspecification.
iii. For other baselines like IPW and doubly robust AIPW estimators, while these estimators like IPW can potentially be applied to handle position bias, it cannot handle position bias and herding bias simultaneously. In particular, handling herding bias involves careful modeling of the whole sequence of votes. These obstacles make it difficult for us to compare CVA with baselines like IPW and doubly robust AIPW estimators.
**c. Regarding synthetic data generation,**
the herding bias mechanisms used for synthetic data generation are mainly based on the Chinese restaurant process, which has solid theoretical foundations.
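For intuition, here is a minimal sketch of a CRP-style preferential-attachment dynamic of the kind referred to above; the weighting form and the `alpha` parameter are our illustrative assumptions, not the paper's exact generator:

```python
import random

def simulate_votes(qualities, n_votes, alpha=1.0, seed=0):
    """Herding sketch: each new voter picks answer i with probability
    proportional to (current votes_i + alpha * quality_i), so early
    leaders keep attracting votes regardless of intrinsic quality."""
    rng = random.Random(seed)
    votes = [0] * len(qualities)
    for _ in range(n_votes):
        weights = [v + alpha * q for v, q in zip(votes, qualities)]
        i = rng.choices(range(len(qualities)), weights=weights)[0]
        votes[i] += 1
    return votes

# Three equally good answers can still end up with very unequal vote counts.
print(simulate_votes([1.0, 1.0, 1.0], n_votes=500))
```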
**d. Regarding including more dataset,** our method needs the historical trajectory of vote data (not snapshot static data), and we can only get this kind of data from StackExchange for free. We will consider doing the experiments when other datasets are available.
**e. Regarding how to select the regularization parameters,** we do cross-validation within the search space (λ ∈ [0.1,0.2, 0.3,0.4, 0.5,0.6, 0.7,0.8,0.9,1, 2, 3,4,5, 6, 7,8,9,10,20, 30,40,50,60, 70,80,90,100, 200, 300, 400, 500, 600, 700, 800, 900,1000]). The validation metric we used is the voting prediction accuracy.
**f. Regarding the term inconsistency of “quality”,** we will unify the term use of quality. Our quality indicates the user's perceived quality at a time, and we assume the users in the same community have the same preference.
**g. Regarding the reproducibility,** we will release our codebase.
**h. Regarding the Ethical & Societal Considerations,** we acknowledge that our paper does not fully address potential societal impact from two angles. First, our findings highlight the risk of information monopoly due to the design of a helpfulness voting mechanism that seeks efficient wisdom-of-crowds, thus suppressing potentially more valuable information often contributed later by other users. Our findings urge that 1) human users should critically interpret online information and 2) system providers need to employ a causal framework to mitigate biases in social decision making processes.
**i. Regarding the writing,** we explained why causal and behavioral modules are jointly necessary in the third paragraph of Introduction. We will improve the transition from Sec 2 to Sec 3.
**j. To answer the questions,**
(1) Please see the replies for b. the causal framework
(2) Please see the replies for a. the ground truth (proxy reliance) and d. more dataset

Summary: This paper develops a model of voting (for example, StackOverflow up/down votes) that measures and removes the effect of position bias (voters are more likely to vote on already highly-ranked items) and herding bias (voters are likely to agree with the existing consensus). This allows a better estimate of item quality than what would be naively estimated from the available votes.
Claims And Evidence: This paper makes a convincing case that the proposed method is both necessary to solve a problem not addressed by existing methods (which can't jointly mitigate position and herding bias) and makes progress towards a solution.
Methods And Evaluation Criteria: The method is a nice application of generative modeling to this problem. The evaluation datasets make sense, given the lack of available ground truth comment qualities. The improvements shown in these evaluations are modest, but the method tends to do as well as or better than the existing Chinese Voting Process model.
Theoretical Claims: I didn't check the correctness of the theoretical claims.
Experimental Designs Or Analyses: Yes, they seem sound given the difficulty of finding ground truth comment qualities.
Supplementary Material: No
Relation To Broader Scientific Literature: This paper provides a modest improvement over prior models of voting in these contexts.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: If GPT4o can be considered ground truth, what’s the value of this approach?
“thereby learning its model parameters for position and herding biases independently” I think it would be useful to cover this in more detail, since this is the key advantage over the state of the art.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive evaluation and recognition of our work’s strengths and its practical value. Below we respond to the questions and comments:
**Re the comparison with the existing model:**
The proposed CVA shows three main advantages compared to the existing CVP: Firstly, the CVP paper didn’t provide causal effect estimation. Secondly, CVA is able to mitigate position and herding bias at the same time, while CVP can’t. Thirdly, CVA is better than CVP in evaluations, especially for communities with high position bias, since CVP doesn’t consider the rank position when predicting a vote. For communities with much less position bias, the advantage of CVA is less pronounced.
**Question: If GPT4o can be considered ground truth, what’s the value of this approach?**
Although GPT-4o is a good ground-truth proxy, it still needs to be validated by human evaluation, especially given the differing expertise required across communities, which we will address in future work. Additionally, our work can not only provide a fairer quality estimation, but also yield measurements of position and herding bias, which are valuable insights about community behavior trends for the platforms.
Learning Joint Interventional Effects from Single-Variable Interventions in Additive Models | Accept (poster) | Summary: The manuscript studies the problem of intervention generalization in additive causal models. Specifically, the authors consider a data generating process in which an outcome variable $Y$ is an additive function of actions $A_1, \dots, A_K$ and unobserved confounders $C_1, \dots, C_K$. Given some set of observational data and single-intervention experiments, the method estimates a joint intervention effect $\mathbb E[Y \mid do(a_1, \dots, a_K)]$. Several identifiability results are presented, along with experiments illustrating the utility of their proposal.
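A toy Monte-Carlo check of the additive setting summarized above (all functional forms below are illustrative choices, not the paper's estimator): when the outcome is additive in the actions and in the single-cause confounders, the joint-interventional mean can be recovered from single-intervention means plus the observational mean.

```python
# Illustrative sketch: under Y = f1(A1) + f2(A2) + C1 + C2 with single-cause
# confounding C_i -> A_i, the joint effect decomposes as
#   E[Y | do(a1, a2)] = E[Y | do(a1)] + E[Y | do(a2)] - E[Y].
# f1, f2 and all distributions here are hypothetical choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a1, a2 = 1.0, 0.5
f1, f2 = np.sin, np.square

def sample(do1=None, do2=None):
    """Return the Monte-Carlo mean of Y under the given interventions."""
    c1, c2 = rng.normal(size=n), rng.normal(size=n)
    A1 = np.full(n, do1) if do1 is not None else c1 + rng.normal(size=n)
    A2 = np.full(n, do2) if do2 is not None else c2 + rng.normal(size=n)
    return (f1(A1) + f2(A2) + c1 + c2).mean()

mu_obs   = sample()
mu_do1   = sample(do1=a1)
mu_do2   = sample(do2=a2)
mu_joint = sample(do1=a1, do2=a2)

# Joint effect recovered from single-variable regimes + observational data.
assert abs(mu_joint - (mu_do1 + mu_do2 - mu_obs)) < 0.08
```

The confounders never need to be observed here: additivity alone lets the unseen joint regime be pieced together from the single-intervention regimes, which is the intuition behind the paper's identifiability claim.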
## update after rebuttal
I thank the authors for their reply and look forward to seeing the extended benchmarks should the paper be accepted. I will maintain my score of 4.
Claims And Evidence: The paper is primarily theoretical, with the joint and mixed identifiability results taking center stage. The proof sketch helps walk the reader through the authors' reasoning on this point. The experiments are a little perfunctory (more on this below), but sufficient to illustrate the principle.
Methods And Evaluation Criteria: The data generating process for the experiments makes good sense, but the alternative estimators seem a little naive. Several intervention generalization papers were mentioned in the Related Work section, and I was expecting to see these return in the benchmarks. (I'm thinking specifically of Bravo-Hermsdorff et al., Jung et al., and Saengkyongam & Silva.) Are these methods inapplicable in this setting? If so, why? Caroline Uhler's lab has also done considerable work in this area but I don't believe any of those methods are cited (see below).
- https://arxiv.org/abs/2405.19225
- https://proceedings.mlr.press/v236/ribot24a.html
Theoretical Claims: Theoretical results are exceptionally clear and well presented.
Experimental Designs Or Analyses: As noted above, the experimental DGP is sound but comparisons are a little thin.
Supplementary Material: Yes, the proofs appear sound.
Relation To Broader Scientific Literature: This work makes a meaningful contribution to the intervention generalization literature, which has numerous applications in science and industry. One recommendation to help improve the manuscript – there are several potential use cases cited in the introduction, but these are basically never mentioned again after the first paragraph. Readers may benefit from a running example that grounds us as we move through theoretical and experimental results.
Essential References Not Discussed: See the aforementioned works from Caroline Uhler's lab. These may not be identical to the problem setting here, but they're certainly in the neighborhood.
Other Strengths And Weaknesses: The paper is clear and convincing. I think with just a couple of minor amendments – namely, grounding the discussion with a running example and extending the benchmarks to include alternative intervention generalization techniques – it could realize its full potential.
Other Comments Or Suggestions: N/A
Questions For Authors: See comments above.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer 2XuD
We thank the reviewer for the valuable comments and the pointers to Caroline Uhler’s lab related research. We would like to address some of the reviewer’s concerns.
On the baselines: we agree with the reviewer that using the purely observational data or the pooled data might not be a fair comparison due to their simplicity. However, using joint interventional data is the closest one gets to an "oracle" version, since then the problem reduces to estimating a conditional function $P(Y \mid do(A))$. Moreover, we believe it would be unfair to use the methods of Bravo-Hermsdorff et al., Jung et al. and Saengkyongam & Silva as benchmarks, as their assumptions usually do not match ours.
In particular, we cannot apply Saengkyongam & Silva's method because it requires the noise to be Gaussian (indeed, our method is also applicable to discrete variables). Likewise, our setting with the additive outcome mechanism is not identifiable in the interventional factor graph model of Bravo-Hermsdorff, since the additive structure of the causal mechanism does not translate to an extra factorization of the joint probability in the different interventional settings.
About Caroline Uhler’s lab related research: We thank the reviewer for the pointers. These are indeed quite related to our work.
- In particular, Ribot et al. is also interested in causal effect estimation in novel scenarios when some action-context pairs have been observed, which in the language of our paper would amount to estimating joint causal effects from observing only certain subsets of the interventions. Likewise, their identification strategy relies on making a linearity assumption, which shows that a latent factor model is induced.
- On the other hand, Bijan et al. cluster variables with respect to interventions that come from a mixture distribution and then use the clustering to produce Synthetic Potential Outcomes to compute a previously unseen intervention. Their setting differs from ours in that we have several possible actions as opposed to a single treatment, and we make an assumption on the functional form of the SCM in order to make the joint interventions identifiable.
We will add an extended comparison in the camera-ready version of the paper.
On the running example: We believe that having a running example is a good idea. However, there are two reasons why we don’t include it. First, lack of space in the initial submission, and second, including a running example on the camera-ready version would potentially require large changes to the paper, which would be challenging to do given the time constraints and difficult to review.
We appreciate the time and effort you have invested in providing constructive feedback, which will help us improve our paper. | Summary: This paper presents a novel construction for identifiability of the joint interventional effect from single-variable intervention results, with additivity assumptions placed on the causal mechanism. The method is tested on synthetic data and demonstrates effectiveness and on-par performance when compared with a model trained on joint interventional data.
Claims And Evidence: The claims are in general correct with enough evidence.
Methods And Evaluation Criteria: The proposed methods are correct and evaluation seems to be reasonable.
Theoretical Claims: Yes. the identifiable results seems to be correct.
Experimental Designs Or Analyses: The experimental design and analysis seem to be fine, although (1) a semi-synthetic experiment could be used to better represent the real-world effect, (2) better motivation is needed for Assumption 2 (Additive Outcome Mechanism), and (3) a discussion should be provided for cases where Assumption 2 is violated (e.g., feedback loops or no direct ordering in the DAG, as in "Sachs, K., Perez, O., Pe'er, D., Lauffenburger, D.A. and Nolan, G.P., 2005. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721), pp.523-529."). See more in the question section.
Supplementary Material: I have checked the code.
Relation To Broader Scientific Literature: This is an interesting and frontier question in the causal inference field: learning the joint effect from single causal effects. The key ingredient of this paper is the additive causal mechanism assumption, which is interesting but also quite strong and restrictive for real-world scenarios.
Essential References Not Discussed: I think most of the relevant literature is discussed in the paper.
Other Strengths And Weaknesses: The paper is well-motivated and provides clear contributions with experimental validation, and the theory is in general correct, but I am a bit less convinced of its impact on real-world scenarios, as (1) the additive assumption is nothing new and not very novel (one could claim it is novel as it is placed on the mechanism or functional form instead of the parameters, but still this is not a major novel contribution); (2) the limitation of having confounder $C_{i}$ point to $X_{i}$ only seems restrictive and might not be applicable in the real world (what if $C_{i}$ points to both $X_{i}$ and $X_{j}$?); please refer to the comments in the "**Experimental Designs Or Analyses**" section; (3) the experimental results are quite weak, as they use purely synthetic data without constraints on its parameters (100 different SCMs helps, but this is still quite weak). My suggestions are to a) learn parameters from real-world data and b) discuss when the identifiability results do not apply and conduct experiments under such conditions, to give us a better understanding of how applicable the results are in more realistic settings.
Other Comments Or Suggestions: Please see my comments earlier.
## update after rebuttal
I thank the authors for the detailed and thoughtful responses. While the clarifications are helpful and address some of my concerns, I remain unconvinced about the real-world impact (in terms of application) of this research and empirical strength of the paper (with synthetic data only) under the current assumptions. Therefore, my score remains unchanged.
Questions For Authors: I still find the justification weak for why the authors focus on parametric identification in this case. The applicability of this method to real-world problems can be completely undermined if the data-generating process does not follow this constrained form.
Could you please answer some of my concerns in the comments above?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your thoughtful comments. We appreciate your feedback and would like to address your concerns below.
## On the need for parametric identification
From g-ID theory [1], we know that our problem setting is non-identifiable in the non-parametric case (i.e., only considering the causal structure in Fig. 1). I.e., given single-interventional and observational data, the joint-interventional effect cannot be uniquely determined in general. The causal structure, together with the intervention sets, forms what's known as a "thicket" (Def. 6 in [1]), which precludes identifiability. We believe that such multi-cause settings are interesting across many domains (see citations in Introduction). Unfortunately, these settings are limited by non-identifiability in the intervention generalization setting without additional assumptions.
## On the novelty of the additivity assumption
We don't claim the additivity assumption itself is novel. There is indeed a rich literature on additivity in statistics [2,3]. In causal inference, additivity of exogenous noise has been quite common. However, functional additivity, where the function is composed of a sum of nonlinear functions, to the best of our knowledge, has not been explored in causal inference. In particular, it has not been investigated as a parametric assumption in the intervention generalization setting.
This function class is particularly interesting here since it limits the complexity of interaction between actions in the outcome mechanism. This effectively prevents two distinct causal models from agreeing across single-interventional regimes while differing on the joint-interventional one.
## On the limitations of Assumption 2
We agree that Ass. 2 limits the generality of our results. However, as discussed above, the problem is hard and not solvable in the most general case. For many if not most real-world settings with confounding between outcome and actions, intervention generalization is impossible without additional assumptions.
The key question becomes: "What parametric and structural assumptions do we have to make to achieve identifiability in the intervention generalization setting?" In our work, we have shown that under our set of assumptions using the additive outcome mechanism, intervention generalization is possible, and we have provided a practical estimator for it.
## On the practical applicability
While intervention generalization may not be achievable in all real-world cases, the benefits for the cases where it is applicable are immense. We can reduce the required experimental conditions from growing exponentially with the number of actions to a linear regime. This makes a significant difference in many real-world settings, where obtaining experimental data is often very hard or expensive.
Additionally, Cor. 1 provides a way to trade off assumptions on additivity for additional experimental data. If for practical applications it is unclear whether a subset of actions contributes additively, we can choose to drop the assumption on this subset. This then comes at the cost of having to collect joint interventional data for this subset. We think that allowing for this trade-off between additivity and joint interventional data makes our method applicable to a broader range of real-world cases than full additivity would allow.
## On future directions
Our set of assumptions may not be the most parsimonious, and we hope that future work will find more general solutions. One motivation for choosing the additive model class was its relation to more general model classes, namely Generalized Additive Models (GAMs) [4], which in turn are related to the general functions of the Kolmogorov-Arnold Theorem [5]. We hope that our work can serve as a stepping stone toward proving identifiability results for the GAM case.
## On extensions to more general confounding between actions
If we add confounding between actions, Lem. 1 would not hold. However, we believe this is not a complete blocker for the problem. Our identifiability proof is agnostic to the causal structure between the actions (assuming it has a DAG structure). We suppose that addressing more general action confounding would require modeling the dependencies between the actions as well as the outcome mechanism to achieve intervention generalization. We hope to make progress in this direction in future work.
## References
[1] Lee, S. et al. "General identifiability with arbitrary surrogate experiments." 2020.
[2] Friedman, J. H. et al. "Projection pursuit regression." 1981.
[3] Breiman, L. et al. "Estimating optimal transformations for multiple regression and correlation." 1985.
[4] Hastie, T. J. et al. "Generalized Additive Models." 1990.
[5] Kolmogorov, A.N. "On the representation of continuous functions of several variables as superpositions of continuous functions of a smaller number of variables." 1956. | Summary: The paper proposes a method for estimating joint interventional effects using observational data and single-variable interventions within nonlinear additive models. It establishes identifiability results, showing that joint effects are recoverable without direct joint interventional data. The authors validate their approach empirically using synthetic data. However, there are some concerns about the novelty of the proposed method.
Claims And Evidence: The authors convincingly support their claims regarding identifiability and practical effectiveness through clear theoretical results and robust empirical evidence. No significant problematic claims are evident, as each theoretical assertion is carefully supported by rigorous mathematical justification and experiments.
Methods And Evaluation Criteria: The method makes sense, but I think the assumptions and interventional data are not necessary.
Theoretical Claims: I did not check the proofs, but I believe they are correct, because it seems that the results have been verified by previous studies.
Experimental Designs Or Analyses: The experimental design utilizing synthetic structural causal models (SCMs) is valid and well-executed. The analyses, particularly comparisons with relevant baselines, including observational-only and joint-interventional estimators, are sound and provide strong validation of the proposed approach.
Supplementary Material: No.
Relation To Broader Scientific Literature: The problem is vital in the scientific literature.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: I apologize for my oversight in missing the statement that $C$ represents a set of unobserved confounders. I have updated my score accordingly. I suggest that the authors consider using different formats in Fig. 1 to clearly distinguish between latent variables and observable variables.
#----------------------------------#
I think Assumption 1 is sufficient for identifying the joint interventional effect with only observational data in the setting of the paper. According to Def. 5 and Thm. 3 of [1], the joint interventional effect is identifiable by covariate adjustment with adjustment set being {$C_1,\cdots,C_k$}. The reason that the two models in Example 1 are with different causal effects is the violation of Assumption 1. Hence, there is no novelty in the current method, which additionally uses interventional data and Assumption 2. I am not sure that my comments are completely correct. If the authors could correct me, I am happy to increase my score.
[1] On the validity of covariate adjustment for estimating causal effects. UAI2010
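For concreteness, the covariate-adjustment identity this comment appeals to (Def. 5 and Thm. 3 of [1]) would take the following form; note that it is only usable when the adjustment set is observed:

```latex
\mathbb{E}\!\left[Y \mid do(a_1,\dots,a_K)\right]
  = \sum_{c_1,\dots,c_K}
    \mathbb{E}\!\left[Y \mid a_1,\dots,a_K,\, c_1,\dots,c_K\right]
    P(c_1,\dots,c_K),
\quad \text{with adjustment set } \{C_1,\dots,C_K\}.
```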
Other Comments Or Suggestions: Fig.1 is too big.
Questions For Authors: See Other Strengths And Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer 9nyV
Thank you for your thoughtful review and for engaging with our work. We appreciate your careful reading and feedback.
We would like to address your primary concern regarding the novelty of our approach and the necessity of our assumptions:
You mentioned that Assumption 1 might be sufficient for identifying the joint interventional effect using only observational data, referring to Definition 5 and Theorem 3 in "On the validity of covariate adjustment for estimating causal effects" (UAI 2010).
**Critical clarification:** In our problem setting, the confounders ${C_1, \ldots, C_K}$ are unobserved. This is a fundamental aspect of our work, explicitly stated in Section 2.2 where we define $C = {C_1, \ldots, C_K}$ as "a set of unobserved confounders."
The covariate adjustment approach you reference would indeed be applicable if these confounders were observed. However, when confounders are unobserved (as in our setting), covariate adjustment is not possible, and identification becomes significantly more challenging. This is precisely why we need both Assumption 1 (Intervention Support) and Assumption 2 (Additive Outcome Mechanism), along with single-variable interventional data, to achieve identification.
We believe this clarification addresses your main concern about the novelty and necessity of our approach. Thank you again for your review, and we hope this clarifies the contribution of our work.
---
Rebuttal Comment 1.1:
Comment: I apologize for my oversight in missing the statement that $C$ represents a set of unobserved confounders. I have updated my score accordingly. I suggest that the authors consider using different formats in Fig. 1 to clearly distinguish between latent variables and observable variables.
## Update after rebuttal:
I thank the authors for their response. While the theoretical results are well presented, the real-world impact is not entirely clear and the untestable assumptions require strong justification. I am maintaining my score of 3.
Claims And Evidence: The paper clearly establishes identifiability under the specified assumptions. Additionally, the assumptions required for identifiability are transparently stated and reasonably justified. Empirically, the authors convincingly demonstrate that their proposed estimator performs close to a model trained directly on joint interventional data as sample size increases. And it significantly outperforms baselines such as observational-only estimation or naive pooling of single-variable interventions.
Methods And Evaluation Criteria: 1. The proposed method relies on strong assumptions, particularly an additive outcome mechanism and single-cause (pair-wise) confounding, where each confounder influences exactly one treatment variable and the outcome. Such strict assumptions limit the generalizability and practical applicability of the method, as real-world scenarios often involve shared confounding across multiple treatments and complex outcome mechanism.
2. The empirical evaluation is limited to relatively simple synthetic experiments, which may not adequately capture the complexities of real-world causal structures or realistic data generation processes. Investigations into robustness against violations of key assumptions can also clarify the practical utility and limitations of the proposed approach.
Theoretical Claims: Look fine to me.
Experimental Designs Or Analyses: See above.
Supplementary Material: None.
Relation To Broader Scientific Literature: The paper positions itself at the intersection of causal identifiability theory, intervention generalization, and additive statistical modeling, combining insights from these areas to propose a novel method for estimating joint interventional effects from single-variable interventions.
Essential References Not Discussed: This work is related to multi-treatment causal inference literature. For example:
- [1] Miao et al. (2023) Identifying effects of multiple treatments in the presence of unmeasured confounding
- [2] Wang and Blei (2020) The blessings of multiple causes
- [3] Zheng et al. (2021) Copula-based Sensitivity Analysis for Multi-Treatment Causal Inference with Unobserved Confounding
Other Strengths And Weaknesses: Overall, this paper is well written and clearly structured.
Other Comments Or Suggestions: It seems Equation (2) is not compatible with the approach and graph.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer dotZ
Thank you for your thoughtful review and constructive feedback on our paper. We appreciate the time you've invested and would like to address your key concerns.
## On the strong assumptions of our method
We acknowledge that our model makes strong assumptions, particularly regarding the additive outcome mechanism and the single-cause confounding structure. These assumptions indeed limit the generalizability to certain real-world scenarios where shared confounding or more complex outcome mechanisms might be present.
From general identifiability theory [1,2], we know that the intervention generalization problem is non-identifiable in the non-parametric case (considering only the causal structure). Without additional assumptions, given single-interventional and observational data, the joint-interventional effect cannot be uniquely determined in general.
The key question becomes: "What minimal set of assumptions enables identification in this setting?" Our work proposes one such set of assumptions that makes the problem identifiable. We view this as a starting point rather than a complete solution to all real-world scenarios with confounding.
Importantly, our Corollary 1 provides a way to trade-off assumptions on additivity for additional experimental data. If for practical applications it is unclear whether a subset of actions contributes additively, we can choose not to make that assumption on this subset. This then comes at the cost of having to collect joint interventional data for this subset. We think that allowing for this trade-off between additivity and joint interventional data makes our method applicable to a broader range of real-world cases than full additivity would allow.
In many domains where additive models are already commonly used (e.g., marketing analytics [3], certain pharmacological interactions [4]), our approach could provide practical value despite its limitations. We hope future work will find more general solutions with weaker assumptions that cover a broader range of real-world scenarios. (See also our response to Reviewer tbRD.)
## On the multi-treatment causal inference literature
You make an excellent point about the connection to multi-treatment causal inference literature. We thank you for suggesting these important references. Our work is indeed related to this literature, and we will add a discussion of these connections in our Related Works section. The papers you suggested (Miao et al., 2023; Wang and Blei, 2020; Zheng et al., 2021) provide valuable context for our work and would strengthen our paper's positioning within the broader causal inference literature.
## On the limitations of synthetic experiments
Our main goal in the experimental section was to show that, beyond the identifiability result for the problem setting, the joint-interventional effect is practically estimable from data with our proposed estimator.
We agree that our empirical evaluation using synthetic data has limitations in representing the full complexity of real-world causal structures. While we've tried to incorporate some level of complexity through randomly generated SCMs with varying parameters, this approach cannot fully capture all real-world nuances.
Thank you again for highlighting these important limitations. Your feedback will help us improve our research and better understand the practical applicability of our method.
## References
[1] Lee, S., Correa, J. D., and Bareinboim, E. "General identifiability with arbitrary surrogate experiments." Uncertainty in artificial intelligence. PMLR, 2020.
[2] Kivva, Y., et al. "Revisiting the general identifiability problem." Uncertainty in Artificial Intelligence. PMLR, 2022.
[3] Chan, D., & Perry, M. "Challenges and opportunities in media mix modeling." Google Inc, 16, 2017.
[4] Pearson, R. A., Wicha, S. G., & Okour, M. "Drug combination modeling: methods and applications in drug development." The Journal of Clinical Pharmacology, 63(2), 151-165, 2023.
---
Rebuttal Comment 1.1:
Comment: I'm not very concerned about the additivity assumption in this setting, since there is no free lunch. But I'm more interested in what happens if there is shared confounding, which is the main motivation behind many multi-treatment (as mentioned above) and multi-outcome approaches like [1]. Can Corollary 1 be generalized to confounding shared across interventions within the same subset? If not, real-world examples or more justification of the single-cause confounding assumption could help strengthen the paper's practical applicability.
[1] Zheng, J., et al. "Sensitivity to unobserved confounding in studies with factor-structured outcomes." Journal of the American Statistical Association 119.547 (2024): 2026-2037.
---
Reply to Comment 1.1.1:
Comment: # Response to Reviewer dotZ's Follow-up Comment
Thank you for raising this important point about shared confounding across interventions, which we should have explicitly mentioned in our paper.
You're absolutely correct, our framework actually allows for shared confounding within action variables in a given partition (Definition 2). Specifically, Corollary 1 allows us to trade off assumptions in two ways:
1. We can relax the additivity assumption for certain subsets of action variables in the outcome mechanism (9) by collecting joint interventional data on those subsets.
2. Similarly, we can allow for shared confounding among action variables within the same partition subset, provided we collect joint interventional data on that subset.
We will clarify these points in Section 6.3 of our revised manuscript.
However, unobserved confounding between different partition subsets would not be covered by our approach, as Lemma 1 would no longer hold. If we introduce confounding between actions across different partitions, the conditional distributions of confounders would change across interventional settings, breaking the decomposition in our proof.
Addressing more general confounding structures between actions would likely require explicitly modeling the dependencies between the actions alongside the outcome mechanism. While our current proof is agnostic to the causal structure between actions (assuming it has a DAG structure), extending to shared confounding across partitions remains an important direction for future work.
Thank you again for this insightful feedback that helps strengthen our paper. | null | null | null | null | null | null |
Discovering Physics Laws of Dynamical Systems via Invariant Function Learning | Accept (poster) | Summary: This paper proposes a method to learn ODE-based dynamical systems from observed sequences. The main feature of the proposed method, which sets it apart from prior work, is that it can learn invariant functions that can be reused across different environments, effectively disentangling general, reusable functions from environment-specific functions. This is motivated by the general goal of automating scientific discovery from observations. The authors propose to achieve this disentanglement by learning a function embedding which is maximally informative about the observed sequence while remaining independent of which specific environment the sequence is from. The paper offers a theoretical account of why this approach works and empirically verifies these claims on three common dynamical systems that are modified to the multi-environment setting.
Claims And Evidence: Broadly speaking, the main claims of the paper are that 1) the proposed method can learn invariant functions and that 2) learning invariant functions leads to better prediction performances.
In the experiments, 1 is demonstrated through qualitative examples where decoding f_c gives reasonable trajectories that correspond well to the underlying invariant function. For claim 2, the authors compare against invariant learning baselines and meta-learning baselines. In all cases, the proposed method seems to outperform the baselines across all datasets.
Many extra results are included in the appendix which further corroborate their claims.
Methods And Evaluation Criteria: The problem setting considered in this paper is somewhat novel in that it requires similar ODEs from different environments (i.e., with different functional forms). Prior works mainly consider environments where only the coefficients change across trajectories. As such, this paper modifies common dynamical systems so that they can have different functional forms across environments by adding extra terms such as damping. I believe this is sufficient for a proof of concept, although, as acknowledged by the authors in the limitations section, a more comprehensive benchmark for these environments would be helpful in evaluating the full potential of the approach.
I have one question regarding the choice of datasets: it seems that across function environments, the changes are all additive (i.e., adding extra forcing terms to the ODE). Would we expect the method to still work if the changes are more nonlinear? For example, changing f(theta) = sin(theta) to f(theta) = sin(theta)^2.
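The distinction raised in the question above can be made concrete with a toy pendulum simulation (a hypothetical sketch, not the paper's benchmark): an additive environment change appends a term (e.g., damping) to a shared restoring force, while a nonlinear change such as sin(theta) to sin(theta)^2 replaces that shared term altogether.

```python
# Toy illustration of additive vs. nonlinear environment changes for a
# pendulum theta'' = -force(theta) - damping * theta'. All parameter values
# are illustrative assumptions.
import numpy as np

def simulate(force, damping=0.0, theta0=1.0, omega0=0.0, dt=1e-3, steps=5000):
    """Integrate the pendulum with semi-implicit Euler; return theta trajectory."""
    theta, omega = theta0, omega0
    traj = np.empty(steps)
    for t in range(steps):
        omega += dt * (-force(theta) - damping * omega)
        theta += dt * omega
        traj[t] = theta
    return traj

base     = simulate(np.sin)                      # shared invariant dynamics
additive = simulate(np.sin, damping=0.4)         # environment adds a damping term
nonlin   = simulate(lambda th: np.sin(th) ** 2)  # environment rewrites f itself

# The additive environment still contains the sin(theta) term of the base
# system; the nonlinear environment shares no such common additive term.
```

In the additive case, sin(theta) is still a separable, reusable component of every environment's ODE; in the nonlinear case, no environment-invariant term of that form exists to be split off, which is exactly what the question probes.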
Theoretical Claims: I have checked the main theoretical claims (3.1, 3.2 and 3.3) and I am happy with their soundness.
Experimental Designs Or Analyses: My main question regarding the experiments is about the baselines. It is unclear from the text how the baselines are implemented. While the proposed baselines are representative of the respective fields, it is not clear to me how these methods are adapted to the problem setting at hand: what are the architectures of the baselines? what are the input and outputs of these methods (do they also take in X_p and output z)? In the baselines, do you still have separate embeddings z_c and z_e?
Relatedly, in Fig. 5b, how is X^c decoded for the baselines?
So far, the experimental setup seems sensible and qualitatively, the results for the approach look promising. But I think I can only assess the analyses and the results when the questions about the baselines are clarified.
Supplementary Material: I have read through the supplementary material which provides extra experimental results that complements the main results. The FAQ section is particularly helpful in clarifying some of my initial questions.
Relation To Broader Scientific Literature: This paper brings concepts from invariant learning to the field of learning dynamical systems. I agree with the authors that applying the idea of invariances in this setting is non-trivial as it requires defining invariances in the function space. I believe the idea of learning invariant functions from observations is highly relevant to the dynamical system community, where the goal is not only to make forecast, but also to derive insight from the observed sequences. Here, the ability to learning meaningful and reusable functions from observations is particularly important.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: The work offers an original framework for learning reusable functions which is an important step towards ML methods that can discover generalisable and interpretable physics laws. The core concept is generally applicable and easy to understand.
The main weakness for me is that lack of clarity in terms of the experiment setup w.r.t. the baselines, which I think can be improved straight-forwardly. I would be happy to increase my score if this is clarified.
Other Comments Or Suggestions: One minor comment: at the start of Section 5, the authors list their six research questions. I think the only 'real' research questions that this paper is addressing are 1 and 2, whereas the other ones are more 'additional evidence'. Maybe it is worth rewriting that to make the contribution of the paper more concise.
Questions For Authors: See experimental design section and evaluation section.
1. What would happen if changes across function environments are more drastic and non-linear?
2. Please clarify the baselines.
-------------
post rebuttal:
After the authors' clarification, I have raised my score to 4: accept.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback. We will address them point by point below.
## Drastic and non-linear changes across function environments
> What would happen if changes across function environments were more drastic and non-linear? For example, changing f(theta) = sin(theta) to f(theta) = sin(theta)^2.
Thank you for this insightful question. In short, drastic changes are actually good for invariance discovery, but non-linear changes are much more difficult to analyze.
**Drastic changes**: According to our initial experiments, larger environmental differences will lead to better results. The experimental results shown in the paper indeed come with decent environment differences. For example, in the "powered" pendulum environment, the mass is flung over the top in many trajectories (Figure 7: upper left one).
**Non-linear changes**: In most optimization problems, extending from linear to non-linear is challenging. For example, IRM, the pioneering work in invariant learning, can only prove that the invariance principle works in the linear scenario and for one SCM. Therefore, we don't expect our method to work *directly* for all non-linear scenarios. We provide analyses and open discussion around it and hope you find it interesting and useful.
- First, we consider the theoretical basis, the SCM in Appendix C.1. In the SCM, $f:=g\_{comp}(f\_c, f\_e)$: while $f\_e$ can differ across $e$, $g\_{comp}$ is shared. That means, if we consider $f(\theta) = \sin(\theta)^2 = \sin(\theta) \cdot \sin(\theta)$ as $f:=f\_c \cdot f\_e$, then the SCM remains valid as long as all the environments obey $f:=f\_c \cdot f\_e$. If not all environments share the same function composition process, extended SCMs or an extended $g\_{comp}$ would need to be designed.
- Second, if the SCM remains valid, the next challenge is the reverse decomposition (red dashed arrows in Figure 6). In the current hyper-network implementation, given that z_c denotes sin(theta) and z_e denotes sin(theta), letting z_c + z_e denote sin(theta)^2 in the function representation space is a learning challenge, which may require much deeper networks or extra techniques. For example, we may choose to learn in the space of the derivative of sin(theta)^2 to alleviate the learning challenge.
- Finally, in the real world, many effects are composed additively or can be transformed to be composed additively like forces, so we believe our method is applicable in many cases.
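To make the additive-composition point above concrete, here is a minimal sketch of a pendulum-style multi-environment setting (all constants and environment definitions are hypothetical illustrations, not taken from the paper): the invariant term f_c is shared across environments, and each environment contributes its own additive term f_e.

```python
import math

def simulate(theta0, omega0, f_e, dt=0.01, steps=500):
    """Euler-integrate d(omega)/dt = f_c(theta) + f_e(theta, omega)."""
    g_over_L = 9.81  # hypothetical invariant constant (gravity / length)
    theta, omega = theta0, omega0
    traj = []
    for _ in range(steps):
        # Additive composition: invariant term plus environment-specific term.
        accel = -g_over_L * math.sin(theta) + f_e(theta, omega)
        omega += accel * dt
        theta += omega * dt
        traj.append(theta)
    return traj

# Environment-specific additive terms (hypothetical values):
frictionless = lambda th, om: 0.0
damped       = lambda th, om: -0.4 * om         # friction environment
powered      = lambda th, om: -0.4 * om + 2.0   # constant-torque environment

trajs = {name: simulate(0.5, 0.0, fe)
         for name, fe in [("frictionless", frictionless),
                          ("damped", damped),
                          ("powered", powered)]}
```

All three trajectories share the same -g/L·sin(theta) mechanism; only the additive f_e differs, which is exactly the structure under which the rebuttal argues the SCM remains valid.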
## Baseline implementation details
> Please clarify the baselines. It is unclear how the baselines are implemented: what are the architectures, what are the inputs and outputs, and do they also have separate embeddings z_c and z_e?
We appreciate this important point about baseline clarification. In the revised paper, we will provide a detailed explanation of the baseline implementations:
Basically, for fair comparison, we tried our best to make the baselines as similar to our architecture as possible, so that all the performance gains come from our IFL principle.
**Architecture**
To better distinguish the baselines, let's denote our DIF framework without $\hat{e}$ predictions (Figure 3, with the $g_\phi$ MLP and $\hat{e}$ branch removed) as DIF-Base.
- **MAML**: Only the Forecaster from DIF-Base (with MAML optimization), since MAML does meta-learning at the optimization level.
- **CoDA**: DIF-Base with the dimension of $\hat{z}_e$ set to 2 (the original paper's setting).
- **IRM**: DIF-Base. The IRM regularization is applied at the loss level, so there is no architecture change.
- **VREx**: DIF-Base. The VREx regularization is also applied at the loss level.
**Inputs and Outputs**:
- **MAML** takes X_p and outputs the forecasting trajectory directly.
- **CoDA, IRM, VREx** take X_p as input, output the function representations $\hat{z}_c$ and $\hat{z}_e$, and then output the forecasting trajectory, the same as DIF.
**Training**:
- **MAML**: MAML optimization on the Forecaster.
- **CoDA, IRM, VREx**: the same as DIF.
**Inference**: (**Decoding X^c for the baselines**)
- **MAML**: use the Forecaster with meta-parameters.
- **CoDA, IRM, VREx**: the same as DIF: use the f_c branch.
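The shared backbone described above can be summarized in a short schematic sketch (a toy stand-in, not the authors' implementation; the dimensions and the plain-Python "layers" are hypothetical substitutes for the real MLPs/hypernetwork):

```python
import random

random.seed(0)

def linear(in_dim, out_dim):
    """Toy dense layer standing in for the real encoder / hypernetwork MLPs."""
    W = [[random.gauss(0.0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]
    return lambda x: [sum(w * xi for w, xi in zip(row, x)) for row in W]

class DIFBase:
    """Schematic shared backbone: X_p -> (z_c, z_e) -> forecast.

    Per the rebuttal: CoDA would set dim(z_e)=2, MAML would keep only the
    forecaster branch, and IRM/VREx would only change the training loss.
    """
    def __init__(self, obs_dim=8, zc_dim=4, ze_dim=4):
        self.ze_dim = ze_dim
        self.enc_c = linear(obs_dim, zc_dim)   # invariant branch (z_c)
        self.enc_e = linear(obs_dim, ze_dim)   # environment branch (z_e)
        self.forecaster = linear(zc_dim + ze_dim, obs_dim)

    def forward(self, x_p):
        z_c, z_e = self.enc_c(x_p), self.enc_e(x_p)
        return self.forecaster(z_c + z_e)      # '+' concatenates the lists

    def decode_invariant(self, x_p):
        # "f_c branch" at inference: drop the environment-specific part.
        return self.forecaster(self.enc_c(x_p) + [0.0] * self.ze_dim)

model = DIFBase()
x_p = [0.1, -0.2, 0.3, 0.0, 0.5, -0.1, 0.2, 0.4]  # a toy partial trajectory
```

The point of the sketch is only structural: every baseline except MAML sees the same X_p -> (z_c, z_e) -> forecast pipeline, and decoding X^c amounts to running the forecaster with the environment branch zeroed out.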
## Research Questions
> At the start of section 5, the authors list their 6 research questions. I think the only 'real' research questions that this paper is addressing is 1 and 2, whereas the other ones are more 'additional evidence'. Maybe it is worth rewriting that to make the contribution of the paper more concise.
We agree with this valuable suggestion. In the revised paper, we will restructure Section 5 to emphasize Research Questions 1 and 2 as the primary contributions of our work.
We will reduce the remaining questions (3-6) to "supplementary analyses" that provide additional evidence for our claims rather than core research questions.
We thank the reviewer for the constructive feedback and hope the questions are addressed. We will incorporate all these improvements in the revised version of the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarification.
I believe including some of the discussion here, namely the effect of more diverse changes, the role of linearity, and the baseline setup, into the main text would strengthen the paper. As I mentioned in my review, I believe the paper addresses an interesting problem and I am happy to vouch for its acceptance. | Summary: This paper proposes a method for discovering invariant functions underlying dynamical systems governed by Ordinary Differential Equations (ODEs). The key claim is that different environments modify the observed system, but an invariant function can be disentangled and recovered to represent the core governing dynamics. The authors introduce a framework called Disentanglement of Invariant Functions (DIF), which uses an encoder-decoder hypernetwork to separate invariant functions from environment-specific dynamics. The paper also presents a causal graph-based formulation to formalize the problem and proposes an information-theoretic principle to enforce invariance.
To validate their approach, the authors conduct experiments on three multi-environment ODE datasets, comparing DIF against baseline methods such as meta-learning approaches and traditional invariant learning techniques. The results reportedly demonstrate that DIF is more effective in recovering the underlying physical laws across environments and performs well in symbolic regression tasks.
Claims And Evidence: I am doubtful towards the main claims of this paper.
* **Claim: An invariant function exists that captures the common dynamics across environments.**
* The definition of an invariant function is vague. At first glance, it represents the "common parts" of different governing equations, but the paper does not specify what constitutes "common." As an analogy, what is considered the invariant function for $f_1(x) = x,\ f_2(x) = x+1,\ f_3(x)=x+2$? Should it be $x$, due to its simplicity, or $x+1$ because it is the average of all functions? Anyway, how does one rigorously define this commonality beyond intuition? The provided examples, such as variations of a pendulum's equation, seem to assume that a clear invariant function exists, but this assumption is not justified theoretically. If environments are too diverse, there may be no meaningful invariant function.
* **Claim: The method can successfully identify the invariant function across environments.**
* There is no strong theoretical justification for why the invariant function is identifiable in all cases. If the environments are drastically different, could it be possible that the method learns an arbitrary function rather than the true underlying invariant one? Moreover, since there seems to be no clear definition of the "true" invariant function, as pointed out earlier, how do we know the learned invariant function is correct?
Methods And Evaluation Criteria: * The paper constructs multi-environment ODE datasets to evaluate the method. The datasets are reasonable choices for testing the framework since they include variations of well-known physical systems.
* The performance is measured in terms of error in predicting future states for both the observed trajectories and the invariant trajectories. These metrics align with the goal of recovering governing equations, but in terms of verifying the "correctness" of the invariant functions, the authors assume the knowledge of a "ground truth" invariant function. However, as discussed earlier, it is unclear to me why these specific equations are preferred over others as the invariant function.
Theoretical Claims: I do not fully understand some theoretical claims in this paper, Theorem 3.1 (and its proof) in particular. What defines the "true invariant function"? From the statement I understand that we have the solution to an optimization problem, which can be mapped to a function $f_c$. Then what are we trying to prove? Is it some property of this function? Or is it the existence and uniqueness of this solution, as seemingly suggested by the proof? If it is the latter, I don't understand what this theorem is trying to establish.
Experimental Designs Or Analyses: * The experimental results indicate that DIF generally achieves lower error and better symbolic regression outcomes than baselines.
* One key concern is whether the method generalizes well to settings where environmental changes are more drastic. For example, what would happen if you make $\alpha$ smaller and $\rho$ larger in the pendulum experiment? The "powered" environment would possibly break the typical behavior of the pendulum and fling the mass over the top.
Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: * The work is related to previous studies on invariant learning (e.g., Arjovsky et al., 2019; Rosenfeld et al., 2020), but many prior work focuses on categorical settings rather than function spaces. The authors attempt to extend these ideas to dynamical systems, which is a reasonable direction.
* There are abundant works on governing equation discovery for ODE systems, deriving from SINDy (Brunton et al., 2016), which the authors should have discussed.
* Apart from SINDy sparse regression, genetic programming-based methods are also widely applied to this task. The authors did mention related works such as PySR (Cranmer, 2023) in the Appendix, but I feel it should also be included in the related works section in the main text.
### References
* Brunton, Steven L., Joshua L. Proctor, and J. Nathan Kutz. "Discovering governing equations from data by sparse identification of nonlinear dynamical systems." Proceedings of the national academy of sciences 113.15 (2016): 3932-3937.
* Cranmer, Miles. "Interpretable machine learning for science with PySR and SymbolicRegression. jl." arXiv preprint arXiv:2305.01582 (2023).
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: The writing of this paper can be significantly improved. First of all, it is too verbose, and I'd suggest the authors run a grammar check using Grammarly to reduce redundancy. The paper also lacks coherence in places. For example, in Section 2.2, the authors first identify two challenges in L147-161, but I don't understand how the paragraphs describing solutions in Section 2.2.1 correspond to these two challenges.
Then, in terms of math writing, there are many instances of missing definitions, incorrect notations, and confusing statements. Besides the lack of clear definition of an invariant function already mentioned above, other examples include:
* $\mathbf X_0$ is referred to before definition.
* When expressing a function mapping, the common practice is to write $f: X \to Y, x \mapsto f(x)$ instead of $f: X \mapsto Y$.
* In Thm 3.1, why "given the predicted function random variable $\hat f_c$"? It is not used anywhere in the theorem or the proof.
Other Comments Or Suggestions: N/A
Questions For Authors: See previous sections.
## Post-rebuttal
One of my main concerns was the validity of the theoretical claims (Theorem 3.1, in particular) made in the paper. After the discussion with the authors, I believe it is mostly a clarity issue and the claims are mostly valid. Concretely:
* Thm 3.1 is essentially saying that $f_c$ is the unique solution to the optimization problem $\arg\max_{f'} I(f';f | X_0)$ s.t. $f' \perp e$. It makes sense when all the $h$ and $\theta_c$ are removed from the statement and its proof.
* The original statement in Thm 3.1 is misleading and also (slightly) incorrect. While a unique solution $f_c$ always exists, there is no guarantee that the function class of all possible $h$, which depends on the implementation of the hypernetwork, can approximate the true $f_c$. That being said, this is fine in practice. The authors just need to rewrite the theorem and make a comment about this gap between theory and realization.
Overall, I think this paper has some valid contributions, though marred by the lackluster presentation. It could be accepted should the authors improve the writing and fix the clarity issues. I have updated my score to reflect this.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review. We appreciate your feedback and would like to address your concerns point by point.
## Definition and identifiability of invariant function
> The definition of an invariant function is vague...
We thank the reviewer for raising this point.
- **Rigorous definition**: Our invariant function is not defined arbitrarily; it is rigorously grounded in causality and formally specified within our Structural Causal Model (SCM) (see **Section 2.2.1, Figure 2, and Appx. C.1**). Specifically, the invariant function fc is the function generated by the structural equation and exogenous variable c, which represents the core mechanism shared across all environments. This causal definition distinguishes our approach from ad-hoc solutions.
- **Example pitfall**: We respectfully argue that the reviewer’s example oversimplifies the problem by focusing on a single realization; our treatment, however, considers the invariant function as a random variable, a distribution that must be learned, not a specific instance.
- **A more proper example**: If we have $\mathbf{f}_1=\bm{\alpha} x + \bm{\beta}$ and $\mathbf{f}_2=\bm{\alpha} (x + 1) + \bm{\rho}=\bm{\alpha} x + (\bm{\alpha} + \bm{\rho})$, then, since $f_c \perp e$, the decision boundary for the $e$ prediction is determined by the distributional differences between $\bm{\beta}$ and $\bm{\alpha} + \bm{\rho}$, not by the distribution of $\bm{\alpha} x$ (the invariant part). Note that the three functions the reviewer provides can be sampled from both $\mathbf{f}_1$ and $\mathbf{f}_2$, **illustrating the pitfall of analyzing at the instance level**. This example is not perfect; for a rigorous understanding, please refer to our SCM and information-theory-based proofs. Furthermore, we invite the reviewer to refer to **Appendix B (Q2)** and related works in invariant learning for additional insights.
- **Community acknowledgment**: The invariant function is defined rigorously, in a statistically meaningful way, using the SCM. We would like to note that using SCMs to define invariant representations/mechanisms is well-established knowledge in the invariant representation learning community.
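The distribution-level point in the example above can be made concrete with a quick simulation (the Gaussian distributions and their parameters are chosen purely for illustration): the slope $\bm{\alpha}$ is shared across environments, while the intercepts $\bm{\beta}$ and $\bm{\alpha}+\bm{\rho}$ separate them.

```python
import random
import statistics

random.seed(1)

def sample_function(env):
    """Sample (slope, intercept) following the rebuttal's example:
    env 1: f = a*x + b;  env 2: f = a*x + (a + r)."""
    a = random.gauss(1.0, 0.2)            # slope: shared distribution (invariant)
    if env == 1:
        return a, random.gauss(0.0, 0.2)  # intercept b
    return a, a + random.gauss(3.0, 0.2)  # intercept a + r

env1 = [sample_function(1) for _ in range(2000)]
env2 = [sample_function(2) for _ in range(2000)]

slope_gap = abs(statistics.mean(s for s, _ in env1) -
                statistics.mean(s for s, _ in env2))
intercept_gap = abs(statistics.mean(b for _, b in env1) -
                    statistics.mean(b for _, b in env2))
# The slope (invariant component) carries no environment signal,
# while the intercept distributions clearly separate the environments.
```

Any individual sampled line (e.g. f(x) = x + 1) could have come from either environment, which is the instance-level pitfall; only at the distribution level does the invariant/environment split become identifiable.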
> There is no strong theoretical justification for why the invariant function is identifiable in all cases.
- **Identifiability: Sufficiency & necessity**: In Section C.2 of the appendix, we provide a complete proof of the **existence** and **uniqueness** of the solution to our optimization problem. The proof demonstrates that our invariant function learning principle guarantees identifiability under the causal assumptions.
> I do not fully understand some theoretical claims in this paper, Theorem 3.1 (and its proof) in particular.
Theorem 3.1 provides the optimization principle to sufficiently and necessarily discover the function random variable defined in our SCM.
Note that our definition and theoretical proof have been acknowledged by other reviewers:
**Reviewer pHQv** stated:
- "structural causal model drawn by the authors (Figure2) **illustrates the concept of invariant functions very well**"
- "I **read the theoretical proof** in Appendix C, and there is no obvious problem with the derivation process."
**Reviewer cD7J**: remarked on the solid theoretical framework and highlighted the connection to Independent Causal Mechanism, underscoring that invariant learning is a **well-established area**.
**Reviewer T91b** confirmed "I have **checked the main theoretical claims** (3.1, 3.2 and 3.3) and I am happy with their **soundness**."
Due to space limitations, for further context, we recommend reviewing the fundamentals of invariant learning and the role of SCM in invariant learning listed in our essential related works.
## Experimental Design and Generalization
> One key concern is whether the method generalizes well to settings where environmental changes are more drastic. ... The "powered" environment would possibly break the typical behavior of the pendulum and fling the mass over the top.
**Theoretically**: The alterations in parameters (α and ρ) do not undermine the structure of our SCM; hence, our method remains generalizable.
**Empirically**: Our experiments in the "powered" environment **do fling the mass over the top**, and the results shown are still robust, suggesting it **can handle** increasingly drastic changes.
## Writing and Organization
We appreciate this feedback and will improve the writing in our revision. We will:
1. Address redundancy and improve grammar
2. Enhance coherence, particularly in Section 2.2
3. Fix notation issues.
4. Clearly define all variables before use
5. Reorganize the explanation of challenges and solutions for improved flow
## Related Works
Thank you for highlighting these important references. While we did mention PySR and symbolic regression in the Appendix, we agree that SINDy and genetic programming-based methods deserve more attention in the main text. We will expand Section 4 (Related Work) to include them.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I am aware that other reviewers have acknowledged the soundness of the theorems and proofs. However, I did not find any detailed comments from other reviews that clarified or answered my previous questions. So I'd like to ask for additional clarifications from the authors.
* In Thm 3.1, why "given the predicted function random variable $\hat f_c$"? The notation $\hat f_c$ is not used anywhere in the proof.
* The proof of Thm 3.1 only established the existence and uniqueness of the solution to the given optimization problem. It did not justify the statement that "the true invariant function is given by this solution". If I understand correctly, the true invariant function is defined through the SCM in Figure 2. Then why is the $h_{\theta_c^*}$ the true invariant function according to this definition?
I do not have the bandwidth to go over other proofs in detail. But at least I understand what other theorems are trying to establish. Other reviewers have mentioned that some of the assumptions are impractical, e.g. white noise in observed trajectories, but I think it is a good starting point for theoretical analysis. Thm 3.1, however, as the foundational principle of this paper, is missing clear statements and exposition (at best).
---
Reply to Comment 1.1.1:
Comment: Thank you for the reply. We appreciate the additional comments and will provide clarifications to the questions:
> In Thm 3.1, why "given the predicted function random variable $\hat{f}_c$"? The notation $\hat{f}_c$ is not used anywhere in the proof.
Notation $\hat{\mathbf{f}}_c$ is not used in the proof due to its role in the optimization process:
1. $\hat{\mathbf{f}}_c$ is an **optimizable object**. In the proof, we focus on the solution of $\hat{\mathbf{f}}_c$ after the optimization process, leading to **optimized object** $\mathbf{f}_c$ and $\mathbf{f}_c'$ (used in the contradiction). So $\mathbf{f}_c$ and $\mathbf{f}_c'$ in the proof stand for the optimized $\hat{\mathbf{f}}_c$.
2. Similarly, $\theta_c$ (optimizable) is also not used in the proof, but it does not affect that $\hat{\mathbf{f}}_c$ and $\theta_c$ are both essential components in describing the optimization process.
Furthermore, mentioning $\hat{\mathbf{f}}_c$ serves two additional purposes:
1. **Connection to the implementation:** $\hat{\mathbf{f}}_c$ is mentioned in order to connect to Sec. 3.1, where the prediction of $\hat{\mathbf{f}}_c$ is introduced. This connection builds a bridge between the implementation and the optimization.
2. **Clarification of the output of $h_{\theta_c}(\mathbf{X}_p)$**: Emphasizing the "predicted function random variable $\hat{\mathbf{f}}\_c$" reminds readers of our optimization goal, i.e., to extract $\mathbf{f}\_c$ from $\hat{\mathbf{f}}\_c$, and avoids potential confusion that might arise if only $h\_{\theta\_c}(\mathbf{X}\_p)$ were mentioned.
> The proof of Thm 3.1 only established the existence and uniqueness of the solution to the given optimization problem. It did not justify the statement that "the true invariant function is given by this solution". If I understand correctly, the true invariant function is defined through the SCM in Figure 2. Then why is the $h\_{\theta\_c^*}$ the true invariant function according to this definition?
We respectfully argue that the understanding of the existence and uniqueness of the solution is not accurate. The existence and uniqueness are described under the condition that $\mathbf{f}\_c$ $=h\_{\theta\_c^*}(\mathbf{X}\_p)$ (the solution extracts the true invariant function), where the $\mathbf{f}_c$ in the proof is from the definition in the SCM.
- **Existence:** In the first sentence of the existence proof in Appendix C.2, we state "We first prove the existence of a solution $\theta_c^*$ to the optimization problem, **such that $\mathbf{f}\_c$ $=h\_{\theta\_c^\*}(\mathbf{X}\_p)$.**" This existence proof is not trivial, it is a **conditional** existence proof. Therefore, this existence proof describes that "**there exists at least one $\theta\_c^\*$ that extracts the true invariant function**".
- **Uniqueness**: Similarly, the uniqueness proof in Appendix C.2 begins with "We now prove that for any solution $\theta\_c^*$ of the optimization process, it holds that $\mathbf{f}\_c$ $=h\_{\theta\_c^*}(\mathbf{X}\_p)$." This proof is also **conditional** and it guarantees that all solutions to the optimization process yield the same function, i.e., $\mathbf{f}_c$.
Here is a symbolic explanation:
- **Existence:** Denote the solution set as $\Theta\_c^*$, $\exists \theta\_c^* \in \Theta\_c^*$, s.t. $\mathbf{f}\_c$ $=h\_{\theta\_c^*}(\mathbf{X}\_p)$.
- **Uniqueness:** $\forall \theta\_c^* \in \Theta\_c^*$, it follows that $\mathbf{f}\_c$ $=h\_{\theta\_c^*}(\mathbf{X}\_p)$. Given that existence is proved, we only need to prove that all the extracted functions are the same (namely $\mathbf{f}\_c$).
Briefly, the answer to "why $h\_{\theta\_c^*}(\mathbf{X}\_p)$ is the true invariant function according to this definition" is provided by our proof, which shows that $h\_{\theta\_c^*}(\mathbf{X}\_p)$ is the true invariant function according to the SCM definition.
We will incorporate these explanations into the revised version for better clarity. | Summary: This paper introduces an approach to learning invariant functions in dynamical systems (governed by ODEs) in different environments. The authors propose a method called Disentanglement of Invariant Functions (DIF), which aims to discover intrinsic dynamics across multiple environments while avoiding environment-specific mechanisms. The approach is an encoder-decoder hypernetwork to disentangle invariant functions from environment-specific dynamics. The authors have shown that the probabilistic information-maximization objective can be implemented using a simple easy-to-implement MSE loss function. The method is evaluated against meta-learning and invariant learning baselines, demonstrating its effectiveness.
## update after rebuttal
Thank you for the detailed response and for providing additional context regarding your design choices and theoretical connections. While I appreciate the clarifications, my initial concerns regarding the generalization and broader applicability of the work remain. As these issues are still relevant to the overall impact, I have decided to keep my original evaluation unchanged at a 3 (Weak Accept). I recognize the potential of the work, but feel that these concerns need further attention.
Claims And Evidence: The effectiveness of the method could be limited given the experimental setup.
Methods And Evaluation Criteria: Yes, but mostly limited to low-dimensional simulated data.
Theoretical Claims: The connection between information-maximization and MSE loss is a known theoretical result and makes intuitive sense. I did not check the details of the proof in the appendix though.
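[For context, the generic version of this known result is usually obtained via a variational bound with a Gaussian decoder; the following sketch uses generic symbols $z, y, q_\phi, \hat{y}_\phi$ rather than the paper's notation:]

```latex
% Variational (Barber--Agakov) lower bound on mutual information:
I(z; y) \;\ge\; \mathbb{E}\big[\log q_\phi(y \mid z)\big] + H(y).
% With a Gaussian decoder, the bound reduces to a (negative) MSE:
q_\phi(y \mid z) = \mathcal{N}\big(y;\, \hat{y}_\phi(z),\, \sigma^2 I\big)
\;\Longrightarrow\;
\mathbb{E}\big[\log q_\phi(y \mid z)\big]
= -\tfrac{1}{2\sigma^2}\,\mathbb{E}\big\|y - \hat{y}_\phi(z)\big\|^2 + \mathrm{const}.
```

Since $H(y)$ does not depend on the model, maximizing the bound is equivalent to minimizing the MSE.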
Experimental Designs Or Analyses: The design of the experiments sounds logical but lacks high-dimensional real-world experiments to support the effectiveness of the method in more challenging domains.
Supplementary Material: Briefly, the symbolic regression evaluation.
Relation To Broader Scientific Literature: Although the design of the hyper-network may be novel for discovering dynamical systems, it is not so novel in the ML literature, as there have been methods that use prediction loss as a proxy for information maximization and that represent functions as the weights of a neural network.
Essential References Not Discussed: Connection to Independent Causal Mechanism and Independent Component Analysis.
There is a vast literature on both topics. I think this work should also be discussed in relation to these concepts.
Other Strengths And Weaknesses: ## Strengths
- Novelty: The paper presents a new perspective on invariant function learning, focusing on disentangling invariant dynamics from environment-specific factors.
- Methodology: The proposed hypernetwork which is based on information-maximization objectives derived from the causal-graph is relatively innovative in this context and in line with gradient-based learning which suggests better scalability compared with Bayesian inference methods.
- Effectiveness: Quantitative comparisons show that the proposed method outperforms existing meta-learning and invariant learning techniques on the simulated data benchmarks.
## Weaknesses
- Scope Limitation: The method is currently limited to ODE systems and seems not directly applicable to PDE systems which govern a large class of dynamical systems.
- Generalization: The experiments are all low-dimensional and with relatively simple dynamics and simulated data. High-dimensional real-world examples (e.g. from robotics, cell biology) could better illustrate the effectiveness of this method.
- Novelty limitation: Although the experiments show the effectiveness of the proposed method, the use of information maximization to disentangle causal mechanisms has been used in various contexts before.
Other Comments Or Suggestions: -
Questions For Authors: - Could you elaborate on the limitations of extending your method to PDE systems and how you plan to address them in future work?
- Based on the definition of invariant and environment-specific dynamics used in the paper, it seems there is a strong connection to the independent causal mechanism principle. Can you elaborate if you can recognize such connection?
- It seems all evaluations are done with simulated data. The assessment of the potential of the work would be easier if the authors could evaluate with real-world high-dimensional examples both in terms of scalability (higher dimensions) and also in terms of dealing with more complex dynamics.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your feedback and the acknowledgment of the novelty and effectiveness. Below, we address your concerns point by point.
## Limited experimental setup
Thank you for raising this concern. Our work represents the first step in invariant function learning, introducing a new paradigm for scientific discovery. The focus on low-dimensional systems was intentional for three reasons:
1. **Conceptual clarity:** Low-dimensional systems allow us to clearly demonstrate the core principles of invariant function learning without obscuring them with implementation details for high-dimensional data.
2. **Methodological validation:** As the first work on invariant function learning, we needed to establish baseline performance on well-understood systems where ground truth is available.
3. **Benchmark contribution:** We've constructed three multi-environment datasets (ME-Pendulum, ME-Lotka-Volterra, and ME-SIREpidemic) that will serve as important benchmarks for future research.
We fully agree that extending to high-dimensional real-world examples is an important direction, as we acknowledge in our limitations section (Section 6). The current work lays the theoretical and methodological foundation for these future extensions.
## Scope limitation to ODE systems
Thank you for this important question. As we discussed in **Appendix B**, we acknowledge that our method is currently designed for ODE systems. This choice was driven by three key challenges:
1. **Lack of multi-environment PDE datasets:** Unlike domain adaptation tasks, multi-environment datasets for PDEs aren't readily available and require domain-specific expertise to construct.
2. **PDE-specific technical challenges:** PDEs introduce multi-scale and multi-resolution problems requiring specialized techniques for stability and computational efficiency.
3. **Interpretation challenges:** Our current approach uses symbolic regression for interpretability, which doesn't extend naturally to PDE systems.
Our future work will tackle these challenges by: (1) collaborating with domain experts to develop multi-environment PDE benchmarks, (2) incorporating specialized PDE-handling techniques from neural operators literature, and (3) developing new interpretability methods for invariant function learning in PDEs.
## Connection to causal mechanisms
We agree that there is indeed a strong connection to the independent causal mechanism (ICM) assumption/principle. Our work can be viewed as extending ICM discovery specifically to function spaces in dynamical systems.
Note that "SCMs are underpinned by the assumption of independent causal mechanisms, a fundamental premise in causality. Intuitively, this supposition holds that causal correlations in SCMs are stable, independent mechanisms akin to unchanging physical laws, rendering these causal mechanisms generalizable."
**ICM is the basis of Jonas Peters's invariant causal predictor and of IRM**. When we use an SCM in Section 2.2.1, we already adopt the ICM assumption: in the SCM, we assume that the invariant mechanism ($f_c$) and the environment-specific mechanisms ($f_e$) operate independently and can be disentangled. From the ICM perspective, our optimization goal is to identify the ICM expressed in our SCM.
We acknowledge that we could have made this connection more explicit in our related work, and we appreciate the opportunity to clarify it here. In our revised version, we would enhance our discussion of this connection in the related work section.
## Novelty limitations and related work
> Although the experiments show the effectiveness of the proposed method, the use of information maximization to disentangle causal mechanisms has been used in various contexts before.
We acknowledge that information maximization for disentangling causal mechanisms has precedents in the literature. Our contribution lies in:
1. **Novel task formulation:** We introduce invariant function learning as a new task for scientific discovery (Section 2.2).
2. **Function space application:** While previous work has applied similar principles to fixed representations, we extend them to function spaces where invariance must be maintained across all possible states.
3. **Theoretical guarantees:** Our work provides theoretical guarantees for invariant function discovery (Theorem 3.1) specifically adapted to the dynamical systems context.
4. **Empirical validation:** Our experimental results (Section 5.4) demonstrate that general invariant learning principles like IRM and VREx do not perform well on our task, highlighting the need for our specialized approach.
These points, we believe, substantiate the novelty and significance of our approach.
## Regarding essential references
Thank you for this valuable suggestion. We agree that enhancing our discussion of connections to ICM and ICA literature would strengthen the paper. In the revised versions, we would expand Section 4 (Related Work) to include them.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I appreciate the clarifications and the additional context regarding your design choices and theoretical connections. While the work is promising, my initial concerns regarding generalization and broader applicability remain. I will keep my original score. | Summary: The authors proposed the task of learning invariant functions in dynamical systems: considering the ODE equation $\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = f(\mathbf{x})$ of the system evolution as a combination of the invariant function $f_c$ corresponding to the inherent properties of the system itself and the function $f_e$ caused by the environment, and learning the common part $f_c$ from the trajectory data collected in different environments. To solve this problem, the authors proposed the DIF method, which uses a meta neural network to generate parameters of two neural networks $\hat{f}$ and $\hat{f}_c$ based on a single trajectory $X$ collected in a specific environment to fit the governing ODE equation $f$ and the invariant function $f_c$. DIF minimizes the MSE between the system's evolution trajectory governed by $\hat{f}_c$ and the actual evolution trajectory $X$ (equivalent to maximizing the mutual information of $\hat{f}_c$ and $f$), and uses adversarial training to make a discriminator unable to infer information about the environment from $\hat{f}_c$, so that the learned $\hat{f}_c$ becomes an invariant function that describes the evolutionary properties of the system itself and is invariant to the environment. The authors conducted experiments on three systems: Pendulum, Lotka-Volterra, and SIR. They considered four variants of each system under different environments and generated 200 trajectories for each variant using different parameters and initial values. DIF learns the system's invariant function $\hat{f}_c$ from these trajectories.
The evolution trajectory governed by $\hat{f}_c$ is consistent with the evolution trajectory governed by the ground truth $f_c$ (NRMSE < 1.0, suggesting positive correlation). The authors further extract the symbolic expression of $\hat{f}_c$ through symbolic regression, finding that it is similar to the ground-truth invariant function, demonstrating the ability of the DIF method to learn invariant functions in dynamical systems.
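For concreteness, the NRMSE criterion mentioned above can be sketched as follows; the normalization (by the standard deviation of the ground truth) is an assumption here, as the paper may use a different convention, and the trajectories are toy data:

```python
import numpy as np

def nrmse(pred, true):
    """RMSE normalized by the std of the ground-truth trajectory.

    Note: this is one common normalization choice; the paper's exact
    convention may differ.
    """
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    return rmse / np.std(true)

# Toy example: a predicted trajectory close to the ground truth
t = np.linspace(0, 10, 200)
true_traj = np.sin(t)
pred_traj = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
assert nrmse(pred_traj, true_traj) < 1.0  # consistent with the NRMSE < 1.0 criterion
```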
Claims And Evidence: Mostly, the structural causal model drawn by the authors (Figure2) illustrates the concept of invariant functions very well, and a complete mathematical proof for the training scheme used is given in the appendix. The effectiveness of the proposed DIF method is also demonstrated experimentally on three systems.
However, I have doubts regarding the practicability of the proposed method in real-world scenarios.
Methods And Evaluation Criteria: Yes, although the authors focus on a new problem and therefore have no existing dataset, they construct three datasets based on the Pendulum, Lotka-Volterra, and SIR systems that are consistent with the invariant function learning task they describe.
Theoretical Claims: I read the theoretical proof in Appendix C, and there is no obvious problem with the derivation process. However, the proof in Appendix C.3 relies on the assumption of Gaussian white noise, that is, the evolution trajectory is the result of adding Gaussian white noise to the noise-free evolution trajectory. This may render the derivation untenable when evolution trajectories contain colored noise, which is more common in the real world. This is particularly concerning given that the authors did not present experiments on real-world datasets.
Experimental Designs Or Analyses: Yes, the authors' experimental design effectively illustrates the ability of DIF to learn invariant functions of the system itself from evolutionary trajectories collected in different environments.
Supplementary Material: I read the appendix and found that the authors provided a complete mathematical justification for the training scheme adopted in the main paper as well as more experiments, which was very helpful in understanding the method.
Relation To Broader Scientific Literature: The authors propose, for the first time, the task of invariant function learning in dynamical systems. This task is related to Meta-Learning and invariant function learning. However, existing invariant function learning methods are not applicable to dynamical systems that evolve over time, and it is questionable whether the functions learned by Meta-Learning, which quickly adapt to different environments, are truly invariant functions.
Essential References Not Discussed: I think authors should discuss possible limitations of the proposed method in high dimensional systems with graph structure, as in following papers.
1. Cranmer, Miles, et al. "Discovering symbolic models from deep learning with inductive biases." Advances in neural information processing systems 33 (2020): 17429-17442.
2. Shi, Hongzhi, et al. "Learning symbolic models for graph-structured physical mechanism." The Eleventh International Conference on Learning Representations. 2022.
Other Strengths And Weaknesses: **Strengths**: The authors proposed the task of learning invariant functions in dynamical systems and constructed a solid theoretical framework and a novel DIF method for this task. Although this task seems rather challenging, the learned invariant functions are surprisingly consistent with the real results in terms of both evolving behavior and symbolic nature.
**Weakness**: Although the authors provide a discussion of possible applications of the method in Appendix B (Q5), it is difficult to imagine any specific scenarios for this task. If we want to get a predictive model that predicts system behavior, directly using a neural network to fit $f$ (rather than $f_c$) seems to be a better choice; if we want to get a symbolic model that explains the system behavior, directly using symbolic regression methods to find $f$'s symbolic expression in different environments and then obtaining the same part $f_c$ through observing seems to be a better choice.
Other Comments Or Suggestions: Considering that the authors focus on a new task that learns invariant functions of dynamical systems, it would be better to provide experiments based on real-world datasets, which can help readers establish an understanding of the application scenarios and importance of this task, and, on the other hand, better verify the capabilities of the proposed DIF method.
Also, it would be better to discuss its applicability in more complex scenarios such as networked complex systems (as in my provided references).
Questions For Authors: **Q1**. If I understand correctly, $X$ is the observed evolution trajectory, and $X_p$ is the part of $X$ before $T_c$. In training, $X_p$ is used as the input of the model, and $X$ is used to calculate the loss function. Why do we need to distinguish between $X_p$ and $X$? Is it feasible to not distinguish between them and take the whole $X$ as $X_p$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are glad you recognized our theoretical framework and experimental validation. We address your concerns point-by-point as follows.
## Practicability in real-world scenarios
Thank you for this question. While we focus on establishing theoretical foundations in this work, we believe our method has significant real-world potential. In principle, the methodology, particularly the disentanglement via adversarial training and mutual information maximization, is applicable to real-world scenarios where environmental variations are present. Invariant function learning can extract fundamental dynamical laws from noisy, environment-influenced data, a critical challenge in scientific discovery. Application areas include:
1. Extracting physical laws from video data where environment factors (e.g., lighting, camera position) complicate analysis
2. Learning generalizable physics for simulation and control across varying conditions
3. Contributing to foundation models in physics by identifying invariant mechanisms
Note that some of these applications might not belong to invariant function learning but instead fall back to general invariant learning, which is not our target. PDE scenarios that require invariant function learning are left to future work, since much groundwork is needed before the method can transfer to PDEs, as described in Appendix B Q1.
## Theoretical claims and noise assumptions
We acknowledge that Appendix C.3 assumes Gaussian white noise. While this is a common starting point in theoretical analysis, we agree this is a limitation. Future work will extend our framework to other noise scenarios or relax this assumption.
## High-dimensional systems with graph structure
Thank you for the suggestions and discussions on graph-structured physical mechanisms as in Cranmer et al. (2020) and Shi et al. (2022). We agree these are relevant to extending our work. Our current focus was establishing the basic framework for invariant function learning in ODE systems, but we acknowledge that discussing the approach to high-dimensional or graph-structured settings is worthwhile. In the revised paper, we will discuss these limitations and outline potential directions for adapting our method to such complex scenarios.
## Utility Compared to Alternative Approaches
Thank you for your comparison with direct neural network fitting and symbolic regression approaches. Our method targets extracting invariant dynamics. Regarding alternative strategies like finding symbolic expressions in different environments and then obtaining the same part $f_c$ through *observing*, the most difficult part is *observing*. The reason is straightforward: the fitting of $f$ tends to capture both invariant and environment-specific aspects, which leads to spurious correlations, so that the final equation forms from different environments **seem significantly different**. That is to say, after extracting equations from different environments, **it is challenging to find the common part directly**, since the common parts are blended with environment parts and **do not appear disentanglable**.
Our DIF method explicitly enforces the separation of invariant dynamics from environment-specific factors, providing a more reliable basis for scientific discovery. This advantage becomes especially important in scenarios with complex environmental effects or when specific invariant mechanisms need to be identified. We will expand on this distinction in the revised manuscript.
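This entanglement can be illustrated with a toy experiment (the functions and polynomial fit below are hypothetical stand-ins for a symbolic-regression pipeline): fitting the total dynamics $f = f_c + f_e$ separately in each environment yields visibly different expressions, so the common part cannot simply be read off.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=500)
f_c = lambda x: np.sin(3 * x)                                  # invariant mechanism (illustrative)
envs = {"e1": lambda x: 0.8 * x, "e2": lambda x: -0.5 * x**2}  # environment-specific parts

coefs = {}
for name, f_e in envs.items():
    y = f_c(x) + f_e(x)
    # Fit a cubic polynomial to the *total* dynamics f = f_c + f_e
    coefs[name] = np.polyfit(x, y, deg=3)

# The fitted expressions differ substantially across environments, so the shared
# invariant part cannot be read off by inspecting per-environment fits.
assert not np.allclose(coefs["e1"], coefs["e2"], atol=0.1)
```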
## Real-world datasets
We agree that real-world validation would strengthen our paper. Our focus in this initial work was establishing theoretical foundations and validating on controlled systems. Testing on real-world data is the next step we discussed in the Appendix and we're actively pursuing this direction.
## Question on X_p vs X
Regarding the specific question (Q1): Your understanding is mostly correct. We distinguish between $X_p$ (past observations before time $T_c$) and $X$ (full trajectory) because the forecasting task takes $X_p$ as input and $X$ as output. Similar to $X\rightarrow Y$ in general prediction tasks, the forecasting task performs $X_p \rightarrow X$. In training, $X$ serves as the "label $Y$" for the loss, while at inference time, $X$ after $T_c$ is not available. Using the whole trajectory $X$ as $X_p$ would blur the line between training and testing data. We will clarify this point further in the revised manuscript.
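A minimal sketch of the $X_p$ vs. $X$ split described above (array shapes and the cutoff $T_c$ are illustrative, not the paper's actual dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
T, T_c = 100, 40             # full trajectory length and conditioning cutoff (illustrative)
X = rng.normal(size=(T, 2))  # full observed trajectory (used as the "label" in training)
X_p = X[:T_c]                # past observations before T_c (the model's input)

# In training the model maps X_p -> X; at inference time only X_p is available.
assert X_p.shape == (T_c, 2) and X.shape == (T, 2)
```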
We hope these responses address your concerns, and we thank you for your constructive feedback. | null | null | null | null | null | null |
The impact of uncertainty on regularized learning in games | Accept (poster) | Summary: The paper examines the behavior of the stochastic FTRL dynamics in games, in which the usual FTRL algorithm is randomly perturbed. It provides a general characterization: every player reaches an almost pure strategy in finite time. This stands in contrast to the deterministic setting. A consequence of the previous result is that the only possible limits of stochastic FTRL are pure Nash equilibria. The final main result shows that a span of pure strategies is attracting if and only if it is closed under better replies, which mirrors some earlier results in the deterministic setting.
Claims And Evidence: All claims made in the paper are sound and supported by clear evidence.
Methods And Evaluation Criteria: Not applicable; the paper makes a theoretical contribution.
Theoretical Claims: All claims appear to be sound; I did not find any notable issues.
Experimental Designs Or Analyses: Not applicable; the paper makes a theoretical contribution.
Supplementary Material: I went through the supplementary material. It is well-organized and polished, and I did not find any notable issues in the proofs.
Relation To Broader Scientific Literature: The paper is part of a broader effort to characterize the behavior of natural learning algorithms in multi-player games. While most prior work has focused on the deterministic setting, this paper makes progress on the stochastic setting.
Essential References Not Discussed: All relevant references have been discussed; I did not identify any notable omission. Overall, the paper adequately places its contributions in the context of the related work.
Other Strengths And Weaknesses: Overall, the paper makes concrete contributions to a fundamental problem in multi-agent systems. It provides a comprehensive characterization of the behavior of stochastic FTRL in the continuous-time setting. Perhaps the most surprising result is Theorem 1, as it represents a departure from the deterministic setting. The main message here is very clear and interesting: stochasticity favors pure strategies. Theorems 2 and 3 mirror some existing results from the deterministic setting, but are still interesting and new. From a technical standpoint, the paper is highly non-trivial. The writing and organization of the paper are also exemplary, and it was a joy to read the paper.
Other Comments Or Suggestions: - A minor typo: after (8) there should be a comma.
- In section 5, it is claimed that harmonic games are a much broader class than zero-sum games; but, if I am not missing something, this is not really true.
Questions For Authors: No further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time, input, and overall appreciation, both in terms of results and presentation! We reply to your remarks and questions below:
> Theorems 2 and 3 mirror some existing results from the deterministic setting, but are still interesting and new.
Indeed, Theorems 2 and 3 extend the result by [15] on the asymptotic convergence of FTRL to strict Nash equilibria. However, as you rightly point out, generalizing this result to the stochastic setting is far from straightforward. The analysis requires substantially different techniques and introduces non-trivial challenges that do not arise in the deterministic case. In particular, while carrying those results over to our stochastic FTRL setting works as intended, this is not at all the case when considering different (yet rational) stochastic models. For instance, they are not true for the stochastic replicator dynamics with aggregate shocks (SRD-AS) nor for the stochastic replicator dynamics of pairwise imitation (SRD-PI), both of which can be seen as stochastic versions of the (continuous-time) multiplicative weights algorithm.
> (..) after (8) there should be a comma
Will fix, thanks for spotting it!
> In section 5, it is claimed that harmonic games is a much broader class than zero-sum games; but, if I am not missing something, this is not really true.
To be clear, in the context of Poincaré recurrence (Section 5), any reference to zero-sum games should be understood as shorthand for “two-player zero-sum games with a fully mixed equilibrium”, or $\text{2PZS}\*$ for brevity. The claim that harmonic games form a significantly broader class than $\text{2PZS}\*$ simply referred to the fact that, while every $\text{2PZS}\*$ game is harmonic (cf. Corollary E.1), harmonic games allow for any number of players, and the players' payoffs need not sum to zero. We will make this point more explicit, thanks for flagging it.
---
We thank you again for your time and positive evaluation! Please let us know if there are any lingering questions in the above.
Kind regards,
The authors | Summary: This work investigates the impacts of the noises on the observed payoffs when applying continuous-time FTRL for game solving. The main conclusions are that every player will reach an almost pure strategy in finite time and the limit of the dynamics of the continuous-time FTRL will converge to the pure Nash equilibria.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I did not check the correctness of the proof details.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: No.
Relation To Broader Scientific Literature: To my knowledge, the findings are new and technically valuable.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
1. **Theoretical Results**: The theoretical results in this work seem to be new and might be of great interest to the community.
2. **Writing**: This paper is generally well-written.
**Weaknesses**: If any, I would be curious about the proof ideas and basic techniques that lead to the main results in this paper, which do not seem to be discussed in the main body of the paper. I think the results in this work are fundamental to the literature, so I'm a bit surprised that these results weren't discovered in previous work. Therefore, I think it would benefit the readers a lot if proof sketches and the techniques behind the main results were given.
Overall, I think the results in this paper are valuable, but I am not fully familiar with the techniques of continuous FTRL, so I would recommend a weak acceptance of this work while maintaining a low confidence level.
Other Comments Or Suggestions: NA.
Questions For Authors: 1. If I understand correctly, the results in this work hold in the self-play setting, where each player adopts the same FTRL algorithm. Can the noise $M_i(t)$ on the LHS of Line 187 account for the case where some players are potential adversaries?
2. I am a bit confused about some statements about the main results. Does the conclusion (c) on the RHS of Line 431 imply the conclusion (a) and (b) on the RHS of Line 429-431? That is, can we say “the strategy of every player reaches an almost pure Nash equilibrium in finite time” and “every player’s limit set contains a pure Nash equilibrium strategy”?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time, input, and positive evaluation! We reply to your remarks and questions below:
> I would be curious about proof idea and basic techniques that lead to the main results (…) I'm a bit surprised that these results weren't discovered in previous work. Therefore, I think it will benefit the readers a lot if the proof sketches and techniques of the main results are given.
We are also very much aligned with the reviewer's opinion that more extensive proof sketches would be of benefit to the reader, but we were unfortunately limited due to space constraints. That said, we will be happy to use the first revision opportunity (and the extra page provided) to add an overview of the technical apparatus and trajectory required to prove our results; note that all the details are already provided in the appendix.
> If I understand correctly, the results in this work hold in the self-play setting, where each player adopts the same FTRL algorithm. Can the noise $M_i(t)$ on the LHS of Line 187 account for the case where some players are potential adversaries?
The "adversarial" setting that you describe would be more aptly captured by making the evolution $X_i(t)$ of certain players arbitrary. The noise $M(t)$ has a lot of statistical features, so it cannot simply be replaced by an arbitrary stream of payoff-like observations.
> Does the conclusion (c) on the RHS of Line 431 imply the conclusion (a) and (b) on the RHS of Line 429-431? That is, can we say “the strategy of every player reaches an almost pure Nash equilibrium in finite time” and “every player’s limit set contains a pure Nash equilibrium strategy”?
To clarify, the formal version of conclusions (a), (b), (c) is as follows:
- (a) For every neighborhood $U_{i}$ of pure strategies for player $i$, there exists a finite (random) time $t$ such that $X_{i}(t) \in U_{i}$ (**Theorem 1**).
- (b) The (random) limit set $\lbrace x_{i} \in \mathcal{X_{i}} : X_i(t_{n}) \to x_{i} \text{ for some sequence } t_{n} \nearrow \infty \rbrace$ contains (at least) a pure strategy (**Corollary 1**).
- (c) If there exists a strategy $x^{\*}$ such that $X(t) \to x^{\*}$ with positive probability, then it is necessarily a pure Nash equilibrium (**second part of Theorem 3**).
Accordingly, (c) only makes sense for trajectories that _do converge_ (in a pointwise sense), whereas conclusions (a) and (b) hold for every trajectory, even those that do not converge to a point (for instance, those that wander indefinitely in the state space or those that converge only toward some subface of the boundary). As such, (c) does not imply either of the other two conclusions, and it is thus not possible to conclude that "the strategy of every player reaches an almost pure Nash equilibrium in finite time" nor that "every player’s limit set contains a pure Nash equilibrium strategy" (in fact, some games may not even have a pure Nash equilibrium, so those conclusions cannot hold in full generality).
---
We thank you again for your time and positive evaluation! Please let us know if there are any lingering questions in the above.
Kind regards,
The authors
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the detailed responses. I have no further questions and would like to keep my rating for this work unchanged. | Summary: This paper investigates the impact of uncertainty on the dynamics of the Follow-The-Regularized-Leader (FTRL) algorithm. The author first shows that under uncertainty, the FTRL algorithm approaches a pure strategy. Then, the author demonstrates that pure strategies are the only possible limit points of the stochastic FTRL. Additionally, the paper proves that the recurrent behavior observed in FTRL dynamics under deterministic settings disappears when uncertainty is introduced.
Claims And Evidence: The claims are generally supported by mathematical proofs.
Methods And Evaluation Criteria: Since FTRL is a well-studied algorithm for learning in games, it is meaningful to investigate its behavior under uncertainty.
Theoretical Claims: The proofs employ standard techniques for stochastic differential equations. While I did not verify all of the proofs in the paper, the presented theorems and corollaries appear rigorous and align with my intuition.
Experimental Designs Or Analyses: The empirical results on (twisted) matching pennies, entry deterrence, and harmonic games are provided.
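As a rough illustration of such experiments, a simple Euler–Maruyama discretization of FTRL with an entropic regularizer (softmax choice map) and noisy payoffs on matching pennies can be sketched as follows; the step size, noise level, and noise model are illustrative assumptions, not the paper's exact continuous-time setup:

```python
import numpy as np

def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

rng = np.random.default_rng(0)
A = np.array([[1., -1.], [-1., 1.]])  # matching pennies payoff matrix for player 1
dt, sigma, steps = 0.01, 0.5, 5000
y1, y2 = np.zeros(2), np.zeros(2)     # cumulative (noisy) payoff scores

for _ in range(steps):
    x1, x2 = softmax(y1), softmax(y2)
    # Noisy payoff observations: drift plus a Brownian increment (Euler–Maruyama)
    y1 += (A @ x2) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
    y2 += (-A.T @ x1) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)

x1, x2 = softmax(y1), softmax(y2)
assert abs(x1.sum() - 1) < 1e-9 and abs(x2.sum() - 1) < 1e-9
# Over long horizons the mixed strategies tend to spend most of their time near the
# simplex boundary (close to pure strategies), unlike the deterministic cycling behavior.
```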
Supplementary Material: The supplementary material primarily contains technical proofs and additional theoretical justifications. I did not check all of the proofs in the supplementary material.
Relation To Broader Scientific Literature: The theoretical results seem novel compared to the existing literature. However, I am curious about the difference between Corollary 3, Theorem 3, and the results presented by [19]. Do Corollary 3 and Theorem 3 generalize their results to broader settings?
Furthermore, as far as I understand, it is well known that FTRL satisfies the no-regret property even in the stochastic setting. I am wondering if this fact contradicts the derived theorems and corollaries in the paper.
Essential References Not Discussed: The paper appropriately cites relevant prior works on learning in games under the noisy feedback setting.
Other Strengths And Weaknesses: Please see “Other Comments Or Suggestions”.
Other Comments Or Suggestions: Since I’m not familiar with harmonic games, it’s unclear to me how the harmonic center $q$ is determined in a two-player zero-sum game setting.
Furthermore, in the discrete-time setting, it might be possible to control the noise level by adjusting a learning rate sequence. By doing so, is there any possibility of preventing the FTRL dynamics from approaching a pure strategy?
Questions For Authors: Please see “Other Comments Or Suggestions”.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time, input, and positive evaluation! We reply to your remarks and questions below:
>I am curious about the difference between Corollary 3, Theorem 3, and the results presented by [19]. Do Corollary 3 and Theorem 3 generalize their results to broader settings?
Regarding Theorem 3, the general setting of [19] and our own are qualitatively similar - regularized learning with stochastic feedback - but quantitatively quite different - discrete vs. continuous-time, a decreasing step-size in [19], different assumptions on the noise, etc. Thus, even though Part I of Theorem 3 of our paper and the corresponding result of [19] involve similar notions - stochastic asymptotic stability - they are otherwise completely distinct as results.
In addition, the extension of the discrete-time analysis of [19] to the continuous-time setting of our paper is far from trivial. One major difficulty that arises has to do with the way noise is injected into the system, which plays a crucial role in the long-run behavior of the dynamics. For instance, the stated result breaks down completely when considering the stochastic replicator dynamics with aggregate shocks (SRD-AS) and the stochastic replicator dynamics of pairwise imitation (SRD-PI), for which stable points of trajectories usually need to be strict Nash equilibria in a noise-adjusted payoff field, cf. [21, 37].
In a similar vein, Corollary 3 and the second part of Theorem 2 are totally new results compared to [19]: Instead of characterizing only (stochastically asymptotically) stable strategies as in [19], those results also identify which states can appear as (possibly non-stable) limits of the stochastic dynamics. In words, **pure Nash equilibria are the only strategies appearing as pointwise limits of (S-FTRL)**, and among these equilibria only those that are strict can be (and are) stochastically stable.
> (...) as far as I understand, it is well known that FTRL satisfies the no-regret property even in the stochastic setting. I am wondering if this fact contradicts the derived theorems and corollaries in the paper.
Again, this is a matter of the general setting of each paper: the no-regret property of FTRL is, indeed, classical in discrete time; in continuous time, however, the situation is much more complicated, and the only no-regret result that we're aware of is [10], which involves a vanishing learning rate or noise that becomes vanishingly small over time. Both of these postulates are orthogonal to our paper, where we assume positive noise covariance (Assumption 1), which, in turn, rules out the "vanishing noise" framework.
> (...) it’s unclear to me how the harmonic center is determined in a two-player zero-sum game setting.
A generic two-player zero-sum game is not harmonic; however, if it admits a fully mixed Nash equilibrium, then it is harmonic (cf. Corollary E.1). In that case, the harmonic center coincides with the Nash equilibrium and can be calculated in the same way.
> In the discrete-time setting, it might be possible to control the noise level by adjusting a learning rate sequence. By doing so, is there any possibility of preventing the FTRL dynamics from approaching a pure strategy?
Great question, thanks for raising it! We conjecture that adding a slowly decreasing learning rate as per [10] (decreasing to zero slowly enough over time) would mitigate the escape to the boundary, so we would expect the result that "any player's choices approach pure strategies" to be false in this setting. However, because of the extra terms introduced in the analysis by Itô's formula, the calculations are far from trivial and require a drastically different approach.
Considering a learning rate also opens a vast array of other interesting non-trivial questions, such as what happens to the "non-recurrence" results of Section 5 or what can we deduce on the discrete-time FTRL algorithm. As those questions typically require different and more precise tools than the ones we have used here (the stochastic dynamics becoming time-inhomogeneous due to the presence of a time-dependent learning rate), we have chosen to not explore those directions in this paper, and to defer such considerations to future work.
---
We thank you again for your time and positive evaluation! Please let us know if any of the above points is not sufficiently clear.
Kind regards,
The authors | Summary: The authors consider the continuous-time limit of the FTRL dynamics under noisy observations of game payoff matrices and prove that, unlike in the noiseless case, the dynamics converge to pure Nash equilibria, along with several other results.
## Update After Rebuttal
The authors answered my questions quite well -- I might suggest adding in parts of said answer to the discussion section of the paper, as space permits.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: N/A
Theoretical Claims: Nope.
Experimental Designs Or Analyses: N/A
Supplementary Material: Nope.
Relation To Broader Scientific Literature: Nope.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: (+) I thought this paper was exceptionally well written -- even as someone without a strong background in the area, I could understand the high-level theoretical take-aways and implications thereof. Kudos to the authors on this -- I often struggle to do this myself on papers this theoretical :).
Other Comments Or Suggestions: N/A
Questions For Authors: These questions are not particularly important but I'd be curious for responses if the authors have time:
1) I'm mostly familiar with the discretized, discrete-time version of FTRL. Could you comment on how your results would transfer to this setting? Like, with noisy payoffs, should I expect discrete-time FTRL to avoid circling equilibria and only converging on average?
2) When I think of the discrete-time FTRL dynamics, people will often use ideas like optimism to avoid the player strategies circling the Nash equilibrium (i.e., only converging on average rather than for the last iterate). My understanding is that the continuous-time analog of these results requires taking some "high-resolution" limits (e.g. https://arxiv.org/abs/2112.13826). I was wondering if there is anything interesting one could say about that flavor of approach in comparison to the ideas explored here.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time, positive evaluation and encouraging words! We reply to your remarks and questions below:
> I'm mostly familiar with the discretized, discrete-time version of FTRL. Could you comment on how your results would transfer to this setting? Like, with noisy payoffs, should I expect discrete-time FTRL to avoid circling equilibria and only converging on average?
This is a very interesting question, thanks for raising it!
In a nutshell, it depends.
To begin, there are several factors that come into play - whether FTRL is run with a constant or vanishing step-size / learning rate, the origin of the noise in the algorithm (e.g., pure payoff vector information versus bandit, payoff-based feedback) and, of course, the specific result under study.
- For games with "circling FTRL dynamics" (e.g., zero-sum games with a fully mixed equilibrium, or harmonic games) our intuition is as follows:
  - If the algorithm is run with a vanishing step-size, it will most likely converge to some random (Bregman) distance from the equilibrium / center of the game, but it won't necessarily return close to where it started.
  - If the algorithm is run with a constant step-size, the picture is less clear: some first results in [1] for the exponential weights algorithm with pure payoff vector information suggest that the algorithm converges to the boundary in two-player zero-sum games. We conjecture that an analogue of Theorems 1 and 5 also holds in this setting, but this is likely a paper in itself.
[1] Bailey et al. "Stochastic Multiplicative Weights Updates in Zero-Sum Games"
> I was wondering if there is anything interesting one could say about that flavor of approach [optimism and high-resolution limit] in comparison to the ideas explored here.
Agreed: in discrete time with full information (that is, perfect observations of the players' mixed payoff vectors), optimism can mitigate non-convergence in certain games where the continuous-time dynamics are Poincaré recurrent (such as two-player zero-sum games or harmonic games). However, this gain only manifests itself in the deterministic case; in the stochastic case, optimistic methods (and other extrapolation-based methods, like extra-gradient and its variants) fail to converge altogether and, as you say, only time-averages remain convergent. Because of this, it does not seem that the properties of FTRL would be particularly different from optimistic FTRL in the presence of noise, because the noise in the process would overshadow the finer, smooth structure of optimistic FTRL. For similar reasons, a higher-resolution approximation would not help either, because the noise would cancel any gain obtained from using a finer discretization scheme.
---
Thank you again for your time and encouraging words - please do not hesitate to reach out if you have any further questions!
Kind regards,
The authors | null | null | null | null | null | null |
Graph Minimum Factor Distance and Its Application to Large-Scale Graph Data Clustering | Accept (poster) | Summary: The work proposed a distance metric MMFD between graphs through comparing distributions and showed that MMFD is a pseudo metric and has a closed-form solution. The work also proposed several variants of MMFD and analyzed the properties theoretically. The proposed methods were compared with graph kernels, graph neural networks, and other distances in synthetic data analysis and graph clustering and showed improved performance.
## update after rebuttal
I appreciate the details and clarifications provided by the authors. I have no more concerns and will keep the rating.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, I briefly checked the proof of the theorems.
Experimental Designs Or Analyses: Yes. The design and analysis of graph comparing and clustering are reasonable.
Supplementary Material: Yes. The supplementary material included the code and data but I didn’t run the code.
Relation To Broader Scientific Literature: The proposed distances and clustering algorithms are useful.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The algorithms proposed in the paper are novel and elegant.
2. The presentations like the figures and tables are nice.
3. The proposed methods are compared with many baselines and the improvements are remarkable.
Weaknesses:
The explanations for several claims or results are not sufficient. Please refer to my questions.
Other Comments Or Suggestions: The paper will be stronger if my questions could be addressed successfully. In addition, the author may consider shortening the introduction section or other sections and move some results from the appendix to the main paper.
Questions For Authors: 1. In the text below Theorem 2.3, it was said that “...where MMFDs between graphs in the same cluster should be small...” How does this relate to the perturbations?
2. On page 4, it is not clear why the MMFD between two complete graphs is always zero. Could the authors provide more explanation or theoretical justification?
3. Section 2.5 compared MMFD with MMD. Is MMD always greater than MMFD? In addition to the listed two issues of SVD, is it possible that the order of singular values has some error caused by noise? Could the authors show some examples of the matrix R12? Readers would like to see how R12 differs from an identity matrix or a sign-flip matrix, although the final computation of MMFD is not explicitly related to R12.
4. As the MFD proposed in Section 2.8 has no closed-form solution, it is necessary to analyze its time complexity.
5. In Table 4, it is surprising that the time costs of the WL kernel and MMFD are less than one second, as the complexity is $O(N^2)$ if I understand correctly. Could the authors provide more explanation about these tiny time costs?
Ethics Expertise Needed: ['Other expertise']
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer,
We sincerely appreciate your recognition of our work. Our responses to your comments are as follows.
**To Q1:**
In a dataset, the graphs belonging to the same cluster are usually similar to each other. Therefore, for a pair of very similar graphs, denoted as $G_1$ and $G_2$, we can regard $G_2$ as a perturbed graph from $G_1$, and vice versa. Therefore, $\text{MMFD}(G_1,G_2)$ will be small. It also means that within-cluster distances are small, which is important for downstream tasks.
**To Q2:**
This question is the same as Question 4 asked by Reviewer 8oX9. We provide the following theoretical justification.
A complete graph is a graph in which each vertex is connected to every other vertex. Therefore, for two complete graphs, their self-looped adjacency matrices are $\mathbf{A} _i=[1] _{n _i\times n _i}$, $i=1,2$.
$\mathbf{A} _i$ are PSD and rank-1. We have $\boldsymbol{\Phi} _i=s\cdot[1] _{1\times n _i}$, where $s$ is $-1$ or $1$. That means $\boldsymbol{\mu} _i=-1$ or $+1$. The rotation matrix $\mathbf{R} _{12}$ is now just a scalar equal to $-1$ or $+1$. Therefore, $\min _{\mathbf{R} _{12}\in\mathcal{R}} \Vert \boldsymbol{\mu} _1-\mathbf{R} _{12}\boldsymbol{\mu} _2 \Vert\equiv 0$, for any $(n _1,n _2)$.
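The argument above can also be checked numerically. Below is a minimal Python sketch (our own illustration, not the paper's code; `factor_mean` is a hypothetical helper that factors a PSD adjacency matrix via eigendecomposition):

```python
import numpy as np

def factor_mean(A, d):
    # Factor a PSD matrix A ~= Phi^T Phi (Phi is d x n) via its top-d
    # eigenpairs, then return the mean column of Phi.
    w, V = np.linalg.eigh(A)                       # ascending eigenvalues
    w, V = w[::-1][:d], V[:, ::-1][:, :d]          # keep the top-d pairs
    Phi = np.sqrt(np.clip(w, 0, None))[:, None] * V.T
    return Phi.mean(axis=1)

# Complete graphs of different sizes: self-looped adjacency = all-ones,
# which is PSD and rank-1, so a 1-D factor suffices.
mu1 = factor_mean(np.ones((3, 3)), d=1)
mu2 = factor_mean(np.ones((5, 5)), d=1)

# With a 1-D factor the optimal "rotation" is just a sign flip, so the
# distance reduces to | ||mu1|| - ||mu2|| |, which is zero here.
mmfd = abs(np.linalg.norm(mu1) - np.linalg.norm(mu2))
print(round(mmfd, 10))  # 0.0
```

Both mean factor vectors have unit norm regardless of the graph size, so the minimum distance vanishes for any $(n_1, n_2)$.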
**To Q3:**
This is an insightful question. Yes, MMD is always larger than or equal to MMFD.
In addition to the listed two issues of SVD, it is indeed possible that the order of singular values has some error caused by noise. We will include this in the revised paper. Regarding $\mathbf{R} _{12}$, we take the graphs $G_1$ and $G_7$ in Table 2 of our paper as an example. Let $d=4$, we have the following $\mathbf{R} _{12}$. We can see that it is quite different from an identity matrix or sign-flip matrix.
\begin{equation}
\begin{matrix}
\hline
0.9985 & 0.0000 & -0.0087 & -0.0538 \\\\
-0.0001 &1.0000 &-0.0005 & -0.0001 \\\\
0.0087 & 0.0005 & 1.0000 & 0.0001 \\\\
0.0538 & 0.0001 &-0.0006 & 0.9985 \\\\ \hline
\end{matrix}
\end{equation}
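For completeness, here is one way such an optimal rotation can be obtained; a minimal Python sketch of a Procrustes-type solution to $\min_{\mathbf{R}}\Vert\boldsymbol{\mu}_1-\mathbf{R}\boldsymbol{\mu}_2\Vert$ on random vectors (our own illustration of the minimization principle; the paper's implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = rng.standard_normal(4), rng.standard_normal(4)

# Minimizing ||mu1 - R mu2|| over orthogonal R amounts to maximizing
# tr(R mu2 mu1^T); the maximizer comes from the SVD of mu2 mu1^T.
U, _, Vt = np.linalg.svd(np.outer(mu2, mu1))
R12 = Vt.T @ U.T

aligned = np.linalg.norm(mu1 - R12 @ mu2)
closed_form = abs(np.linalg.norm(mu1) - np.linalg.norm(mu2))
print(np.allclose(aligned, closed_form))     # True
print(np.allclose(R12 @ R12.T, np.eye(4)))   # True: R12 is orthogonal
```

The minimum equals $\left|\Vert\boldsymbol{\mu}_1\Vert-\Vert\boldsymbol{\mu}_2\Vert\right|$, and the resulting $\mathbf{R}_{12}$ is generally neither an identity nor a sign-flip matrix, as in the table above.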
**To Q4:**
The time complexity of MFD is $\mathcal{O}\left(n^2 \log (d)+d^2 n+T\left(d^3+d n^2\right)\right)$, where $n$ is the number of nodes, $d$ is the rank, and $T$ is the number of iterations in the optimization. This complexity is much higher than that of MMFD and is mainly due to $Tdn^2$. Actually, we have a faster MFD-based clustering algorithm, called MFD-KD. The time complexity of MFD-KD is $\mathcal{O}\left(n^2 \log (d)+d^2 n+T\left(d^3+d n_s^2\right)\right)$, where $n_s$ is much smaller than $n$.
**To Q5:**
Thanks for the question. The tiny time costs of WL kernel and MMFD are due to the following reasons:
* In Table 4, we recorded only the time costs of computing the distance or similarity matrices on the two datasets, which was declared in line 432, column 2. That means we didn't include the time cost of spectral clustering.
* The experiments were conducted on a computer with Intel Core i9-12900K and RAM 64GB, which has a quite good CPU.
* The two datasets are not too large.
* WL kernel and MMFD are both efficient.
These four reasons make the time costs of WL kernel and MMFD less than one second. We assure that the result is correct and reproducible.
**Thank you again for your comments and suggestions. We are looking forward to your feedback.**
---
Rebuttal Comment 1.1:
Comment: I appreciate the details and clarifications provided by the authors. I have no more concerns and will keep the rating.
---
Reply to Comment 1.1.1:
Comment: We really appreciate your feedback and support. We have added the details to the paper. | Summary: The paper studies graph distance and graph clustering. It introduced a minimum mean factor distance (MMFD) between graphs and its extensions such as low-rank MMFD, MMFD-KM, and MFD.Theses methods outperformed many baselines on large-scale graph datasets such as REDDIT-5K.
Claims And Evidence: The claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation make sense.
Theoretical Claims: Yes, I checked the proofs for theoretical claims such as Theorems 2.2 and 2.3.
Experimental Designs Or Analyses: Yes, I checked the experimental settings, evaluation metrics, and analysis.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: Comparing graphs is a fundamental problem in machine learning.
Essential References Not Discussed: The literature review seems complete.
Other Strengths And Weaknesses: Strengths:
* The paper proposed an effective distance metric MMFD for comparing graphs. MMFD has a closed-form solution and hence is much more efficient than other methods such as graph kernels, GWD, and GED.
* Some extensions such as MMFD-KM and MFD were developed. An approach to incorporating node attributes was provided.
* The paper has some theoretical results to support the proposed methods.
* In the experiments, the proposed methods outperformed other methods with a large margin.
Weaknesses:
* The time cost comparison is only conducted on smaller datasets. In Table 4, the two datasets are not large.
* The setting or analysis for the hyperparameters of the proposed methods seems missing.
Other Comments Or Suggestions: See my questions.
Questions For Authors: * In Table 1, what do K and T mean?
* Are the proposed methods applicable to directed graphs? I am just curious about this because Theorem 2.2 mentioned “undirected graphs”.
* In (20), how to set $\lambda$ for graph classification and spectral clustering?
* It is better to show some time cost comparison on larger datasets.
* In the caption of Table 3, the explanation for the colored text seems wrong. Does the purple color correspond to the best result in each case?
* How do the hyperparameters of the proposed methods affect their performance?
* It is suggested to provide more discussion about the experimental results in Table 3. For instance, on the AIDS dataset, the NMI and ARI of the proposed methods are 10 or 20 points higher than other methods. What is the possible reason for this phenomenon?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are grateful for your acknowledgment of our work. Our responses to your comments are as follows.
**To W1:**
Thanks for pointing it out. Actually, we have the time cost comparison on much larger datasets (e.g. graphs with 10000 nodes). They are in Table 8 of Appendix C4. By the way, some baselines such as Entropic GWD (shown by Table 2) are too time-consuming and hence cannot be applied to very large datasets such as the two REDDIT datasets.
**To W2:**
The settings of the hyperparameters are in Appendix B.2. Some results of hyperparameters are in Tables 6 and 7. For instance, our MMFD is not sensitive to the factorization dimension $d$. We will provide more analysis on them.
**To Q1:**
In Table 1, K is the number of clusters and T is the number of iterations in the clustering algorithm MMFD-KM. We are sorry about the absence of the description.
**To Q2:**
We think our methods can be applied to directed graphs, though we haven't conducted any experiments related to directed graphs. We may consider the following strategy. Given a directed graph, with adjacency matrix $\mathbf{A}$, which is asymmetric, we perform SVD $\mathbf{A}=\mathbf{USV}^\top$ and generate the following augmented adjacency matrix $\bar{\mathbf{A}}=[\mathbf{USU}^\top, \boldsymbol{0};\boldsymbol{0},\mathbf{VSV}^\top]$. $\bar{\mathbf{A}}$ is PSD and hence can be used in our methods such as MMFD and MFD.
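As a quick sanity check of this strategy, the sketch below builds the augmented matrix $\bar{\mathbf{A}}$ for a small directed graph and verifies that it is PSD (our own illustration; `augment_directed` is a hypothetical helper, not part of the paper's code):

```python
import numpy as np

def augment_directed(A):
    # Build a PSD proxy for an asymmetric adjacency matrix via its SVD:
    # bar_A = blockdiag(U S U^T, V S V^T), as suggested in the reply above.
    U, S, Vt = np.linalg.svd(A)
    V = Vt.T
    n = A.shape[0]
    bar = np.zeros((2 * n, 2 * n))
    bar[:n, :n] = U @ np.diag(S) @ U.T
    bar[n:, n:] = V @ np.diag(S) @ V.T
    return bar

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])              # a directed 3-cycle
bar_A = augment_directed(A)
eigs = np.linalg.eigvalsh(bar_A)
print(np.all(eigs >= -1e-10))             # True: the augmented matrix is PSD
```

Each diagonal block is of the form $\mathbf{Q}\mathbf{S}\mathbf{Q}^\top$ with $\mathbf{S}\succeq 0$, so the block-diagonal matrix is PSD by construction and can be factored as in MMFD or MFD.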
**To Q3:**
Do you mean the $\gamma$ in (20)? In all experiments, we set it to $1 /(5 u)^2$, where $u$ denotes the average MMFD (or MMD, GWD, and GED) between all graphs. This setting was explained in line 837, Appendix B2.
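This $\gamma$ heuristic can be sketched in a few lines of Python (our own illustration; we assume the kernel in (20) is a Gaussian of the pairwise distance, and we take $u$ as the average off-diagonal distance):

```python
import numpy as np

def distance_to_kernel(D):
    # Turn a pairwise-distance matrix into a similarity matrix using the
    # heuristic from the reply: gamma = 1 / (5u)^2 with u the average
    # pairwise distance; kernel assumed Gaussian, K_ij = exp(-gamma d_ij^2).
    u = D[np.triu_indices_from(D, k=1)].mean()
    gamma = 1.0 / (5.0 * u) ** 2
    return np.exp(-gamma * D ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # toy distance matrix
K = distance_to_kernel(D)
print(np.allclose(np.diag(K), 1.0))  # True: zero self-distance -> similarity 1
```

The resulting similarity matrix `K` can then be fed to spectral clustering or a kernel SVM.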
**To Q4:**
The following table presents some results of time cost (second) comparison on very large graphs. We can see that the Shortest-path kernel is most time-consuming, while our MMFD is always the most efficient one.
\begin{equation}
\begin{matrix}
\hline n \text{(number of nodes)} & 1000 & 2000 & 4000 & 6000 & 8000 & 10000 & 20000 & 30000 \\\\
\hline \text{Shortest-path kernel} & 4.853 & 25.8 & 178.7 & 524.8 & 1125.0 & 2022.4 & - & - \\\\
\text{WL subtree kernel} & 0.393 & 1.5 & 6.6 & 15.3 & 27.4 & 43.4 & 182.6 & 449.7 \\\\
\text{Entropic Gromov-Wasserstein} & 0.367 & 0.5 & 2.3 & 6.4 & 12.8 & 21.9 & 169.1 & 579.7 \\\\
\text{MMFD}_{\text{LR}} & \mathbf{0.095} & \mathbf{0.3} & \mathbf{1.1} & \mathbf{2.1} & \mathbf{3.7} & \mathbf{6.4} & \mathbf{29.9} & \mathbf{78.8} \\\\
\hline
\end{matrix}
\end{equation}
**To Q5:**
Thanks for pointing out the mistakes. We have corrected them.
**To Q6:**
Our MMFD has only one hyperparameter $d$. As shown in Table 7, we compared the performance of MMFD with different $d$ on datasets PROTEINS and REDDIT-MULTI-5K. On PROTEINS, a larger $d$ is better since most graphs are very small. On REDDIT-MULTI-5K, a smaller $d$ can improve the clustering performance slightly, since most graphs are very large. In sum, MMFD is not sensitive to $d$.
Regarding the extension method MFD, besides $d$, there is a hyperparameter $\rho$ in the optimization algorithm and a hyperparameter $\beta$ in the kernel function. Figure 3 shows that the optimization is not sensitive to $\rho$. For $\beta$, we just set it as the inverse of the squared average distance in the experiments, which works well.
**To Q7:**
In the revised paper, we have added more discussion about the results. Regarding the improvement on the dataset AIDS, one possible reason is that the distribution information in the dataset is more useful than other datasets, and our methods MMFD and MFD can effectively exploit the information.
**We thank you again for your comments and time. We are looking forward to your response.**
---
Rebuttal Comment 1.1:
Comment: I appreciate the answers and clarification. I have no concerns about the work and hence keep the rating.
---
Reply to Comment 1.1.1:
Comment: We are extremely grateful for your response to our rebuttal and your acknowledgment of our work. | Summary: This paper introduces a new measurement, named MMFD, for comparing and clustering graph data. By considering the adjacency matrix of graph as a kernel matrix, MMFD transforms the graph comparison problem into distribution comparison.
This paper then proposes a low-rank approximation and a generalized algorithm for efficient clustering of large graphs.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I didn't check the proofs for the theoretical claims.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature through existing graph comparing methods. The paper builds on past research while offering improvements, particularly in efficiency and scalability.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
- highly scalable and suitable for large datasets.
- superior clustering performance, as evidenced by the experiments on several real-world datasets, outperforming competitors in terms of clustering accuracy and time.
Other Comments Or Suggestions: No
Questions For Authors: - As mentioned by the authors, although the low-rank approximation leads to some loss of information, it has a denoising effect. I'm interested in whether there are any toy experiments demonstrating this.
- Can MMFD cope with missing attributes?
- MMFD is highly efficient and theoretically guaranteed. How large is its performance gap compared with UGRL methods? In addition, I would like to know about real application tasks of graph comparison, such as chemical molecular property prediction.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer,
It is our great honor to receive your positive assessment. Our responses to your questions are as follows.
**To Q1:**
Yes. Take $G_1$ and $G_3$ in Table 2 of our paper as an example, where $\text{MMFD}(G_1,G_3)=0.1598$. We add one additional node to $G_1$, connected to only one existing node. The same operation is conducted on $G_3$. We thus get two noisy graphs $G_1'$ and $G_3'$, where the noise is very strong because the graphs are so small. We compute the low-rank MMFD (with different ranks) between $G_1'$ and $G_3'$. The difference between $\text{MMFD} _{\text{LR}}(G _1',G _3')$ and $\text{MMFD}(G _1,G _3)$ is shown in the following table. We see that when the rank decreases from 6 to 3, the difference becomes smaller. This demonstrates the denoising effect.
Additionally, as shown in Table 7 of our paper, on dataset REDDIT-MULTI-5K, using a lower rank (but not too low) yields a better clustering performance, which also demonstrates the potential denoising effect.
\begin{equation}
\begin{matrix}
\hline
\text{rank} & \text{full} & 5 &4 &3 &2 &1\\\\ \hline
|\text{MMFD}(G_1,G_3)-\text{MMFD}_{\text{LR}}(G_1',G_3')| & 0.073 & 0.070& 0.049& \textbf{0.044} & 0.046 & 0.047 \\\\ \hline
\end{matrix}
\end{equation}
**To Q2:**
Our MMFD is mainly designed for comparing the adjacency matrices of two graphs, though it is able to utilize node attributes, as explained in Section 2.6. To apply MMFD to graphs with incomplete node attribute matrices, we need to do missing data imputation (e.g. matrix completion) in advance. Therefore, MMFD cannot directly handle missing attributes. However, here we provide a possible approach. Let $\tilde{G}_1$ and $\tilde{G}_2$ be the two augmented graphs using node attributes, which is explained in Section 2.6. We just denote the missing values of node attribute matrices as $x_1$ and $x_2$, respectively, and write $\tilde{G} _1=\tilde{G} _1(x _1)$ and $\tilde{G} _2=\tilde{G} _2(x _2)$. Then, according to the definition of MMFD, we can solve $\min _{x _1,x _2}\text{MMFD}(\tilde{G} _1(x _1),\tilde{G} _2(x _2))$ to impute the missing values and get the minimum distance. Nevertheless, the optimization and effectiveness need further investigation, which could be a future work.
**To Q3(a):**
In Table 3 of our paper, we compared our MMFD with four pure UGRL methods including InfoGraph, GraphCL, JOAO, and GWF and two clustering-oriented UGRL methods including GLCC and DCGLC. On dataset AIDS, the best NMI of UGRL methods is $76\%$, while the NMI of our MMFD is $88\%$, meaning that the gap is significantly large. For convenience, we calculate the average of ACC, NMI, and ARI of each method and take the average over the 2 datasets (PROTEINS and AIDS); the results are compared in the following table. Our MMFD and MFD outperformed the best competitor by $6\%$.
\begin{equation}
\begin{matrix}
\hline
\text{Method} &\text{InfoGraph} & \text{GraphCL} & \text{JOAO} & \text{GWF} & \text{GLCC} & \text{DCGLC} & \text{MMFD} & \text{MFD} \\\\
\text{Metric (\\%)} &42 &40 &17 &35 &20 &47 &\textbf{53} &\textbf{54} \\\\
\hline
\end{matrix}
\end{equation}
We also compare the time costs of DCGLC (SOTA of graph clustering) and our MMFD-KM and MFD-KD on dataset REDDIT-12K in the following table.
\begin{equation}
\begin{matrix}
\hline
\text{Method} &\text{DCGLC} & \text{MMFD-KM} & \text{MFD-KD} \\\\
\text{Time cost} & 21\ hours & 2.2\ minutes & 3\ hours \\\\
\hline
\end{matrix}
\end{equation}
Besides the advantage of higher efficiency and accuracy, our MMFD has nearly no hyperparameter to tune (it is not sensitive to $d$), while deep learning methods have quite a few hyperparameters that are very difficult to tune in unsupervised learning.
**To Q3(b):**
Regarding chemical molecular property prediction, we combine MMFD with support vector regression and call this combination MMFD+SVR. We apply it to the QM9 dataset. Since kernel SVR is not scalable to very large datasets, we only use 25000 graphs for training and 5000 graphs for testing. Some MAE results (limited by space) are in the following table, where the classic MPNN and enn-s2s [1] are compared. We see MMFD+SVR works well. It still has a lot of room for improvement, e.g., by using attributes more effectively, using the extension MFD, or using more training data.
\begin{equation}
\begin{matrix}
\hline
\text{Target} & \mu & \alpha & \text{HOMO} & \text{LUMO} & \text{gap} & \text{R2} & \text{ZPVE} &\text{U0}\\\\ \hline
\text{MPNN}& 1.22 & 1.55 & 1.17 & 1.08 & 1.70 &3.99 &2.52 & 3.02\\\\
\text{enn-s2s} & \textbf{0.30} & 0.92 & 0.99 & 0.87& 1.60& \textbf{0.15} & 1.27& 0.45\\\\
\text{MMFD+SVR} &0.64 & \textbf{0.34}& \textbf{0.64}& \textbf{0.43}& \textbf{0.43} & 0.54 & \textbf{0.07}&\textbf{0.41}\\\\
\hline
\end{matrix}
\end{equation}
[1] Gilmer et al. Neural Message Passing for Quantum Chemistry.
**Thank you again. We are looking forward to your feedback.**
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed information provided by the authors. I have no more concerns and will keep the rating.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your feedback on our rebuttal and acknowledgment of our method's effectiveness. We have added the additional results to the paper.
**If you think there is no weakness in our work, could you please raise the rating appropriately?** | Summary: The authors study the clustering problem for graph data. They propose a new measure between two graphs called minimum mean factor distance (MMFD) that serves as a kernel function in graph data clustering. MMFD measure the minimum distance (through rotation by a real orthonormal matrix) of the mean vectors of $\Phi_1$ and $\Phi_2$ that can be viewed as the feature vectors of nodes from two graphs $G_1$ and $G_2$, respectively. Two variants of MMFD, named Low-Rank MMFD and MF, are given to lower the time complexity and to fully exploit $\Phi_1$ and $\Phi_2$, respectively. The experiments evaluate the effectiveness and efficiency of their methods.
=========================
After reading the authors' rebuttals and rethinking the significance of MMFD, I agree that MMFD has a certain value, although I still cannot understand why, if the original self-looped adjacency matrices are both PSD, the distance could be zero once the sums of elements in the two matrices are equal. In this context, MFD seems more reasonable. I can only say that the instance in Table 2 and the experimental results look convincing. Moreover, I hope the authors can reorganize Section 2.3 due to my concern of W1. I have decided to raise my score to 3 (Weak accept). I thank the authors for their responses.
Claims And Evidence: All claims are supported by evidence, but some descriptions are not so clear. Please refer to the questions below.
Methods And Evaluation Criteria: Yes, I think the methods and evaluation criteria make sense.
Theoretical Claims: I have checked the proofs in the main text. Some parts of the derivation process on the basic MMFD are lengthy and redundant, which does not make much sense. Please refer to the weakness.
Experimental Designs Or Analyses: Yes, I have checked the soundness/validity of experimental designs and analyses.
Supplementary Material: The authors have included the source codes in the supplementary material, but I didn’t check them. I believe that the experiments are reproducible.
Relation To Broader Scientific Literature: This paper fits in the study line of clustering of graphs. It proposes a new kernel function and some variants, which is a contribution to graph clustering and kernel methods.
Essential References Not Discussed: I think the related works included in the paper have been essential to understanding the key contributions.
Other Strengths And Weaknesses: S1. The MMFD measure performs well in evaluation. The toy example in Table 2 is intuitive and convincing.
W1. The authors have spent much space on showing that MMFD is only related to $\mathcal{A}_i^\phi$’s. But it can be observed easily from Eq. (10) or Eq. (11) that MMFD actually measures the difference of the lengths of $\mu_1$ and $\mu_2$, that is, the absolute value of $\Vert\mu_1\Vert-\Vert\mu_2\Vert$ (when they are parallel by rotation). A plain expansion of $\left| \Vert\mu_1\Vert-\Vert\mu_2\Vert \right|$ yields Eq. (16).
W2. There are a lot of unclear descriptions in the main text. Please refer to the questions below. Because of this, I don’t think this paper is suitable for publication with present version.
Other Comments Or Suggestions: 1. Please give the full names of EVD and SVD when they appear for the first time.
2. Line 206, Column 2, please clarify the inner product $\langle A, B \rangle=\text{trace}(A^T B)$.
3. “$i=1, 2$” should be deleted in Line 134, Column 2.
Questions For Authors: 1. Line 212, Column 1, why do you need to make the distance as small as possible and call it a natural principle? This seems the core of MMFD, and I can feel it to some extent, but need more explanations.
2. What is the difference between Eqs. (3) and (7)? Do you mean that if we have the PSD matrices $A_i$’s, then it’s good, but in real applications we do not have them, so you need to construct the PSD matrices $\mathcal{A}_i^\phi$’s that approximate $A_i$’s? Are $A_i$ and its augmentation PSD in Alg. 1? Is $\mathcal{A}_i^\phi$ necessary when $A_i$ and its augmentation are PSD?
3. How do you perform the randomized SVD?
4. You have assumed $G_1$ and $G_2$ of the same size in Line 134, Column 1. But it seems that $n_1$ and $n_2$ could be different in Alg 1. Do you define MMFD between two graphs of different sizes as that in Line 10 of Alg. 1? Why can you define it like that? Why can you say assuming $n_1=n_2$ is without loss of generality? Why is MMFD between two complete graphs of different sizes (Line 195, Column 2) zero?
5. What is the $0.5$ in Line 296, Column 1? The probability in randomized SVD? What does it mean?
Ethical Review Concerns: No ethical review concern.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer,
We sincerely appreciate your insightful comments. They improved our work a lot.
**To W1:**
Previously, we focused on deriving from Equation (12) and overlooked Equation (11). Your advice has helped us make Section 2.3 more concise, thereby freeing up more space to supplement other important results or discussions.
**To W2:**
We are trying our best to make the corresponding descriptions clearer and believe that the issues can be addressed by revision easily. Please refer to our responses to your questions.
**To Other Comments Or Suggestions:**
Thanks for your careful reading. We have revised our paper to resolve these issues.
**To Q1:**
As mentioned in Section 3.2, $\boldsymbol{\Phi}_1$ and $\boldsymbol{\Phi}_2$ cannot be uniquely determined, so their distances are not unique.
We used the idea of the "shortest distance", which is quite general in real problems. Suppose there are three cities A, B, and C, and there exist multiple paths between each pair of them. To compare the distance between A and B and the distance between A and C, we use the shortest path between A and B and the shortest path between A and C, rather than any other longer paths.
Back to our problem, for a graph $G_i$, $\boldsymbol{\Phi}_i$ and $\bar{\boldsymbol{\Phi}}_i:=\mathbf{R}_i \boldsymbol{\Phi}_i$ are equivalent in terms of representing the adjacency matrix, $i=1,2$. The non-uniqueness of $\boldsymbol{\Phi}_i$ leads to a set of possible distances, denoted as $\mathcal{S}:=\\{\Vert \text{Mean}(\mathbf{R}_1\bar{\boldsymbol{\Phi}}_1)-\text{Mean}(\mathbf{R}_2\bar{\boldsymbol{\Phi}}_2)\Vert: \mathbf{R}_1, \mathbf{R}_2 \in \mathcal{R}\\}=\\{\Vert \mathbf{R}_1\boldsymbol{\mu}_1-\mathbf{R}_2\boldsymbol{\mu}_2\Vert: \mathbf{R}_1, \mathbf{R}_2 \in \mathcal{R}\\}$, where the second equality is due to that we can exchange $\boldsymbol{\Phi}_i$ and $\bar{\boldsymbol{\Phi}}_i$. We then define the smallest distance in $\mathcal{S}$ as the final distance between $G_1$ and $G_2$, i.e., $\text{MMFD}\left(G_1, G_2\right)=\min (S)$; it is equivalent to optimize $\mathbf{R}_1$ and $\mathbf{R}_2$ to minimize $\Vert \mathbf{R}_1\boldsymbol{\mu}_1-\mathbf{R}_2\boldsymbol{\mu}_2\Vert$.
Please feel free to let us know if this explanation is not sufficient.
**To Q2:**
1) Eq. (3) is the ideal case and usually does not hold for real graphs because their adjacency matrices are often not PSD. Eq. (7) is based on the PSD $\mathcal{A}_i^\phi$ and hence always holds for real graphs. $\mathcal{A}_i^\phi$ can be regarded as a PSD proxy or approximation of $\mathbf{A}_i$.
2) Yes. Your understanding is correct.
3) Yes. In Alg.1, $\mathbf{A}_i$ is the input, and the augmentation $\mathcal{A}_i^\phi$ is constructed by line 6 or line 8.
4) Does the "augmentation" you mentioned mean the one defined in equation (5)? In the algorithm, we use (5) rather than (4), which is stated in line 171 and line 4-8 of the algorithm. If $\mathbf{A}_i$ is PSD, $\mathcal{A}_i^\phi$ is unnecessary (in this case $\mathbf{A}_i=\mathcal{A}_i^\phi$). However, PSD $\mathbf{A}_i$ is rare. For instance, in the dataset AIDS, the number of PSD $\mathbf{A}_i$ is **ZERO**.
**To Q3:**
We used the sklearn function: sklearn.utils.extmath.randomized$\_$svd(M, n$\_$components=d, random$\_$state=0). This can be found in the code of the supplementary material. We added these details to the revised paper.
**To Q4:**
We assume $n_1=n_2=n$ so as to simplify the notations in many formulas or results such as (8), (12), (16), Theorem 2.3, and Theorem 2.4. In the definition of MMFD, i.e. Definition 2.1, we can replace $n$ with $n_1$ and $n_2$ without influencing the meaning of the definition, since it is based on the mean vectors.
Therefore, we claimed "without loss of generality". We will use $n_1$ and $n_2$ throughout the paper if you think it is necessary.
Regarding the MMFD between two complete graphs of different sizes, we provide the following theoretical proof.
A complete graph is a graph in which each vertex is connected to every other vertex. Therefore, for two complete graphs, their self-looped adjacency matrices are $\mathbf{A} _i=[1] _{n _i\times n _i}$, $i=1,2$.
$\mathbf{A} _i$ are PSD and rank-1. We have $\boldsymbol{\Phi} _i=s\cdot[1] _{1\times n _i}$, where $s$ is $-1$ or $1$. That means $\boldsymbol{\mu} _i=-1$ or $+1$. The rotation matrix $\mathbf{R} _{12}$ is now just a scalar equal to $-1$ or $+1$. Therefore, $\min _{\mathbf{R} _{12}\in\mathcal{R}} \Vert \boldsymbol{\mu} _1-\mathbf{R} _{12}\boldsymbol{\mu} _2 \Vert\equiv 0$, for any $(n _1,n _2)$.
We added the above proof to the revised paper.
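For completeness, this argument can also be checked numerically. The sketch below is illustrative (the helper names `mean_factor_norm` and `mmfd` are ours, not the paper's API); it uses the fact that $\Vert\boldsymbol{\mu}\Vert^2=(\mathbf{1}^\top\mathcal{A}\mathbf{1})/n^2$ together with the rotation-invariance of the norm, under which the minimum over rotations reduces to the difference of the mean-factor norms:

```python
# Hedged sketch: with a PSD factorization A = Phi^T Phi, the mean factor
# mu = Phi 1/n satisfies ||mu||^2 = (1^T A 1)/n^2. Since rotations act
# transitively on spheres, min_{R1,R2} ||R1 mu1 - R2 mu2|| = | ||mu1|| - ||mu2|| |.

def mean_factor_norm(A):
    n = len(A)
    total = sum(sum(row) for row in A)   # 1^T A 1 (non-negative when A is PSD)
    return max(total, 0.0) ** 0.5 / n

def mmfd(A1, A2):
    return abs(mean_factor_norm(A1) - mean_factor_norm(A2))

# complete graphs with self-loops: all-ones adjacency matrices of sizes 3 and 7
K3 = [[1.0] * 3 for _ in range(3)]
K7 = [[1.0] * 7 for _ in range(7)]
print(mmfd(K3, K7))   # 0.0 for any pair of sizes, matching the proof above
```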
**To Q5:**
Sorry for the confusion. The superscript 0.5 is the square root. We visualize the square root of the distance to make the visual difference clearer. As you know, in many studies, people use a log scale to show the results. Here, we use a square-root scale.
**Thank you again for reviewing our paper. We are looking forward to your feedback.**
---
Rebuttal Comment 1.1:
Comment: I responded to the authors before my rebuttal acknowledgement, but it seems it was not visible to them. I restate it here. Sorry for that.
I thank the authors for their response. Many concerns have been addressed except the major one.
The non-uniqueness of $\Phi_i$ is algebraically true, but it doesn't mean that any factorization of $\mathcal{A}_i$ makes sense. It seems more reasonable to find the most suitable factorization for each $\mathcal{A}_i$ and measure the distance based on these specific factorizations. The minimum mean factor distance actually implies a "rotational equivalence" in the representations. Even so, to measure the difference of two sets of representations (I avoid the word "distance" here because distance implies "minimum"; I think we just need to measure the difference of two graphs), we still have several options. Minimum distance is one of them, but it is hard to say which one is a natural principle. Moreover, at the very least, the final format of MMFD only relates to the sum of entries of $\mathcal{A}_i$, which seems quite weird. Imagine that $\mathcal{A}_i$ is simply an adjacency matrix that is symmetric. Does it mean that the MMFD of two graphs that have equal size and the same number of edges is 0? Sometimes, excessive simplification is not necessarily a good thing, since it is possible to lose useful information.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 8oX9,
Thank you so much for your response.
* Regarding the options of distance measure, indeed, it is hard to say which one is a natural principle, but a minimum distance is meaningful, and we are willing to rephrase the presentation. As you can see, the proposed MMFD makes sense theoretically and performs well in all experiments.
* The final format of MMFD is not related to the sum of entries of the original adjacency matrix $\mathbf{A}$. Instead, it is related to $\mathcal{A}^\phi$, which is $\mathbf{PSD}$ and different from $\mathbf{A}$. PSD $\mathbf{A}$ is rare; for instance, in the four datasets of Table 3 (8,712 graphs), the total number of PSD $\mathbf{A}$ is only 14.
* We convert the graph comparison problem into a discrete distribution comparison problem, where each sample $\mathbf{z}$ corresponds to a node in the graph. Specifically, we construct the PSD matrix $\mathcal{A}^\phi$ so as to obtain the meaningful feature representation $\phi(\mathbf{z})$, and then compare the means of $\phi(\mathbf{z})$ between two graphs, leading to a distance measure between two distributions.
* MMFD is theoretically founded, although the final format is quite simple. Regarding your question about two graphs with an equal number of nodes and the same number of edges, if the original self-looped adjacency matrices are both PSD, yes, the distance is zero. An example is that MMFD between two **complete graphs** is always zero. However, for graphs with non-PSD adjacency matrices, which is usually the case, the distance is often not zero.
* For instance, in Table 2 of our paper, $G_5$, $G_6$, and $G_7$ have the **same number of nodes and the same number of edges**, but their MMFDs are all **not zero**. This sufficiently demonstrated the ability of MMFD to distinguish highly similar graphs.
* It should be emphasized again that the usefulness of MMFD, including the initial definition and the final format, is based on PSD $\mathcal{A}_i^\phi$. By the way, a symmetric matrix is not necessarily a PSD matrix.
* Besides the basic MMFD, we have an extension MFD introduced in Section 2.8, which is more complex and is able to utilize more information. As shown in Table 3 of our paper, MFD is more effective than MMFD.
We are very grateful for your thoughtful discussion. It appears that your concern is not directly related to the effectiveness, theoretical soundness, and completeness of our work, whereas the other three reviewers do not have concerns regarding these aspects. Please do not hesitate to let us know if your concern remains.
Sincerely,
Authors
April 05
---
Hi Reviewer 8oX9,
Did our previous explanation resolve your concern?
* Although the computation of MMFD is very simple, it is theoretically founded.
* MMFD outperformed many complicated methods such as the WL kernel, graph edit distance, Gromov-Wasserstein distance, and GNNs in the experiments.
* In addition to MMFD, we have provided a more complex extension called MFD, which outperformed MMFD in the experiments.
Sincerely,
Authors | null | null | null | null | null | null |
Weakly Supervised Anomaly Detection via Dual-Tailed Kernel | Accept (poster) | Summary: This paper proposes Weakly Supervised Anomaly Detection via Dual-Tailed Kernel (WSAD-DT), which uses two centroids, one for normal samples and one for anomalies. It uses a light-tailed kernel for normal samples and a heavy-tailed kernel for abnormal samples. To prevent degenerate ``all-points-collapse'' solutions, the method introduces a kernel-based regularization term that promotes intra-class diversity. Further, an ensemble strategy that partitions unlabeled data into diverse subsets is proposed.
## update after rebuttal
The score was upgraded.
Claims And Evidence: The effectiveness of dual-kernels, diversity regularization, and ensemble-based learning are only shown in the Appendix.
Methods And Evaluation Criteria: In general, abnormal samples are diverse, so it is doubtful that a single center can represent them.
Theoretical Claims: - How Lemma 4.1 is used is not clear, at least on the main paper.
- Lemma 5.1, 5.2, moderate distances are ambiguous, and these Lemmas are not validated by experiments.
- Corollary 5.6 would be insufficient to prove that degeneracy is prevented. To prove this property, we further need to prove that the minimum $L_{total}(\theta)$ is below 2 and $L_{separation}$ is not zero.
Experimental Designs Or Analyses: This paper uses datasets from Ad Benchmark repository. The type of anomaly is not analyzed.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper proposes to use two different kernels for normal and anormal samples. This point is different from the existing methods listed on related work sections.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
- The theory of single-tailed kernels vs. dual-tailed kernels is interesting.
- Enhancing the representation via diversity loss is reasonable
- Ensemble-based subset splitting is effective
- The performance of the proposed method is higher than state-of-the-art.
Weaknesses
The structure of this paper is inadequate. Sec.1. is written in one paragraph except for contributions. Some section formats are not consistent. Sec.6.1 Experimental Setup is written in an improper location. Ablation studies are shown only in the appendix.
Other Comments Or Suggestions: - In Sec 3.1, 0 is the unlabeled sample, which is largely normal but can contain anomalies. However, in p.4, 0 is regarded as normal without explanation.
- The ensemble size of 5 may not be the best. Since only 1, 3, and 5 splits are evaluated, a larger ensemble may further increase accuracy.
Questions For Authors: The authors should justify the single center for normal samples.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
**Usage of Lemma 4.1 in the main paper.**
Lemma 4.1 is the formal stepping-stone for showing that a single kernel cannot serve as both a strictly light-tailed and a heavy-tailed function. By demonstrating $\lim_{d\to\infty} \kappa_{\mathrm{light}}(d)\,/\,\kappa_{\mathrm{heavy}}(d) = 0$, it clarifies that any kernel “light” enough to enforce tight in-class clustering cannot simultaneously retain the broader margin necessary for out-of-class separation. This result underpins the transition to a dual-kernel scheme in Theorem 5.3, where one kernel is dedicated to pulling in-class points close, and a second kernel enforces out-of-class separation. In our revised version, we plan to reference Lemma 4.1 more explicitly in the main text (e.g., just before Theorem 5.3), making its critical role in motivating the dual-kernel approach more transparent.
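To make the vanishing ratio concrete, here is a hedged numerical sketch (the Gaussian/Student-$t$ forms and the parameters $\sigma$, $\nu$ are illustrative choices, not necessarily the paper's exact parameterization):

```python
import math

def kappa_light(d, sigma=1.0):   # light-tailed Gaussian: exponential decay
    return math.exp(-d * d / (2.0 * sigma * sigma))

def kappa_heavy(d, nu=1.0):      # heavy-tailed Student-t: polynomial decay
    return (1.0 + d * d / nu) ** (-(nu + 1.0) / 2.0)

for d in (0.5, 2.0, 4.0, 8.0):
    print(d, kappa_light(d) / kappa_heavy(d))
# the ratio shrinks toward 0 as d grows: a kernel tight enough for in-class
# clustering cannot also keep the heavy tail needed for out-of-class margins
```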
**Validation of Lemma 5.1, 5.2 and clarification of moderate distances**
In Lemmas 5.1 and 5.2, “moderate distances” refers to the typical radius range where in-class samples are neither so close that their similarity is almost one nor so far that the kernel decays to zero. This region is precisely where the difference between a rapidly decaying (light-tailed) and a more gradual (heavy-tailed) function is most pronounced. Although “moderate” is inherently qualitative, Table 4 in the appendix shows that in real datasets, most points do indeed occupy this intermediate distance regime in the learned representation. By measuring average distances between each sample and its assigned (or opposing) center, one sees that in-class samples form tighter clusters under light-tailed similarity, and out-of-class samples remain substantially farther under heavy-tailed similarity—exactly matching the theoretical claims of Lemmas 5.1 and 5.2.
**Proof $L_{total} < 2$**
To establish this, we further show that the minimum $L_{\text{total}}(\theta)$ is below 2 and that $L_{\text{separation}}$ is not zero.
In the main paper, we define the total loss as
$$
L_{\mathrm{total}}(\theta) = L_{\mathrm{separation}}(\theta) + L_{\mathrm{diversity}}(\theta).
$$
Corollary 5.6 shows that collapsing each class onto a single point yields a diversity loss of exactly 2 while driving the separation term arbitrarily close to zero, so the total loss under this degenerate arrangement is approximately 2. To demonstrate that degeneracy does not globally minimize the total loss, one can construct a non-degenerate arrangement in which each class’s points lie near (but not identically at) its center. Placing the points close to their respective centers makes $ L_{\mathrm{separation}}(\theta) $ arbitrarily small, say $ \varepsilon$, while ensuring that the points are not all coincident reduces $ L_{\mathrm{diversity}}(\theta) $ below 2 by some margin $\delta > 0 $. Because these adjustments can be made largely independently of each other, one can tune $ \delta$ to exceed $\varepsilon $, forcing
$$
L_{\text{total}}(\theta) = L_{\mathrm{separation}}(\theta) + L_{\mathrm{diversity}}(\theta) \le \varepsilon + \bigl(2 - \delta\bigr) < 2.
$$
Since the degenerate arrangement’s total loss is 2, whereas this non-degenerate arrangement achieves a strictly smaller value, degeneracy cannot be optimal. Consequently, $L_{{separation}}$ cannot vanish at the true minimum, and the trivial collapse solution is excluded as a global minimizer.
**Explanation regarding 0 label and ensemble size.**
In Section3.1, “0” denotes unlabeled samples whose true status is unknown—mostly normal but potentially anomalous. Our method uses the labeled anomalies (y=1) to drive separation, exposing hidden anomalies among the unlabeled. We will clarify in the revision that “0” does not guarantee normality. For ensemble size (1, 3, or 5 splits), more splits can marginally improve accuracy but fragment the data and increase runtime. We find five splits to be the best balance of performance and cost, and will make this rationale explicit in the revised manuscript.
**Justification for single normal center.**
In classical one-class anomaly detection, using a single center for normal data aligns well with margin-based and complexity-theoretic reasoning: keeping the normal region compact and described by a minimal boundary (e.g., a single enclosing ball) not only simplifies optimization but also reduces the risk of overfitting, particularly under limited anomaly labels. This approach is seen in canonical methods like DeepSVDD or DeepSAD, which assume that normal data occupy a single contiguous cluster in feature space. While multiple centers can capture diverse normal modes, each adds hyperparameters. Empirically, a single center often suffices, as normal instances typically share enough common structure to cluster tightly around one latent coordinate.
**W1**
We will improve the structure of our manuscript in the revision, addressing section formatting and organization.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the response. So, I will raise the score to Weak Accept.
---
Reply to Comment 1.1.1:
Comment: We appreciate your feedback and will thoughtfully incorporate your suggestions into the paper. | Summary: This paper proposes a method to improve anomaly detection performance by utilizing two kernel functions in the weakly supervised anomaly detection problem, where only unlabeled data and a small amount of labeled anomaly data are available. The effectiveness of the proposed method is validated through experiments using various datasets.
## update after rebuttal
I raised the score to Weak Accept.
Claims And Evidence: The claims seem clear, but I am unsure why the method is effective for weakly supervised learning. Please refer to the Question section.
Methods And Evaluation Criteria: The proposed method appears reasonable to some extent. However, it would be beneficial to evaluate it on more realistic datasets. Please refer to the Question section.
Theoretical Claims: The reason why the proposed method is effective for weakly supervised learning is unclear.
Experimental Designs Or Analyses: An ablation study is necessary. Please refer to the Question section.
Supplementary Material: I have reviewed it thoroughly.
Relation To Broader Scientific Literature: This paper should cite and compare their method with LOE [1] and SOEL [2]. Please refer to the Question section.
[1] Qiu, Chen, et al. "Latent outlier exposure for anomaly detection with contaminated data." International Conference on Machine Learning. PMLR, 2022.
[2] Li, Aodong, et al. "Deep anomaly detection under labeling budget constraints." International Conference on Machine Learning. PMLR, 2023.
Essential References Not Discussed: Please refer to the "Relation To Broader Scientific Literature" section.
Other Strengths And Weaknesses: The motivation for the design of the proposed method is well reflected, which is commendable. The experimental results are also promising. However, the justification for why it is effective in weakly supervised learning should be explicitly stated. Additionally, the comparison methods and datasets seem insufficient. Please refer to the Question section.
Other Comments Or Suggestions: Please refer to the Question section.
Questions For Authors: - While I believe the proposed method is effective for supervised anomaly detection, I do not understand why it is effective for weakly supervised anomaly detection. Since the method assumes that the unlabeled data consists mainly of normal samples, the presence of anomalies within the unlabeled data might significantly affect the results. Could you clarify this point?
- The authors should compare their method with approaches that explicitly handle anomalies within unlabeled data, such as LOE [1] and SOEL [2]. SOEL, in particular, has been shown to be effective for weakly supervised anomaly detection. Would it be possible to conduct such a comparison?
- The appendix (Appendix P) includes an ablation study on the diversity term, but I believe additional ablation studies are necessary. For example, what happens if only one of the two kernel functions is used?
- The datasets used in the experiments seem simple. Would it be possible to conduct experiments on more realistic datasets, such as medical data like MedicalMnist [3]?
[3] Yang, Jiancheng, et al. "Medmnist v2-a large-scale lightweight benchmark for 2d and 3d biomedical image classification." Scientific Data 10.1 (2023): 41.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback.
**Q1**
From a theoretical standpoint, WSAD-DT applies margin-based reasoning to handle both partial contamination in the unlabeled set and extremely limited anomaly labels. Assigning each class its own center and using a dual-tailed kernel—light-tailed for in-class compactness, heavy-tailed for out-of-class margins—maintains separation even when anomalies are scarce or intermixed with unlabeled data. The heavy-tailed component preserves a “push” at moderate distances, which is crucial under weak supervision where a handful of labeled anomalies must guide the entire boundary. Meanwhile, the diversity term prevents degenerate “collapse” solutions, ensuring normal and anomalous samples maintain sufficient variability. The ensemble mechanism gives each model a distinct view of normality, while sharing the same anomaly labels across models ensures consistent guidance. Aggregating these diverse detectors improves robustness and generalization under limited anomaly labels. Furthermore, our experiments under different weak-supervision setups—1%, 5%, and 10% labeled anomalies, sometimes as few as five labeled anomalies total—demonstrate that WSAD-DT remains effective even in extremely label-scarce conditions. As shown in our contamination experiments (Appendix M), the method’s performance degrades only slightly when unlabeled data are partially contaminated, underscoring both its theoretical and practical robustness in real-world weakly supervised scenarios.
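As a hedged illustration of this ensemble mechanism (function and variable names are ours, not the released code): each of the $k$ models sees a disjoint slice of the unlabeled pool but the full set of labeled anomalies.

```python
import random

def ensemble_splits(unlabeled, labeled_anomalies, k=5, seed=0):
    """Partition the unlabeled pool into k disjoint subsets; pair every
    subset with the full (small) set of labeled anomalies."""
    idx = list(range(len(unlabeled)))
    random.Random(seed).shuffle(idx)
    parts = [idx[i::k] for i in range(k)]
    return [([unlabeled[j] for j in p], list(labeled_anomalies)) for p in parts]

splits = ensemble_splits(list(range(100)), ["a1", "a2", "a3"], k=5)
covered = sorted(x for subset, _ in splits for x in subset)
print(len(splits), covered == list(range(100)))  # 5 True
```

Each detector is then trained on its own (subset, anomalies) pair and the ensemble aggregates their scores.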
**Q2**
We compared WSAD-DT against SOEL (using the authors’ code from https://github.com/aodongli/Active-SOEL), which explicitly handles anomalies in the unlabeled set, using each approach’s default configurations. Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table4.png shows that WSAD-DT generally achieves higher AUC-ROC on most datasets, occasionally by a wide margin. These results indicate that while SOEL is effective at identifying unlabeled anomalies, the dual‐tailed separation and diversity regularization in WSAD-DT provide stronger results in weakly supervised scenarios.
**Q3**
To isolate the effect of the kernel design and the diversity term, we fix the ensemble size to 1 in both the full WSAD-DT and a simplified variant that uses only a single Gaussian kernel with no diversity regularization.
As shown in Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table5.png, we compare the full WSAD-DT (dual-tailed + diversity) to a simplified version using only a single Gaussian kernel without the diversity term. Across most datasets, WSAD-DT achieves higher AUC-ROC. Empirically, this suggests that relying on a single kernel alone and omitting the diversity regularization is insufficient for robust anomaly detection under weak supervision. In contrast, combining light- and heavy-tailed kernels with a diversity penalty improves stability and overall performance.
**Q4**
We appreciate the reviewer’s suggestion regarding the use of more realistic datasets, such as MedicalMNIST. In response, we have extended our experiments to include this dataset, applying the commonly used "one-vs.-rest" evaluation protocol. This protocol treats each class in turn as normal while randomly selecting samples from other classes as anomalies, following the standard practice for anomaly detection tasks on multi-class datasets. Our experiments on the MedicalMNIST dataset have yielded strong results, demonstrating the robustness and scalability of our method in more complex, real-world scenarios. As shown in the Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table6.jpg, we compare the performance of our method (WSAD-DT) against several competitive methods across various datasets in the MedicalMNIST suite. For example, on the Derma dataset, WSAD-DT achieved the highest AUC-ROC score of 0.9002, outperforming methods such as DeepSAD, DevNet, and XGBOD. Across several datasets, our method consistently ranks among the top performers, reinforcing its effectiveness in handling medical data with weak supervision. We believe these results demonstrate that our method is not only effective on tabular datasets but also performs well in more complex, real-world applications like medical anomaly detection.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
It's great to see that the comparison with SOEL yielded strong results.
I’d like to ask two additional questions:
- Which backend did you use for SOEL, MHRot or NTL?
- Also, would it be possible to evaluate SOEL (using MHRot as the backend) on the MedicalMNIST experiments as well?
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up questions. Below are our responses:
**Q1**
We used the NTL backend for SOEL in our experiments.
**Q2**
As our experimental setup for MedicalMNIST aligns closely with the original protocol described in the SOEL [1] paper, we kindly refer to the results reported in the original SOEL paper for a baseline comparison.
We greatly appreciate your interest and look forward to any additional feedback.
[1] Li, Aodong, et al. "Deep anomaly detection under labeling budget constraints." International Conference on Machine Learning. PMLR, 2023.
## update after rebuttal
Most of my concerns were addressed, so I changed my score.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: no
Experimental Designs Or Analyses: no
Supplementary Material: yes, Appendix A, F, G, H, I, J, K, L, M, N, O, P, and Q.
Relation To Broader Scientific Literature: This paper proposes a weakly supervised anomaly detector with better performance.
Essential References Not Discussed: no
Other Strengths And Weaknesses: The paper is clearly described and provides some theoretical proofs.
Other Comments Or Suggestions: see questions.
Questions For Authors: 1. Is it weakly supervised if 70% of the data in the dataset is used for training? Although few anomalies are used, there are also only a few anomalies in the dataset; even if all the data were used for training, there would still be only a few anomalies.
2. How are the parameters alpha and beta set and what is their impact on the model?
3. There is a lack of comparison with some SOTA unsupervised algorithms, such as IDK[1], isolation-based algorithms are very famous in anomaly detection, and most of them are linear algorithms.
4. Do the light and heavy kernels necessarily have to adhere to the specific forms outlined in the paper? Would it be acceptable to employ a Gaussian kernel for both, with distinct σ values?
I am willing to improve my score if my concerns are addressed.
[1] Ting, K. M., Xu, B. C., Washio, T., & Zhou, Z. H. (2020, August). Isolation distributional kernel: A new tool for kernel-based anomaly detection. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 198-206).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their perceptive review.
**Q1**
We adopt the 70/30 train/test split protocol following the approach used by AdBench (Han et al., 2022) [1], and this does not contradict the principle of weak supervision. What determines the “weakness” here is not how much of the dataset is allocated to training, but rather how few anomalies are actually labeled among that training portion. Even though 70\% of the data is used for training, only a small fraction of those anomalies—sometimes just five labeled anomalies—is available for explicit supervision. The rest of the training set remains unlabeled. This imbalance in labeled anomalies, rather than the overall training size, is what makes the setting truly weakly supervised and differentiates it from the kind of fully supervised approach that would require extensive or comprehensive anomaly labeling.
**Q2**
We set the bandwidth parameters to reflect the distinct behaviors of normal and anomalous data, using a smaller value for the normal center to enforce tighter in-class clustering and a larger value for the anomaly center to accommodate its more varied distribution. These parameters essentially control how rapidly similarity decays with distance for the light-tailed (Gaussian) and heavy-tailed (Student-\(t\)) kernels, respectively. A Gaussian kernel with a smaller bandwidth enforces compactness for normal samples, whereas a slightly larger bandwidth for anomalies prevents overly constraining their more diverse representations. Likewise, the Student-\(t\) kernel parameter determines how “heavy” the tail is, maintaining a broader margin for out-of-class points. We provide an ablation study showing that the method remains robust across reasonable ranges [0.1-1] of these bandwidth and tail parameters (Appendix O).
**Q3**
We performed an additional set of experiments comparing WSAD-DT with IDK (using the authors’ code from https://github.com/IsolationKernel/Codes) and Isolation Forest (IForest) [2] (code from https://github.com/yzhao062/pyod), adopting each method’s default parameter settings. In Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table2.png, WSAD-DT consistently outperforms IDK and IForest in AUC-ROC across nearly all datasets, often by a substantial margin. Although isolation-based methods are well-known for their speed and simplicity, these results suggest that incorporating a small set of labeled anomalies, along with a dual-tailed kernel and diversity regularization, yields more accurate detection. Even on large or high-dimensional datasets, WSAD-DT achieves better separation between normal and outlying samples than both IDK and IForest. These findings are consistent with [1], where even minimal supervision—such as having only five labeled anomalies—can substantially surpass purely unsupervised methods.
**Q4**
We conducted an additional experiment where we replaced the dual-tailed setup (Gaussian for in-class + Student-\(t\) for out-of-class) with a single Gaussian kernel using two different bandwidths—one smaller for in-class (e.g., $\sigma=0.1$) and one larger for out-of-class (e.g., $\sigma=2$). To isolate the effect of the kernel design and the diversity term, we fix the ensemble size to 1 in both setting. Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table3.png shows that while using two distinct Gaussian bandwidths can perform reasonably well, the full dual-tailed configuration (Gaussian + Student-\(t\)) generally achieves stronger AUC-PR score. Larger $\sigma$ in a Gaussian slows its decay somewhat, but that decay remains fundamentally exponential, whereas a Student-\(t\) kernel falls off more gradually and does not saturate as quickly at moderate distances. This long-tail behavior preserves a stronger “push” for out-of-class points, leading to a broader margin and, ultimately, more robust anomaly separation.
This suggests that having a genuinely heavy tail is important for pushing out-of-class points farther away and not just a matter of scaling the same kernel. In practice, a true heavy-tailed kernel—like Student-\(t\)—better preserves moderate similarities at larger distances, thereby preventing early saturation and improving separation of anomalous instances.
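This point can be illustrated numerically (a hedged sketch; the $\sigma=2$ bandwidth mirrors the setting above, while $\nu=1$ is an illustrative tail parameter): even a wide-bandwidth Gaussian eventually falls below the Student-$t$ tail.

```python
import math

gauss = lambda d, s: math.exp(-d * d / (2 * s * s))                 # exponential tail
student_t = lambda d, nu=1.0: (1 + d * d / nu) ** (-(nu + 1) / 2)   # polynomial tail

for d in (2.0, 6.0, 12.0):
    print(d, gauss(d, 2.0), student_t(d))
# at d=2 the wide Gaussian is larger, but by d=6 the Student-t already
# dominates, and the gap widens: rescaling sigma never changes the tail class
```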
[1] Han, S., Hu, X., Huang, H., Jiang, M., & Zhao, Y. (2022). Adbench: Anomaly detection benchmark. Advances in neural information processing systems, 35, 32142-32159.
[2] Liu, F. T., Ting, K. M., & Zhou, Z. H. (2008, December). Isolation forest. In 2008 eighth ieee international conference on data mining (pp. 413-422). IEEE.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Thank you for your answers to Q2, and Q4. I still have some questions about Q1 and Q3.
For Q1, In AdBench, 70% of the training set is not for ‘Weakly Supervised’, and I still think that using 70% of data for training cannot be called ‘Weakly Supervised’. Although you said that only a few data are abnormal, you did not only use 5 abnormalities for training, you also used a large number of normal samples. If you use a small number of labeled normal samples, for example, 5 like the abnormalities, and the other samples (in the training set) are a mixture of normal and abnormal samples (unlabeled normal and unlabeled abnormal), and then train these data together, this may be called weak supervision. So I want to know whether the unlabeled samples in your training set include unlabeled abnormalities? How many unlabeled anomalies are there?
For Q3, I still have some questions: I found that the results in Table 2 (\url{https://anonymous.4open.science/r/weakly_anomaly_detection/Table2.png}) are not consistent with the results reported in the IDK paper. For example, the results on $\textit{http}$, $\textit{cover}$, and $\textit{shuttle}$ are very low, especially for $\textit{http}$, which is a very easy dataset in $\mathbb{R}^3$. I think this may be because the $\psi$ you used is inappropriate. How did you set the $\psi$? What is the result if you follow the parameter setting in the IDK paper?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for further feedback.
**Setup and Literature**
The anomaly‐detection literature clarifies that having only a small set of labeled anomalies in conjunction with a large unlabeled pool (which may itself contain hidden anomalies) is precisely what defines weak AD [2]. Indeed, [1] provide explicit definitions, noting that weakly supervised AD typically leverages a few labeled anomalies alongside predominantly unlabeled data that can be contaminated by anomalies (Section 3). Furthermore, in Section 4.1 of [3], the authors adopt the same train–test split for all settings, using a large pool (70\%) of unlabeled points presumed normal, together with only a small fraction of labeled anomalies.
**Additional Experiment (Smaller Unlabeled Pool):**
We acknowledge the reviewer’s point that the unlabeled set is large. We therefore conducted an additional experiment in which only 10\% of the data are unlabeled and 5\% of the anomalies are labeled, chosen uniformly at random; the rest of the data are not used in training. Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table7.png compares our WSAD-DT method against the baselines (DeepSAD, DevNet, and XGBOD) under this reduced unlabeled-pool regime. Even with the unlabeled set cut drastically and only 5\% of anomalies labeled, WSAD-DT still outperforms DeepSAD, DevNet, and XGBOD on average, confirming that it remains effective and robust under smaller unlabeled-pool conditions.
**Contamination**
In Appendix M, we explicitly test WSAD-DT’s robustness to contamination in the unlabeled set—i.e., scenarios in which unlabeled data definitely contain anomalies. We systematically inject different proportions of unlabeled anomalies into the training pool, mislabeling them as “normal.” Our findings show that even under these increasingly high levels of contamination, WSAD-DT’s performance degrades gracefully compared to baselines.
**IDK**
We initially set $\psi=4$ to maintain consistent parameters across all datasets. In contrast, the IDK paper tunes $\psi$ on a per-dataset basis over $\{2^1, 2^2, \dots, 2^{12}\}$ . To address the reviewer’s concern, we ran additional experiments on a subset of datasets using that exact parameter range and report the optimal results (Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table8.png). This yielded improvements; for instance, on Shuttle, IDK’s AUC-ROC rose to \(0.9458\). Some differences persist, likely because we rely on the AdBench (Han et al. 2022) 70–30 splits instead of IDK’s original split. Even with tuned $\psi$, however, the fully unsupervised IDK framework lags behind our minimal-supervision approach, where labeling just five anomalies yields stronger results. This gap aligns with prior findings that even sparse labeled anomalies can outweigh unsupervised anomaly detection. Finally, re-running IDK on the massive HTTP dataset (roughly 500k points) up to $\psi = 2^{12}$ is extremely time-consuming, so we have not yet completed those runs; we hope the results so far are sufficiently illustrative.
[1] Pang, Guansong, et al. "Deep learning for anomaly detection: A review." ACM computing surveys (CSUR) 54.2 (2021): 1-38.
[2] Pang, Guansong, et al. "Deep weakly-supervised anomaly detection." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.
[3] Han, Songqiao, et al. "Adbench: Anomaly detection benchmark." Advances in neural information processing systems 35 (2022): 32142-32159.
We truly value your interest and welcome any further feedback you may have. | Summary: The paper proposes WSAD-DT, a weakly supervised anomaly detection framework that employs dual-tailed kernels (light-tailed for in-class compactness, heavy-tailed for out-of-class separation) and an ensemble strategy to address limited labeled anomalies. Empirical results on AdBench datasets show state-of-the-art performance compared to methods like DeepSAD and XGBOD.
Claims And Evidence: Supported Claims: The dual-tailed kernel’s effectiveness is supported by ablation studies (Appendix K) and theoretical proofs (Lemmas 5.1–5.3). The ensemble strategy’s benefits are validated via experiments (Table 9).
Problematic Claims: The claim that "no single kernel can satisfy both compactness and separation" (Theorem 5.3) relies on assumptions (e.g., "sufficient model capacity") that are not empirically verified for all scenarios.
Methods And Evaluation Criteria: The dual-tailed kernel design is intuitive and aligns with margin-based theory. The use of AdBench datasets is appropriate, but experiments on temporal/graph data are missing, limiting scope validation. Evaluation metrics (AUC-ROC/PR) are standard, but statistical significance tests (Wilcoxon) strengthen reliability.
Theoretical Claims: Lemmas 5.1 and 5.2 are correctly derived, but Theorem 5.3’s proof assumes ideal conditions (e.g., "well-separated data"), which may not hold in practice. The collapse prevention via diversity loss (Lemma 5.4) is valid but lacks empirical validation in high-dimensional spaces.
Experimental Designs Or Analyses: Experiments are thorough but lack scalability tests on very large datasets (e.g., >1M samples). The contamination study (Appendix M) is a strength, but labeled anomaly fractions (1%–10%) could be extended to extremes (e.g., 0.1%).
Supplementary Material: Supplementary material is reasonable for understanding the thesis
Relation To Broader Scientific Literature: The work builds on margin-based theory and extends DeepSAD/DevNet by decoupling in-class and out-of-class similarity. The dual-kernel idea is novel but could relate to multi-kernel learning.
Essential References Not Discussed: I don't have specific knowledge of the area in question, but I currently think it's adequate
Other Strengths And Weaknesses: Strengths: Novel dual-kernel design, robust ensemble strategy, and comprehensive experiments.
Weaknesses: Limited discussion on computational overhead for large ensembles and no exploration of graph/temporal data.
Other Comments Or Suggestions: Typos: Page 3, "reserves a broader margin" → "preserves"; Page 5, "In-based separation terms" → "In-class".
Questions For Authors: Theorem 5.3: How does the method perform if anomalies are not well-separated (violating the assumption)? Would this invalidate the theorem?
Were other kernels (e.g., Laplacian, polynomial) tested? If not, why?
How does WSAD-DT handle streaming data or concept drift in applications like fraud detection?
Since I don't know the relevant literature, I will look carefully at other reviewers' comments to adjust the score
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
**Ablation study in the extreme case (e.g., 0.1\% labeled anomalies)**
We have conducted additional experiments with a 0.1\% fraction of labeled anomalies. Table https://anonymous.4open.science/r/weakly_anomaly_detection/Table1.png summarizes AUC-ROC results under this extreme scenario. Despite having only 0.1\% labeled anomalies, WSAD-DT achieves state-of-the-art or near-best performance on most datasets. Notably, even with such sparse labels, dual-tailed kernels and our ensemble design still effectively leverage the limited anomaly examples to separate normal vs.\ abnormal data.
**Q1**
The “well-separated” assumption in Theorem 5.3 is a common theoretical device in margin-based analyses, akin to the strict separability often assumed in classical SVM proofs. Although real data typically feature some overlap between anomalies and normal samples, this idealized scenario highlights a core theoretical insight: having distinct margins reveals how dual-tailed kernels outperform single-kernel designs by simultaneously promoting in-class compactness (via the light-tailed kernel) and sustaining a long-range “push” for out-of-class points (via the heavy-tailed kernel). In practice, WSAD-DT remains effective when anomalies are only partially separable because the heavy-tailed kernel preserves nontrivial gradients even at moderate distances, preventing anomalies close to the normal center from collapsing into negligible similarity. Meanwhile, the light-tailed kernel keeps each class tightly clustered, and the diversity term further prevents degenerate collapsing by penalizing over-concentration. Moreover, our experiments across different weak-supervision scenarios—1\%, 5\%, or 10\% labeled anomalies, sometimes only five anomalies total—show that WSAD-DT remains effective under label scarcity. As demonstrated in our contamination experiments (Appendix M), performance degrades only mildly when unlabeled data contain anomalies, confirming that while Theorem 5.3 holds under clean-margin assumptions, WSAD-DT’s dual-tailed design still excels under real-world overlap.
**Q2**
We focus on the light- vs. heavy-tail distinction, not on specific kernel forms. Any light-tailed (e.g., Laplacian) or heavy-tailed (e.g., Cauchy) kernel is valid, as long as it preserves the fast- vs. slow-decay principle. In Appendix O, we vary kernel parameters to test different tail decays and observe stable performance.
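For concreteness, the fast- vs. slow-decay principle with the example kernels named above can be sketched as follows (the parameter values are illustrative choices of ours, not the settings tested in Appendix O):

```python
import math

def laplacian_kernel(d, gamma=1.0):
    # Light tail: exponential decay exp(-gamma * d).
    return math.exp(-gamma * d)

def cauchy_kernel(d, sigma=1.0):
    # Heavy tail: polynomial decay 1 / (1 + (d / sigma)^2).
    return 1.0 / (1.0 + (d / sigma) ** 2)

# The heavy/light similarity ratio grows with distance: the heavy-tailed
# kernel retains relatively more similarity for far-away (out-of-class) points.
for d in [1.0, 5.0, 10.0]:
    print(f"d={d:4.1f}  ratio={cauchy_kernel(d) / laplacian_kernel(d):.1f}")
```

Any pair of kernels with the same fast- vs. slow-decay relationship would show the same qualitative growth of this ratio.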
**Q3**
WSAD-DT is primarily designed for static, batch-oriented anomaly detection and does not explicitly account for streaming data or concept drift, similar to methods like DeepSAD, DevNet, or XGBOD. Nonetheless, it can be adapted by updating the normal and anomalous centers as new data arrive and retraining periodically to accommodate changing patterns. The ensemble structure also naturally extends to dynamic contexts by allowing older models to be replaced with ones trained on more recent data. This preserves the dual-tailed kernel advantage while helping WSAD-DT remain effective in environments, such as fraud detection, where data distributions evolve over time.
**Q4**
Classical kernel-based anomaly detection methods, such as One-Class SVM, rely on a single kernel applied uniformly across the data [3]. In contrast, multi-kernel and multi-view approaches blend multiple kernels—often through linear combinations—to capture richer similarity structures [2]. For example, Zhao and Fu introduce a dual-regularized framework that factors multi-view data into cluster indicators and sample-specific errors [1]. However, these methods still learn a single, blended kernel function that is applied globally to all points. By contrast, our approach conditionally assigns two distinct kernels to each data point: a light-tailed kernel for in-class distances and a heavy-tailed kernel for out-of-class distances. This explicitly changes tail behavior based on whether a point is close to the normal or anomaly center—rather than learning a single global kernel mixture. As we demonstrate in Section 5, this dual-tailed design achieves tighter in-class clustering and broader out-of-class separation—two conflicting goals that a single kernel or linear blend cannot simultaneously satisfy.
**Typos**
We thank the reviewer for pointing out the typos, and we will correct them in the revised manuscript.
[1] Zhao, Yue, and Yun Fu. "Dual-Regularized Multi-View Outlier Detection."
[2] Gönen, Mehmet, and Ethem Alpaydın. "Multiple Kernel Learning Algorithms."
[3] Schölkopf, Bernhard, et al. "Estimating the Support of a High-Dimensional Distribution." | null | null | null | null | null | null |
Instance Correlation Graph-based Naive Bayes | Accept (spotlight poster) | Summary: The authors propose a novel algorithm called instance correlation graph-based naive Bayes (ICGNB), which can work with numerical attributes and utilize the correlations among instances. The average classification accuracy of ICGNB on 24 datasets is higher than the best competitors.
Claims And Evidence: The results on synthetic data demonstrate that the newly generated attributes are more effective than original ones. The authors consider all feasible datasets from KEEL for evaluation, and thus I think they did not hide negative results. Hence, I'm overall satisfied with the improvements, despite that the proposed ICGNB does not always perform the best.
Methods And Evaluation Criteria: The main framework of ICGNB in Figure 1 is intuitive. The optimization of ELBO aligns with the choice of VGAE. The choice of datasets is ok to me, since ALL datasets containing only numerical attributes from KEEL are considered.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental designs of both real and synthetic datasets are checked. I think the ablation study part can be extended to include more variants.
Supplementary Material: A quick glimpse of Table 3 in Appendix B.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: No, to my knowledge.
Other Strengths And Weaknesses: No other strengths and weaknesses need to be highlighted.
Other Comments Or Suggestions: I would suggest to put the columns of ICGNB and ICGNB-A together for easy comparison.
I wonder whether the formulas in the introduction section are necessary, since they are not used in the motivation illustration of ICGNB.
Questions For Authors: 1.How about the performance if other graph convolution functions are used?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Questions For Authors:** How about the performance if other graph convolution functions are used?
**Author Response:** Thanks for your valuable comments. Besides the graph convolution function used in VGAE, other graph convolution functions, such as those of GraphSAGE and GAT, can also be used. To address the reviewer’s concerns, we perform experiments replacing the graph convolution function of VGAE with those of GraphSAGE and GAT to evaluate the performance of ICGNB. We perform a stratified hold-out validation on 24 real-world datasets. All other experimental settings remain consistent with those described in the paper. The average classification accuracy of ICGNB with different graph convolution functions across the 24 datasets is summarized as follows:
||ICGNB_VGAE|ICGNB_GraphSAGE|ICGNB_GAT|
|:--:|:--:|:--:|:--:|
| Average Accuracy (%)|80.22|80.46|79.22|
These results show that using the graph convolution function of GraphSAGE is also effective, but using that of GAT slightly reduces our ICGNB’s performance.
**Experimental Designs Or Analyses:** The experimental designs of both real and synthetic datasets are checked. I think the ablation study part can be extended to include more variants.
**Author Response:** Thanks for your valuable comments. We agree that including additional ablation variants can provide a more thorough analysis of the effectiveness of each part in ICGNB. To address the reviewer’s concerns, we have introduced two new ablation variants and conducted a more comprehensive ablation study using the experimental setup described in Section 4.1. The two new variants are denoted as ICGNB-G and ICGNB-GW. Here, ICGNB-G only retains the part of attribute generation and removes attribute augmentation and weighting. ICGNB-GW retains the part of attribute generation and weighting and removes original attributes. Meanwhile, the existing two ablation variants are denoted as ICGNB-A and ICGNB-W. ICGNB-A only retains the part of attribute generation and attribute augmentation and removes attribute weighting. ICGNB-W retains the part of attribute weighting and removes attribute generation and attribute augmentation. The experimental results are as follows:
||ICGNB|ICGNB\-A|ICGNB\-W|ICGNB\-G|ICGNB\-GW|
|:--:|:--:|:--:|:--:|:--:|:--:|
|Average Accuracy (%)|78.87|73.70|77.84|72.44|72.61|
From these results, it can be found that ICGNB-A achieves higher accuracy than ICGNB-G, demonstrating that attribute augmentation is essential for ICGNB. ICGNB-GW achieves higher accuracy than ICGNB-G, confirming that attribute weighting is necessary for new attributes. Meanwhile, ICGNB consistently outperforms all its variants, further validating the rationality of ICGNB. In the final version of the paper, we will add these two new variants into our ablation study. Thanks again for your valuable comments.
**Other Comments Or Suggestions:**
I would suggest to put the columns of ICGNB and ICGNB-A together for easy comparison.
I wonder whether the formulas in the introduction section are necessary since they are not used in the motivation illustration of ICGNB.
**Author Response:**
Thanks for your valuable comments. In the final version of the paper, we will put the columns of ICGNB and ICGNB-A together for easy comparison.
In our paper, the introduction section includes two formulas, Eqs (1) and (2), which are used by Bayesian networks and NB to classify instances, respectively. As the starting point of our work, they demonstrate that NB relies on the attribute conditional independence assumption (ACIA). Considering that ACIA is difficult to hold in real-world scenarios, numerous improved algorithms have subsequently been proposed. However, these algorithms overlook leveraging the correlations among instances to improve NB, which motivates us to propose the ICGNB in this paper. Therefore, the formulas in the introduction section are necessary. Thanks again for your valuable comments. | Summary: This paper proposes a novel instance correlation graph (ICG) based Naïve Bayes classification framework. It first constructs an ICG from original attributes, and then employs a variational graph autoencoder (VGAE) to generate embeddings based on both ICG and original attributes. Extensive experiments have been conducted to verify the effectiveness of the proposed method.
Claims And Evidence: Yes. The authors claim that they improve the performance of Naïve Bayes classifier by leveraging the correlations among instances, which is supported by clear and convincing theoretical analysis and experiments.
Methods And Evaluation Criteria: Yes. The proposed method and evaluation criteria in the paper make sense for the problem and application.
Theoretical Claims: Yes. I have checked the correctness of theoretical claims in this paper.
Experimental Designs Or Analyses: Yes. I have checked the soundness of the experimental settings and results analysis. Twenty-four real-world datasets were adopted for evaluating the classification performance of the proposed method, and synthetic data was used to verify the independence and Gaussianity of the attributes generated by ICG. These experimental designs and analyses look sound.
Supplementary Material: The authors have not provided the supplementary material. But I have reviewed the attached Appendixes A and B.
Relation To Broader Scientific Literature: This paper proposes a novel Naïve Bayes (NB) classifier, named ICGNB, to improve the performance of NB on numerical attributes by effectively utilizing the correlations among instances with VGAE.
Essential References Not Discussed: No. All essential references are currently cited/discussed in the paper.
Other Strengths And Weaknesses: S1. Unlike most variants of Naïve Bayes that assume instance independence, this paper enhances Naïve Bayes based on the correlations among instances, which seems both reasonable and effective.
S2. The proposed model ICGNB contains an attribute weighting module to reduce the redundancy information from the correlations of instances, which is interesting and effective.
S3. Extensive experiments on several real-world and synthetic datasets have been conducted and the results verify the effectiveness, independence and Gaussianity of the proposed model.
S4. The organization of the paper is quite good and it is easy to follow.
W1. The ICG construction algorithm appears to employ a greedy strategy. Thus, the correctness of this algorithm should be discussed more clearly.
W2. The authors claim that using a VGAE to generate augmented attributes from ICG and reweighting these attributes is effective for improving classification accuracy. However, this technique is similar to Graph Attention Networks (GAT), so the advantages of the proposed method over GAT should be clearly explained.
Other Comments Or Suggestions: I do not have any additional comments or suggestions for the authors.
Questions For Authors: Q1. Constructing a full connection graph of instances based on Euclidean distance is time consuming and usually introduces noisy edges, how to address these issues?
Q2. The ICG construction algorithm adopts a greedy strategy, and its output is similar to a minimum spanning tree of the full connection graph. Moreover, the class labels of training instances are not incorporated into the ICG construction, which might lead to the omission of supervised information. Therefore, the correctness and advantages of this algorithm should be thoroughly discussed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1:** Constructing a full connection graph of instances based on Euclidean distance is time consuming and usually introduces noisy edges, how to address these issues?
**Author Response to Q1:** Thanks for your valuable comments. In our paper, to construct the instance correlation graph (ICG), we first define a full connection graph containing $n(n-1)/2$ edges among all instances. Then, to reduce time costs, we only select $n-1$ edges from this full connection graph to construct ICG. As a result, the ICG we constructed is a sparse graph, with most noisy edges from the fully connected graph removed. In the final version of the paper, we will include the above explanations to clarify this point. Thanks again for your valuable comments.
**Q2 and W1:** The ICG construction algorithm adopts a greedy strategy, and its output is similar to a minimum spanning tree of the full connection graph. Moreover, the class labels of training instances are not incorporated into the ICG construction, which might lead to the omission of supervised information. Therefore, the correctness and advantages of this algorithm should be thoroughly discussed.
**Author Response to Q2 and W1:** Thanks for your valuable comments. In ICGNB, VGAE requires a graph that includes both training and test instances. Since the class labels of test instances are unavailable during the training stage, their supervised information cannot be incorporated into the construction of the ICG. To ensure consistency between training and test instances, the supervised information of training instances is also excluded from the ICG construction. Although supervised information is not used during ICG construction, it is incorporated into the attribute weighting stage of ICGNB. Therefore, the overall ICGNB framework fully considers supervised information. In the final version of the paper, we will include the above explanations to clarify this point. Thanks again for your valuable comments.
**W2:** The authors claim that using a VGAE to generate augmented attributes from ICG and reweighting these attributes is effective for improving classification accuracy. However, this technique is similar to Graph Attention Networks (GAT), so the advantages of the proposed method over GAT should be clearly explained.
**Author Response to W2:** Thanks for your valuable comments. The advantages of ICGNB over GAT are summarized in the following two aspects. First, GAT fails to retain the original attributes during training, thereby losing their interpretability. In contrast, ICGNB simultaneously utilizes both the original attributes and the newly generated attributes of all instances, which preserves the interpretability of the original attributes while capturing potential correlations among instances through the new attributes. Second, GAT applies a local attention mechanism to assign weights to the neighbors of each node. However, ICGNB applies a global weighting strategy to all 2$m$ augmented attributes, enabling it to capture the overall characteristics of the dataset more effectively. By combining these two aspects, ICGNB demonstrates clear advantages over GAT. We will include a discussion on these advantages in the final version of the paper. Thanks again for your valuable comments. | Summary: This paper presents ICGNB, an enhanced Naïve Bayes method based on the Instance Correlation Graph (ICG). ICGNB leverages ICG to capture instance correlations, employs a Variational Graph Autoencoder (VGAE) to generate new attributes, and optimizes attribute weighting using Conditional Log-Likelihood (CLL). Experimental results demonstrate that ICGNB achieves an average classification accuracy of 78.87% across 24 real-world datasets, significantly outperforming existing methods. Its effectiveness is further validated through Wilcoxon statistical tests. Ablation studies and synthetic dataset analysis further confirm that ICGNB generates attributes with higher independence and Gaussianity, leading to improved classification performance.
Claims And Evidence: Yes, the paper demonstrates through Wilcoxon statistical tests that incorporating instance correlations enhances Naïve Bayes' ability to handle numerical attributes. Additionally, Pearson correlation heatmaps reveal a significant reduction in correlation among the newly generated attributes, making them more aligned with the Naïve Bayes assumption.
Methods And Evaluation Criteria: This approach is meaningful as it utilizes the ICG to capture instance relationships, overcoming the limitation of NB, which relies solely on independent attributes. By leveraging VGAE to generate new attributes, the method enhances the representational capacity of the original attributes while ensuring their independence.
Theoretical Claims: The theoretical claims of this paper primarily lie in the fact that the new attributes generated by ICGNB align with the core assumptions of NB, and that attribute weighting optimization enhances classification performance. However, the paper does not provide a rigorous mathematical proof; instead, these claims are validated empirically through experiments.
Experimental Designs Or Analyses: At present, the experiments are all reasonable, but there is a lack of parametric sensitivity experiments.
Supplementary Material: The supplementary materials include a detailed description of the algorithm presented in the paper, as well as the specific dataset configurations used in the experiments.
Relation To Broader Scientific Literature: This paper integrates an enhanced NB approach with graph representation learning, leveraging instance correlations to optimize numerical attribute classification. Unlike traditional NB improvements that solely rely on the independence assumption, this method introduces a novel perspective by incorporating instance relationships. Moreover, it aligns with GNN-related research, bridging the gap between probabilistic classification and graph-based learning.
Essential References Not Discussed: The paper does not discuss classification methods based on GNN and VGAE, such as GraphSAGE [1], which have had a significant impact in the field of graph representation learning and are closely related to ICGNB’s graph-based representation approach. Additionally, the paper does not cite more advanced probabilistic graphical models, which could be relevant to optimizing the compatibility of ICGNB with the NB assumption. Including discussions on these methods could provide a broader contextual understanding and highlight potential improvements to the proposed approach.
- [1] Hamilton W, Ying Z, Leskovec J. Inductive representation learning on large graphs[J]. Advances in neural information processing systems, 2017, 30.
Other Strengths And Weaknesses: Strengths:
- Innovative Extension of NB: By integrating ICG and VGAE, the method enhances NB classification performance for numerical attributes, representing a novel extension of NB.
- Broader Applicability: The approach is well-suited for numerical attribute classification tasks and introduces a new perspective on instance correlation learning, making it potentially adaptable to other graph-based classification tasks.
Weaknesses:
- Unassessed Computational Cost: The construction of ICG and the training of VGAE increase computational complexity. However, the paper does not provide an analysis of runtime or computational complexity.
- Lack of Theoretical Support: While experimental results support the claims, the paper lacks a formal analysis of the independence and Gaussianity of the VGAE-generated attributes, as well as a convergence proof for attribute weighting optimization.
Other Comments Or Suggestions: None.
Questions For Authors: The paper presents extensive experimental results but lacks a complete theoretical derivation, as mentioned in the weaknesses. Providing a detailed theoretical derivation would improve the overall score.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your valuable comments. We sincerely appreciate the time and effort you have dedicated to reviewing our work. Below, we provide detailed responses to each of your concerns.
**Author Response to Computational Complexity:** We supplement the time complexity analysis as follows. In Algorithm 1, line 3 constructs a full connection graph with a time complexity of $O(n^{2})$. Lines 4-8 calculate the Euclidean distances with a time complexity of $O(n^{2}m)$. Line 9 sorts edges with a time complexity of $O(n^{2}\log n)$. Lines 10-18 add edges to ICG with a time complexity of $O(n^{2}\alpha(n))$, where $O(\alpha(n))$ is the time complexity of checking whether two vertices are reachable. Lines 19-20 add self-connecting edges with a time complexity of $O(n)$. Since $m$ is usually greater than $\log n$ and $\alpha(n)$, considering only the highest-order terms, the overall time complexity of Algorithm 1 is $O(n^{2}m)$. In Algorithm 2, lines 4-5 transform ICG into $G$ and initialize the parameters in VGAE with a time complexity of $O(n^2)$. Lines 6-11 train a VGAE with a time complexity of $O(P(nm^2+n^{2}m))$. Line 12 generates new attributes with a time complexity of $O(nm^2)$. Lines 13-17 augment original attributes with a time complexity of $O(nm)$. Lines 18-23 train a GNB with a time complexity of $O(knm)$. Lines 24-25 weight augmented attributes with a time complexity of $O(\beta(m))$, where $\beta(m)$ grows linearly with $m$. Since $n$ is usually greater than $m$, considering only the highest-order terms, the overall time complexity of Algorithm 2 is $O(Pn^{2}m)$. In the final version of the paper, we will add all these time complexity analyses.
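For illustration, the Algorithm 1 steps enumerated above (full connection graph, distance computation, edge sorting, reachability-checked edge addition, self-loops) amount to a Kruskal-style greedy selection of the $n-1$ shortest edges; a minimal sketch with a union-find reachability check follows (function and variable names are our own, not the paper's code):

```python
import itertools
import math

def build_icg(X):
    """Greedily select n-1 shortest edges (plus n self-loops) from the
    full connection graph over instances X, skipping any edge whose
    endpoints are already reachable (union-find check)."""
    n = len(X)
    parent = list(range(n))

    def find(u):
        # Path-compressing find: follow parents up to the component root.
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    # All n(n-1)/2 candidate edges, sorted by Euclidean distance.
    edges = sorted(
        (math.dist(X[i], X[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    selected = [(i, i) for i in range(n)]  # self-connecting edges
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # endpoints not yet reachable: keep the edge
            parent[ri] = rj
            selected.append((i, j))
        if len(selected) == 2 * n - 1:  # n self-loops + (n-1) tree edges
            break
    return selected
```

The result is a spanning tree over the instances plus $n$ self-loops, i.e., $2n-1$ edges in total, matching the sparse $n-1$-edge selection described in the paper.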
**Author Response to Theoretical Support:** We perform a formal analysis of Gaussianity, independence, and convergence proof for attribute weighting optimization as follows:
1. Gaussianity: VGAE ensures that the distribution of the embedding vectors closely approximates $p(\mathbf{Z})$, which is an independent zero-mean multivariate Gaussian distribution with unit variances. Each variate in $p(\mathbf{Z})$ corresponds to a new attribute in $\mathbf{Z}$, and thus the new attribute closely approximates the Gaussian distribution.
2. Independence: The prior $p(\mathbf{Z})$ has a diagonal covariance matrix, so the covariances between variates (new attributes) are zero; since $p(\mathbf{Z})$ is Gaussian, zero covariance implies that the variates (new attributes) are independent of each other.
3. Convergence proof for attribute weighting optimization: In ICGNB, we perform the gradient descent search by using the L-BFGS algorithm. According to [1], the L-BFGS algorithm is globally convergent on uniformly convex problems. The objective function in our optimization maximizes the conditional log-likelihood (CLL) of the attribute weighted GNB, which is a uniformly convex problem. Therefore, the optimization process is guaranteed to converge.
[1] Liu, D. C. and Nocedal, J. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528, 1989.
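As a quick numerical illustration of points 1 and 2 (ours, not from the paper): coordinates drawn from the prior $p(\mathbf{Z}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ have unit variance and vanishing pairwise covariance, i.e., an approximately diagonal empirical covariance matrix; and since jointly Gaussian variates that are uncorrelated are also independent, this is exactly the independence property claimed above:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((100_000, 4))  # 100k draws of a 4-dim latent code
C = np.cov(Z, rowvar=False)            # empirical 4x4 covariance matrix
off_diag = C - np.diag(np.diag(C))
print(np.abs(off_diag).max())          # near 0: attributes uncorrelated
print(np.abs(np.diag(C) - 1.0).max())  # near 0: unit variances
```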
**Author Response to Sensitivity Analysis:** The hyperparameters in ICGNB include the number of iterations $P$ and the learning rate $\eta$, which are set to default values of 500 and 0.01, respectively. To address the reviewer’s concerns, we conduct parameter sensitivity experiments on the first real-world dataset (appendicitis) to evaluate ICGNB’s performance. In each experiment, we fix one hyperparameter and vary the remaining one and use the same setup described in Section 4.1. The results are as follows:
|$P$|300|400|500|600|700|
|:--:|:--:|:--:|:--:|:--:|:--:|
|Average Accuracy (%)|88.64|88.18|89.09|88.64|90.00|
||
|$\eta$|0.005|0.0075|0.01|0.0125|0.015|
|:--:|:--:|:--:|:--:|:--:|:--:|
|Average Accuracy (%)|88.64|90.00|89.09|89.09|88.18|
||
These results show that ICGNB’s performance varies only slightly as the hyperparameter values change, indicating that ICGNB is not sensitive to these settings.
**Author Response to Essential References:** Indeed, GraphSAGE is closely related to our ICGNB and provides an effective graph representation learning strategy. To address the reviewer’s concerns, we will cite GraphSAGE and discuss its correlation with our study. In addition, we conduct an experiment to explore whether GraphSAGE can be used for our ICGNB. Specifically, we replace the graph convolution function of VGAE with that of GraphSAGE and perform a stratified hold-out validation on 24 real-world datasets. The average classification accuracy is as follows:
||ICGNB_VGAE|ICGNB_GraphSAGE|
|:--:|:--:|:--:|
|Average Accuracy (%)|80.22|80.46|
||
The results show that the graph convolution function of GraphSAGE can also be used for our ICGNB.

---

Summary: The paper introduces **Instance Correlation Graph-based Naïve Bayes (ICGNB)**, a novel enhancement of the **Gaussian Naïve Bayes (GNB)** classifier. Traditional Naïve Bayes methods assume conditional independence among attributes, limiting their effectiveness, especially for numerical data. The proposed **ICGNB** method addresses this by incorporating **correlations among instances**, which have been largely ignored in prior research.
**Main Algorithmic Ideas:**
1. **Instance Correlation Graph (ICG) Construction**: Instances are connected in a graph based on similarity, forming an **Instance Correlation Graph (ICG)**.
2. **Graph-Based Representation Learning**: A **Variational Graph Auto-Encoder (VGAE)** generates new attributes from the ICG, capturing latent instance correlations.
3. **Attribute Augmentation and Weighting**: The generated attributes are combined with original ones, and a weighting scheme optimizes attribute significance, reducing redundancy.
**Main Findings & Results:**
- **ICGNB outperforms traditional GNB and state-of-the-art Naïve Bayes enhancements** across multiple datasets.
- It significantly improves classification accuracy by leveraging graph-based attribute generation and augmentation.
- Empirical validation on **24 real-world datasets** and synthetic data confirms **ICGNB’s robustness and generalization ability**.
By integrating instance correlation insights into Naïve Bayes classification, ICGNB presents a **new paradigm** for improving probabilistic classifiers with graph-based learning techniques.
Claims And Evidence: The paper presents **Instance Correlation Graph-based Naïve Bayes (ICGNB)** as an improvement over **Gaussian Naïve Bayes (GNB)**, claiming that leveraging instance correlations via graph-based representation learning significantly enhances classification performance. Most claims are backed by **theoretical justification and empirical evidence**, but there are some concerns regarding the **generality and robustness** of the approach.
**Supported Claims:**
1. **Effectiveness of ICGNB**: The claim that ICGNB outperforms GNB and other enhanced Naïve Bayes models is well-supported. The **Wilcoxon signed-rank test** across **24 real-world datasets** provides **statistically significant** improvements, strengthening the empirical validation.
2. **Attribute Independence and Gaussianity**: The paper convincingly shows that the **generated attributes align better with the Gaussian assumption**, validated through the **Kolmogorov-Smirnov test**.
**Potentially Problematic Claims:**
1. **Scalability**: The computational overhead of constructing the **Instance Correlation Graph (ICG)** and training the **Variational Graph Auto-Encoder (VGAE)** is not fully analyzed. Large-scale applications may face challenges.
2. **Generality**: The method is benchmarked on **numerical datasets only**. Performance on **mixed or high-dimensional categorical data** remains unclear.
3. **Causality vs. Correlation**: While ICG captures correlations, it does not necessarily improve **causal relationships**, potentially leading to **overfitting**.
Methods And Evaluation Criteria: The proposed **Instance Correlation Graph-based Naïve Bayes (ICGNB)** method is evaluated using **well-established datasets and appropriate evaluation criteria**, making the study **methodologically sound**. However, there are **certain limitations in dataset diversity and scalability analysis** that should be considered.
**Strengths of Methods and Evaluation:**
1. **Benchmark Datasets**: The study uses **24 real-world datasets from KEEL**, covering various domains and numerical attributes. This is **appropriate for testing Gaussian Naïve Bayes (GNB) extensions**.
2. **Comparative Baselines**: The comparison against **state-of-the-art Naïve Bayes enhancements** (WANBIA, CFWNB, AG-NBC, AE-NBC, and GNB) is **comprehensive and fair**.
3. **Statistical Validation**: The **Wilcoxon signed-rank test** ensures that reported improvements are **statistically significant**, strengthening the reliability of results.
**Potential Limitations:**
1. **Dataset Scope**: The evaluation **excludes categorical or mixed-type datasets**, limiting its **generalizability** beyond numerical data.
2. **Scalability Considerations**: While VGAE improves feature representation, **its computational cost is not analyzed**. The method’s efficiency on **large-scale datasets** is uncertain.
3. **Alternative Evaluation Metrics**: The study **focuses primarily on classification accuracy**, without analyzing robustness to **adversarial noise, missing data, or class imbalance**.
**Overall Assessment:**
The methodology is **well-structured** and **empirical validation is strong**, but additional **scalability analysis** and **broader dataset selection** would enhance the **practical applicability** of ICGNB.
Theoretical Claims: The paper presents **theoretical claims** primarily related to **the effectiveness of the Instance Correlation Graph (ICG), the variational graph auto-encoder (VGAE) for attribute generation, and the weighted Gaussian Naïve Bayes (GNB) formulation**. While the overall framework is well-motivated, some theoretical aspects require **closer scrutiny**.
**Checked Theoretical Claims:**
1. **ICG Construction and Sparsity**:
- The paper claims that **ICG captures meaningful instance correlations** while maintaining **a sparse structure**. The **edge selection algorithm** ensures connectivity using a minimal number of edges. This is **conceptually sound**, though a formal proof of optimality is missing.
2. **VGAE-Based Attribute Generation**:
- The theoretical justification relies on **graph convolutional embedding** preserving instance relationships. The **variational objective (ELBO) is correctly formulated** following standard **VGAE methodology** (Kingma & Welling, 2014). However, the assumption that **generated attributes are inherently more independent** is **not formally proven**, only empirically suggested.
3. **Attribute Weighting via Conditional Log-Likelihood (CLL) Maximization**:
- The gradient-based weight optimization process follows a **standard likelihood maximization framework**. However, convergence guarantees or sensitivity analysis are **not provided**.
**Potential Issues:**
- **No proof of Gaussianity**: The claim that **VGAE-generated attributes better fit Gaussian assumptions** is only empirically tested, lacking **a theoretical derivation**.
- **Scalability of VGAE training**: No theoretical analysis of **complexity bounds** is presented.
**Overall Assessment:**
The proofs presented are **generally correct**, but some **key claims (e.g., independence of generated features, scalability) lack formal justification** and are **only validated empirically**. A more rigorous **theoretical treatment** of these aspects would strengthen the paper.
Experimental Designs Or Analyses: The experimental design of the paper is **generally well-structured**, incorporating a **broad set of real-world datasets**, **comparative baselines**, and **statistical validation**. However, there are some **limitations in dataset diversity and certain evaluation aspects** that could impact the validity of the findings.
**Checked Experimental Designs and Analyses:**
1. **Dataset Selection and Preprocessing:**
- The use of **24 numerical datasets from KEEL** ensures that the method is tested across multiple domains. However, it **excludes categorical or mixed-type data**, limiting generalizability beyond purely numerical datasets.
- **Z-score normalization** is applied, which is **appropriate** for Gaussian-based methods but may not reflect real-world data preprocessing variations.
2. **Comparative Baselines:**
- The paper compares ICGNB against **five well-established Naïve Bayes enhancements (WANBIA, CFWNB, AG-NBC, AE-NBC, and GNB)**, making the evaluation **comprehensive**.
- However, the **absence of deep learning-based classifiers or ensemble methods** as additional baselines means the **broader competitiveness** of ICGNB remains unclear.
3. **Statistical Evaluation and Significance Tests:**
- The **Wilcoxon signed-rank test** is an **appropriate choice** for measuring statistical significance in performance comparisons.
- However, the paper does not include **confidence intervals for accuracy improvements**, which would provide a clearer measure of variability.
4. **Scalability and Computational Analysis:**
- There is **no discussion of computational efficiency**, particularly regarding **VGAE training costs** and **graph construction overhead**.
- The method’s viability for **large-scale datasets** is uncertain.
**Potential Issues and Suggestions:**
- The exclusion of **non-numerical datasets** limits generalizability.
- The **lack of alternative evaluation metrics** (e.g., runtime analysis, robustness to noise, or class imbalance) makes it hard to assess **real-world applicability**.
- **Computational cost analysis** is missing, which could be a key bottleneck for practical deployment.
**Overall Assessment:**
The experimental setup is **rigorous in its scope and statistical validation**, but additional **efficiency analysis, dataset diversity, and robustness tests** would enhance the study’s reliability and practical impact.
Supplementary Material: This paper does not have Supplementary Material.
Relation To Broader Scientific Literature: The key contributions of this paper are closely related to several existing lines of research in **Naïve Bayes improvements, graph-based learning, and representation learning with autoencoders**. While the paper introduces a novel **Instance Correlation Graph-based Naïve Bayes (ICGNB)** framework, its ideas build upon well-established methodologies in **probabilistic classification and graph neural networks**.
**Relation to Prior Research:**
1. **Naïve Bayes Enhancements:**
- The paper follows a long tradition of **improving Naïve Bayes by addressing its independence assumption**. Previous works have explored **attribute weighting (WANBIA, CFWNB)** and **attribute transformation (AG-NBC, AE-NBC)**.
- ICGNB extends these by **introducing instance-instance correlations**, which are generally ignored in prior Naïve Bayes improvements.
2. **Graph-Based Representation Learning:**
- The use of **Instance Correlation Graphs (ICG)** aligns with advances in **graph-based semi-supervised learning**, particularly **Graph Convolutional Networks (GCN)** and **Variational Graph Autoencoders (VGAE)** (Kipf & Welling, 2017).
- VGAE has been widely used for **latent feature extraction**, and this paper leverages it to **generate new attributes** that improve classification performance.
3. **Attribute Augmentation and Weighting:**
- The method shares similarities with **feature learning techniques in deep learning**, where new attributes are derived via **embedding-based transformations**.
- However, unlike deep learning methods, ICGNB retains the **probabilistic interpretability of GNB**.
**Broader Impact and Limitations:**
- The **combination of Naïve Bayes with graph-based learning is novel**, but similar ideas have been explored in **relational learning and Bayesian networks**.
- The paper focuses on **numerical data only**, which limits its applicability in domains where **categorical attributes** play a significant role.
- While the approach improves Naïve Bayes, it does not explore how it compares to **modern deep learning classifiers**.
**Overall Assessment:**
ICGNB contributes meaningfully to **graph-enhanced probabilistic classification**, bridging **Naïve Bayes improvements with representation learning**. However, its relation to broader **graph-based Bayesian models and deep learning-based feature learning** could be explored further.
Essential References Not Discussed: The paper effectively cites foundational works on **Naïve Bayes improvements, graph-based learning, and variational autoencoders**, but **some key references in graph-based Bayesian learning and scalable feature learning are missing**.
1. **Graph-Based Bayesian Models**:
- The paper does not discuss **Graphical Models for Naïve Bayes Extensions**, such as **Tree-Augmented Naïve Bayes (TAN)** (Friedman et al., 1997), which also address feature dependencies while maintaining Naïve Bayes simplicity.
- **Relational Bayesian Models** (Getoor & Taskar, 2007) have explored **leveraging inter-instance relationships** in probabilistic classification.
2. **Scalability of Graph Autoencoders**:
- The paper uses **Variational Graph Autoencoders (VGAE)** (Kipf & Welling, 2017) but does not discuss **scalable alternatives** such as **GraphSAGE (Hamilton et al., 2017)**, which may offer better efficiency for large-scale applications.
3. **Feature Learning for Naïve Bayes**:
- **Autoencoder-based feature learning for probabilistic models** (Kingma & Welling, 2014) is relevant but not fully explored in terms of direct Naïve Bayes applications.
**Overall Suggestion**
Discussing **TAN, relational Bayesian models, and scalable GNN methods** would provide **a more complete context** for the proposed approach and highlight its novelty better.
Other Strengths And Weaknesses: **Strengths**
1. **Novel Integration of Graph-Based Learning with Naïve Bayes**: The use of **Instance Correlation Graphs (ICG)** and **Variational Graph Autoencoders (VGAE)** to enhance Naïve Bayes classification is an **original contribution**, effectively addressing the attribute independence assumption.
2. **Strong Empirical Validation**: The **comprehensive experiments on 24 datasets**, along with **statistical significance testing (Wilcoxon signed-rank test)**, provide **credible evidence** of the method’s effectiveness.
3. **Interpretability**: Unlike deep learning-based classifiers, the method **preserves the interpretability of Naïve Bayes**, making it more suitable for explainable AI applications.
**Weaknesses**
1. **Scalability Concerns**: **No discussion on computational complexity**, particularly regarding **graph construction and VGAE training**, which may hinder practical deployment on large datasets.
2. **Limited Generalization**: The approach is **only evaluated on numerical datasets**, excluding categorical and mixed-type data, limiting its **applicability in broader domains**.
3. **Clarity of Theoretical Justifications**: While empirically validated, **some theoretical claims (e.g., feature independence in VGAE-generated attributes)** lack formal proofs.
**Overall**
The paper presents an **innovative and promising approach**, but addressing **scalability and broader dataset applicability** would enhance its impact.
Other Comments Or Suggestions: No additional comments or suggestions.
Questions For Authors: 1. **Scalability and Computational Complexity**
- What is the computational complexity of constructing the **Instance Correlation Graph (ICG)** and training the **Variational Graph Autoencoder (VGAE)**? Have you evaluated runtime performance on larger datasets?
- A response showing that ICGNB is computationally efficient would **strengthen confidence in its practicality**.
2. **Applicability to Categorical and Mixed-Type Data**
- Since the experiments focus only on **numerical datasets**, how would ICGNB handle **categorical or mixed-type features**? Would discretization or embedding techniques be required?
- If the method is adaptable to broader data types, its **generalizability and impact would increase**.
3. **Independence and Gaussianity of Generated Attributes**
- The empirical results suggest that **VGAE-generated attributes align better with the Naïve Bayes assumptions**, but is there a **theoretical justification** for this claim?
- A formal proof or additional justification would **strengthen the theoretical contributions of the paper**.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | null | null | null | null | null | null | |
Private Model Personalization Revisited | Accept (poster)

Summary: The paper addresses the problem of model personalization under user-level differential privacy (DP) in the federated learning setting. The authors propose a novel private federated learning algorithm based on the FedRep framework, which learns a shared low-dimensional embedding and user-specific local models while ensuring DP. The main contributions include: (1) an efficient private federated algorithm that handles noisy labels and sub-Gaussian data distributions, (2) improved privacy-utility trade-offs compared to prior work, (3) a private initialization algorithm for the shared embedding, and (4) dimension-independent risk guarantees for binary classification using the Johnson-Lindenstrauss transform. The paper claims to improve the privacy error term by a factor of $\tilde{O}(dk)$ in certain parameter regimes and extends the applicability to broader data distributions compared to prior work.
## update after rebuttal
The authors have addressed two of my concerns, and I am generally satisfied with the rebuttal, so I have raised my score. While I raised my score, I remain somewhat unenthusiastic about the paper overall.
Claims And Evidence: The main claims are supported by proofs.
Methods And Evaluation Criteria: The proposed methodology is well-explained and aligns with intuitive expectations. The evaluation criterion used is MSE, which is appropriate for the vector estimation problem.
Theoretical Claims: The proofs are reviewed but not rigorously verified.
Experimental Designs Or Analyses: The empirical evaluation includes a simple privacy-utility trade-off analysis in terms of MSE.
Supplementary Material: The proofs are screened but not carefully checked.
Relation To Broader Scientific Literature: The paper extends previous work (Jain et al., 2021). I am not familiar with the task of model personalization, so I do not have a strong opinion on this point.
Essential References Not Discussed: I have not read any paper about model personalization.
Other Strengths And Weaknesses: I feel the practical significance of the work is rather limited. The current low-dimensional embedding assumption seems applicable only to limited scenarios. Moreover, the methodology relies heavily on the low-dimensional embedding assumption and does not appear to extend to neural network-based methods. Could the authors provide some discussion on this point?
Additionally, the contribution does not appear significant enough to me. The algorithm builds largely on FedRep, and the theoretical contribution seems to be an extension of Jain et al., 2021.
Other Comments Or Suggestions: Please ensure the correct usage of \cite, \citep, and \citet. Note that the formatting of citations may vary depending on the LaTeX template used.
Questions For Authors: The paper is written in a highly technical manner, presenting theoretical assumptions and results. I suggest the authors provide more high-level discussions, particularly regarding the following points. I would be happy to raise my score if these questions are addressed well:
- Intrinsic Difference Between Classification and Regression: What is the fundamental difference between classification and regression? If they are both treated as supervised problems, are there conclusions that hold for one but not the other? The authors treat these two problems separately in two sections, but I am somewhat confused about their differences beyond the technical aspects (e.g., different scores, target spaces). Could the authors comment on this?
- Intrinsic Difference Between Gaussian and Sub-Gaussian Assumptions: What is the fundamental difference between Gaussian and sub-Gaussian distribution assumptions? In previous literature (Jain et al., 2021), is the Gaussian distribution necessary due to its unique properties (e.g., rotation invariance, equivalence between independence and uncorrelatedness), or is it merely about tail probabilities? If the latter, I do not see a significant improvement. The paper emphasizes the improvement from Gaussian to sub-Gaussian assumptions as a main contribution multiple times, but there seems to be no detailed discussion on this point.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments. We respectfully disagree with several conclusions and clarify key contributions and motivations below.
**On the Practical Significance of the Low-Dimensional Embedding Assumption**:\
The *low-dimensional* shared representation assumption is widely adopted in both theoretical and practical work on model personalization and federated learning, including Collins et al. (ICML 2021), Jain et al. (NeurIPS 2021), Tripuraneni et al. (ICML 2021), and Duchi et al. (NeurIPS 2022). It reflects practical scenarios (e.g., in multitask/meta-learning) and enables rigorous analysis and tractable algorithms.
The *linear* shared representation setting captures a **fundamental learning problem** that has attracted significant recent interest in the literature, including the works referenced above. Many influential works explore it to understand key statistical and computational challenges in learning from heterogeneous data. Despite its apparent simplicity, this setting remains **analytically challenging and far from fully understood**, as even recent works require structural assumptions (e.g., Gaussianity, identity covariance, boundedness) to obtain formal guarantees. These challenges are magnified under differential privacy, which introduces additional technical hurdles such as sensitivity control and dealing with noise accumulation while ensuring good convergence behavior. While neural-network-based personalization has shown empirical success, it is generally **infeasible to derive formal risk guarantees** in such settings due to their non-convexity. As a result, most works rely on empirical evaluation. Our work, by contrast, focuses on providing **rigorous theoretical insights** in a meaningful and analyzable setup—paving the way for future extensions to more complex models.
**On the Role of FedRep and Comparison to Prior Work**:\
While we build on FedRep (Collins et al., 2021), our contributions go significantly beyond:
- We extend FedRep to satisfy **user-level differential privacy** in a federated setting, while also handling **noisy labels**—an important practical case not previously addressed. The addition of noisy labels requires substantial changes to the utility analysis.
- Compared with the original FedRep algorithm, we make several modifications that are crucial to our analysis. For instance, we modify the algorithm to use **disjoint batches** for updating $U$ and $V$, unlike FedRep. This crucial change allows tighter control of gradient norms, reduces the noise required for DP, and leads to **improved convergence rates** and sharper utility bounds.
- Our **utility bound** improves over the **centralized approach of Jain et al. (2021)** by a factor of $\tilde{O}(dk)$ in key regimes.
- Unlike Jain et al. (2021), our algorithm is **fully federated**, avoiding the need for centralization or exact minimization—making it more scalable and practical.
These results constitute both conceptual and analytical innovations and establish state-of-the-art privacy-utility trade-offs in this setting.
**On Regression vs. Classification**:\
Classification with margin loss requires a different analytical and algorithmic approach from regression with quadratic loss. The regression analysis benefits from the structure of the quadratic loss, which enables deriving risk bounds through optimization and concentration arguments that are tied to the sub-Gaussianity and covariance structure of data. In contrast, margin-based losses lack these convenient analytical properties, requiring distinct analysis strategy and algorithmic approach. In our work, we develop a tailored approach for the classification setting—leveraging dimensionality reduction and margin-based analysis—to obtain the **first dimension-independent risk bound under user-level DP** in this framework.
**On the Significance of Extending to Sub-Gaussian Distributions**:\
The extension from Gaussian to sub-Gaussian data is **not a minor technical change**. Gaussianity permits powerful simplifications due to properties such as rotational invariance and moment equivalence. For example, (Jain et al. 2021) explicitly leverages rotational invariance (spherical symmetry) for their initialization analysis. Further, the proof of their main bound hinges upon several results from (Thekumparampil et al., 2021) that rely on the said properties for the case of independent, standard Gaussian features. In contrast, sub-Gaussian distributions are much more general (e.g., bounded or light-tailed features) and lack such symmetries.
Our extension beyond Gaussian data required a **new analysis** that leverages concentration inequalities (e.g., Vershynin, 2018) to handle sub-Gaussian data with heterogeneous, non-identical feature distributions, significantly broadening the applicability of our private personalization algorithm.
**On Citation Formatting**: We will ensure citation formatting consistency.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' rebuttal, which addresses part of my concern regarding the extension from Gaussian to sub-Gaussian distributions. Regarding the regression vs. classification point, it would also be helpful if the authors could clarify which theoretical results do not apply without the quadratic loss—this would make my verification easier.
While I will raise my score, I remain somewhat unenthusiastic about the paper overall.
---
Reply to Comment 1.1.1:
Comment: Thanks to the reviewer for the prompt response and for the positive update. We are happy to clarify the role of the quadratic loss in our theoretical analysis.
The use of the quadratic loss is essential to several key lemmas in our work, particularly those that underpin the convergence analysis of our private FedRep algorithm.
Most notably, Lemma 27 (Appendix B.1) is critical to establishing a recursive relationship between $\text{dist}(U_{t+1}, U^*)$ and $\text{dist}(U_t, U^*)$, which is used in proving Lemma 9, one of our main results. In Lemma 27, we show that
$$\text{dist}(U_{t+1}, U^*) \leq \alpha_t \text{dist}(U_t, U^*) + \text{error term}$$
where $\alpha_t = \left\lVert P_{t+1}^{-1} \right\rVert_2 \left\lVert I_k-\eta \Sigma_{V_t} \right\rVert_2 $ is a value between 0 and 1.
The derivation of this recursive inequality explicitly relies on the closed-form expression for the gradient of the quadratic loss in the linear regression setting. In particular, lines 786 - 796 in the appendix use the structure of the quadratic loss to simplify and control this recursion. It is unclear whether a similar recursive relationship could be derived for other loss functions lacking this structure.
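For intuition, the recursive inequality above can be unrolled over $T$ rounds (a standard step we add here for illustration, not quoted from the appendix; $e_t$ denotes the per-round error term and $\alpha = \max_t \alpha_t < 1$, both our notation):

$$\text{dist}(U_T, U^*) \leq \Big(\prod_{t=0}^{T-1} \alpha_t\Big) \text{dist}(U_0, U^*) + \sum_{t=0}^{T-1} \Big(\prod_{s=t+1}^{T-1} \alpha_s\Big) e_t \leq \alpha^{T}\, \text{dist}(U_0, U^*) + \frac{\max_t e_t}{1-\alpha},$$

so the initialization error contracts geometrically while the accumulated (privacy-noise) error remains bounded; it is this contraction structure that the quadratic loss makes available.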
Furthermore, in Lemma 37 (Appendix B.3), we provide a closed-form expression for $V_{t+1}$, the matrix where row $i$ corresponds to the local vector $v_{t+1}^i$ obtained by minimizing a quadratic loss. This closed-form expression plays a critical role in other key results, such as Proposition 40 (used in the privacy analysis) and Lemma 33, which contributes to the final convergence bounds. For general loss functions, such a closed-form solution may not exist, and our arguments in those sections may not extend directly.
We also note that prior works, such as (Collins et al., Jain et al. 2021 and Thekumparampil et al., 2021), similarly rely on the analytical tractability afforded by the quadratic loss in their theoretical analyses. | Summary: The authors present a novel technique to unlock differently private personalised models in the shared representation framework in a federated setting.
Claims And Evidence: The claims are sufficient and well supported.
Methods And Evaluation Criteria: While the paper is focused on theory, the experiments are rather light, comparing only against a baseline method on synthetic datasets. It would be great to have an expanded experiments section applying the method to real-world datasets and against additional baselines to raise conviction about its applicability.
Theoretical Claims: The theoretical claims appear to be sound.
Experimental Designs Or Analyses: The design of experiments seems to be sufficient for the domain of the paper (albeit the actual experiments are few, as mentioned previously).
Supplementary Material: Only skimmed it to better understand the intuition behind the proofs but did not thoroughly check it.
Relation To Broader Scientific Literature: The authors address a very important problem, especially in the case of federated computing. Unlocking the ability to have user-personalised models is not only novel but really important for a variety of applications.
Essential References Not Discussed: Nothing of note.
Other Strengths And Weaknesses: It is highly appreciated that the authors shared the code; I hope that, if the work is published, it will be attached as an artefact to this submission to enhance dissemination.
Other Comments Or Suggestions: Nothing of note.
Questions For Authors: I would like to ask the authors how they can ensure that the model remains private across iterations. Presumably they use composition under differential privacy to preserve DP under aggregation, but the guarantee weakens over time as results are "revealed" and propagated.
It would be great to expand on this and address this concern.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback.
**On the scope of the experimental evaluation**: We would like to emphasize that the primary contribution of our work is theoretical, and the experimental section is designed to validate our theoretical findings and demonstrate the concrete advantages of our algorithm. Our experiments follow the same setup as (Jain et al., 2021), allowing for a direct and meaningful comparison that highlights the improved privacy-utility trade-off achieved by our method. While our focus is not on empirical benchmarking across datasets, we believe the current results effectively showcase the key strengths of our approach. Extending the empirical evaluation to additional datasets and broader personalization settings is a valuable direction for future work and will further complement the strong theoretical foundation established in this paper.
**On the privacy guarantee after composition**: Indeed, we take into account the privacy loss over the iterations of the algorithm. We leverage the adaptive composition guarantee of zero-concentrated differential privacy (zCDP) to bound the overall privacy loss of our algorithm and ensure it satisfies $(\epsilon, \delta)$-DP (see the proof of Theorem 8, Appendix B.1). Specifically, we show that the privacy cost for each iteration is $O\left(\frac{\epsilon^2}{T\log(1/\delta)}\right)$-zCDP. By applying composition over $T$ iterations, we obtain an overall privacy cost of $O(\epsilon^2/\log(1/\delta))$-zCDP, which implies an $(\epsilon, \delta)$-DP guarantee. Also, note that, after $U^\text{priv}$ is computed and sent to the users, each user $i$ will compute their own final local vectors $v_i^\text{priv}$ independently via this $U^\text{priv}$ matrix and a reserved set of fresh local data. The vector $v_i^\text{priv}$ will be kept by user $i$ (for each $i$) and never shared; thus, computing $v_i^\text{priv}$ will not lead to any additional privacy loss. | Summary: The authors study model personalization under user-level differential privacy in a shared representation framework for the federated learning setting. Specifically, there are $n$ users, and the data for user $i\in[n]$ is generated using $y = x^Tw^\star_i + \zeta$ where $\zeta$ is sub-gaussian noise and $w_i^\star = U^\star v_i^\star$ for some shared representation $U^\star\in \mathbb{R}^{d\times k}$ where $k<<d$. This setting was popularized by works such as [this one](https://arxiv.org/pdf/2002.11684) studying representation learning for meta-learning and other applications. This representation learning model has been previously shown to be useful for understanding the effectiveness of [Federated averaging](https://arxiv.org/pdf/2205.13692) and proposing a new algorithm called [FedRep for personalized federated learning](https://arxiv.org/pdf/2102.07078).
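The zCDP composition accounting described in the rebuttal above can be sanity-checked numerically. A minimal sketch, assuming the standard zCDP-to-DP conversion of Bun and Steinke (rho-zCDP implies (rho + 2*sqrt(rho*ln(1/delta)), delta)-DP) and an even split of the budget over $T$ adaptive iterations; the function names and constants are illustrative, not taken from the paper's Theorem 8:

```python
import math

def zcdp_to_dp_epsilon(rho, delta):
    # Standard conversion (Bun & Steinke, 2016):
    # rho-zCDP implies (rho + 2*sqrt(rho * ln(1/delta)), delta)-DP.
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

def per_iteration_rho(eps, delta, T):
    # Largest total budget rho_total with zcdp_to_dp_epsilon(rho_total, delta) = eps,
    # obtained by solving rho + 2*sqrt(rho*L) = eps for sqrt(rho), then split
    # evenly over T iterations (zCDP composes additively under adaptivity).
    L = math.log(1.0 / delta)
    rho_total = (math.sqrt(L + eps) - math.sqrt(L)) ** 2
    return rho_total / T

eps, delta, T = 1.0, 1e-5, 100
rho_t = per_iteration_rho(eps, delta, T)
# Composing T iterations recovers exactly the target (eps, delta)-DP guarantee;
# note rho_t scales like eps^2 / (4 * T * ln(1/delta)) when eps << ln(1/delta),
# matching the O(eps^2 / (T log(1/delta)))-zCDP per-iteration cost quoted above.
assert abs(zcdp_to_dp_epsilon(T * rho_t, delta) - eps) < 1e-9
```

Since the converted epsilon is monotone in rho, any per-iteration budget at or below `rho_t` keeps the composed guarantee within the target.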
In this paper, the authors propose a federated algorithm that extends the FedRep method to ensure user-level differential privacy while including a private initialization step. Their algorithm accommodates sub-Gaussian feature distributions, noisy labels, and heterogeneous user data. In comparison to the [most closely related work](https://proceedings.neurips.cc/paper/2021/hash/f8580959e35cb0934479bb007fb241c2-Abstract.html) on private model personalization this paper relaxes the gaussian data assumption and allows federated training (i.e., without exchanging raw data and only sharing privatized model updates). The authors also improve the utility guarantee provided by this prior work in a reasonable low data regime. Overall, the theory in the paper and one small toy experiment show that the new approach improves the balance between privacy and utility for linear regression and matches or exceeds earlier methods’ performance.
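The shared-representation data model summarized above ($y = x^T w_i^\star + \zeta$ with $w_i^\star = U^\star v_i^\star$, $k \ll d$) can be simulated in a few lines; the dimensions and noise scale below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, m = 50, 20, 3, 100  # users, ambient dim, shared rank (k << d), samples per user

# Shared representation U* with orthonormal columns, and per-user heads v_i*.
U_star, _ = np.linalg.qr(rng.standard_normal((d, k)))
V_star = rng.standard_normal((n, k))

def sample_user(i):
    X = rng.standard_normal((m, d))       # sub-Gaussian features (Gaussian here)
    zeta = 0.1 * rng.standard_normal(m)   # sub-Gaussian label noise
    y = X @ (U_star @ V_star[i]) + zeta   # y = x^T w_i*,  w_i* = U* v_i*
    return X, y

# All n regression vectors w_i* lie in the same k-dimensional column space of U*,
# which is what makes learning the shared representation statistically useful.
W = V_star @ U_star.T                     # (n, d) stacked w_i*
assert np.linalg.matrix_rank(W) == k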
The paper also addresses the binary classification setting with a margin-based loss. It uses a Johnson-Lindenstrauss transform to reduce dimensionality and obtain a margin-based risk bound that does not depend on the original feature dimension.
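As a rough illustration of the dimensionality-reduction step mentioned above, a generic Gaussian Johnson-Lindenstrauss projection (the paper's exact transform and constants are not reproduced here):

```python
import numpy as np

def jl_project(X, m, seed=0):
    # Random Gaussian Johnson-Lindenstrauss map R^d -> R^m; pairwise distances
    # are preserved up to (1 +/- eps) w.h.p. once m = O(eps^-2 * log n),
    # independently of the original feature dimension d.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)
    return X @ A

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 5000))   # high-dimensional points
Z = jl_project(X, m=2000)
ratio = np.linalg.norm(Z[0] - Z[1]) / np.linalg.norm(X[0] - X[1])
assert 0.8 < ratio < 1.2              # pairwise distance roughly preserved
```

This dimension-independence is what allows the resulting margin-based risk bound to avoid any dependence on the original feature dimension.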
Claims And Evidence: Yes, this is a technically solid paper.
Methods And Evaluation Criteria: Yes, this is a theoretical paper, and the central guarantees in Theorems 10 and 16 make sense.
Theoretical Claims: I checked some proofs just to understand what is happening in the paper, but not every proof.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I reviewed some of the proofs.
Relation To Broader Scientific Literature: See the summary.
Essential References Not Discussed: No references I can think of.
Other Strengths And Weaknesses: The paper is very well written, with explicit assumptions and theoretical results. It relaxes assumptions required by prior work by allowing sub-Gaussian feature distributions and improving over the existing centralized results in reasonable regimes.
If I were to nitpick, the paper could benefit from additional empirical evaluations that measure performance on more diverse datasets and compare implementation complexities with other approaches. The paper focuses primarily on the strongly convex quadratic loss, so researchers who work with other loss functions might find the scope somewhat narrow. Having said that, the authors deliver a technically solid paper, and I am inclined to accept it.
Other Comments Or Suggestions: 1. $u$ was not defined while defining the clipping function in lines 193-194. I suppose it is the unit-Frobenius-norm version of the matrix?
2. In terms of presentation, it would also be good to state the utility guarantee for a new user using the algorithm's learned shared representation. This can be stated in a corollary after Theorem 10.
Questions For Authors: 1. Have the authors thought about co-variate heterogeneity? We often see a combination of covariate and label shifts in a distributed setting. It would be nice to see how these different sources of heterogeneity affect the final personalization guarantee.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful and constructive comments and for their appreciation of our work.
**On the scope of the experimental evaluation**: We would like to emphasize that the primary contribution of our work is theoretical, and the experimental section is designed to validate our theoretical findings and demonstrate the concrete advantages of our algorithm. Our experiments follow the same setup as (Jain et al., 2021), allowing for a direct and meaningful comparison that highlights the improved privacy-utility trade-off achieved by our method. While our focus is not on empirical benchmarking across datasets, we believe the current results effectively showcase the key strengths of our approach. Extending the empirical evaluation to additional datasets and broader personalization settings is a valuable direction for future work and will further complement the strong theoretical foundation established in this paper.
On the other comments of the reviewer:
1. **Definition of $u$**: The reviewer is correct about the definition of $u$. We will correct the oversight and define the clipping function more clearly. Indeed, $u$ represents the normalized direction of the gradient; namely, it is the matrix $M$ normalized by its Frobenius norm, $M/\left\lVert M \right\rVert_F$.
2. **Utility guarantee for new users**: We agree that a utility corollary for new users using the learned shared representation is valuable. Indeed, the bound on the distance to $U^*$ (given by Lemma 9) can be used to bound the excess risk of new users. We will add this result as a corollary of Theorem 10 as suggested by the reviewer.
3. **Co-variate heterogeneity**: Our analysis does allow for a certain degree of co-variate heterogeneity among users. In particular, our results hold when user distributions differ as long as they are sub-Gaussian and share a common covariance structure. Note that the prior work of (Jain et al., 2021) lacks co-variate heterogeneity altogether as it requires all the data features to be i.i.d. standard Gaussian. We agree that extending the framework to handle more general co-variate and label shifts is an important direction for future work. | null | null | null | null | null | null | null | null |
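The clipping operation described in point 1 above can be written compactly; this is the standard Frobenius-norm clipping form, assumed here as an illustration rather than copied from the paper:

```python
import numpy as np

def clip_frobenius(M, c):
    # Returns M * min(1, c / ||M||_F); when clipping is active the result is
    # c * u, where u = M / ||M||_F is the normalized gradient direction.
    norm = np.linalg.norm(M, ord="fro")
    return M if norm <= c else (c / norm) * M

M = np.arange(6.0).reshape(2, 3)
assert np.linalg.norm(clip_frobenius(M, 1.0), ord="fro") <= 1.0 + 1e-12
assert np.allclose(clip_frobenius(M, 100.0), M)  # no-op when already within c
```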
PRIME: Deep Imbalanced Regression with Proxies | Accept (poster) | Summary: The paper introduces PRIME, a novel representation learning method for deep imbalanced regression tasks. PRIME leverages synthetic reference points called "proxies" to guide the learning of balanced and well-ordered feature representations, even for minority samples. Unlike previous methods that rely solely on sample relationships within individual batches, PRIME utilizes proxies as global anchors to shape the desired feature distribution. PRIME also enables the seamless application of class imbalance techniques from classification to regression setups, bridging the gap between the two tasks. Proposed experiments demonstrate the effectiveness of PRIME, achieving good performance on various real-world regression benchmarks.
Claims And Evidence: Not all. The authors assert that PRIME achieves SOTA performance, but they appear to ignore the current best models, which report better performance than PRIME.
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: The paper mentions that the authors followed the "same experimental setup" and "previous state-of-the-art methods" for each dataset, indicating that they have adopted well-established experimental protocols.
However, several SOTA baselines are missing from the paper, and I list some of them below.
By reviewing the VIR paper, it is evident that VIR outperforms PRIME in all shot settings on AgeDB-DIR and surpasses all other methods on IMDB-WIKI-DIR. Why did the authors not report this?
Additionally, the paper *"IM-Context: In-Context Learning for Imbalanced Regression Tasks"* outperforms both VIR and ConR across all metrics, establishing it as the current state-of-the-art (SOTA) method. However, the authors did not mention this approach.
Does this imply that PRIME is unable to outperform VIR, PFN-localized, and GPT2-localized?
Supplementary Material: yes, all.
Relation To Broader Scientific Literature: The PRIME method, as presented in the paper, addresses a significant gap in the existing literature on imbalanced regression by introducing the concept of proxy learning into this domain for the first time. It leverages novel ideas surrounding sample-proxy relationships and integrates these with established imbalanced learning techniques to achieve state-of-the-art performance across various regression benchmarks.
Essential References Not Discussed: By reviewing the VIR paper, it is evident that VIR outperforms PRIME in all shot settings on AgeDB-DIR and surpasses all other methods on IMDB-WIKI-DIR. Why did the authors not report this?
Additionally, the paper *"IM-Context: In-Context Learning for Imbalanced Regression Tasks"* outperforms both VIR and ConR across all metrics, establishing it as the current state-of-the-art (SOTA) method. However, the authors did not mention this approach.
Does this imply that PRIME is unable to outperform VIR, PFN-localized, and GPT2-localized?
Other Strengths And Weaknesses: The main concern is why the authors assert that PRIME achieves state-of-the-art (SOTA) performance while not reporting the results of prior methods that outperform PRIME. I believe the authors need to address this issue.
Other Comments Or Suggestions: None, but authors need to report the mentioned performance.
Questions For Authors: I find this paper interesting, and the proposed method is novel. However, the claim of "demonstrating state-of-the-art performance on four real-world regression benchmarks across diverse target domains" needs further justification.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s feedback and the opportunity to clarify our claim regarding state-of-the-art (SOTA) performance. Below, we provide detailed comparisons with VIR and IM-Context, along with additional experiments to ensure a fair evaluation.
## 1. Comparison with VIR
Our claim of achieving SOTA performance is based on evaluations under a unified experimental protocol: using the same train/val/test splits, backbone (vanilla ResNet-50), and training settings as in prior works such as LDS, RankSim, and ConR. This setup allows for a fair and controlled comparison that isolates the contribution of the proposed PRIME framework.
In contrast, VIR uses a different data split and a more complex model architecture, incorporating a calibration network and reconstruction modules. These differences make direct comparisons of reported performance potentially misleading, as the results reflect not only the learning objective but also auxiliary components and dataset configurations.
In response to the reviewer’s suggestion, we conducted an additional experiment comparing PRIME to VIR under VIR’s official setting. Since the code and data splits for VIR are only publicly available for AgeDB-DIR, we focused our comparison on this dataset. We trained PRIME using the same train/val/test splits as VIR, while keeping all other settings unchanged. We then compared our results against the official pre-trained VIR model provided by the authors. For PRIME, results are averaged over five runs. As shown in Table 1, PRIME consistently outperforms VIR across all evaluation metrics under this setting.
We will revise the manuscript to include this experiment and clarify the scope of our SOTA claim accordingly.
**Table 1.** Comparison with VIR.
|Method|MAE||||GM||||
|---|---|---|---|---|---|---|---|---|
||All|Many|Med.|Few|All|Many|Med.|Few|
|VIR (author-provided)|7.12|6.69|7.73|9.59|4.56|4.26|5.11|6.29|
|PRIME|**7.06**|**6.44**|**7.61**|**9.28**|**4.39**|**4.08**|**4.93**|**6.00**|
## 2. Comparison with IM-Context
We thank the reviewer for bringing IM-Context to our attention.
First, we note that IM-Context is built upon a substantially different model configuration. It employs a pre-trained CLIP image encoder to extract visual features, followed by in-context learning using large-scale models such as GPT-2 and PFN. In contrast, PRIME uses a ResNet-50 backbone trained from scratch, without leveraging any pre-trained models. Therefore, direct comparisons may not be meaningful, as the two approaches differ significantly in both model capacity (i.e., CLIP, GPT-2, PFN vs. ResNet-50) and training paradigm (i.e., leveraging pre-trained models vs. training from scratch).
Second, we emphasize that PRIME is a proxy-based representation learning framework designed to be independent of model architecture. Indeed, our method is broadly applicable and can benefit from stronger backbones. We expect that using more powerful models would further improve performance.
To examine this, we conducted additional experiments using the pre-trained CLIP image encoder (ViT-B/32) as the backbone, which is also used in IM-Context. Specifically, we fine-tuned the CLIP backbone together with a two-layer MLP regression head and trained the model using the PRIME loss. To ensure robustness, we report the average performance of PRIME over five independent runs. We then compared our results on AgeDB-DIR and IMDB-WIKI-DIR with those reported by IM-Context.
As shown in Tables 2 and 3, PRIME substantially outperforms both PFN-localized and GPT2-localized across all evaluation metrics. These findings confirm that PRIME remains effective even on top of strong pre-trained models, demonstrating its flexibility across backbone choices. We believe these additional experiments further support our SOTA claim under a unified and fair evaluation protocol.
We will revise the manuscript to include these results and reflect the comparison with IM-Context.
**Table 2.** Results for AgeDB-DIR.
|Method|MAE||||GM||||
|---|---|---|---|---|---|---|---|---|
||All|Many|Med.|Few|All|Many|Med.|Few|
|PFN-localized|6.58|5.61|8.49|10.49|4.29|3.58|6.30|8.19|
|GPT2-localized|6.05|5.67|6.71|7.83|3.79|3.59|4.17|4.90|
|PRIME|**5.47**|**5.46**|**5.48**|**5.57**|**3.48**|**3.45**|**3.64**|**3.35**|
**Table 3.** Results for IMDB-WIKI-DIR.
|Method|MAE||||GM||||
|---|---|---|---|---|---|---|---|---|
||All|Many|Med.|Few|All|Many|Med.|Few|
|PFN-localized|8.96|8.71|10.79|16.33|5.26|5.17|6.00|9.42|
|GPT2-localized|7.76|7.35|11.15|17.71|4.29|4.13|5.96|11.00|
|PRIME|**6.42**|**5.98**|**9.92**|**16.28**|**3.49**|**3.33**|**5.17**|**9.41**|
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal.
I believe there may have been a misunderstanding regarding the VIR paper. It uses the same datasets as other related works—for example, AgeDB-DIR, IMDB-DIR, NYUD2-DIR, and STS-B-DIR.
Therefore, the authors' statement that `VIR uses a different data split` is not accurate.
Additionally, the model architecture in their paper follows the same structure as DIR; the only difference is the incorporation of uncertainty modeling. From my perspective, this should not be a reason to exclude their results or to include only the AgeDB-DIR comparison. The results can likely be reproduced with minimal effort.
That said, given the rebuttal and the current results provided by the authors, I am willing to either raise my score or keep my score. I submit my reason to AC. In addition, I strongly encourage the authors to include full comparisons in the camera-ready version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments.
We would like to clarify that **the data splits used in VIR are different from those used in DIR, RankSim, ConR, and our work**, despite all methods using the same underlying datasets.
This difference is clearly evident by comparing the `agedb.csv` files (which define the train/val/test splits) provided in the official repositories of VIR and other related works. For example, the first sample in the split provided by VIR, `715_RonaldReagan_53_m.jpg`, is included in the training set, whereas the exact same sample is assigned to the validation set in the splits used by DIR, RankSim, ConR, and our work.
We invite the reviewer to directly compare the following official repositories to confirm this discrepancy:
**Official repositories:**
* [VIR](https://github.com/Wang-ML-Lab/variational-imbalanced-regression/blob/main/data/agedb.csv)
* [DIR](https://github.com/YyzHarry/imbalanced-regression/blob/main/agedb-dir/data/agedb.csv)
* [RankSim](https://github.com/BorealisAI/ranksim-imbalanced-regression/blob/main/agedb-dir/data/agedb.csv)
* [ConR](https://github.com/BorealisAI/ConR/blob/main/agedb-dir/data/agedb.csv)
We emphasize that such discrepancies in data splits can significantly affect reported performance, and thus using consistent splits is essential for fair and meaningful comparison.
Regarding the model architecture, while VIR adopts the same backbone as prior DIR works, its use of uncertainty modeling introduces additional implementation complexity. That said, we agree that this difference alone should not preclude a full comparison.
For more complete comparisons, **we conducted additional experiments with VIR using its official implementation under our setup**, which is shared across DIR, RankSim, and ConR, for the AgeDB-DIR and IMDB-WIKI-DIR datasets. Tables 4 and 5 below summarize the results. For AgeDB-DIR, we report the performance of PRIME with $C=40$ (as discussed in our rebuttal to Reviewer h5o4). Across both datasets, PRIME consistently outperforms VIR.
**Building on both the previous comparison in Table 1 of our rebuttal (evaluating PRIME under VIR’s setup) and the additional experiments presented here (evaluating VIR under our setup), we believe the results consistently demonstrate the superior performance of PRIME over VIR**. We will also include results on the remaining datasets in the camera-ready version to ensure a thorough and comprehensive comparison.
We hope that our clarifications and the additional results have addressed your concerns. We would greatly appreciate it if you considered updating your score to an accept.
**Table 4.** Comparison with VIR on AgeDB-DIR under our setup.
|Method|MAE||||GM||||
|---|---|---|---|---|---|---|---|---|
||All|Many|Med.|Few|All|Many|Med.|Few|
|VIR|7.31|6.69|8.29|10.44|4.59|4.12|5.53|7.52|
|PRIME|**7.03**|**6.35**|**8.24**|**9.90**|**4.35**|**4.00**|**5.29**|**6.09**|
**Table 5.** Comparison with VIR on IMDB-WIKI-DIR under our setup.
|Method|MAE||||GM||||
|---|---|---|---|---|---|---|---|---|
||All|Many|Med.|Few|All|Many|Med.|Few|
|VIR|7.51|6.86|12.89|23.31|4.17|3.88|7.75|16.90|
|PRIME|**7.36**|**6.73**|**12.48**|**23.01**|**3.98**|**3.73**|**7.17**|**14.38**| | Summary: This paper presents PRIME, a method for handling regression tasks with imbalanced data distributions. PRIME introduces synthetic proxies as reference points that uniformly represent the continuous target space, aiming to mitigate representation collapse toward majority-target regions. The method uses two main loss components: a proxy loss, which positions these reference proxies in the feature space according to their relative positions in the target space, and an alignment loss, which encourages features of individual samples to move closer to appropriate proxies based on target similarities. PRIME treats each proxy like a class prototype, allowing it to adapt imbalanced classification methods. PRIME is empirically evaluated on four benchmark datasets.
Claims And Evidence: The paper presents experimental evaluations using four benchmark datasets: AgeDB-DIR, IMDB-WIKI-DIR, NYUD2-DIR, and STS-B-DIR. However, several important baselines or combinations of existing methods appear to be missing or selectively reported. Specifically, previous methods are typically evaluated by combining multiple techniques (e.g., LDS + FDS + RankSim), and other baselines, such as VIR (Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing), are missing.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The paper provides theoretical generalization bounds intended to justify the effectiveness of PRIME, in Theorem 4.1. However, the theoretical analysis assumes that proxies are well-positioned, which may not always hold in practice.
Experimental Designs Or Analyses: The paper follows standard benchmarks for evaluating imbalanced regression methods, specifically those introduced by Yang et al. (2021).
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
- The paper is clearly written, logically structured, and easy to follow.
- It includes extensive experimental analyses, with a thorough exploration of the approach through ablation studies and evaluations across multiple benchmarks.
- Introducing synthetic proxies to guide feature learning and achieve a balanced representation in regression tasks is innovative and conceptually clear.
Weaknesses:
- Although the paper provides a sensitivity analysis, PRIME introduces several hyperparameters (e.g., $\lambda_p$, $\lambda_a$, $\tau_f$, $\tau_t$, $\alpha$). The detailed impact of these parameters is not thoroughly explored, raising concerns about the robustness and practicality of tuning the method for different datasets.
- The initialization of proxies plays a central role in PRIME’s success, yet the paper does not adequately investigate how different initialization strategies affect performance. Without this, the claimed advantages might be overly dependent on optimal initial placements of proxies.
- The theoretical analysis relies heavily on the assumption that proxies are optimally positioned, an assumption that may not hold in practical scenarios. The robustness of PRIME under deviations from this ideal condition is not discussed.
- Leveraging concepts from classification for regression problems, such as the use of class prototypes or proxies, is not entirely novel. The paper should clarify and distinguish more explicitly how PRIME significantly differs from prior approaches that adopt classification techniques in regression contexts.
- The related work section is missing references.
Other Comments Or Suggestions: NA
Questions For Authors: Please see weaknesses
[1] The theoretical analysis assumes that proxies are well-positioned (i.e., $\mathcal{L}_{\text{proxy}}=0$). How does PRIME perform in scenarios where this assumption does not hold? Is there empirical evidence to support the robustness of PRIME under such conditions?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s comments. We have addressed all points and will revise the manuscript accordingly.
## 1. Impact of hyperparameters
We have already provided detailed analyses of the impact of each hyperparameter ($\lambda_p$, $\lambda_a$, $\tau_f$, $\tau_t$, and $\alpha$) in Tables 16–20 in the appendix. The results show that PRIME consistently outperforms the w/o PRIME baseline regardless of the choice of hyperparameters, demonstrating strong robustness and practical stability. We will revise the manuscript to make this analysis more prominent.
## 2. Clarification on proxy initialization
We believe there may be a misunderstanding. PRIME does not rely on any optimal or specific initial placements of proxies. As stated on the right side of line 115 in the manuscript, all proxies are randomly initialized and jointly learned during training. Moreover, the effect of initialization randomness is already reflected in the repeated experiments with different random seeds, as reported in our main tables. The consistent performance across these runs confirms that PRIME is robust to proxy initialization. We will revise the manuscript to make this clearer and avoid potential confusion.
## 3. Theoretical analysis under non-optimal proxy positioning
We note that the derivation of the generalization error bound in Theorem 4.1 remains valid regardless of whether the proxies are optimal. When proxies are not optimally positioned, the resulting discrepancy can be incorporated as an additional term in the bound, rather than invalidating the analysis. To support this, we provide a sketch of how the generalization bound can be extended to the non-optimal case.
Let $\tilde{\mathbf{z}}_j^p$ for $j=1,\ldots,C$ denote the optimal proxy features, and $\tilde{p}:=\tilde{p}_{\theta}(\xi|\mathbf{x})$ be the corresponding feature association distribution. The empirical (i.e., non-optimal) proxies can be defined as $\mathbf{z}_j^p:=\tilde{\mathbf{z}}_j^p+\epsilon_j$, where $\epsilon_j$ represents the estimation error, and $p:=p_{\theta}(\xi|\mathbf{x})$ denotes the corresponding feature association.
We revisit the balanced alignment risk term in Eq. (21) of the appendix, which was originally formulated based on optimally positioned proxies. To analyze the non-optimal case, we substitute $\log \tilde{p}$ by applying the following identity: $\log \tilde{p}=\log p + [\log \tilde{p} - \log p].$
Then, the first term, $\log p$, follows the same derivation as in Theorem 4.1. The second term, $\log\tilde{p}-\log p$, captures the discrepancy introduced by the deviation between the empirical and optimal proxies. Since $p$ and $\tilde{p}$ are defined as softmax distributions, we can bound this residual term using the inequality $\frac{\sum_i a_i}{\sum_i b_i} \leq \max_i \frac{a_i}{b_i}$ for positive values $a_i$, $b_i$, which leads to: $|\log\tilde{p} - \log p| \leq 2\tau_f\max_j |d_f(\mathbf{z},\tilde{\mathbf{z}}_j^p + \epsilon_j) - d_f(\mathbf{z},\tilde{\mathbf{z}}_j^p)|.$
Finally, applying the inequality above to Eq. (21) yields an additional bounded term, which can be directly incorporated into the generalization error bound derived in Theorem 4.1. Importantly, as training progresses and the proxies become more accurate (i.e., $\epsilon_j$ becomes smaller), the residual decreases accordingly, leading to a tighter bound. Our empirical results also show that PRIME performs robustly with random initialization of proxies. We will include this extended derivation in the revised manuscript with a full proof.
## 4. Distinction from classification-based methods
To the best of our knowledge, PRIME is the first to introduce proxies specifically designed for imbalanced regression. Moreover, as discussed in Section C.4 of the appendix, PRIME differs substantially from prior classification-based regression methods in its formulation and learning objective.
First, existing approaches typically quantize continuous targets into discrete bins and treat each bin as a class, which inevitably introduces quantization error. In contrast, PRIME assigns proxies based on soft associations (Eq. (6)) derived from pairwise target distances, effectively mitigating such errors.
Second, instead of predicting proxy indices (i.e., classes), PRIME learns to minimize the distance between features and their associated proxies in the embedding space. This objective aligns more naturally with the continuous nature of regression and promotes representations that reflect target similarities.
Furthermore, we also empirically compare PRIME with the most recent classification-based method, Hierarchical Classification Adjustment (HCA) [CVPR’24], and observe consistently superior performance, as shown in Tables 1–3 of the manuscript. We will revise the main text to better highlight these distinctions and clarify the novelty of our method.
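For concreteness, a minimal sketch of the soft-association idea discussed above: a softmax over negative, temperature-scaled target distances plus a feature-proxy alignment penalty. This is an illustrative reconstruction under our own parameter choices, not the paper's exact Eq. (6) or loss:

```python
import numpy as np

def soft_association(targets, proxy_targets, tau_t=1.0):
    # Soft assignment of each sample to every proxy via a softmax over
    # (negative, temperature-scaled) target distances -- no hard binning,
    # hence no quantization error from discretizing the label space.
    d = np.abs(targets[:, None] - proxy_targets[None, :])
    logits = -d / tau_t
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def alignment_loss(features, proxies, assoc):
    # Pull each feature toward proxies in proportion to its soft association,
    # minimizing feature-proxy distance rather than predicting a proxy index.
    d2 = ((features[:, None, :] - proxies[None, :, :]) ** 2).sum(-1)
    return float((assoc * d2).sum(axis=1).mean())

targets = np.array([0.5, 4.9, 9.4])
proxy_targets = np.linspace(0.0, 10.0, 6)   # proxies spread uniformly over the target range
A = soft_association(targets, proxy_targets)
assert np.allclose(A.sum(axis=1), 1.0)      # each row is a distribution over proxies
assert A[0].argmax() == 0 and A[1].argmax() == 2  # samples favor their nearest proxies
```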
## 5. Related work
As the comment does not specify which references are missing, we would appreciate any suggestions the reviewer might have.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed and thoughtful rebuttal. That said, I still have a few remaining concerns regarding PRIME.
Regarding proxy initialization, I was referring to the number of supports C, as the results on the few-shot regions are sensitive to this parameter. As seen in Table 15, with C = 10 and C = 40, the MAE in the Few category (11.47 and 11.20 respectively) is higher than not using PRIME at all (10.71). Sorry for not making this clearer in my initial review.
While you use soft associations, the use of proxies (which you treat like classes) is still a form of discretization. It may differ technically from binning, but semantically, it serves the same purpose.
As noted in **the Claims and Evidence section of my review**, several important baselines and combinations of existing methods appear to be missing from your evaluation. Methods like VIR (Wang & Wang, 2023, Variational Imbalanced Regression, NeurIPS) and IM-Context (Nejjar et al., 2024, In-Context Learning for Imbalanced Regression Tasks, TMLR) are relevant and should be discussed.
Regarding the experimental results, I noticed that your comparisons do not reflect the full landscape of prior techniques. Many recent works evaluate using combinations of methods (e.g., LDS + FDS + RankSim + ConR), which are notably absent from your main tables. Including such combinations is important to fairly assess the performance of PRIME.
For example:
AgeDB-DIR (MAE, lower is better):
| Method | All | Many | Median | Few |
|--------|-----|------|--------|-----|
| LDS+FDS+RankSim+ConR | **6.81** | **6.32** | **7.45** | **9.21** |
| VIR | 6.99 | 6.39 | 7.47 | 9.51 |
| PRIME | 7.09 | 6.38 | 8.39 | 10.13 |
| PRIME + PRW | 7.06 | 6.67 | 7.27 | 9.91 |
| PRIME + CB | 7.12 | 6.61 | 8.07 | 9.29 |
| PRIME + LDAM | 7.24 | 6.85 | 7.84 | 9.29 |
IMDB-WIKI-DIR (MAE, lower is better):
| Method | All | Many | Median | Few |
|--------|-----|------|--------|-----|
| FDS + ConR | 7.29 | 6.90 | 12.01 | 21.72 |
| VIR | **7.19** | **6.56** | **11.81** | **20.96** |
| PRIME | 7.36 | 6.73 | 12.48 | 23.01 |
| PRIME + PRW | 7.37 | 6.74 | 12.04 | 22.34 |
| PRIME + CB | 7.48 | 6.90 | 12.05 | 22.71 |
| PRIME + LDAM | 7.49 | 6.91 | 12.23 | 22.32 |
These comparisons suggest that, as currently presented, PRIME underperforms relative to combinations of existing methods in several important cases. Including these baselines—or combining PRIME with previous techniques like VIR or ConR—could help show its complementary value and improve the overall narrative. Without this, the current results may give a misleading impression of PRIME's relative performance.
If you show that combining PRIME with other methods leads to improvements, **I'd be open to reconsidering my recommendation**. As it stands, however, the evaluation does not provide a fully fair or comprehensive picture.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. We have addressed all points and will revise the manuscript accordingly. We hope this resolves any remaining concerns and would appreciate your reconsideration of the overall recommendation.
## 1. Number of proxies
For simplicity, we fixed the number of proxies $C$ and did not extensively tune the hyperparameters. Still, **PRIME consistently outperforms the baseline in overall performance (i.e., All) across different values of $C$** (Table 15 in the appendix), suggesting that it is relatively robust to the choice of $C$.
Moreover, appropriate tuning could further improve its performance. As shown in Table 3 of our rebuttal to Reviewer h5o4, tuning for $C=40$ led to improved results, particularly in the Few category (9.90).
## 2. Beyond binning
While PRIME introduces discrete proxies, its purpose fundamentally differs from binning. Traditional binning discretizes the label space to produce classification targets, whereas PRIME uses soft associations with multiple proxies **to guide representation learning in regression**. As other reviewers acknowledged the use of proxies in imbalanced regression as an interesting direction, we believe PRIME makes a meaningful contribution to representation learning for DIR.
## 3. Comparison with VIR
We note that VIR employs different train/validation/test splits. As such, direct comparisons of the reported performance may not be meaningful.
For a fair comparison, we **evaluated PRIME under VIR’s setting**. Since VIR’s code and data splits are available only for AgeDB-DIR, we limit our comparison to this dataset. As the official code does not reproduce the reported performance, we compare against the pre-trained VIR model provided by the authors. As shown in Table 1 below, **PRIME consistently outperforms VIR across all evaluation metrics**.
In addition, we further **evaluated VIR under our setup**, which is shared across DIR, RankSim, and ConR, on both AgeDB-DIR and IMDB-WIKI-DIR. **PRIME again demonstrates consistently better performance**, as shown in Tables 4 and 5 in our rebuttal to Reviewer SddV.
**Table 1.** Comparison with VIR on AgeDB-DIR.
|Method|MAE All|MAE Many|MAE Med.|MAE Few|GM All|GM Many|GM Med.|GM Few|
|---|---|---|---|---|---|---|---|---|
|VIR (author-provided)|7.12|6.69|7.73|9.59|4.56|4.26|5.11|6.29|
|PRIME|**7.06**|**6.44**|**7.61**|**9.28**|**4.39**|**4.08**|**4.93**|**6.00**|
## 4. Comparison with IM-Context
IM-Context builds on large-scale pre-trained models. Specifically, it uses a pre-trained CLIP image encoder to extract features, followed by in-context learning with large models such as GPT-2 and PFN.
Importantly, PRIME is a proxy-based representation learning scheme designed to be independent of the model architecture. It is broadly applicable and can benefit from stronger backbones.
We conducted additional experiments using the same pre-trained CLIP encoder (ViT-B/32) employed by IM-Context. We compare our results on AgeDB-DIR and IMDB-WIKI-DIR with those reported by IM-Context. As shown in Tables 2 and 3 below, **PRIME substantially outperforms both PFN-localized and GPT2-localized models across all evaluation metrics**, demonstrating its effectiveness under the same backbone.
**Note:** Further details regarding VIR and IM-Context are provided in our rebuttal to Reviewer SddV.
## 5. Combining PRIME with other methods
As noted (lines 210–213), PRIME can be easily integrated into existing methods. In particular, PRIME focuses on aligning samples with proxies, making it complementary to recent approaches that leverage sample-wise feature relationships, such as FDS, RankSim, and ConR.
As suggested, we added PRIME to the best-performing combinations: LDS+FDS+RankSim+ConR (AgeDB-DIR) and FDS+ConR (IMDB-WIKI-DIR). As shown in Tables 2 and 3, **combining PRIME with existing techniques consistently improves performance**, highlighting its complementary role and broad applicability.
**Table 2.** Results for AgeDB-DIR.
|Method|MAE All|MAE Many|MAE Med.|MAE Few|GM All|GM Many|GM Med.|GM Few|
|---|---|---|---|---|---|---|---|---|
|**ResNet-50**|||||||||
|LDS+FDS+RankSim+ConR|6.81|6.32|7.45|9.21|4.39|3.81|5.01|6.02|
|LDS+FDS+RankSim+ConR+PRIME|**6.76**|**6.29**|**7.37**|**9.11**|**4.24**|**3.80**|**4.90**|**5.98**|
|**CLIP**|||||||||
|PFN-localized|6.58|5.61|8.49|10.49|4.29|3.58|6.30|8.19|
|GPT2-localized|6.05|5.67|6.71|7.83|3.79|3.59|4.17|4.90|
|PRIME|**5.47**|**5.46**|**5.48**|**5.57**|**3.48**|**3.45**|**3.64**|**3.35**|
**Table 3.** Results for IMDB-WIKI-DIR.
|Method|MAE All|MAE Many|MAE Med.|MAE Few|GM All|GM Many|GM Med.|GM Few|
|---|---|---|---|---|---|---|---|---|
|**ResNet-50**|||||||||
|FDS+ConR|7.29|6.90|12.01|21.72|4.02|3.83|6.71|12.59|
|FDS+ConR+PRIME|**7.25**|**6.85**|**11.45**|**21.22**|**3.99**|**3.78**|**6.56**|**12.41**|
|**CLIP**|||||||||
|PFN-localized|8.96|8.71|10.79|16.33|5.26|5.17|6.00|9.42|
|GPT2-localized|7.76|7.35|11.15|17.71|4.29|4.13|5.96|11.00|
|PRIME|**6.42**|**5.98**|**9.92**|**16.28**|**3.49**|**3.33**|**5.17**|**9.41**| | Summary: The authors propose a novel method for imbalanced regression, leveraging learnable proxies as global reference points to achieve a balanced and well-structured feature distribution and aligning sample features with these proxies. Extensive experiments are conducted on multiple benchmarks, yielding strong and impressive results.
Claims And Evidence: Yes, I believe the claims are supported by clear evidence.
Methods And Evaluation Criteria: Yes, the method is evaluated on standard benchmarks alongside other representative baselines.
Theoretical Claims: I reviewed the theoretical claims and did not find any apparent issues. However, I am not fully certain about their rigorous correctness in the supplementary material, and a more thorough verification may be necessary.
Experimental Designs Or Analyses: Yes, I find the current experimental designs and ablation studies to be valid and sound.
Supplementary Material: I reviewed the supplementary material, except for the mathematical derivations.
Relation To Broader Scientific Literature: The authors have done a good job in covering the related work.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**: The manuscript has a clear and easy-to-follow presentation flow. The methodology and experimentation are solid.
**Weakness**: The overall idea is interesting and the demonstration is solid, but I have concerns about the proxy features. It would enhance the contribution if the authors could discuss the choice of proxy features in more depth. The proposed method uses a learnable proxy feature bank, but what if it used non-learnable proxy features? E.g., one could discretize the data points into bins as in LDS [1], and then give each bin its own proxy feature (e.g., the centroid of all features within the bin).
Do the authors have any insights? How would this type of proxy compare to learnable proxies? It would be even better if the authors could do an ablation study.
I am open to raising my score based on the authors' response.
[1] Yang et al., Delving into Deep Imbalanced Regression, ICML 2021
Other Comments Or Suggestions: I have one curious question about the Figure 2(b): why is there a twisted line pattern in the top right?
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and insightful comments. We have addressed all points and will revise the manuscript accordingly.
## 1. Ablation on non-learnable proxy
As noted by the reviewer, PRIME is flexible with respect to how proxies are constructed—proxies can be either learnable or non-learnable. Following the suggestion, we conducted an additional ablation experiment on a non-learnable variant of PRIME, where proxy features are updated as the centroids of sample features assigned to each proxy. Specifically, to ensure the proxy loss and alignment loss can be properly backpropagated, we update the proxy features within each mini-batch based on the current sample-to-proxy assignments.
Table 1 compares the performance of the centroid-based proxy update with our PRIME using learnable proxies on the AgeDB-DIR dataset. Results are averaged over five runs. While the centroid-based method achieves slightly better performance than the learnable proxy in the Median category, it suffers from notable performance degradation in the other regions. In particular, we observe a significant performance drop in the Few category, indicating that the centroid-based proxies struggle under severe data sparsity.
We attribute this performance gap to the inherent limitations of the centroid-based method. Since proxies are updated as the centroids of the assigned sample features, their quality heavily depends on the number of assigned samples. When only a few samples are available—as is often the case in the Few category—the estimated centroids become unstable and unreliable. Moreover, because centroids are computed within each mini-batch, their estimates can fluctuate significantly depending on the batch composition. In contrast, learnable proxies are global parameters that are updated via backpropagation, offering greater stability and robustness, particularly in data-sparse regions.
This issue is further exacerbated in regression settings, where some proxy bins may contain no samples at all. In such cases, the centroid-based approach cannot update the corresponding proxies, leaving them inactive throughout training. In contrast, learnable proxies remain effective even without sample assignments, serving as global reference points that guide other samples and support representation learning.
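As a concrete illustration of the failure mode described above, here is a minimal numpy sketch (function and variable names are hypothetical, not the authors' code) of a mini-batch centroid-based proxy update: a proxy whose bin receives no samples in the batch is simply never updated and stays inactive.

```python
import numpy as np

def centroid_update(feats, assignments, proxies):
    """Mini-batch centroid-based proxy update: each proxy becomes the mean of
    the sample features currently assigned to it; empty bins stay frozen."""
    new = proxies.copy()
    for c in range(len(proxies)):
        mask = assignments == c
        if mask.any():
            new[c] = feats[mask].mean(axis=0)
    return new
```

This makes the data-sparsity issue visible: with severe imbalance, minority-region proxies are updated rarely (or never within a batch), whereas learnable proxies receive gradients regardless of batch composition.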
We will add this ablation study and discussion to the revised manuscript.
**Table 1.** Results with non-learnable proxies.
|Method|MAE All|MAE Many|MAE Median|MAE Few|GM All|GM Many|GM Median|GM Few|
|---|---|---|---|---|---|---|---|---|
|Non-learnable (centroid)|7.21$\pm$0.09|6.57$\pm$0.10|**8.20**$\pm$0.13|10.89$\pm$0.33|4.67$\pm$0.11|4.24$\pm$0.12|**5.42**$\pm$0.15|7.64$\pm$0.22|
|PRIME|**7.09**$\pm$0.08|**6.38**$\pm$0.11|8.39$\pm$0.26|**10.13**$\pm$0.36|**4.39**$\pm$0.08|**3.91**$\pm$0.10|5.58$\pm$0.22|**6.57**$\pm$0.49|
## 2. Clarification on Figure 2(b)
The twisted line pattern in Figure 2(b) appears due to suboptimal alignment between features and their corresponding proxies in the Few category. Although the proxies represent a balanced feature distribution, the alignment process in Eq. (7) still faces challenges under sample imbalance. Minority samples, which occur infrequently, often fail to align properly with their proxies, resulting in distorted feature–proxy alignment.
The use of class imbalance techniques (e.g., PRW, CB, and LDAM) provides better alignment focus on minority samples, mitigating this issue. To empirically validate their effect, we conducted an additional analysis on the AgeDB-DIR dataset, measuring the Spearman correlation between the proxy–feature similarity matrix (as visualized in Figure 2(b)) and the label similarity matrix. A higher correlation indicates better alignment and reduced distortion in the learned feature space.
Table 2 reports the Spearman correlation values when PRIME is combined with various class imbalance techniques. Results are averaged over five runs. Incorporating class imbalance techniques significantly improves the correlation, confirming their effectiveness in facilitating better alignment, particularly for samples in the Few category.
We will revise the manuscript to include this additional analysis and the corresponding figures alongside Figure 2(b).
**Table 2.** Alignment between proxies and features.
|Method|Spearman $\rho$ ($\uparrow$)|
|---|---|
|PRIME|0.722$\pm$0.020|
|PRIME + PRW|0.802$\pm$0.021|
|PRIME + CB|0.800$\pm$0.023|
|PRIME + LDAM|0.837$\pm$0.015| | Summary: For Deep Imbalanced Regression (DIR), the authors propose Proxy-based
Representation learning for IMbalanced rEgression (PRIME).
They generate synthetic proxies in the feature space and align
instances to the proxies. The proxies are distributed uniformly
across the target values. While the corresponding target values are
determined, the proxies are learned via model parameters. Inspired by
tSNE, they define $p_{i,j}$ and $q_{i,j}$ to be similarities between $y_i$ and
$y_j$ (target space) and between $z_i$ and $z_j$ (feature space). They seek
to minimize the KL divergence of the two distributions ($p_{i,j}$ and $q_{i,j}$). To reduce
trivial solutions, they encourage distance between proxies and "feature
space uniformity" (Wang & Isola 2020). Proxy loss ($L_{proxy}$) is the
KL divergence and regularization.
For each instance, they calculate the association via distance and
softmax to each proxy in the features space. Similarly, they
calculate the association in the target space. Alignment loss
($L_{align}$) is the cross entropy with the two associations (similar
to a classification loss). The overall loss is the regression loss
plus $L_{proxy}$ and $L_{align}$.
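To make the summary concrete, the two losses as described can be sketched in numpy. This is a reader's reconstruction, not the authors' code: the temperatures `tau`, `tau_f`, `tau_t`, the exact similarity kernels, and all names are hypothetical.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def proxy_loss(proxy_feats, proxy_labels, tau=1.0):
    # p: similarities of proxy target values; q: similarities of proxy features.
    p = softmax(-(proxy_labels[:, None] - proxy_labels[None, :]) ** 2 / tau)
    z = proxy_feats / np.linalg.norm(proxy_feats, axis=1, keepdims=True)
    q = softmax(z @ z.T / tau)
    # Row-wise KL(p || q), averaged over proxies.
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))

def align_loss(feats, labels, proxy_feats, proxy_labels, tau_f=1.0, tau_t=1.0):
    # Association of each sample with each proxy via distance and softmax,
    # in the feature space and in the target space.
    d_feat = ((feats[:, None, :] - proxy_feats[None, :, :]) ** 2).sum(-1)
    a_feat = softmax(-d_feat / tau_f)
    a_targ = softmax(-((labels[:, None] - proxy_labels[None, :]) ** 2) / tau_t)
    # Cross entropy between the two associations (a soft classification loss).
    return float(-np.mean(np.sum(a_targ * np.log(a_feat + 1e-12), axis=1)))
```

The overall training objective would then be the regression loss plus weighted `proxy_loss` and `align_loss` terms.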
For evaluation they use 4 datasets and compare with 8 existing techniques. Empirical results indicate that adding PRIME is somewhat beneficial. Ablation studies were performed to indicate the contribution of some of the components.
## update after rebuttal
After reading and responding to the authors' rebuttal, I decided to raise my rating to Accept -- between Weak Accept and Accept to be more precise.
Claims And Evidence: For evaluation they use 4 datasets and compare with 8 existing techniques. Empirical results indicate adding PRIME is somewhat beneficial: PRIME generally outperforms existing methods, and PRIME alone outperforms in 3 out of 4 datasets in the Few category.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable.
Theoretical Claims: I am not familiar with some of the terms in Theorem 4.1 and hence did not check the proof in the Appendix.
Experimental Designs Or Analyses: Tables of results and visualization are helpful. In the tables, since the existing methods do not have the benefit of additional methods, compare PRIME alone with the existing methods, perhaps with a different highlight.
Supplementary Material: I quickly reviewed Further Experiments and Analyses (Part C) of the supplementary materials.
Relation To Broader Scientific Literature: The proposed method is different from existing methods on representation learning in imbalanced regression. While the different components are borrowed from t-SNE, from (Wang & Isola 2020), and from classification losses, the combination seems interesting.
Essential References Not Discussed: I am not aware of essential references that are not discussed.
Other Strengths And Weaknesses: While the different components are borrowed, the combination is interesting.
Other Comments Or Suggestions: More explanation is needed on why the second term in Eq. 4 can achieve "feature space uniformity" (Wang & Isola 2020).
Questions For Authors: 1. To be a self-contained paper, can you explain why the second term in Eq. 4 can achieve "feature space uniformity" (Wang & Isola 2020)?
2. How do you determine the number of proxies? Table 15 in the Appendix: any insight on why C=20 seems to perform better? I would have expected that more proxies would generally improve the learned feature space.
Ethical Review Concerns: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We have addressed all points and will revise the manuscript accordingly.
## 1. PRIME’s effectiveness in the Few category
We would like to point out that PRIME alone achieves state-of-the-art performance in the Few category on three out of four datasets, outperforming existing methods by 0.7\%, 0.5\%, and 3.6\% on AgeDB-DIR, NYUD2-DIR, and STS-B-DIR, respectively. Furthermore, a key contribution of PRIME lies in its ability to seamlessly integrate class imbalance techniques into regression tasks. This facilitates balanced feature learning, substantially enhancing performance in the Few category.
To further validate PRIME’s effectiveness in the Few category, we conducted an additional analysis on the test set of AgeDB-DIR, measuring the Spearman correlation between the feature similarity matrix and the label similarity matrix. A higher correlation indicates that the learned features are more well-ordered and better reflect the continuity of the label space, which is crucial for learning effective representations in regression tasks.
As shown in Table 1, we compared the Spearman correlations for all samples and for samples in the Few category across PRIME, RankSim, and ConR. Results are averaged over five runs. While RankSim and ConR exhibit reasonable correlation values on the full dataset, their performance significantly degrades in the Few category. In contrast, PRIME maintains a high correlation within the Few category, indicating that it learns well-ordered feature representations, even for minority samples. This highlights the strength of our proxy-based formulation, which provides holistic guidance for feature positioning, allowing minority samples to be embedded in alignment with the overall label structure.
We will incorporate these additional results and analyses into the revised manuscript to better highlight PRIME’s effectiveness in the Few category.
**Table 1.** Spearman correlation between feature and label similarities.
|Method|All|Few|
|---|---|---|
|RankSim|0.804$\pm$0.008|0.587$\pm$0.036|
|ConR|0.790$\pm$0.024|0.614$\pm$0.043|
|PRIME|**0.942**$\pm$0.008|**0.828**$\pm$0.020|
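The correlation analysis above can be reproduced in outline with numpy alone. This is a hedged sketch: the authors' exact similarity definitions are not given, so here cosine feature similarity is compared against negative absolute label distance, and rank ties are broken by sort order rather than averaged.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation between two 1-D arrays (no tie averaging)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

def similarity_alignment(feats, labels):
    """Correlate the cosine feature-similarity matrix with the (negative)
    label-distance matrix, using the upper triangle to skip the diagonal."""
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    feat_sim = z @ z.T
    label_sim = -np.abs(labels[:, None] - labels[None, :])
    iu = np.triu_indices(len(labels), k=1)
    return spearman(feat_sim[iu], label_sim[iu])
```

A value near 1 indicates that features are ordered consistently with the labels, matching the interpretation used in the rebuttal.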
## 2. Feature space uniformity
The second term in Eq. (4) encourages proxies to be repelled from one another, as it increases the angle between $\mathbf{z}_i^p$ and $\mathbf{z}_j^p$, thereby inducing a more dispersed (i.e., uniform) proxy distribution. Unlike the original uniformity loss (Wang & Isola, 2020), which pushes all pairs equally apart, our formulation increases the pairwise cosine distance proportionally to the label distance. This not only promotes uniformity in the proxy space but also preserves label ordinality.
Since sample features are aligned to proxies via the alignment loss in Eq. (7), the proxy uniformity is naturally transferred to the feature distribution. In this way, the second term in Eq. (4) promotes feature space uniformity by shaping the proxy distribution, which in turn guides the feature distribution through alignment.
To validate this, we trained a variant of PRIME without the second term in Eq. (4) (i.e., setting $\alpha=0$) and measured feature space uniformity on the test set of AgeDB-DIR using the $\mathcal{L}_{\text{uniform}}$ metric proposed by (Wang & Isola, 2020).
Table 2 presents the results averaged over five runs. Removing the second term leads to higher $\mathcal{L}_{\text{uniform}}$ (i.e., lower uniformity), confirming that this term is critical for inducing feature space uniformity. We will include this result and discussion in the revised manuscript.
**Table 2.** Feature space uniformity.
|Method|$\mathcal{L}_{\text{uniform}}$ ($\downarrow$)|
|---|---|
|PRIME ($\alpha=0$)|$-$1.458$\pm$0.022|
|PRIME|**$-$1.544**$\pm$0.016|
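For reference, the $\mathcal{L}_{\text{uniform}}$ metric of Wang & Isola (2020) is the log of the average Gaussian potential between pairs of L2-normalized features, with $t=2$ in the original paper. A minimal sketch (lower values mean a more uniform distribution):

```python
import numpy as np

def uniformity(feats, t=2.0):
    """Wang & Isola (2020) uniformity metric on L2-normalized features:
    log of the mean Gaussian potential over all pairs (lower = more uniform)."""
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sq_dists = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(z), k=1)
    return float(np.log(np.exp(-t * sq_dists[iu]).mean()))
```

A fully collapsed feature set gives the maximum value 0, so the more negative numbers in Table 2 indicate greater uniformity.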
## 3. Larger number of proxies
For simplicity, we set the number of proxies $C$ such that each proxy would roughly cover a 5-year age span for AgeDB-DIR and IMDB-WIKI-DIR, and kept it fixed while tuning the other hyperparameters. As shown in Table 15 in the appendix, even without tuning, increasing the number of proxies leads to improved performance in the many and median regions. This suggests that with proper hyperparameter tuning, a larger number of proxies could further improve performance.
To support this, we conducted an additional experiment with $C = 40$, tuning the associated hyperparameters ($\lambda_p = 1$, $\lambda_a = 5$, $\tau_f = 5$, $\tau_t = 1$, $\alpha = 0.005$), and obtained even better results, as reported in Table 3. Results are averaged over five runs. We will revise the manuscript to include this result.
**Table 3.** Results with more proxies on AgeDB-DIR.
|Method|MAE All|MAE Many|MAE Median|MAE Few|GM All|GM Many|GM Median|GM Few|
|---|---|---|---|---|---|---|---|---|
|PRIME ($C=40$)|**7.03**$\pm$0.08|**6.35**$\pm$0.12|**8.24**$\pm$0.07|**9.90**$\pm$0.25|**4.35**$\pm$0.09|**4.00**$\pm$0.07|**5.29**$\pm$0.24|**6.09**$\pm$0.13|
Low-Rank Tensor Transitions (LoRT) for Transferable Tensor Regression | Accept (poster) | Summary: This paper proposes the Low-Rank Tensor Transitions (LoRT) framework to address various shift problems and decentralized data management. LoRT employs a novel fusion regularizer to enforce low-tubal-rank solutions, enabling effective integration. Its two-step refinement process mitigates model shifts and ensures robust adaptation to target tasks. For decentralized scenarios, the authors extend LoRT to D-LoRT, a distributed variant that preserves statistical efficiency. The authors provide detailed theoretical analysis for each step.
In experiments, the authors demonstrate the superiority of the two proposed frameworks in simulated settings and experiments based on YUV RGB video datasets. Overall, this paper explores transfer learning in tensor regression, providing rich content, detailed theory, and numerous experiments. However, the LoRT and D-LoRT frameworks proposed in this paper are largely based on the TransFusion framework [1], with similar algorithmic concepts and theoretical analysis. This significantly diminishes the originality of the paper.
[1] He, Z., Sun, Y., & Li, R. (2024, April). Transfusion: Covariate-shift robust transfer learning for high-dimensional regression. In International Conference on Artificial Intelligence and Statistics (pp. 703-711). PMLR.
Claims And Evidence: The LoRT and D-LoRT frameworks proposed in this paper have been thoroughly validated both theoretically and experimentally, but there are still some shortcomings:
1. **Theoretical Issues**: The authors prove in both LoRT and D-LoRT how the estimation error is affected by the heterogeneous measure $\bar h$. However, there is an obvious issue: when $\bar h$ becomes large enough, the current theory cannot guarantee that the transfer learning performance will always outperform using only target data. Referring to the theory in [2], it would be ideal to ensure that in any case, the performance after transfer is at least as good as using only the target data.
2. **Experimental Issues**: The authors have done extensive work and presented many numerical results. In the completion performance, the recovery of the ground truth is quite good. However, the experiments provided are all based on simulations, assuming the true tensor coefficients are known. The authors should include real data analysis using actual tensor covariates $X$ and response $y$ for analysis (without knowing the true $W$). This would make the findings more convincing.
[2] Duan, Y., & Wang, K. (2023). Adaptive and robust multi-task learning. The Annals of Statistics, 51(5), 2015-2039.
Methods And Evaluation Criteria: In the experiments, the authors mainly compared TNN and $k$-Sup. However, there are many other comparative methods for tensor regression on a single dataset, such as tensor regression based on CP decomposition/Tucker decomposition [3,4] or tensor regression with convex regularization [5]. The authors did not evaluate the performance of these methods.
[3] Zhou, H., Li, L., & Zhu, H. (2013). Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502), 540-552.
[4] Li, X., Xu, D., Zhou, H., & Li, L. (2018). Tucker tensor regression and neuroimaging analysis. Statistics in Biosciences, 10(3), 520-545.
[5] Raskutti, G., Yuan, M., & Chen, H. (2019). Convex regularization for high-dimensional multiresponse tensor regression.
Theoretical Claims: I have read the proof in the author's appendix, which is very detailed. I believe it is plausible, but the specific details require further reading.
Experimental Designs Or Analyses: I believe the experimental designs and analyses are sound. The authors have conducted experiments on both simulated and real data. However, the experiments provided so far are all based on simulations, assuming the true tensor coefficients are known. The authors should include real data analysis using actual tensor covariates and responses $y$ (without knowing the true $W$). This would make the findings more convincing.
Supplementary Material: The authors have provided the code in the supplementary material. Although I haven't run it myself, the authors have provided detailed parameters, which I believe are trustworthy.
Relation To Broader Scientific Literature: Overall, this paper explores transfer learning in tensor regression, providing rich content, detailed theory, and numerous experiments. However, the LoRT and D-LoRT frameworks proposed in this paper largely follow the TransFusion framework [1], with very similar algorithmic concepts and theoretical analysis. This significantly diminishes the originality of the paper.
Essential References Not Discussed: The authors provide a detailed discussion of the literature on transfer learning, tensor regression/recovery, and multi-task learning (MTL), which is very thorough.
Other Strengths And Weaknesses: **Strengths**: As discussed above.
**Weaknesses**:
1. This paper explores transfer learning in tensor regression, providing rich content, detailed theory, and numerous experiments. However, the LoRT and D-LoRT frameworks proposed in this paper largely follow the TransFusion framework [1], with very similar algorithmic concepts and theoretical analysis. This significantly diminishes the originality of the paper. The authors could emphasize the challenges of extending TransFusion to tensor regression and how they address these challenges in theory.
2. The paper does not mention any computational details. Tensor regression is computationally challenging, and the authors should explain the details of the algorithm and its computational cost.
3. The paper uses a large number of symbols, many of which are superscripts and subscripts. This makes the paper difficult to read. I suggest that the authors reorganize the symbols, for example, by representing three-dimensional tensors as $\mathcal X$ and $\mathcal W$ to reduce the use of superscripts and subscripts.
4. There are some obvious typos in the paper, such as in Equation 6, where it should be $y^k,X^k$ instead of $y^0, X^0$.
5. In the theoretical analysis, when $\bar h$ becomes large enough, the current theory cannot guarantee that the transfer learning performance will always outperform using only target data. Referring to the theory in [2], it would be ideal to ensure that in any case, the performance after transfer is at least as good as using only the target data.
Other Comments Or Suggestions: The authors could reorganize the symbols used in the paper to make it more readable.
Questions For Authors: 1. In line 163, why is the assumption $ r \ll d_2 $ made? Is this assumption necessary, and does it have any connection to the final rate $ O(r d_1 d_3 N^{-1}) $, where only $ d_1 $ and $ d_3 $ appear but not $ d_2 $?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive and thoughtful feedback.
We are encouraged that the reviewer finds our theoretical analysis and experimental design sound, and recognizes the potential of the proposed LoRT frameworks for addressing distribution shifts in tensor regression. Below, we respond to the reviewer’s concerns point by point.
> On similarity to TransFusion
While LoRT is conceptually inspired by TransFusion [1], it addresses a fundamentally different and more challenging setting: **tensor-valued regression**, where both inputs and responses are high-order tensors. This introduces unique challenges:
1. **Multi-mode low-rank structure**: LoRT employs a **tubal nuclear norm-based fusion regularizer** to model low-rank structure along tensor modes, in contrast to the $\ell_1$-based fusion in vector settings like TransFusion.
2. **Tensor-specific two-step estimation**: Although both methods use two-step strategies, **LoRT’s design is tailored to tensors**, first estimating a shared low-tubal-rank component, then refining task-specific parts using tensor algebra.
3. **Theoretical analysis under tensor structure**: LoRT establishes **estimation error bounds** based on tensor-specific assumptions and tools, differing from the vector-based analyses in TransFusion.
These aspects make LoRT a **nontrivial and technically distinct extension** of fusion-based transfer learning to the tensor domain.
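To make point 1 concrete for readers unfamiliar with the t-SVD machinery, the tubal nuclear norm can be computed by taking an FFT along the third mode and summing matrix nuclear norms of the Fourier-domain frontal slices. The sketch below uses one common normalization; conventions in the literature differ by a factor of $d_3$.

```python
import numpy as np

def tubal_nuclear_norm(T):
    """Tensor nuclear norm induced by the t-SVD: FFT along mode 3, then the
    (normalized) sum of matrix nuclear norms of the Fourier-domain slices."""
    Tf = np.fft.fft(T, axis=2)
    d3 = T.shape[2]
    return float(sum(np.linalg.norm(Tf[:, :, k], 'nuc') for k in range(d3)) / d3)
```

For $d_3 = 1$ this reduces to the ordinary matrix nuclear norm, which is why the tensor setting strictly generalizes the $\ell_1$/nuclear-norm fusion used in vector and matrix formulations.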
> On comparison with CP/Tucker/convex baselines
We have conducted initial comparisons with representative CP-, Tucker-, and convex-based tensor methods under our experimental setting (https://anonymous.4open.science/r/LoRT-113D/CPtable.png) and will include full results in the final version.
> On real data without known parameters
Our current study focuses on settings with known tensor coefficients, which allow precise assessment of parameter recovery and direct validation of our theoretical predictions. This experimental design is a standard and widely accepted practice in the tensor learning literature (e.g., Zhang et al., 2020; Wang et al., 2021), especially for theory-driven studies. By working with controlled synthetic data, we can rigorously examine the behavior of the low-rank transfer mechanism and its consistency with our generalization bounds.
We respect and appreciate the reviewer’s perspective that experiments on real-world data without ground-truth parameters can demonstrate practical relevance. However, such experiments serve a different purpose and are not necessary for validating the core theoretical contributions of this work. As our current goal is to establish and verify theoretical guarantees, we follow the conventional evaluation protocol in this area. We recognize the value of such experiments for illustrating broader applicability and consider them a promising direction for future work.
> On computational details
We will include further algorithmic details—such as per-iteration computational cost and implementation notes—in the final version to improve clarity and reproducibility. We note that we have already provided runnable example code in the supplementary material, and we will expand the documentation to make the computational aspects more transparent.
> On notation and typos
Many thanks for your careful reading! We will revise the notation system for better readability and fix all typographical errors.
> On "no negative transfer" guarantee
Our current theory guarantees improvement under moderate heterogeneity, which aligns with prior work (e.g., [1]), as universal guarantees are generally infeasible without adaptive mechanisms.
That said, the structural design of LoRT naturally enables "no negative transfer"-style extensions. Specifically, the fusion regularizer allows for task-wise reweighting, and the divergence scores $h_k$ are available during training. This makes it feasible to implement heterogeneity-aware fusion, similar in spirit to [2], where adaptive regularization balances between source pooling and target-only training.
While our current focus is on estimation error in favorable regimes, we plan to explore an oracle-type guarantee of the form
$$
\mathbb{E}[\text{Err}_{\text{LoRT}}] \le \min\{\text{Err}_{\text{target-only}}, \text{Err}_{\text{fusion}}\} + \delta,
$$
where $\delta$ depends on heterogeneity. We will add a discussion of this future direction in the final version.
> On the assumption $r \ll d_2$
The assumption aligns with the low tubal-rank setting commonly used in t-SVD-based tensor models (e.g., Qiu et al., 2022a). Note that we assume $d_1 \ge d_2$ without loss of generality (line 111), so the rate $O(r d_1 d_3 N^{-1})$ can be more precisely written as
$$O(r \min\{d_1, d_2\} d_3 N^{-1}).$$ Hence, the dependence on $d_2$ is implicitly included via both $\min\{d_1, d_2\}$ and $r$.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' clarifications and the new efforts made on the experiment part.
However, the most critical concern: **no negative transfer** guarantee still remains unconvincing.
I understand that the authors follow [1] to derive a similar theoretical result in the tensor setting. However, this also means the approach inherits the same limitation as [1]: it does not ensure no negative transfer, especially when $h$ is large. This issue arises both theoretically and empirically in the presence of model shift. I have tried similar fusion regularizers and observed performance degradation when $h$ is large, which further supports this concern.
The authors should provide a more intuitive theoretical result regarding **no negative transfer**. The current theory is not sufficiently convincing.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s continued engagement and thoughtful feedback on this point. We agree that preventing negative transfer is a desirable property in transfer learning, especially under model shift and heterogeneous source-target relations. While our main theoretical contribution focuses on estimation error guarantees under moderate heterogeneity—consistent with prior work such as [1]—we appreciate the opportunity to clarify and expand on this point.
To address the reviewer’s concern more concretely, we have included a supplementary theoretical analysis (https://anonymous.4open.science/r/LoRT-113D/NNT-LoRT.pdf), which presents **a preliminary extension of the LoRT framework** aimed at mitigating the risk of negative transfer. Specifically, we propose a weighted regularized least squares formulation in which each domain is assigned a weight according to its relevance to the target. When the informativeness levels $h_k$ are known, we derive an optimal weighting rule; for the more practical case where $h_k$ is unknown, we suggest a preliminary initialization strategy. This allows the model to suppress the influence of poorly aligned sources and mitigate negative transfer. **(We kindly note that if the PDF appears misformatted in the browser (e.g., incorrect fonts), it is best to download and view it locally for proper rendering.)**
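For intuition, one schematic instance of such a weighted formulation (illustrative notation only: $\omega_k$ for the domain weights, $\mathcal{L}_T$ and $\mathcal{L}_k$ for the target and per-source least-squares losses; the exact objective is stated in the supplementary PDF) is
$$
\widehat{\mathcal{W}} = \arg\min_{\mathcal{W}} \left\{ \omega_0 \mathcal{L}_T(\mathcal{W}) + \sum_{k=1}^{K} \omega_k \mathcal{L}_k(\mathcal{W}) + \lambda \|\mathcal{W}\|_{\mathrm{TNN}} \right\},
$$
where the weights $\omega_k$ decrease with the divergence score $h_k$, so that poorly aligned sources contribute little to the fit.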
The key result of this extension is summarized by the following informal bound from Theorem 2 in the supplementary theoretical analysis:
> *Under a suitable choice of weights, the estimation error satisfies*
$$\text{Err} \lesssim \frac{rd_1 d_3}{N_T}.$$
*This matches the target-only rate and thus ensures no worse performance than using the target data alone.*
When $K = 1$, we further obtain:
$$\text{Err} \lesssim \frac{rd_1 d_3}{N} + \min\left\{ h_1 \sqrt{\frac{d_1}{N_T}},\ \frac{rd_1 d_3}{N_T} \right\},$$
which confirms that sources with large $h_1$ (i.e., low task relevance) are automatically ignored, thereby preventing negative transfer.
We emphasize that this analysis is intended as a **preliminary theoretical supplement**, not a core claim of the paper. It illustrates that *LoRT’s structure naturally accommodates a "no negative transfer" extension* through learnable domain weights, and highlights a clear bias–variance tradeoff that governs transfer effectiveness.
While this preliminary exploration demonstrates the potential for mitigating negative transfer, *developing a fully principled and general solution lies beyond the scope of this submission* and is left for future investigation.
We hope this clarified and strengthened exposition addresses the reviewer’s concern more satisfactorily. | Summary: This paper addresses the challenge of data scarcity in the target task within Tensor Regression. The authors propose a novel transfer learning framework called Low-Rank Tensor Transitions (LoRT) to tackle this issue. LoRT employs a two-stage adaptation process to mitigate model shift and enhance adaptation to the target task.
In the first stage, Joint Low-Rank Learning (JLL), the framework simultaneously optimizes both the target and source tasks by leveraging a low-rank constraint across multiple datasets. This stage also incorporates a weighted averaging approach to refine parameter estimation.
In the second stage, Target-Specific Refinement (TSR), the parameters obtained from the first stage are further adapted by introducing target-specific learning components. A key aspect of this stage is that the low-rank constraint is applied only to the target-specific components, preventing overfitting to the target data while maintaining an optimal balance with the source task information.
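Schematically, the two stages can be summarized as follows (my shorthand, not the paper's exact notation):
$$
\widehat{\mathcal{W}} = \arg\min_{\mathcal{W}} \sum_{k=0}^{K} \mathcal{L}_k(\mathcal{W}) + \lambda_1 \|\mathcal{W}\|_{\mathrm{TNN}}, \qquad
\widehat{\Delta} = \arg\min_{\Delta} \mathcal{L}_0(\widehat{\mathcal{W}} + \Delta) + \lambda_2 \|\Delta\|_{\mathrm{TNN}},
$$
where task $0$ is the target, $\|\cdot\|_{\mathrm{TNN}}$ denotes the tubal nuclear norm, and the final target estimate is $\widehat{\mathcal{W}} + \widehat{\Delta}$; penalizing only the correction $\Delta$ in the second stage is what keeps the refinement from overfitting the scarce target data.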
Additionally, the paper provides theoretical guarantees for LoRT, mathematically delineating the conditions under which source task knowledge can be effectively leveraged. Furthermore, the authors extend LoRT to a distributed data environment by introducing D-LoRT, which minimizes communication overhead by locally performing tensor regression and subsequently integrating the models.
The proposed framework is empirically validated on Compressed Sensing and Tensor Completion tasks, demonstrating superior performance compared to conventional approaches.
## update after rebuttal
Thank you for the thorough rebuttal. I understand your points regarding the apparent existence of valid applications and the mention of addressing non-i.i.d. cases as future work. I am satisfied with this response, so I will leave my evaluation unchanged. I mistakenly posted my comment in the Official Comment section. My apologies.
Claims And Evidence: The proposed novel transfer learning framework with a two-stage adaptation process is theoretically well-justified based on assumptions about data distribution. Additionally, a reasonable algorithm is presented for application in a distributed data environment. Furthermore, the effectiveness of the proposed method is validated through appropriate evaluation experiments.
Methods And Evaluation Criteria: The proposed method is a transfer learning approach designed to leverage knowledge from source tasks to compensate for data scarcity in the target task. Consequently, Tensor Compressed Sensing (TCS), which reconstructs the original tensor from partially observed linear measurements, appears to be a more natural problem setting for evaluating the proposed method compared to Tensor Completion (TC), where only some entries are missing.
However, the evaluation of Tensor Compressed Sensing in this study is based solely on synthetic data. Conducting evaluations on real-world problems where Tensor Compressed Sensing is applicable could strengthen the claim regarding the practical utility of the proposed method.
Theoretical Claims: I have reviewed the theoretical guarantees of LoRT (Section 4.2) and D-LoRT (Section 4.3). Under the assumption of a Gaussian distribution, the theoretical guarantees appear to be correctly derived. However, there is room for discussion regarding the validity of the i.i.d. assumption across a wide range of tensor data.
Experimental Designs Or Analyses: As mentioned in the Methods and Evaluation Criteria section.
Supplementary Material: I have roughly checked the supplementary material. It provides additional results for the experiments presented in the main text, further reinforcing the validity of the proposed method. Additionally, the theoretical derivations involving complex tensor operations are thoroughly documented, enhancing the reliability of the approach. Furthermore, a detailed comparison with existing methods is included, which strengthens the justification for the proposed technique.
Relation To Broader Scientific Literature: This paper is broadly related to tensor regression, particularly recent tensor learning techniques that incorporate low-rank constraints. Additionally, it is closely connected to the field of transfer learning, providing theoretical guarantees for effectively applying transfer learning to tensor data.
Essential References Not Discussed: This paper focuses on transfer learning for tensor regression with low-rank constraints. As far as I am aware, the relevant prior studies for comparison have been appropriately cited.
Other Strengths And Weaknesses: As mentioned earlier, this paper has a strong contribution in proposing a novel transfer learning framework with theoretical guarantees to address the issue of data scarcity in Tensor Regression. Additionally, a concern is that Tensor Compressed Sensing, which appears to be a more natural problem setting for evaluating the proposed method, is only assessed using synthetic data. Evaluating it on real-world problems could strengthen the validity of the method. Another point of discussion is the assumption that the data are i.i.d., which may not always be valid across different datasets, and further examination of its appropriateness would be beneficial.
Other Comments Or Suggestions: In the definition of t-SVD in Definition 2.1, shouldn't the size of $\underline{C}$ be $d_1 \times d_4 \times d_3$?
Questions For Authors: 1. What are some real-world problems related to Tensor Compressed Sensing? Can the proposed method effectively address them?
2. Are there cases where the i.i.d. assumption on the data is difficult to satisfy?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. We appreciate the recognition of our theoretical framework, decentralized extension, and empirical evaluations. Below, we address the main concerns:
> **Concern 1:** Conducting evaluations on real-world problems where Tensor Compressed Sensing is applicable could strengthen the claim regarding the practical utility of the proposed method. What are some real-world problems related to Tensor Compressed Sensing? Can the proposed method effectively address them?
Tensor Compressed Sensing (TCS) arises in various real-world scenarios where acquiring full tensor data is costly or constrained. Prominent examples include:
- Magnetic Resonance Imaging (MRI): Reconstructing high-resolution 3D volumes from a limited number of Fourier measurements.
- Hyperspectral Imaging: Recovering spatial-spectral data cubes under limited sampling due to hardware or bandwidth constraints.
Our method is theoretically well-suited to these scenarios: it leverages both low-rank structures and transferable information from source tasks to enhance reconstruction quality in the presence of data scarcity and distribution shifts.
In the current paper, we focus on theoretical development and controlled validation. The TCS experiments on synthetic Gaussian measurements are explicitly designed to validate the generalization guarantees and the low-rank transfer mechanism presented in Sections 4.2 and 4.3. These settings enable precise evaluation of recovery error and alignment with theoretical predictions.
Moreover, this design follows a well-established practice in the tensor learning theory literature. Many prior theoretical works (e.g., Lu et al., 2018; Wang et al., 2021) also provide guarantees under compressed sensing assumptions, while using real data primarily for completion tasks.
We fully respect the reviewer’s suggestion that applying the method to real-world TCS data could further support its practical relevance. However, such experiments are not essential to validate our main theoretical claims, and would require additional modeling assumptions (e.g., real sensing matrices, noise models) and substantial engineering efforts that fall outside the scope of this theory-focused work. Due to the limited rebuttal timeframe, adding real-world TCS experiments is unfortunately not feasible. We will clarify this rationale in the final version and discuss such extensions as promising directions for future research.
> **Concern 2:** Are there cases where the i.i.d. assumption on the data is difficult to satisfy?
Yes, and we appreciate the reviewer raising this important point. While our current theory assumes that samples within each task are i.i.d., this is a **standard assumption** adopted in many theoretical works on transfer learning and tensor regression (e.g., Zhang et al., 2020; Qiu et al., 2022a). It facilitates a clear derivation of estimation bounds and highlights the key effects of low-rank structure and distribution shift.
We acknowledge, however, that real-world data often exhibit dependencies or structured sampling patterns that violate the i.i.d. assumption. Extending our theoretical framework to accommodate such cases is a promising and practically relevant direction for future work.
As an initial step in this direction, we have conducted preliminary experiments on tensor completion under non-i.i.d. settings, particularly involving mixed tube-wise and element-wise missing patterns (https://anonymous.4open.science/r/LoRT-113D/NIIDtable.png). We will provide a detailed discussion in the final version.
> **Concern 3**: In the definition of t-SVD in Definition 2.1, shouldn't the size of $C$ be $d_1 \times d_4$?
Thank you for catching this typo. We confirm the correct dimension is $d_1 \times d_4$ and will make the correction in the final version. | Summary: In this paper, the authors propose a novel method called Low-Rank Tensor Transitions (LoRT), designed to address issues such as model shift, covariate shift, and decentralized data management in tensor regression for transfer learning. Experimental results demonstrate that LoRT significantly outperforms traditional methods like TNN and k-sup in terms of average correlation error.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes. The experimental design is not very reasonable.
1. The experimental scenarios are somewhat limited, lacking diverse validation. To more comprehensively evaluate the effectiveness of the LoRT method, it is recommended to conduct extensive tensor regression tests on general datasets in the field of computer vision, thereby verifying its universality and robustness across different tasks.
2. The baseline methods compared in the paper were all published before 2021, which somewhat limits the timeliness and persuasiveness of the experimental results. To more convincingly demonstrate the superiority of the LoRT method, it is advisable to compare it with state-of-the-art methods proposed in recent years, thereby more accurately reflecting its competitiveness in the current research landscape.
Supplementary Material: Yes. All.
Relation To Broader Scientific Literature: None
Essential References Not Discussed: None.
Other Strengths And Weaknesses: In this paper, the authors propose a novel method called Low-Rank Tensor Transitions (LoRT), designed to address issues such as model shift, covariate shift, and decentralized data management in tensor regression for transfer learning. Experimental results demonstrate that LoRT significantly outperforms traditional methods like TNN and k-sup in terms of average correlation error. Although the theoretical proofs in the paper are relatively comprehensive, several issues warrant further exploration:
1. Although the LoRT method exhibits strong performance in experiments, its core idea lacks significant novelty. The application of tensor regression and transfer learning has already been well-established in prior research, and LoRT appears to be more of an improvement on existing techniques rather than a groundbreaking innovation.
2. The experimental scenarios are somewhat limited, lacking diverse validation. To more comprehensively evaluate the effectiveness of the LoRT method, it is recommended to conduct extensive tensor regression tests on general datasets in the field of computer vision, thereby verifying its universality and robustness across different tasks.
3. The baseline methods compared in the paper were all published before 2021, which somewhat limits the timeliness and persuasiveness of the experimental results. To more convincingly demonstrate the superiority of the LoRT method, it is advisable to compare it with state-of-the-art methods proposed in recent years, thereby more accurately reflecting its competitiveness in the current research landscape.
Other Comments Or Suggestions: None.
Questions For Authors: 1. Although the LoRT method exhibits strong performance in experiments, its core idea lacks significant novelty. The application of tensor regression and transfer learning has already been well-established in prior research, and LoRT appears to be more of an improvement on existing techniques rather than a groundbreaking innovation.
2. The experimental scenarios are somewhat limited, lacking diverse validation. To more comprehensively evaluate the effectiveness of the LoRT method, it is recommended to conduct extensive tensor regression tests on general datasets in the field of computer vision, thereby verifying its universality and robustness across different tasks.
3. The baseline methods compared in the paper were all published before 2021, which somewhat limits the timeliness and persuasiveness of the experimental results. To more convincingly demonstrate the superiority of the LoRT method, it is advisable to compare it with state-of-the-art methods proposed in recent years, thereby more accurately reflecting its competitiveness in the current research landscape.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and recognition of our contributions to robust tensor regression under distribution shift. We address the reviewer’s key concerns below.
> **Novelty:** The core idea of LoRT lacks significant novelty and seems to be an incremental improvement.
We respectfully clarify that while LoRT builds on prior transfer learning principles, it tackles a **substantially more challenging and underexplored setting**—that of *tensor-valued regression under shift*. Unlike prior works which focus on vector-valued regression (e.g., TransFusion) or assume shared covariates across tasks, **LoRT generalizes fusion-based transfer learning to the tensor domain**, where:
- The response and predictors are both high-order tensors.
- Low-rank priors must be imposed across multiple tensor modes.
- Task heterogeneity must be handled in decentralized, distribution-shifted settings.
To address these challenges, we propose a **new fusion regularizer based on the tubal nuclear norm**, a **two-step estimation procedure** specifically designed for tensors, and **estimation error bounds** derived under high-dimensional tensor regression settings.
These innovations are **not straightforward extensions of previous methods**, and to the best of our knowledge, LoRT is the **first theoretical framework** to jointly address model/covariate shift and low-rank tensor regression under multi-task and decentralized settings.
> **Experimental diversity:** Experimental diversity is limited. More general datasets from computer vision should be used to test robustness.
We acknowledge the value of broader empirical validation and note that our experiments are carefully designed to support the paper’s theoretical goals. Specifically, we focus on compressed sensing and completion tasks, which provide a controlled setting for directly assessing parameter recovery and validating the theoretical results presented in Sections 4.2 and 4.3. These tasks are intentionally selected to align with our analytical objectives, and the results already provide sufficient evidence supporting our claims.
To address the reviewer’s suggestion, we have conducted preliminary experiments on two video clips (*Apply Eye Make-up* and *Blowing Candles*) from the UCF-101 dataset, which is a standard dataset for evaluating tensor methods in computer vision [R1]. These experiments follow the setup described in Appendix A.2, and preliminary results are available at (https://anonymous.4open.science/r/LoRT-113D/CVtable.png).
[R1] Wang J, Zhao X. Functional Transform-Based Low-Rank Tensor Factorization for Multi-dimensional Data Recovery. ECCV 2024.
> **Baseline comparisons:** Baseline comparisons only include methods published before 2021. Newer methods should be considered.
We thank the reviewer for the suggestion. To the best of our knowledge, this paper is the first to consider the proposed transferable tensor learning setting, and no existing methods are directly comparable. As explained in Footnote 6, our goal is to evaluate the benefit of transfer for tensor completion with very limited observations. In such extremely sparse regimes, all tensor formats (e.g., t-SVD, TT, TR, Tucker) are fundamentally limited by data availability, and their performance differences become marginal. Therefore, we selected TNN (Lu et al., 2019b) as a representative baseline to isolate the effect of transfer, rather than focus on fine-grained differences between tensor models.
To address the reviewer’s suggestion, we have also conducted comparisons with several recently proposed tensor completion methods [R2]–[R4]. Preliminary results are available at (https://anonymous.4open.science/r/LoRT-113D/CVtable.png), and we will clarify this point in the final version.
[R2] Qiu Y, et al. Balanced unfolding induced tensor nuclear norms for high-order tensor completion. IEEE TNNLS, 2024.
[R3] Wang A, et al. Noisy tensor completion via orientation invariant tubal nuclear norm. PJO, 2023.
[R4] Tan Z, et al. Non-convex approaches for low-rank tensor completion under tubal sampling. IEEE ICASSP, 2023. | Summary: This paper on Low-Rank Tensor Transitions proposes a new tensor regression framework focusing on transferable tensor learning and decentralized data management.
This work aims to address three challenges: limited sample sizes, model shift, and covariate shift.
To tackle these challenges, the paper proposed a novel fusion regularizer and a distributed variant for tensor regression. The idea here is that individual nodes compute their estimators, which are sent to a target node and aggregated by solving a minimization problem.
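A toy sketch of this communication pattern (a vector-regression stand-in for the tensor setting; the ridge estimator and the simple weighted-average fusion below are my simplifications, not the paper's actual minimization problem):

```python
import numpy as np

def local_estimate(X, y, lam=0.1):
    """Estimator computed privately at one source node (toy ridge
    regression standing in for the node-local tensor regression)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def aggregate(estimates, weights):
    """Target node fuses the transmitted parameters; a weighted average
    is used here as a simplified proxy for the fusion-regularized
    minimization described in the paper."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * est for wi, est in zip(w, estimates))

rng = np.random.default_rng(0)
beta = rng.normal(size=5)            # shared ground-truth parameter
estimates = []
for _ in range(3):                   # three source nodes
    X = rng.normal(size=(50, 5))
    y = X @ beta + 0.01 * rng.normal(size=50)
    estimates.append(local_estimate(X, y))  # only parameters leave a node
fused = aggregate(estimates, weights=[50, 50, 50])
```

Only the 5-dimensional parameter vectors travel between nodes, never the raw $(50, 5)$ data matrices, which is exactly the communication (and privacy) advantage of the decentralized variant.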
The proposed framework comes with theoretical guarantees.
Claims And Evidence: 1. The main focus of the paper is theoretical, with theoretical proofs.
2. I have not found evidence in the supplementary code that the authors actually solved a distributed problem using real hardware nodes.
3. Generalization with limited data - The paper supports this claim with experimental evidence from synthetic and real-world data.
Performance under distribution shifts and covariate shifts: This claim is further experimentally supported by an evaluation of synthetic data.
4. “The fusion regulariser enables effective knowledge transfer while accounting for model shifts”: Theoretically, this formulation would allow this, but there is a lack of experimental ablation on the effect of individual terms in the regulariser.
Methods And Evaluation Criteria: 1. The paper presents Gaussian synthetic experiments and a real-world tensor completion experiment using the YUV RGB video dataset.
On the video dataset, the experimental section measures the reconstruction performance of a video tensor given a limited number of input frames.
- Using real data, the authors observe improved video reconstruction performance of both their local and distributed variants compared to a t-SVD-based approach by Lu et al.
Theoretical Claims: The supplementary material is extensive. I superficially looked at the proofs, and they seemed okay to me.
Experimental Designs Or Analyses: 1. The theoretical section claims that $\|W\|_*$ induces a low-rank structure. Is there any experimental evidence to back up this claim?
2. The proposed method uses tensor singular value decomposition. Other tensor decompositions, like CP, Tucker, and Tensor-Train, remain underexplored. Line 176 claims they can be easily replaced, but no such experiment is provided in the paper.
3. Using average relative error seems a logical approach to evaluate the synthetic data results.
4. In real-world data, the target task is to reconstruct a video frame from a very limited number of previous frames. The evaluation is limited to PSNR. Since this is one of the first works in this domain, it may be wise to incorporate other widely used evaluation metrics like SSIM and LPIPS to strengthen this baseline for future work.
Supplementary Material: I looked at the proofs and the Matlab code in the zip file.
The MATLAB source code is partially documented.
Most files have docstrings. The `f_tsvd_f.m` file, for example, does not.
The authors remembered to set the seed for their pseudorandom number generator in their experiments. I expect this work to be reproducible.
Relation To Broader Scientific Literature: The paper relies on the t-SVD from Kernfeld et al. While reading the paper, I was uncertain how, or whether, the HOSVD from De Lathauwer et al. ( https://epubs.siam.org/doi/10.1137/S0895479896305696 ) is related.
How does the presented approach relate to Tensor Regression Networks ( https://www.jmlr.org/papers/volume21/18-503/18-503.pdf )?
Essential References Not Discussed: Most people rely on the extensive review by Kolda and Bader for the notation. It's nice that this paper also took inspiration from it, which makes the paper accessible. Overall, to the best of my knowledge, the related work is sufficiently discussed.
Other Strengths And Weaknesses: The authors present an interesting and novel theoretical approach to distributed tensor regression. This work is important, especially given the privacy concerns the machine learning community must respect in many countries.
The idea of transmitting regression parameters instead of data in the decentralized setting is interesting. It is intuitive that parameters have lower memory requirements compared to data. An increase in the model/tensor size or number of source nodes should allow this approach to scale well.
Other Comments Or Suggestions: Clearly, lots of hard work went into this well-written paper.
Questions For Authors: - What potential additional real data sets would this framework allow us to work with?
- Would it be possible to study the UK Biobank MRI data and compare this paper's approach to Kossaifi et al. ( https://www.jmlr.org/papers/volume21/18-503/18-503.pdf )?
- What is the overall computational complexity - Similar to tensor-based LoRA [1], does the proposed method suffer from the curse of dimensionality?
- Assume the source and target data are highly different, potentially even from different distributions, is there a convenient way to detect such cases in the distributed setting?
- Both LoRT and D-LoRT use weighted averaging of the source and target tasks. Is the resulting W in equation 7 stable? Can we measure the norms for individual $W^k$?
- Line 340 states, “sample sizes are sufficiently large” for D-LoRT. Does this mean that D-LoRT generally requires more training data than LoRT?
[1] Bershatsky, Daniel, et al. "LoTR: Low tensor rank weight adaptation." arXiv preprint arXiv:2402.01376 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and recognition of our theoretical framework, analysis, algorithmic design, and empirical results. Below we address the specific concerns.
> Hardware implementation for distributed setting
Current implementation focuses on validating the theory, but D-LoRT is parallelizable: source gradients are computed independently and aggregated via proximal updates. It is compatible with distributed frameworks like MPI. We will release a version simulating distributed computation in the final submission.
> Ablation study on the fusion regularizer
While we do not explicitly ablate individual terms in the regularizer, their effects are indirectly reflected through varying the number of nodes (Tables 1–3, Fig. 4). These experiments serve as empirical ablations. We will clarify this in the final version.
> Empirical evidence for low-rank structure
The low-rank-inducing property of TNN is well established (e.g., Lu et al., 2019b). Our visualizations (https://anonymous.4open.science/r/LoRT-113D/LowRankFig.png) show rapid spectral decay in recovered tensors, confirming low-rankness.
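For intuition, the kind of spectral decay shown in the linked figure can be reproduced on a toy example (a rank-$r$ matrix standing in for a slice of the recovered tensor; the construction below is illustrative only, not taken from our experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
r, d = 3, 20
# Build an exactly rank-r matrix as a product of thin Gaussian factors.
A = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))
s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
# The spectrum drops abruptly after the r-th value: the leading r
# singular values are substantial, while the rest sit at numerical
# zero -- the signature of a genuinely low-rank object.
```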
> Clarification on Line 176 for lack of CP/Tucker/TT experiments
We respectfully clarify that our paper does not claim these decompositions can be "easily" replaced, nor does it mention CP. Line 176 states that "TNN can also be replaced with other norms, such as Tucker-based (Liu et al., 2013), Tensor Train (Imaizumi et al., 2017), or Tensor Ring norms," which aims to highlight the modularity of LoRT. However, we acknowledge that the word "replace" may suggest direct applicability. “Extend to” would be more appropriate.
This paper focuses on validating LoRT under t-SVD, chosen for its computational and theoretical advantages. CP is excluded due to the NP-hardness of its nuclear norm.
While LoRT can in principle be extended to Tucker or TT, such adaptations require nontrivial redesign and lie beyond the scope of this theory-oriented work.
> Evaluation limited to PSNR
We use both PSNR and RE, which are standard metrics sufficient to support our core claims. That said, we agree that SSIM and LPIPS could offer complementary insights and will include them in the final version.
> Code documentation and reproducibility
We will add more docstrings and ensure consistent random seeds for reproducibility.
> Relation to HOSVD
Our method builds on t-SVD, which is structurally distinct but complementary to HOSVD. While HOSVD is not the focus here, our ideas can be extended to it.
> Relation and comparison to TRN
TRN focuses on end-to-end deep learning with tensor layers for single-task prediction, aiming at architectural efficiency. In contrast, our work studies transfer learning across heterogeneous tasks with theoretical guarantees. Due to these fundamentally different goals and settings, a direct comparison is not appropriate. We will cite TRN and discuss its relevance in the final version.
> Potential datasets and UK Biobank
This work focuses on theoretical development, and the current experiments—based on both synthetic and structured real-world data—are designed to support our theory. We believe this setting is appropriate for the scope and goals of the paper.
From a theoretical perspective, LoRT applies to a variety of structured tensor data, such as hyperspectral images, and multi-subject neuroimaging. This includes large-scale datasets like UK Biobank, where transfer across tasks or sites may be beneficial. That said, applying LoRT to such domains requires domain-specific modeling and infrastructure, which are beyond the scope of this work.
> Computational complexity and scalability
LoRT is designed with a focus on theoretical guarantees rather than computational efficiency. As noted in Line 428, it may face scalability challenges in high-dimensional settings, similar to tensor-based LoRA. Such challenges are common in tensor learning and arise from the inherent complexity of modeling high-dimensional structure. Addressing them is beyond the scope of this theory-oriented work, and we will clarify this in the final version.
> Detection of highly heterogeneous source tasks
Our framework assumes moderate heterogeneity, a standard setting in transfer learning. In practice, detecting large distribution gaps is important. Common approaches include computing divergence scores (e.g., MMD), domain classification, or local statistics. We will briefly discuss these options and their integration with LoRT (e.g., adaptive weighting) in the final version.
> Stability of $W$ and norms of $W_k$
Thm 4.2 guarantees estimator stability, and Thm 4.3 refines the estimate via target-specific adaptation. While we do not analyze individual norms, this is an interesting extension which we will comment on in the revision.
> Does D-LoRT require more data than LoRT?
Yes. Since D-LoRT transmits estimated parameters (not data), local models must be more accurate, requiring more samples per source task.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. I looked at the other reviews. Reviewer DU3m writes, ''LoRT appears to be more of an improvement on existing techniques rather than a groundbreaking innovation.'' I still think we can accept this paper. It's solid work, and it advances the field. I believe there is an audience for it at ICML. I continue to recommend accepting this work in the proceedings.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your constructive feedback and continued support of our work. We are particularly grateful for your thoughtful recognition of the theoretical contributions, decentralized modeling design, and the potential impact of the proposed LoRT framework. Your suggestions, such as highlighting the importance of ablation analysis, connections to alternative tensor decompositions, and real-world applicability, have been especially insightful and deeply appreciated.
We are also pleased to share that several concerns raised by other reviewers have been further addressed through additional theoretical clarification and extended experiments. For example, to partially address Reviewer jygk’s Concern 1, we conducted preliminary experiments on two additional datasets, following Reviewer DU3m’s suggestion to explore CV datasets. The results have been appended to the anonymous link provided in our response to Concern 2.
Once again, thank you for your engagement throughout the review process. Your feedback has been instrumental in shaping this work, and we believe the current work now presents a solid and coherent contribution to the field. | null | null | null | null | null | null |
Craftium: Bridging Flexibility and Efficiency for Rich 3D Single- and Multi-Agent Environments | Accept (poster) | Summary: The paper introduces Craftium, a new platform for creating rich 3D environments that balance flexibility and computational efficiency. This design allows researchers to develop complex environments with minimal code. Notably, Craftium supports both single-agent and multi-agent scenarios in the same rich 3D worlds, which addresses a gap in existing platforms that often lack multi-agent support or open-world scale. Compared to 3D game-based platforms like VizDoom or MineDojo, Craftium is fully customizable and open-source – researchers can create new worlds, objects, and game mechanics using Lua. In contrast, many established environments allow only limited parameter tweaks or rely on closed-source games. The platform is released as open source with extensive documentation and community asset support, distinguishing it as a practical and extensible tool for researchers.
**Strengths**:
- Novelty and significance of the Craftium platform. It addresses a clear need in the RL/AI research community for environments that are both rich and scalable.
- Empirical performance advantage: the paper provides benchmarking evidence that Craftium can simulate on the order of thousands more steps per second than alternatives, a crucial improvement for running long or multi-agent experiments.
- Craftium’s design emphasizes ease of use and integration – by adopting the Gymnasium/PettingZoo interfaces and providing documentation, it can plug into existing workflows with minimal friction.
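For concreteness, the Gymnasium interface that this point refers to follows a standard reset/step contract. The sketch below illustrates that contract with a stdlib-only toy environment (a stand-in, not Craftium's actual classes, whose API is not reproduced in the review):

```python
import random

class ToyEnv:
    """Stand-in environment following the Gymnasium-style step/reset contract."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self, seed=None):
        random.seed(seed)
        self.t = 0
        observation, info = 0.0, {}
        return observation, info

    def step(self, action):
        self.t += 1
        observation = random.random()
        reward = 1.0 if action == 1 else 0.0
        terminated = False                   # task-defined end condition
        truncated = self.t >= self.horizon   # time limit reached
        return observation, reward, terminated, truncated, {}

env = ToyEnv()
obs, info = env.reset(seed=0)
total, done = 0.0, False
while not done:
    action = 1  # a fixed policy, for illustration only
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward
    done = terminated or truncated
print(total)  # 10.0
```

Because the contract is the same, any agent code written against this loop plugs into a Craftium environment with minimal friction, which is the integration benefit the review highlights.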
**Weaknesses**:
- Craftium’s environments are voxel-based, which means the world is composed of blocky units
- The paper does not explicitly address how well Craftium could handle detailed physics or continuous control dynamics beyond grid-aligned interactions.
- Craftium is highly flexible; it does not come with a standardized benchmark suite of tasks; it is a toolkit.
- However, these weaknesses are not fundamental flaws but rather inherent trade-offs or scope limitations.
Claims And Evidence: - They measured the environment simulation speed (steps per second) for Craftium versus two popular platforms: MineDojo (a Minecraft-based environment suite) and VizDoom (based on the Doom engine). The test used three different environments per platform and averaged results over 5 runs each, on a machine with a single NVIDIA A5000 GPU and an Intel Xeon Silver CPU. The outcome (presented as Figure 7 in the paper) showed Craftium achieving on the order of 2000–2500 steps per second, which is competitive with VizDoom (noting that VizDoom isn’t fully 3D) and vastly higher than MineDojo.
- The results, summarized in Figure 8, show different learning outcomes depending on task difficulty. For instance, in a “ChopTree” task (rewarding the agent for chopping trees), PPO was able to learn a successful policy, reaching an average of over 6 trees chopped per episode at convergence.
- The paper does not, however, delve into detailed multi-agent metrics.
Methods And Evaluation Criteria: The methods are well-designed for demonstrating the platform’s strengths, and the evaluation criteria align with the key claims of the paper. However, the absence of standardized benchmark environments and limited large-scale multi-agent experiments could be areas for improvement in future research.
Theoretical Claims: There are no Theoretical Claims in the paper that need to be checked.
Experimental Designs Or Analyses: They measured the environment simulation speed (steps per second) for Craftium versus two popular platforms: MineDojo (a Minecraft-based environment suite) and VizDoom (based on the Doom engine). The test used three different environments per platform and averaged results over 5 runs each, on a machine with a single NVIDIA A5000 GPU and an Intel Xeon Silver CPU. I think there should be an experiment that shows the parallelized training environment. e.g., Several environments start at the same time.
Supplementary Material: The supplementary material is the code base of the Craftium environment. It shows that the authors are ready to open-source the codebase and that the codebase is runnable.
Relation To Broader Scientific Literature: Craftium extends the work of these environments by combining customizability, computational efficiency, and open-ended 3D world design, making it suitable for both traditional RL training and more exploratory AI research.
Craftium builds on MARL research by providing a fast, open-ended alternative to MineDojo and PettingZoo-compatible 3D environments, making it an attractive platform for large-scale MARL experiments.
Craftium expands open-ended RL research by providing scalable, procedurally generated 3D worlds where lifelong learning and AI-driven exploration can be tested in ways that go beyond static datasets or limited pre-defined tasks.
Craftium positions itself as an alternative to existing game-based RL research tools, benefiting from its lighter computational footprint and easier modding support.
Essential References Not Discussed: - ProcTHOR: Large-Scale Embodied AI Using Procedural Generation
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation
- ManiSkill: Generalizable Manipulation Skill Benchmark with Large-Scale Demonstrations
Although the environments presented by the authors differ from conventional robotic simulators, I believe it may still be necessary to mention and discuss similar RL environments in the paper.
Other Strengths And Weaknesses: No other strengths and weaknesses.
Other Comments Or Suggestions: No other comments or suggestions.
Questions For Authors: 1. In a parallel environment setting, does Craftium still maintain its high-speed performance? For example, when training RL models, can we efficiently run training across 512 parallel environments simultaneously?
2. Is there any plan to enhance Craftium’s rendering capabilities by integrating a more realistic renderer, such as the Minecraft Raytracing Renderer (Sonic Ether’s Unbelievable Shaders)?
3. Are the authors interested in leveraging large models for the automatic generation of simulation-based environments? Would it be possible to introduce a framework for automated environment generation in future updates?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's comments on Craftium's novelty and its role in addressing a clear need in the RL/AI research community. We also thank the reviewer for the valuable feedback and references. Below, we address the concern regarding the parallel environment benchmark and respond to the questions.
**[Environment parallelization]**
We completely agree with the reviewer on the importance of evaluating Craftium's performance with parallel environments. Accordingly, we conducted a new performance benchmark following the reviewer's suggestion. As shown in the [plot](https://bit.ly/43YfHPO) (using the setup from Section 3.4), Craftium significantly benefits from environment parallelization, achieving competitive results compared to VizDoom (despite VizDoom being considerably simpler and not 3D) and reaching over 12K steps per second at the high end. Finally, MineDojo lacks support for parallel environments, preventing us from obtaining its performance metrics despite considerable effort. This limitation makes MineDojo impractical for many research scenarios (e.g., large-scale experimentation) and for algorithms that greatly benefit from parallel environments (e.g., PPO).
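The throughput metric reported above (environment steps per second, aggregated across parallel environments) can be measured with a simple lockstep driver. The sketch below is a stdlib-only illustration using a trivial stand-in environment; Craftium, VizDoom, or any Gymnasium-style environment would take the place of `ToyEnv`:

```python
import time

class ToyEnv:
    """Trivial stand-in environment; a real simulator would replace this."""
    def reset(self):
        return 0

    def step(self, action):
        return 0, 0.0, False, False, {}

def steps_per_second(envs, n_steps=10_000):
    """Drive all environments in lockstep and report aggregate env steps/s."""
    for env in envs:
        env.reset()
    start = time.perf_counter()
    for _ in range(n_steps):
        for env in envs:
            env.step(0)
    elapsed = time.perf_counter() - start
    return (n_steps * len(envs)) / elapsed

rate = steps_per_second([ToyEnv() for _ in range(8)])
print(f"{rate:.0f} env steps/s")
```

With a real simulator, sweeping the number of environments in the list reproduces the kind of parallelization curve linked in the plot above.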
**[References]**
We appreciate the reviewer for suggesting three relevant and up-to-date works that enrich the literature on simulation frameworks for robotics. We will incorporate and discuss these references alongside other photorealistic simulators in the final version of the paper.
**[Questions]**
**Q1:** Please refer to the *"Environment parallelization"* section above.
**Q2:** Currently, we focus on other aspects of Craftium's development, but we recognize the potential benefits of ray tracing (e.g., enhanced realism). However, Craftium directly benefits from Luanti updates, which may introduce ray tracing in the future. Luanti has an active community that continuously enhances the engine. For instance, a recent [update](https://blog.luanti.org/2024/11/10/5.10.0-released/) implemented rendering improvements. In the meantime, a more realistic appearance can be achieved using high-resolution texture packs like [Realistic 1024px](https://content.luanti.org/packages/Clemstriangular/phototextures_1024px/).
**Q3:** Yes, integrating Craftium with LLMs for automatic environment generation is an exciting research direction we plan to explore in the near future. Craftium's simple and expressive Lua interface reduces boilerplate code and helps improve the reliability of LLM-generated code. Moreover, Craftium and its engine are extensively documented, which could further facilitate integration by providing LLMs with structured, high-quality reference documentation. Additionally, recent approaches like OMNI-EPIC [1] could greatly benefit from Craftium's richness, flexibility, and efficiency for generating more complex and diverse scenarios.
[1] Faldor et al. "OMNI-EPIC: Open-endedness via Models of Human Notions of Interestingness with Environments Programmed in Code." ICLR 2025. | Summary: The paper introduces Craftium, a flexible, efficient, and user-friendly framework for creating rich 3D environments for single- and multi-agent research in autonomous systems, such as reinforcement learning (RL), embodied AI, and multi-agent reinforcement learning (MARL). It addresses the trade-off between computational efficiency and environment richness, which is a common limitation of current platforms.
Craftium is built upon the open-source Luanti game engine, allowing users to easily design customizable 3D voxel environments through an accessible Lua Modding API, rather than relying on complex Domain-Specific Languages (DSLs) or restrictive game-based platforms. It supports essential research features like procedural environment generation, customizable reward structures, adjustable observation/action spaces, and single- and multi-agent setups. Craftium is compatible with widely-used standards in the RL community, including Gymnasium and PettingZoo interfaces.
Claims And Evidence: The authors provide substantial evidence through examples, benchmarks, and detailed descriptions.
- Craftium significantly reduces computational cost compared to existing similarly rich alternatives (e.g., MineDojo and VizDoom). A clear benchmark comparison is provided (Fig. 7), showing quantitative results across multiple environment setups. The authors measure and report steps per second, clearly demonstrating the computational superiority of Craftium over MineDojo and competitive performance with VizDoom.
- Craftium enables easy creation of diverse, customizable environments. Multiple illustrative examples provided in the paper demonstrate diverse scenarios created with minimal Lua scripting. Code snippets and mod examples explicitly illustrate ease of use (e.g., Figs. 4, 5, and examples in the appendix).
- Craftium’s architecture offers flexibility via the Luanti Modding API and is enhanced by an active community. The authors present clear examples demonstrating mod customization (e.g., Figs. 21, 22 in the appendix), and explicitly mention community-contributed assets and games (e.g., VoxeLibre).
Methods And Evaluation Criteria: The paper introduces Craftium as a flexible, efficient framework designed specifically to address the limitations found in existing environments for reinforcement learning (RL), multi-agent RL (MARL), and embodied AI research.
The choice of using Luanti, an open-source voxel-based game engine with a robust Lua Modding API, aligns effectively with the stated need for customizable and computationally efficient environments. This decision directly addresses the identified limitations of closed-source and computationally expensive platforms like Minecraft or overly simplistic 2D grids.
Lua Modding API and Python Interfaces (Gymnasium/PettingZoo):
The combination of the Lua-based modding approach with the widely adopted Gymnasium and PettingZoo interfaces is logical and pragmatic. It ensures broad usability and integration with existing reinforcement learning tools and workflows.
Single- and Multi-Agent Setup:
The decision to explicitly support both single-agent and multi-agent settings is clearly justified by the growing research interest in MARL and complex agent interactions, which are currently underserved by existing frameworks.
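The multi-agent side of this design follows the PettingZoo-style parallel interface, in which all agents act simultaneously via per-agent dictionaries. The following stdlib-only sketch mocks that interaction pattern with a toy class (a stand-in for illustration, not Craftium's real environment):

```python
class ToyParallelEnv:
    """Stand-in for a PettingZoo-style parallel environment (not Craftium's real class)."""
    def __init__(self, n_agents=2, horizon=5):
        self.possible_agents = [f"agent_{i}" for i in range(n_agents)]
        self.horizon = horizon

    def reset(self):
        self.agents = list(self.possible_agents)
        self.t = 0
        observations = {a: 0.0 for a in self.agents}
        infos = {a: {} for a in self.agents}
        return observations, infos

    def step(self, actions):
        self.t += 1
        done = self.t >= self.horizon
        observations = {a: float(self.t) for a in self.agents}
        rewards = {a: 1.0 for a in self.agents}
        terminations = {a: False for a in self.agents}
        truncations = {a: done for a in self.agents}
        infos = {a: {} for a in self.agents}
        if done:
            self.agents = []  # episode over for everyone
        return observations, rewards, terminations, truncations, infos

env = ToyParallelEnv()
obs, infos = env.reset()
returns = {a: 0.0 for a in env.possible_agents}
while env.agents:
    actions = {a: 0 for a in env.agents}  # one action per live agent
    obs, rewards, terms, truncs, infos = env.step(actions)
    for a, r in rewards.items():
        returns[a] += r
print(returns)  # each agent accumulates 5.0
```

Keeping both the single-agent (Gymnasium) and multi-agent (PettingZoo) loops standard is what lets existing MARL codebases target Craftium without interface changes.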
Procedural Generation and Customizability:
Including procedural generation capabilities and extensive modding options addresses the critical needs of research in fields such as continual reinforcement learning (CRL), unsupervised environment design (UED), and embodied agents. The proposed API is coherent, easy to understand, and relevant for the intended use-cases.
Theoretical Claims: The paper does not contain any theoretical claims or formal mathematical proofs.
Its contributions and claims are practical, conceptual, and empirical in nature, such as introducing a computationally efficient framework, providing a versatile API, and demonstrating applicability through a variety of illustrative experiments and benchmarks.
Experimental Designs Or Analyses: I carefully checked the experimental designs and analyses provided in the paper. Overall, the experiments and analyses are sound, valid, and well-designed to demonstrate the capabilities and utility of the proposed Craftium framework. The paper explicitly states that hyperparameters were not tuned (e.g., Appendix F.1). While this is understandable given the illustrative nature, a minimal ablation or hyperparameter sensitivity analysis would further strengthen experimental conclusions, particularly in demonstrating Craftium’s suitability for rigorous algorithm development.
Supplementary Material: n/a
Relation To Broader Scientific Literature: Craftium bridges these two extremes by providing efficient, highly customizable environments using the open-source Luanti engine, combining the computational efficiency often found in simpler platforms (such as MiniGrid by Chevalier-Boisvert et al., 2023, or Craftax by Matthews et al., 2024) with the flexibility and richness typically exclusive to fully-featured environments (like MineDojo).
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: see above
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are pleased that the reviewer acknowledges Craftium's role in bridging the gap between computationally efficient yet simple environments and those that are rich and flexible but computationally costly, as well as its logical, pragmatic, and well-justified design.
We agree that an ablation study or hyperparameter sensitivity analysis would further strengthen our experimental results and demonstrate Craftium's suitability for rigorous algorithm development. To this end, we identified the LLaVa-Agent prompt as the most arbitrary hyperparameter in our experiments (detailed in Appendix F.3), as the remaining hyperparameters follow standard values known to work correctly (please refer to this [ICLR 2022 blog](https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/)).
Accordingly, we have created four distinct prompts: **(1)** direct and imperative tone, **(2)** emphasis on the primary task, **(3)** listing possible actions, and **(4)** structured decision-making. Due to response length limitations, we provide a [table](https://drive.google.com/file/d/1LS-by0HVsH17aBO7cMDOzQik3AfSXcrh/view?usp=sharing) detailing the prompts used.
To evaluate their impact on agent performance, we conducted five 1-hour runs for each prompt. The results, shown in the [plot](https://drive.google.com/file/d/1KMep8_1is82-Jfd7ebkLzf0eo0A1z-Ov/view?usp=sharing), present the average final episodic return for each option. As observed, the first prompt yields the best performance, followed by the third, suggesting that prompts with a more direct and imperative tone are most effective in guiding the LLaVa-based agent toward the environment's goal.
We appreciate the reviewer's suggestion, as we believe that this experiment further highlights Craftium's suitability for controlled and insightful analyses. We are open to any additional questions or requests for further clarification. | Summary: The paper presents Craftium, a highly customizable and efficient multi-agent 3D environment tailored to RL research. Craftium builds on the open-source game engine environment Luanti. It can be used to easily build rich 3D environments with numerous possibilities. Benchmarks against other environments show the superiority of Craftium in terms of average steps per second, which allows significant scaling of training and testing RL agents. The paper includes several examples of environments that were built with Craftium. Code examples are included to show its simplicity, and results of RL agents trained on the environments to demonstrate its usability.
Claims And Evidence: The paper is very well written. The claims are supported with examples of environments and code snippets.
Methods And Evaluation Criteria: Methods and evaluation makes sense.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: I don't see any issue with the experimental design.
Supplementary Material: The paper is self-contained so I did not see a need to review the supplementary material.
Relation To Broader Scientific Literature: The space of video games / game engine environments for RL research is pretty crowded. Still, I think that this work makes an important contribution by providing a general-purpose easy to use and highly efficient environment for RL researchers.
Essential References Not Discussed: The related work section covers most relevant work. In the high-end photo/physics-realistic space I would mention ThreeDWorld (https://www.threedworld.org) in addition to Habitat and AI2-Thor, as it supports both outdoor and indoor environments.
Other Strengths And Weaknesses: Nothing to add.
Other Comments Or Suggestions: Nothing to add.
Questions For Authors: A general question which could be worthwhile touching on in the paper is how much gain do we get for RL research using 3D environments such as Craftium over 2.5D such as VizDoom?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are pleased that the reviewer recognizes Craftium as an important contribution and finds the paper well-written with well-supported claims.
We also appreciate the reviewer for suggesting a valuable reference that strengthens our coverage of realistic simulators. We will include ThreeDWorld in the final version of the paper.
Regarding the reviewer's question, the 2.5D nature of VizDoom (due that is based on the Doom game) limits its environments in multiple ways compared to fully 3D frameworks such as Craftium. Some of the most critical limitations are the following:
- The agent's viewpoint is restricted to a horizontal plane, preventing it from truly looking up or down.
- Level height (floor and ceiling) is stored in a 2D matrix, making it impossible to create overlapping structures like bridges, floors, or buildings.
- Enemies and objects are 2D sprites that change in size and angle based on the agent's position.
These limitations make VizDoom environments significantly different from the more realistic and diverse 3D scenarios found in Craftium, failing to cover fundamental challenges for autonomous agents that are of interest to current research, e.g., spatial 3D reasoning [1] and complex agent-environment interactions [2].
Furthermore, 2.5D environments greatly limit the diversity of tasks and scenarios, which is particularly relevant for areas like continual reinforcement learning [3], unsupervised environment design [4], and meta-learning [5], all of which are of growing interest to the research community [6].
Craftium is especially relevant to these fields, as it enables diverse, 3D, vast open-world environments that are computationally efficient and support multi-agent scenarios, opening the door to exciting future research directions.
Finally, we agree with the reviewer on the importance of this topic and will include a more in-depth discussion in the final version of the paper.
[1] Chen et al. "SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities." CVPR 2024.
[2] Wang et al. "Voyager: An Open-Ended Embodied Agent with Large Language Models." TMLR 2024.
[3] Abel et al. "A Definition of Continual Reinforcement Learning." NeurIPS 2023.
[4] Beukman et al. "Refining Minimax Regret for Unsupervised Environment Design." ICML 2024.
[5] Bauer et al. "Human-Timescale Adaptation in an Open-Ended Task Space." ICML 2023.
[6] Hughes et al. "Open-endedness is Essential for Artificial Superhuman Intelligence." ICML 2024. | Summary: The paper introduces Craftium, a new platform for creating 3D environments aimed at reinforcement learning and multi-agent research.
Claims And Evidence: Claim 1: High Flexibility and Rich Features. The authors assert that Craftium allows “nearly limitless” possibilities for creating custom environments, in contrast to many existing platforms that only offer fixed games or limited parametric tweaks.
Claim 2: First to Support Vast 3D Open-World + Multi-Agent. The paper highlights that Craftium is “the first framework” enabling huge 3D open-worlds with multi-agent support
Claim 3: Major Efficiency Improvement (Speed). A core claim is that Craftium is significantly more computationally efficient than comparable rich 3D environments.
Claim 4: Ease of Integration and Use. The authors claim Craftium is user-friendly: it supports Gymnasium/PettingZoo APIs out-of-the-box and is opensourced
The most problematic claims are those around “unlimited flexibility” and being uniquely first in certain capabilities – these are somewhat overstated without direct comparative experiments.
Methods And Evaluation Criteria: The paper is largely an engineering contribution – the “methods” consist of describing the Craftium framework’s design and implementation. The approach taken (using a modified game engine with scripting for environment definition) is sensible for the stated problem.
The performance benchmarking focuses only on throughput (steps/sec). While this is crucial, the evaluation could be more comprehensive. For example, the paper does not report memory usage, scalability with number of agents, or how performance is affected by increasing observation resolution or world size.
The choice of comparison frameworks in the benchmark is limited. The authors compare to MineDojo and VizDoom. The RL task demonstrations are not used to rigorously evaluate learning or agent performance.
No user study or complexity analysis is provided for the environment authoring process. Since a major point is that Craftium makes it easy to create custom environments, it would strengthen the paper to have a more formal evaluation of that.
Theoretical Claims: This paper does not make any significant theoretical claims.
Experimental Designs Or Analyses: The illustrative RL experiments are less about rigorous analysis and more about showcasing capabilities. The analysis of the performance results is quite minimal – essentially, “Craftium is faster, presumably because of C++ vs Java and fewer layers”. This explanation is valid and actually quite obvious. The authors didn’t attempt any deeper profiling.
As noted, the experiments miss direct comparisons for RL outcomes across frameworks. It would strengthen the paper to see, for example, the same task implemented in both MineDojo and Craftium and an RL agent’s learning curve in each.
Supplementary Material: The supplementary material includes the codes which helps verify the framework.
Relation To Broader Scientific Literature: The paper situates Craftium in the context of existing environment platforms and benchmarks. The authors demonstrate a good awareness of prior work, citing a broad range of relevant literature.
Essential References Not Discussed: Unity: A General Platform for Intelligent Agents
Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot (ICML 2021)
Other Strengths And Weaknesses: The paper’s contribution is primarily engineering-oriented, which can make it seem less novel in a research sense. Craftium is not introducing a new algorithm or new scientific finding; it is packaging existing ideas (voxel world, Lua scripting, Gym interface) into a platform. In terms of novelty, many components of Craftium existed separately: Minecraft proved the value of voxel worlds for open-ended tasks, Lua modding is a known technique in games, and others have integrated environments with Gym. The authors’ work in combining these elements in a coherent, open-source system is commendable, but it feels more like an incremental contribution.
Other Comments Or Suggestions: There is a minor typo in the abstract: “Micraft-based frameworks” should be “Minecraft-based frameworks.” This appears to be a small spelling mistake. Make sure to fix “Micraft” to “Minecraft” in the final version.
Questions For Authors: 1. What tasks or environments were used for the performance comparison with MineDojo and VizDoom? Can you clarify if all frameworks were tested under similar conditions (e.g., same observation resolution, minimal entities)?
2. How does Craftium perform with multiple agents simultaneously? Have you benchmarked scenarios with, say, 5 or 10 agents in one environment all acting at once?
3. How is the physics simulation in Craftium? Only axis-aligned gravity and simple collisions like Minecraft?
4. Can Craftium support continuous control of an agent’s joints or is it strictly the discrete game controls?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback and the time spent evaluating our paper. Below, we address the concerns raised.
**[Incremental contribution]**
While Craftium's individual components have existed separately, no prior framework combines ease of use, high efficiency, multi-agent support, extensive flexibility, and open worlds. This unique integration is more than an incremental improvement: it makes previously impractical research directions feasible. For instance, works in unsupervised environment design, open-ended learning, continual reinforcement learning, and meta-learning can now leverage large-scale experiments in rich 3D open-world and (single- or) multi-agent environments without prohibitive computational costs. We believe these capabilities make Craftium a valuable tool for future research and of interest to the ML community.
**[Performance]**
We have significantly expanded the performance benchmarking in line with the reviewer's suggestions (we have followed the experimental setup from Section 3.4).
- **Observation resolution (see [plot](https://bit.ly/4iIisJA)):** Craftium's step/s loss for 128×128 observations is minimal (less than 5%). For larger observations (512×512), the loss is more significant (around 33%). However, in such cases, the performance bottleneck is likely in the model processing the images rather than in Craftium itself.
- **Number of agents (see [plot](https://bit.ly/43mW6bT)):** As shown, Craftium's multi-agent implementation scales efficiently, with a 5% performance loss in the worst case. This loss is minimal, and the fluctuations in the plot stem from measurement noise introduced by the OS, other background processes, etc.
- **World's complexity (see [plot](https://bit.ly/4lmKCvJ)):** The plot compares Craftium's performance in two worlds of varying complexity: `flat_world`, a minimal environment containing only the task scenario, and `minetest_world`, the base open-world environment. Although complexity impacts performance, it remains very high in both cases.
- **Memory:** We have measured memory consumption across different tasks. Craftium (660MB) is notably lighter than MineDojo (1.7GB), the only framework with comparable environment richness. VizDoom (84MB) is the lightest due to its minimalist design, at the cost of being far simpler (e.g., not fully 3D).
- **Number of environments (see [plot](https://bit.ly/43YfHPO)):** Craftium benefits significantly from parallelization, achieving impressive performance comparable to the much simpler VizDoom. In contrast, MineDojo lacks parallel environment support, making it impractical for many research scenarios, and despite extensive efforts, we could not obtain its performance metrics.
These results will be included in the final version of the paper together with further details and extended analysis.
**[Environment creation process]**
We fully agree with the reviewer and have added a new appendix to the paper detailing the complete environment creation process. As we cannot include a direct link to the text, we provide new [figures](https://bit.ly/4jcJWXL) that illustrate this simple process.
**[RL outcome comparison]**
We believe that directly comparing RL outcomes from identical tasks in Craftium and MineDojo may not accurately reflect Craftium's contributions and could shift attention from the main focus of the paper, potentially leading to misleading conclusions. Craftium is a new framework designed to create diverse environments for AI research, and the experiments showcase its capabilities and compare them to other environment creation tools. A comparison between RL outcomes in Craftium and MineDojo could be misinterpreted, as differences are likely to arise from various confounding factors (e.g., different mechanics or texture packs), rather than any inherent superiority of one of the environments.
**[Other comments]**
- **Claims:** We will revise the claims based on the reviewer’s suggestion in the final version of the paper.
- **Typo:** We sincerely appreciate the reviewer pointing out the typo. It will be corrected in the final version.
- **Reference:** We are grateful to the reviewer for suggesting an additional reference, it will be included in the final version of the paper.
**[Questions]**
**Q1:** Due to fundamental differences between the frameworks, replicating the exact environment setup is not feasible. We have used multiple scenarios for each framework to ensure a fair comparison (see Appendix E).
**Q2:** Please refer to the *"Performance"* section above.
**Q3:** Yes, by default. However, more complex physics can be implemented using the Modding API (some examples [here](https://bit.ly/4hUIU1k)).
**Q4:** Currently, Craftium supports continuous control only for the main camera. We see continuous control as a promising direction for future work and plan to implement it using Lua mods within Craftium's current framework.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. The clarification regarding the benchmarking and Environment creation makes a strong point for its utility to the community. Based on the rebuttal and the authors’ clear commitment to improving the final version, I am updating my overall recommendation to Weak Accept. | null | null | null | null | null | null |
Q-Supervised Contrastive Representation: A State Decoupling Framework for Safe Offline Reinforcement Learning | Accept (poster) | Summary: This paper introduces SDQC, a method designed to address the OOD problem in safe offline RL. By decoupling states into two independent representations, SDQC improves decision-making, safety, and generalization when facing unseen states. The method employs Q-supervised contrastive learning to guide the learning of state representations based on Q-values and selects policies based on safety evaluations. Experimental results show that SDQC outperforms baseline algorithms, achieving zero violations in most tasks. The paper also provides theoretical evidence that SDQC generates coarser representations while still preserving the optimal policy.
Claims And Evidence: The claims in the paper are supported by clear and convincing evidence. This theoretical analysis supports SDQC’s ability to better handle OOD scenarios and perform well in unseen environments, which is validated through experimental results. Moreover, the theoretical analysis shows that SDQC can effectively separate these two aspects, allowing precise decisions under different safety conditions.
Methods And Evaluation Criteria: The proposed method and evaluation standards are highly relevant to current problems and applications, particularly in the field of safe offline RL. The SDQC method addresses the challenge of OOD problems, which is crucial in safety-critical applications like autonomous driving. The use of the DSRL benchmark dataset allows comprehensive evaluation across various tasks, demonstrating SDQC's ability to achieve zero violations.
Theoretical Claims: The theoretical analysis is clear and the proof is rigorous. Theorem 3.1 highlights the relationship between bisimulation and Q*-irrelevance representations in MDPs, showing that the latter is a coarser but equally effective representation for deriving the optimal policy.
Experimental Designs Or Analyses: The comparative experiments with other baselines on the DSRL benchmark are sufficient, showing safer performance in all tasks.
The generalization experiments involved Goal and Push tasks of different difficulties, showing that SDQC achieves safer behavior and good rewards.
The ablation experiment is very thorough, involving not only the ablation of Q-supervised contrastive learning but also the ablation of contrastive learning parameters. The experiment fully demonstrates the impact of the contrastive learning module on the performance of the SDQC algorithm.
Supplementary Material: Appendix A provides sufficient theoretical analysis to show that a coarser representation can also reach the optimal strategy; Part B clarifies the relevant theory and emphasizes the innovation of the SDQC algorithm; Part C explains the implementation details and more clearly explains how the algorithm works. The subsequent explanation of the supplementary experiments and the provision of relevant training curves further illustrate the effectiveness of the SDQC algorithm.
Relation To Broader Scientific Literature: This article innovatively combines contrastive learning with offline safe RL, which greatly improves the performance of existing algorithms and is of great significance to the development of offline safe RL algorithms.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. For the first time, Q-supervised contrastive learning is integrated into offline safe RL.
2. On the DSRL benchmark, comparative experiments on all tasks fully demonstrate the excellent security performance of the SDQC algorithm.
3. Ablation experiments on contrastive learning are sufficient and complete.
Weaknesses:
1. The generalization experiments are relatively limited: they cover only the Goal and Push tasks, omit the Button tasks, and do not vividly demonstrate the changes in task difficulty.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Can you provide generalization experiments on tasks other than Goal and Push?
2. Can you visualize these tasks at different difficulty levels to illustrate the generalization performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer y1rM for providing the positive feedback and recognizing the importance of our work. Please see our response to your questions below.
## (Weakness/Question1) More Generalization Experiments on "Button"
We would like to clarify why we did not conduct generalization experiments on the Button task. As shown in Table 1 (page 6), all baseline algorithms exhibit extremely high costs on Button-related tasks. While our SDQC framework achieves safe results, the corresponding reward remains exceptionally low.
In the Button task, the agent is required to reach the orange button (marked with a green cylinder) without colliding with obstacles, stepping into traps, or activating incorrect buttons. At difficulty level 2, however, the environment includes 17 distractors, six of which are dynamic and move around the agent. (For a visualization, please refer to the results [**[here]**](https://github.com/wrk8/SDQC-Rebuttal/blob/main/to_reviewer_y1rm/CarButton.gif).)
The optimal policy learned by the agent trained with SDQC involves moving alongside the dynamic obstacles to avoid collisions, but this behavior largely ignores the target button. In contrast, agents trained with other baseline algorithms tend to disregard the moving obstacles entirely and recklessly navigate toward the target button. These completely contradictory behaviors make comparisons between algorithms meaningless.
Conversely, in other tasks, such as Goal and Push, agents trained with different algorithms exhibit similar behaviors, enabling fair generalization tests in these environments. (For reference, additional generalization experiments on the "PointGoal" and "PointPush" tasks are provided in Figure 9 (page 25) of our main paper.)
We will include a more detailed discussion of this phenomenon in the final version of the paper.
## (Question2) Visualization results of difficulty-level.
Thank you for your insightful comments. We agree that clear visualizations can better illustrate the generalization performance of our SDQC.
In the Goal-related tasks, the agent is required to reach the goal (marked with a green cylinder) while avoiding traps and collisions with obstacles. A visualization of this task is provided [**[here]**](https://github.com/wrk8/SDQC-Rebuttal/blob/main/to_reviewer_y1rm/PointGoal.gif). At difficulty level 1 (row 1 in the GIF), the environment contains 8 traps and 1 obstacle. At difficulty level 2 (row 2), the complexity increases, with 10 traps and 10 obstacles.
In the Push-related tasks, the agent is required to push a yellow box to the goal (marked with a green cylinder) while avoiding traps and collisions with solid pillars. A visualization of this task is available [**[here]**](https://github.com/wrk8/SDQC-Rebuttal/blob/main/to_reviewer_y1rm/CarPush.gif). At difficulty level 1 (row 1 in the GIF), the environment consists of 2 traps and 1 pillar. At difficulty level 2 (row 2), the environment becomes more challenging, with 4 traps and 4 pillars. | Summary: This paper proposes a new safe offline RL algorithm called SDQC, the key idea of the algorithm is to decouple observations into reward and cost-related representations, and use them in a value & policy learning framework similar to the one proposed in FISOR. The key intuition is that the values related to reward and cost may depend on different state representations, and decoupling them can help enhance generalization and safety. The proposed method achieves impressive performance in the DSRL benchmark. However, the algorithm is also quite heavy, and needs to learn two distinct state representations plus three different diffusion policy models.
Claims And Evidence: The paper claims that it can achieve strong safety and generalization performances, which are well-supported by the DSRL benchmark results and additional generalization tests.
Methods And Evaluation Criteria: **Regarding Method:**
- In Equation 5, the contrastive loss includes two exponential terms, $exp(s, \tilde{s})$ and $exp(z, \tilde{z})$, which might lead to instability during training. Since training is coupled with the value function update, this instability could be further amplified, especially in applications using image inputs.
- The authors use only a subset of the dataset for training, whereas contrastive learning typically requires a large amount of training data. It is unclear why using a limited number of anchor states would lead to better performance. Clarification on this point would be helpful.
- The authors introduce an upper-bound cost value function but lack sufficient theoretical analysis to explain its significance. Since this function is used to identify safe conditions, further clarification is needed.
**Regarding Evaluation:**
- Using the DSRL benchmark and additional generalization tests is reasonable.
Theoretical Claims: I do not think the theoretical analysis in Section 3.4 has much relevance to the proposed algorithm. Theorem 3.3 only discusses the properties of bisimulation representations and $Q^*$-irrelevance representation. The proposed contrastive representation has nothing to do with bisimulation representation, and there is also no formal proof or discussion given to show the proposed representation is a $Q^*$-irrelevance representation. Hence Theorem 3.3 is completely irrelevant. The authors then use Theorem 3.3 and the entropy maximization design (Eq.(4)) to claim the proposed method has better generalization (has larger conditioned entropy), which in my opinion, is seriously flawed. First, whether Theorem 3.3 holds for your representation is questionable; second, why does larger conditioned entropy justify better generalization?
I suggest removing Section 3.4 and the "theoretical analysis" entirely from the paper, as it is irrelevant and misleading, and cannot provide convincing arguments to demonstrate the superiority of the proposed method. If the authors really want to keep the theoretical analysis in the paper, at least the authors should:
- Rigorously prove the proposed contrastive representation is a $Q^*$-irrelevance representation, which I personally believe is impossible, as contrastive representations typically don't have many good properties for decision-making problems.
- Use a more rigorous generalization metric for discussion, instead of using the magnitude of conditioned entropy.
- Lastly, even if the current logic of the theoretical analysis holds (if both the above issues can be proved), the authors can only show their method is better than bisimulation representations, which is a rather weak conclusion and does not offer much real-world relevance.
Experimental Designs Or Analyses: The evaluation is generally reasonable, but there are also some aspects that can be further improved:
- SDQC utilizes **two sets of representations, three diffusion policies, and transformer-based (ATN) encoder** during training, making it very complex and computationally expensive. The authors should compare SDQC with baselines with similar scales of parameters, training time, and inference time. Additionally, given the similarity between FISOR and SDQC, it would be valuable to evaluate FISOR's performance using the same transformer encoder and separated diffusion policy model. Furthermore, testing FISOR with representation learning could help isolate and highlight the contribution of state-decoupled representation learning.
- Adding more ablation and discussion on temperature hyperparameters $\iota_r, \iota_h, \iota_{to}$ in Eq. (14).
Supplementary Material: I've checked both the appendix and supplementary videos.
Relation To Broader Scientific Literature: Safe offline RL methods are relevant to industrial control, robotics, and autonomous driving.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
- SDQC focuses on the hard constraint setting and achieves state-of-the-art performance.
**Weaknesses:**
- See my above comments in the Methods, Theoretical Claims, and Experimental Designs sections.
Other Comments Or Suggestions: N/A
Questions For Authors: In Table 4, different parameter settings are used for different tasks. How does the performance compare when using the same parameters across all tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank Reviewer JBF8 for the valuable suggestions. We would like to address your concerns point by point below.
## (Method1) Potential instability for Eq. 5
In Eq. 5, the exponential coefficient $\Gamma(s,\tilde{s})$ is a detached soft measure introduced in [1] that helps stabilize training.
Regarding the potential instability caused by coupling value function updates with representation learning, a detailed analysis is provided in Appendix F.2 (pages 25–26).
Experimental results demonstrate that the contrastive term has no adverse effect on value estimation, while our proposed ATN-based representation network significantly reduces estimation error.
## (Method2) Why limited anchors lead to better results
We apologize for the unclear description. We use the full dataset for training, not just a subset. For each gradient step, we sample a batch of data and perform contrastive learning within the batch.
Since our total loss includes multiple components, using more anchors could cause the neural network to overly focus on learning the representation, which might result in suboptimal performance of the value function.
## (Method3) Upper-bound of cost value function
Please note that $Q_{h}$ is updated based on the TD error computed relative to $V_{low}$ rather than $V_{up}$. In contrast, $V_{up}$ is updated toward $Q_h$ via expectile regression parameterized by $\tau$. According to Lemma 1 in [2], it follows directly that $\lim_{\tau\rightarrow 1}V_{up}(z(s))=\max_{a\in\mathcal{A}_\beta^s}Q_h(z(s),a)$.
## (Theory1) Why introduce a comparison with "bisimulation"
We conduct a theoretical comparison with bisimulation, which is one of the most widely used representation learning methods in RL [1, 3]. While implementing our idea of decoupling states for decision-making, bisimulation initially seemed like a natural choice.
However, this approach proved unsuccessful. It fails during the initial model-estimation stage due to the non-smooth nature of the cost function and its sparse value distribution. Similar challenges have been observed in sparse reward RL tasks [4].
In contrast, our SDQC method avoids these issues by using smooth Q-values and provides stronger theoretical advantages. We will include this failed attempt in the final version of our paper.
## (Theory2) Contrastive representations do not have good properties for decision-making.
In fact, contrastive-based representations are widely used in RL [1]. Similar to previous studies, our analysis is built upon the MDP framework, while our simulations address high-dimensional, infinite state and action spaces.
We do not claim that contrastive learning achieves a rigorous Q-irrelevance representation; indeed, such a result is unattainable for infinite problems approximated by neural networks. Rather, we cluster the Q-indicated representations in high-dimensional space, analogous to previous works that cluster representations with their associated indicators.
## (Theory3) Why conditional entropy can indicate "generalization ability"
Due to space limitations, please refer to our response to Reviewer zA7P on **Explanations of "conditional entropy"**.
## (Experiment1) "Fair comparison" with baselines.
The required "fair comparison based on same computational cost" is hard to execute.
First, each baseline algorithm has unique optimal parameter settings suggested by their authors.
Moreover, some baselines, such as CDT with a GPT-2 backbone and TREBI with a U-Net structure, are even more computationally expensive than our SDQC.
Following the reviewer's suggestion, we have implemented FISOR with an ATN-based structure and separate diffusion policies; results are available [**[here]**](https://github.com/wrk8/SDQC-Rebuttal/blob/main/to_reviewer_JBF8/compare_FISOR_SDQC.png).
## (Experiment2) Temperature ablations for diffusion policies
Please find ablations [**[here]**](https://github.com/wrk8/SDQC-Rebuttal/blob/main/to_reviewer_JBF8/Ablation_diffusion_temp.png). The default parameters follow FISOR and are kept consistent across all environments.
## (Question) Consistent hyperparameters for all environments
We would like to emphasize that our choice of hyperparameters is domain-specific.
(We use the same hyperparameters across the 12 tasks in the safety-gym domain.)
Please refer to our response to Reviewer zA7P on "Complex hyperparameter settings".
We argue that **no RL algorithm can employ a unified set of hyperparameters across all domains and environments**, including most commonly used SAC and PPO.
[1] Agarwal, Rishabh, et al. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. ICLR (2021).
[2] Kostrikov, Ilya, et al. Offline Reinforcement Learning with Implicit Q-Learning. ICLR (2021).
[3] Castro, Pablo. Scalable methods for computing state similarity in deterministic markov decision processes. AAAI (2020).
[4] Lee, Vint, et al. DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing. ICLR (2024).
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for the response. However, many of my concerns remain. Specifically,
- **Regarding theoretical analysis**: I'm not asking why introduce a comparison with bisimulation, but saying the theoretical analysis in Section 4.3 is not relevant to your proposed method. As I have mentioned in my comment, Theorem 3.3 only discusses the properties of bisimulation representations and $Q^*$-irrelevance representation. As your algorithm does not learn bisimulation representations, and the authors cannot prove their representations are $Q^*$-irrelevance representations, thus the entire theoretical analysis has nothing to do with the proposed algorithm. The authors are dodging my comment.
- **Regarding why conditional entropy can indicate "generalization ability"**: The authors have added some explanation on conditional entropy, but this still doesn't provide any concrete evidence/theoretical support on "why this can justify better generalization".
- **Heavy and complex algorithm**: As I have stated, "SDQC utilizes two sets of representations, three diffusion policies, and transformer-based (ATN) encoder during training", given such complexity and extra hyperparameters, few people would want to use it to solve real-world tasks. There are simply too many components to be tuned to make it work.
- **Hyperparameters**: In fact, it is recommended in offline RL to use consistent hyperparameters for different tasks, since in real-world deployment, it would be infeasible or even dangerous for model/hyperparameter selection on real-world systems. Keeping a minimal set of hyperparameters or having hyperparameter robustness is extremely important for an offline RL algorithm to be useful. The authors use SAC and PPO as an example, which is not a meaningful argument, since in online RL learning in the simulation environment, for sure, you can perform unrestricted hyperparameter tuning, but it is simply not the case for offline RL scenarios. Also, it should be noted that the baseline FISOR uses the same set of hyperparameters for all of its experiments.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank reviewer JBF8 for providing further feedback. To address your remaining concerns, please refer to our responses below.
## Regarding theoretical analysis
We would like to emphasize that we are NOT dodging your comments.
Firstly, you requested a rigorous proof that our Q-supervised contrastive representation is a $Q^{\*}$-irrelevant representation. As we have clarified, such a result is unattainable for infinite problems approximated by neural networks. However, similar to prior work [1], we assume that contrastive learning, when guided by appropriate indicators, can perfectly cluster the representations and achieve the ideal effect. The theoretical analysis is then conducted for this idealized setting.
Secondly, you are concerned that the comparison between bisimulation and $Q^{\*}$-irrelevance representation is not relevant to our methods.
In response, we note that we tried both methods (through contrastive learning, under the assumption that the representations are ideally clustered). While bisimulation initially appeared to be a mature and natural approach, it failed due to the non-smooth nature of the cost function. In contrast, our SDQC method avoids these issues by using smooth Q-values and provides stronger theoretical advantages.
## Regarding why conditional entropy can indicate "generalization ability"
Here, we would like to reiterate the step-by-step reasoning that connects "generalization ability" to "conditional entropy":
1. According to [2], a coarser representation in RL generally leads to improved generalization performance due to the reduced state space.
2. To obtain coarser representations, we treat the representation as a random variable and aim to minimize its information content, i.e., the entropy $H(z)$.
3. Minimizing $H(z)$ is equivalent to maximizing $H(s|z)$, as the relationship $H(s|z) = H(s) - H(z)$ holds, and $H(s)$ is a constant for a fixed offline dataset.
## Heavy and complex algorithm
We acknowledge that our algorithm has a more complex structure compared to existing ones, and we have explicitly discussed this as a limitation in Appendix G.
However, we believe that our primary contribution lies in the introduction of the concept of "decoupling states for decision-making", which has been shown to be highly effective in our experiments. While there may still be potential room for simplifying the current framework, we believe it provides valuable insights for future research.
## Unified Hyperparameters
We agree with your perspective that offline RL algorithms, in theory, should impose stricter requirements on hyperparameter consistency compared to online RL.
However, this is often impractical in most cases. For instance, two well-known offline RL algorithms, MOPO [3] and Diffusion-QL [4], require different hyperparameter configurations across various environments to achieve satisfactory performance.
As you pointed out, FISOR [5] performs well in this regard by employing a unified hyperparameter setting across all environments. Nonetheless, it fails to achieve strictly zero-cost (its original target) in over 70% of environments.
In our work, we emphasize that SDQC requires different hyperparameter configurations across environments primarily due to its reliance on representation learning. Unlike image-based RL tasks, state-based RL tasks typically involve observations with high information density. Over-compression or over-expansion of these observations in the representation space can result in the collapse of the final outcomes. Thus, **it is necessary to adjust the dimension of the representation space according to the original dimensionality of the observations**.
As shown in Table 3 of our main paper, we have ensured that **most hyperparameters remain consistent across all environments and domains**. Users only need to select the network architecture and adjust the encoded state dimension based on the global observation space, which is largely determined by the physical characteristics of each domain. Additionally, we provide detailed guidelines for these selections (see Appendix C.3, lines 1097–1133). We do NOT consider such hyperparameter configuration approach to be unacceptable.
[1] Agarwal, Rishabh, et al. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. ICLR (2021).
[2] Liu, Guoqing, et al. Return-based contrastive representation learning for reinforcement learning. ICLR (2021).
[3] Yu, Tianhe, et al. MOPO: Model-based offline policy optimization. NeurIPS (2020).
[4] Wang, Zhendong, et al. Diffusion policies as an expressive policy class for offline reinforcement learning. ICLR (2023).
[5] Zheng, Yinan, et al. Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model. ICLR (2024). | Summary: This paper introduces a framework "State Decoupling with Q-Supervised Contrastive Representation" (SDQC) to improve safe offline reinforcement learning. They do this by using two distinct state representations (the "decoupling"), one being for rewards and one for costs, and learn these using contrastive objectives. They then use these representations to extract three policies: a "reward policy", "cost policy", and "tradeoff policy", which they train using diffusion models. In practice, they use a safety assessment function on the two representations of a given observation, which dictates which policy is used at that timestep. They evaluate their method against a number of baselines across the DSRL benchmark, as well as further experiments to compare generalization capabilities of different algorithms.
---
Not sure where I'm supposed to acknowledge the rebuttal so I guess I'll do it here.
I thank the authors for a strong rebuttal, it cleared up the uncertainties I had and I will adjust my score to an accept. One thing I will note is that I'm not a fan of the term ``reverse expectile regression'' in its use as the authors described, but if it's used in prior work I can't fault them for it.
Claims And Evidence: I believe that the claims made are mostly supported by clear and convincing evidence (I have a couple questions on some theoretical aspects below, but I believe these should be squashable in the rebuttal).
Methods And Evaluation Criteria: I believe the proposed methods and evaluation criteria make sense.
Theoretical Claims: I checked all proofs and theoretical claims made. I had some questions which I highlighted in the sections below.
Experimental Designs Or Analyses: Their experiments on the DSRL benchmark appears sound to me (although no code was provided).
Supplementary Material: I read through the entirety of the appendix.
Relation To Broader Scientific Literature: This paper is at the intersection of safe RL, offline RL, and representation learning in RL. It borrows some techniques from previous work in safe offline RL (such as the use of expectile losses, using regressed diffusion models to train the policies, and using a diffusion behaviour cloner. They primarily build on this work by improving representation learning in a novel way.
Essential References Not Discussed: The authors state that their method is a useful alternative to bisimulation as it doesn't require model-based methods in order to estimate. However there is a line of work on estimating bisimulation-type distances from samples in a model-free setting, for example beginning with [1] from NeurIPS 2021 and a number of works which followed.
[1] Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, and Mark Rowland. MICo: Learning improved representations via sampling-based state similarity for Markov decision processes. In Advances in Neural Information Processing Systems, 2021.
Other Strengths And Weaknesses: **Strengths**
- The method introduced appears novel and non-incremental (at least to my knowledge), and obtains strong results against baselines on the environments considered.
- I find Section C of the appendix to be really great to help the reader understand and build intuition for the algorithm (pseudocode + architecture diagrams).
- The GIF in the supplementary material is super helpful to build intuition and understand the 3 policies working together (this could've been advertised to the reviewers better! I came across this accidentally).
**Weaknesses**
- The proposed algorithm seems quite complex with a number of moving parts, which may make it brittle with respect to hyperparameter choices. (I appreciate that the authors demonstrate that this isn't the case with respect to the diffusion hyperparameters at least).
- Similar to the above, I can imagine the number of moving parts may make it run potentially slow (multiple diffusion models, nearest neighbour search, contrastive objective, etc).
- Some of the theory presented in Section 3.4. feels a bit hard to follow for me and I do not entirely see the motivation (I don't "type-check" the variant of cross-entropy they use, and I also don't see how this relates to generalization). I expand on this in my comments and questions in the sections below.
Other Comments Or Suggestions: - I think the cross-entropy $H(s| z_\theta(s))$ should be defined prior to its use in Equation (4). This definition confuses me, as cross entropy is normally defined with respect to two probability distributions, but here your arguments are a state and the state's representation.
- Following the above: the way $H(s| z_1)$ is used in the proof of Proposition A.4. seems to be a variation of the KL divergence rather than cross entropy.
- It is stated a number of times that a coarser representation has better generalization properties. Perhaps briefly arguing why this might be the case, or citing references that go into detail on this fact, can be helpful.
Minor typos/nits:
- Lemma A.3: "finner" -> "finer".
- For quotation marks, use ``<>'' instead of ''<>'' (otherwise the opening one is reversed).
- A representation being finer than another is defined but a representation being coarser than another is used without definition.
Questions For Authors: - What is "reverse expectile regression" in **Cost-related Representation** of Section 3? It seems like the standard expectile loss?
- Following my second weakness: how does the wall-clock time of your method compare to alternative methods?
- In Table 1, what is a "Safe agent" vs. an "Unsafe agent"?
- Following my comments in the previous section, what definition of cross-entropy are you using in the terms $H(s|z)$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer zA7P for appreciating the significance and novelty of our work. Below, we respond to your questions point by point.
## (References) Missing discussion of relevant references
The suggested papers are indeed highly relevant to our work. They represent extensions of the bisimulation family of representation methods. We will incorporate a discussion of these papers in Section 3.4 and the Related Works section.
## (Weakness1) Complex hyperparameter settings
We acknowledge that SDQC involves multi-step training and many hyperparameters. However, as shown in Table 3, we have set most of the hyperparameters to be consistent across all environments and domains. Users only need to select the network structure and adjust the encoded state dimension according to the global observation space, which is largely determined by the physical characteristics of each domain. In fact, we provide detailed guidelines for selecting them (Appendix C.3, lines 1097-1133).
## (Weakness2/Question2) Computational costs
The computational costs of SDQC are indeed slightly higher than those of the other baseline algorithms. We discussed this point in Appendix E.3, lines 1288-1302.
To provide a clear comparison, we implemented all algorithms in PyTorch on a single machine equipped with one GPU (NVIDIA RTX 4090) and one CPU (AMD Ryzen 7950X), and report results on the *CarPush2* task.
During the training phase, the total training times (in hours) are as follows.
|BCQ-Lag|CPQ|COptiDICE|CDT|TREBI|FISOR|SDQC(ours)|
|-|-|-|-|-|-|-|
|0.46|0.45|0.41|3.12|16.62|1.77|5.43|
During the testing (inference) phase, we measure the time consumption (in seconds) over 1000 RL timesteps as follows.
|BCQ-Lag|CPQ|COptiDICE|CDT|TREBI|FISOR|SDQC(ours)|
|-|-|-|-|-|-|-|
|1.51|1.85|1.86|3.45|585.87|6.11|11.13|
## (Weakness3/Question4) Explanations on "conditional entropy"
We apologize for any conceptual confusion caused by the missing definition. Here, we clarify that $H(s|z)$ denotes the **conditional entropy**, which measures the information content of a random variable, rather than the cross-entropy or KL divergence, which measure the distance between probability distributions.
Conditional entropy is a concept in information theory, and its formal definition was introduced by Shannon[1]:
$H(s|z)=-\sum_{s,z} p(s,z) \log p(s|z)$
where $p$ represents the probability function.
According to [2], coarser representations in RL generally yield better generalization performance due to the reduced state space. Thus, when treating the representation as a random variable $z$, **we initially aim to minimize the information content of $z$, i.e., the entropy $H(z)$**. Note that the following relationship always holds:
$H(s|z) = H(s,z) - H(z) = H(s) - H(z)$
where the second equality is valid since $z$ is a function of $s$. For any given offline dataset, $H(s)$ is fixed. Therefore, **minimizing $H(z)$ is equivalent to maximizing $H(s|z)$**.
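The identity above can be checked numerically. A minimal numpy sketch with a hypothetical four-state distribution and a deterministic mapping $z = f(s)$ (toy numbers, purely illustrative):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector, with 0 log 0 := 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical distribution over 4 states; z = f(s) merges {0,1} -> 0 and {2,3} -> 1.
p_s = np.array([0.1, 0.2, 0.3, 0.4])
f = np.array([0, 0, 1, 1])
p_z = np.array([p_s[f == k].sum() for k in range(2)])

H_s = entropy(p_s)
H_z = entropy(p_z)

# H(s|z) = -sum_{s,z} p(s,z) log p(s|z); since z = f(s) is deterministic,
# p(s, z=f(s)) = p(s) and p(s|z) = p(s) / p(z=f(s)).
H_s_given_z = -np.sum(p_s * np.log2(p_s / p_z[f]))

# Check the identity H(s|z) = H(s) - H(z): minimizing H(z) over codings f
# is therefore equivalent to maximizing H(s|z) for a fixed dataset (fixed H(s)).
assert np.isclose(H_s_given_z, H_s - H_z)
```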
We will add the above detailed explanation in the final version of our paper.
## (Typos)
Thanks for pointing these out! We have carefully checked the entire paper to correct all typos.
## (Question1) What is "reverse expectile regression"
The optimal value function under the general Bellman operator is given by $V_r(s)=\max_a Q_r^*(s,a)$, and is $V_h(s)=\min_a Q_h^*(s,a)$ under the cost-related safe Bellman operator. The expectile regression $L^{\tau}(u)=\vert \tau - \mathbb{I}(u<0)\vert u^2$, with $\tau \in (0.5, 1)$ can be used to approximate the maximum value. Conversely, the reverse version $L^{rev}_{\tau}(u)=\vert \tau - \mathbb{I}(u>0)\vert u^2$ with $\tau \in (0.5, 1)$ can be utilized to approximate the minimum value. The difference lies in the direction of the indicator function $\mathbb{I}(u)$. Following the setting in FISOR [3], we refer to this as "reverse expectile regression".
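To illustrate the two losses, here is a small numpy sketch (toy target values and a grid search in place of gradient descent; not our actual training code) showing that, with $\tau \in (0.5, 1)$, the expectile loss biases the fitted scalar toward the maximum of the targets while the reverse version biases it toward the minimum:

```python
import numpy as np

def expectile_loss(u, tau):
    """L^tau(u) = |tau - 1[u < 0]| u^2: with tau in (0.5, 1), negative residuals
    are down-weighted, so the minimizer approximates the maximum of the targets."""
    return np.abs(tau - (u < 0).astype(float)) * u ** 2

def reverse_expectile_loss(u, tau):
    """Reverse (lower) version |tau - 1[u > 0]| u^2: positive residuals are
    down-weighted, so the minimizer approximates the minimum instead."""
    return np.abs(tau - (u > 0).astype(float)) * u ** 2

# Fit a scalar v to toy Q-targets by minimizing the empirical loss over a grid.
q = np.array([0.0, 1.0, 2.0, 10.0])
grid = np.linspace(-1, 11, 2401)

v_up = grid[np.argmin([expectile_loss(q - v, 0.9).mean() for v in grid])]
v_low = grid[np.argmin([reverse_expectile_loss(q - v, 0.9).mean() for v in grid])]

assert v_up > q.mean()   # pulled toward max(q)
assert v_low < q.mean()  # pulled toward min(q)
```

The only difference between the two losses is the direction of the indicator, exactly as described above.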
## (Question3) What is Safe Agents and Unsafe Agents in Table 1
As introduced in Section 4.1 (lines 320–325, column 1, and lines 299–303, column 2), our ultimate objective is to achieve zero cost during testing, in alignment with the framework established by FISOR [3]. However, most baseline algorithms struggle to perform effectively under a zero-cost threshold. Therefore, following FISOR [3], we impose a strict cost limit of 10 for the Safety-Gymnasium environment and 5 for the Bullet-Safety-Gym environment. Agents with an **average cost below this threshold** during testing are classified as **safe agents**, while those **exceeding the threshold** are classified as **unsafe agents**.
**Lastly, and importantly, we are committed to open-sourcing all code upon acceptance.**
[1] Shannon, C. E. A mathematical theory of communication. The Bell system technical journal (1948).
[2] Liu, Guoqing, et al. Return-based contrastive representation learning for reinforcement learning. ICLR (2021).
[3] Zheng, Yinan, et al. Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model. ICLR (2024).
---
Rebuttal Comment 1.1:
Comment: I initially appended this to my original review as I didn't see the rebuttal comment option, but I'll add this here again for posterity.
I thank the authors for their rebuttal; it cleared up the uncertainties I had and I will adjust my score to an accept. One thing I will note is that I'm not a fan of the term "reverse expectile regression" in its use as the authors described, but if it's used in prior work I can't fault them for it.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank reviewer zA7P for acknowledging our rebuttal and for improving the score. Regarding your final concern about the naming convention of "reverse expectile regression", we agree that the terminology is not entirely rigorous.
After carefully reviewing the pioneering work IQL [1], we note that they refer to expectile regression used to approximate the maximum value as **upper expectile regression**. Following this convention, we will rename the version used to approximate the minimum value as **lower expectile regression**. We will rewrite the corresponding parts in the final version of our paper. Thanks again for your feedback!
[1] Kostrikov, Ilya, et al. Offline Reinforcement Learning with Implicit Q-Learning. ICLR (2022). | Summary: This paper introduces State Decoupling with Q-supervised Contrastive representation (SDQC), a framework that decouples the global observations of an RL agent into reward- and cost-related representations to deal with OOD data and improve generalisation.
Claims And Evidence: 1) The authors claim that "Safe offline RL [...] provides a promising solution that learns the safety-guaranteed policy in a fully offline manner. Its training requires no risky interaction with the environment and relies only on the precollected offline dataset."
I am not familiar with offline safe RL, but it looks like the problem has only been moved, since how is the precollected offline dataset of execution going to be collected?
2) "The representations solely capture either reward or cost information, independent of another factor, as determined by the training of Q*."
How is it possible to decouple this information determined by the training of Q*?
3) The authors claim superior experimental performance, which looks backed by empirical evidence.
4) On p. 5 the authors say that "our Q-supervised contrastive learning method theoretically surpasses bisimulation in terms of generalization."
But what is meant by generalisation at this point? So far generalization has only been discussed informally in the introduction.
Methods And Evaluation Criteria: The empirical evaluation is rather thorough. It considers 5 other relevant methods in the benchmark DSRL.
As states above, I am not sure that offline learning is the best solution for safety in RL, as the agent has to interact with an unsafe environment to collect experiences.
Theoretical Claims: Proofs are provided in the appendix. I did not check all details, but the theoretical claims look correct.
Experimental Designs Or Analyses: The experiments look fine in terms of baselines and benchmarks for comparison.
Supplementary Material: Not really.
Relation To Broader Scientific Literature: The authors compare frequently to [Zheng et al., 2024], one of the first applications of Hamilton-Jacobi reachability to safety analysis. However, the empirical results demonstrate the inability of their algorithm (FISOR) to achieve absolute safety guarantees.
It looks like the main innovation wrt [Zheng et al., 2024] is the decoupled representation, which leads to a superior experimental performance.
Essential References Not Discussed: It doesn't look like.
Other Strengths And Weaknesses: 1) The authors stress the relevance of OOD generalization for their contribution, but then generalization is only discussed in Sec. 4.2, rather marginally.
I wonder whether a stronger case for the paper can be made by simply considering the improvement in performance.
2) The authors manage to fit safe (offline) learning in a clear and concise manner in a single column, which is a feat in itself.
Other Comments Or Suggestions: 1) The example in the introduction does not really add much to the discussion, as I'd assume that most people are familiar with issues with OOD data and generalisation.
Questions For Authors: 1) To what extent your method can be applied to online learning?
2) The goal of the authors is to achieve a decoupled representation of rewards and costs. However, can't this goal be achieved by defining corresponding Q and value function for costs and rewards? These could then be used directly in their framework.
Also, please answer points (1), (2), and (4) in the Claims section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely acknowledge Reviewer cp6m for providing the insightful feedback. Please see our response to your concerns below.
## (Claim1) Motivation: How are safe offline datasets collected?
In offline RL, datasets are typically collected from past human experiences. For example, thousands of car accidents occur every day. These unsafe driving records, combined with safe ones, can be used to build a safe offline RL dataset for training autonomous driving systems. Compared to allowing UGVs to explore the real world through an online trial-and-error manner, it is both safer and more efficient.
## (Claim2) How is it possible to decouple the representations according to $Q^{\*}$?
When focusing solely on the reward $r$, the optimal value $Q_r^{*}$ is inherently independent of cost information $c$, as defined by its formulation. For instance, as shown in Figure 1 (page 1), all states in the figure share the same $Q_r^{\*}$ across all actions. Leveraging this property, we cluster the mid-layer representations of states with similar $Q_r^{\*}$ values across actions within the support, ensuring **these representations remain independent of cost-related information**.
Likewise, the same principle applies when considering only the cost.
For further intuition, please refer to the GIF in the supplementary material, which visualizes policies derived from the decoupled states.
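To make the clustering rule concrete, the following toy numpy sketch (hypothetical $Q_r^*$ values and a generic InfoNCE-style loss; not our actual SDQC implementation, whose objective differs in details) illustrates pulling together representations of states that share the same $Q_r^*$ profile across all actions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: 6 states with 2-D mid-layer representations and reward-value
# vectors Q_r*(s, .) over 3 actions (invented numbers for illustration).
z = rng.normal(size=(6, 2))
q_r = np.array([[1.0, 0.5, 0.2],
                [1.0, 0.5, 0.2],   # same Q_r* profile as state 0 -> positive pair
                [0.1, 0.9, 0.3],
                [0.1, 0.9, 0.3],
                [0.7, 0.7, 0.7],
                [0.7, 0.7, 0.7]])

def positive_mask(q, tol=1e-3):
    """States whose Q_r* values are close across all actions count as positives."""
    d = np.linalg.norm(q[:, None, :] - q[None, :, :], axis=-1)
    m = d < tol
    np.fill_diagonal(m, False)
    return m

def info_nce(z, pos, temp=0.5):
    """InfoNCE-style loss: attract representations of positive pairs,
    repel all other pairs in the batch."""
    sim = z @ z.T / temp
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    log_p = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    return -np.mean(log_p[pos])

loss = info_nce(z, positive_mask(q_r))
```

Minimizing such a loss drives the representations to carry only the information that distinguishes $Q_r^*$ profiles, which is the sense in which they remain independent of cost-related information.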
## (Claim4) What does "generalization" refer to?
We apologize for any confusion caused by the lack of explanation regarding this terminology. In machine learning, "generalization" usually refers to a model's ability to perform well on new, unseen data not included in its training set.
In real-world decision-making, state spaces are often high-dimensional and infinite, making it impossible to cover all possible states during the training phase.
This is especially true in offline settings, where the agent is trained on a fixed dataset without exploration [1].
The online testing performance of an offline-trained agent directly reflects the algorithm's generalization ability [1]. During deployment, the agent may encounter unfamiliar states and take suboptimal or unsafe actions. As shown in Table 1, our SDQC algorithm achieves SOTA performance in both rewards and costs.
Moreover, our approach extends generalization to more complex scenarios, where the testing environment (containing more obstacles) differs significantly from the one in which the offline data was collected, as displayed in Figure 3 and Figure 9.
(For online RL, generalization is usually evaluated in a similar way [2].)
## (Question1) Applying SDQC to online settings.
We agree that SDQC has great potential for extension to online settings. However, the in-sample learning method (implicit-Q learning) used in our work is specifically designed for offline RL.
Our primary contribution is the introduction of the concept of "decoupling the states for decision-making," which can be seamlessly integrated with many existing online RL methods that use representations like bisimulation. Unfortunately, due to time constraints, we were unable to verify this point during the rebuttal period.
## (Question2) Possibilities of abstracting representations from pre-trained value functions.
If we understand correctly, the reviewer is asking whether it is possible to first train the value functions through in-sample learning and then extract the representations based on the converged value functions.
We argue that this approach would make the training process more complex and prone to failure.
Specifically, given any global observation $s$, we first obtain its cost-related representation $z_h$ and then make a safety assessment on $z_h$ according to the value function $V_h(z_h)$. This process requires learning two mappings: $\mathcal{S} \mapsto \mathcal{Z}_h \mapsto \mathcal{V}_h$.
However, if we were to train the value functions first and subsequently extract the representations based on these pre-trained value functions, the task would become more complex. This is because an additional mapping would be introduced: $\mathcal{S} \mapsto \mathcal{V}_h \mapsto \mathcal{Z}_h \mapsto \mathcal{V}_h$. In this case, the mapping between the **vector space** $\mathcal{Z}_h$ and the **scalar space** $\mathcal{V}_h$ would need to be **bijective**, which is extremely challenging for neural networks to learn. Furthermore, this approach would be susceptible to additional errors due to the approximate nature of neural networks.
In contrast, our approach utilizes the neural network structure presented in Figure 5 (Appendix C.2, page 19). The representations are explicitly set as mid-layer outputs, and only the necessary mappings are learned.
[1] Levine, Sergey, et al. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv:2005.01643 (2020).
[2] Agarwal, Rishabh, et al. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. ICLR (2021). | null | null | null | null | null | null |