Accelerating Unbiased LLM Evaluation via Synthetic Feedback
Accept (poster)
Summary: This paper introduces Control Variates Evaluation, a novel method for unbiased and cost-efficient evaluation of large language models (LLMs) in head-to-head comparisons. The approach leverages synthetic feedback from LLMs, combined with human annotations, to reduce annotation costs while maintaining evaluation reliability. The authors demonstrate that this method reduces the number of required human annotations by up to 24.8% when synthetic feedback is fine-tuned. Theoretical guarantees for variance reduction are provided, and the method is empirically validated against benchmarks such as Chatbot Arena and MT Bench. Additionally, the paper introduces a human annotation saving ratio metric to predict the potential savings.

Claims And Evidence: The claims in the paper are mostly supported by convincing evidence. The authors provide both theoretical analysis and extensive experimental results to validate the effectiveness of the proposed method. Key claims, such as the reduction in human annotations and the alignment between theoretical variance predictions and empirical results, are well-supported. However, some claims about the scalability and generalizability of the method to more complex evaluation tasks (e.g., beyond head-to-head comparisons) are less substantiated and could benefit from further exploration.

Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-suited to the problem at hand. The use of Chatbot Arena and MT Bench as benchmarks ensures the relevance and applicability of the results. The incorporation of synthetic feedback and the focus on reducing human annotation costs align with the goals of scalable and efficient LLM evaluation. However, the paper could expand on how the method might generalize to other evaluation setups, such as multi-model ranking or fine-grained assessments.
Theoretical Claims: The theoretical claims, particularly those concerning variance reduction using control variates, appear sound. The proofs provided in Section 4.1 are logically structured, and the derivations seem correct at a high level. However, I did not verify all mathematical details rigorously, and some minor steps in the derivations (e.g., the bias analysis in Equation 2) could benefit from additional clarification. While the claims are likely correct, their presentation could be more transparent for broader accessibility.

Experimental Designs Or Analyses: The experimental design is comprehensive and addresses the key questions about the effectiveness of the proposed method. The use of multiple synthetic evaluators (e.g., GPT-4, Skywork-8B) and fine-tuning experiments adds robustness to the findings. The alignment between theoretical savings and empirical results is a strong point. However, the experiments primarily focus on head-to-head comparisons, and it would be valuable to test the method on more diverse evaluation tasks. Additionally, some results (e.g., the saving ratios in Table 1) could be better contextualized to highlight their practical implications.

Supplementary Material: No supplementary material.

Relation To Broader Scientific Literature: The paper is well-situated within the broader literature on LLM evaluation. It builds on prior work on synthetic feedback (e.g., LLM-as-a-judge) and variance reduction techniques (e.g., control variates in Monte Carlo sampling). Moreover, the paper also connects to well-known concepts such as critique ability and reward models. The connections to recent benchmarks like Chatbot Arena and MT Bench are appropriate and timely. However, the paper could benefit from a deeper discussion of related methods for reducing human annotation costs, such as active learning or adaptive sampling, to highlight its unique contributions.
Essential References Not Discussed: Some related works on critique ability concepts are not discussed, including: 1. A Survey on LLM-as-a-Judge; 2. From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge; 3. CriticEval: Evaluating Large Language Models as Critic.

Other Strengths And Weaknesses: Strengths: The method is original and addresses a critical bottleneck in LLM evaluation: reducing human annotation costs. The theoretical framework is solid, and the empirical results strongly support the claims. The introduction of the human annotation saving ratio as a predictive metric is a useful and practical contribution. Weaknesses: The focus is narrow, primarily on head-to-head comparisons, limiting the generalizability of the results. Some theoretical details, while likely correct, could be presented more clearly. The paper assumes access to high-quality synthetic feedback, which may not always be feasible in practice.

Other Comments Or Suggestions: Typos: In Section 5.5, the phrase "introduce more significant savings" could be rephrased for clarity.

Questions For Authors: No questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
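To make the mechanism under review concrete, here is a toy numerical sketch of the control variates idea described in the summary: cheap synthetic (LLM-judge) labels on every prompt are used to correct the mean of a small set of expensive human labels. The data-generating process, sample sizes, and correlation level below are invented for illustration; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: synthetic preferences s are cheap and available for all
# N prompts; human preferences h are costly and collected for only n prompts.
# The two are correlated, but the synthetic evaluator is biased (+0.3 shift).
N, n = 100_000, 500
latent = rng.normal(size=N)
s_all = latent + 0.3 + 0.4 * rng.normal(size=N)   # biased synthetic feedback
h = latent[:n] + 0.4 * rng.normal(size=n)          # human labels on a subset
s_paired = s_all[:n]                               # synthetic labels on that subset

# Control variates: shift the human-only mean by the (cheaply estimated)
# deviation of the synthetic mean on the subset from its mean on all prompts.
c = np.cov(h, s_paired)[0, 1] / np.var(s_paired)   # variance-minimizing coefficient
cv_estimate = h.mean() - c * (s_paired.mean() - s_all.mean())

# The estimator stays (asymptotically) unbiased for E[h]; its variance shrinks
# by roughly a factor of (1 - rho^2), the theoretical saving ratio.
rho = np.corrcoef(h, s_paired)[0, 1]
print(f"human-only mean:       {h.mean():.3f}")
print(f"control-variates mean: {cv_estimate:.3f}")
print(f"theoretical human annotation saving ~ rho^2 = {rho**2:.2f}")
```

Note that the synthetic evaluator's bias (+0.3 here) cancels out of the correction term, which is why the estimator remains unbiased even with a poor judge, as long as the correlation is non-zero.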
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We address your comments below.

> **Weakness 1: Beyond head-to-head comparisons**

Our theory directly applies to many other evaluation tasks, such as single-response evaluation, where a human gives scores to a single LLM generation, instead of giving a preference over two LLM generations. However, there are limited public datasets available for these tasks, and collecting such a dataset might require a significant amount of human effort, which is beyond the scope of our paper. Nonetheless, we believe that this will be an interesting future effort. That said, we conduct an additional experiment in the single-response evaluation setting. We utilize the validation split of the HelpSteer2 dataset as our benchmark. This split consists of 1.04K samples, each containing a prompt, a response, and five human-annotated attributes: helpfulness, correctness, coherence, complexity, and verbosity. Each attribute is scored from 0 to 4, with higher scores indicating better performance. Our focus is on the helpfulness attribute, as it is the primary metric that reward models are typically trained to evaluate. We employ the Control Variates Evaluation method to predict the average helpfulness score. The human annotation saving ratio is shown in the table below:

| Model | GRM-2B | Skywork-8B | ArmoRM-8B | GPT-4o |
|--------|--------|------------|-----------|--------|
| Saving | 10.3% | 21.0% | 14.1% | 27.4% |

The result above indicates the promise of Control Variates Evaluation in single-response evaluation. To the best of our knowledge, this is the only public dataset with high-quality human annotation for single-response evaluation. We will include this experiment in the camera-ready version.

> **Weakness 2: Theoretical details, bias analysis in Equation (2)**

This is discussed in https://artowen.su.domains/mc/Ch-var-basic.pdf, Page 32.
We will expand the clarification in the Appendix in the final version for completeness.

> **Weakness 3: Assume access to high-quality synthetic feedback**

Our method is effective as long as there is a non-zero correlation between human and synthetic evaluations. While high-quality synthetic feedback is ideal due to its typically strong correlation, evaluations from a small reward model (though highly biased with respect to human preferences) can still yield satisfactory performance in Control Variates Evaluation, as shown in Table 1. Ultimately, we believe the correlation requirement will diminish as AI systems continue to progress.

> **Relation To Broader Scientific Literature: related methods for reducing human annotation costs**

To the best of our knowledge, Control Variates Evaluation is the first unbiased LLM evaluation method with variance reduction. There are indeed other methods to reduce human annotations, such as Active Evaluation Acquisition (AEA) [1]. However, AEA might introduce bias into the evaluation because choosing a subset of human annotation data causes a distribution shift in the evaluation dataset. In addition, AEA requires training of the neural process, while finetuning is optional in our method. Furthermore, we can combine active learning and control variates evaluation to further reduce human annotations in LLM evaluation. Specifically, we can first apply active learning to select a representative subset of prompts for evaluation, and then run control variates evaluation on this subset. The downsides of this combination are: 1) the evaluation will be biased, because strategic sampling of responses causes a distribution shift with respect to the original evaluation dataset; 2) active learning-based approaches like AEA [1] require an additional training procedure, which relies on existing human annotations. We will add this discussion in the final version of our paper.

[1] Li, Yang, et al. "Active Evaluation Acquisition for Efficient LLM Benchmarking."
arXiv preprint arXiv:2410.05952 (2024).

> **Essential References Not Discussed**

We will include these papers in the camera-ready version.

> **Typos**

We will change it to "improve the human annotation saving ratio."
Summary:
- Paper proposes Control Variates Evaluation --- the goal being to reduce the cost of LLM evaluations.
- It does so using a principled statistical approach that combines human annotations with synthetic feedback (i.e., LLM-as-a-judge).
- Specifically, the synthetic feedback is the control variate used to reduce the variance of limited human evals.
- Paper shows the generic & fine-tuned approach reduces the number of human annotations.

--- Update after rebuttal: Thank you for the detailed responses. Adding these to the revised paper would strengthen the paper and ensure clarity. That said, I retain my score and my positive assessment of the paper - best of luck :)

Claims And Evidence: The claims made in the paper are generally well-supported by theoretical analysis and experiments.
- Good theory in Sec 4 and formal proofs in the appendices.
- The unbiasedness is shown empirically and theoretically.
- The main claim of human annotation savings is shown across a variety of settings.
- Only doubt is whether the claims are generalizable beyond the head-to-head settings.

Methods And Evaluation Criteria:
- Use of control variates is well motivated from the perspective of variance reduction.
- Good use of established benchmarks Chatbot Arena and MT-Bench.
- Nice evaluation across different sizes of models.
- Q (asked later): are there other variance reduction approaches that could be baselined against? I.e., is control variates the best?

Theoretical Claims:
- I'm not a theory expert, but the proofs seem correct.
- One thing I noted is the assumption of the strong correlation between human and synthetic annotations being important.
Perhaps some ablations or analysis where the relationship is weak would be useful.

Experimental Designs Or Analyses: As mentioned above, the experimental designs and analyses are sound:
- Well structured; uses multiple synthetic evaluators, datasets, and fine-tuning vs. no fine-tuning.
- As mentioned before, and acknowledged by the authors, it's unclear how this generalizes to other eval setups.

Supplementary Material: I primarily reviewed the experimental details in App B and additional experiments in App C. These provided good additional value to the paper. Maybe adding more details on prompt templates would be useful.

Relation To Broader Scientific Literature: The paper situates itself well within several research areas: LLM-as-a-judge, efficient LLM evals, and control variates.

Essential References Not Discussed: It would be useful for the paper to position itself a bit better against alternative variance reduction approaches. For instance, one could use active learning to decide which limited set needs human eval and which can remain as synthetic.

Other Strengths And Weaknesses:
Strengths:
- Tackles an important problem with a general approach --- variety of evaluators.
- Convincing empirical results for an important problem.
- Nice theoretical links.
Weaknesses:
- Only head-to-head evaluation tasks considered.
- No comparison with other variance reduction methods.
- No assessment of computational costs.

Other Comments Or Suggestions:
- It would be useful to add some computational costs, e.g., the cost of fine-tuning vs. human annotation costs incurred.
- Maybe add an appendix fleshing out how the method could be extended beyond head-to-head comparison.

Questions For Authors:
- Is there a minimum number of human annotations needed for the control variate to be reliable?
- Figs 7 and 8 show variance in the annotation saving ratio across different LLM pairs. Do you have insights into the characteristics of different LLM pairs and why certain ones are more amenable and have higher savings?
- Is it possible that alternative variance reduction approaches like active learning could fit into this paradigm, and how would they differ from the proposed approach?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We address your questions and comments below.

> **Weakness 1 & Comment 2: Only head-to-head evaluation tasks**

Public datasets for other evaluation tasks are limited, and collecting such data may require significant human effort, which is beyond our paper's scope. However, we believe that this will be an interesting future effort, and currently our theory directly applies to many other tasks, such as single-response evaluation, where human scores are given to individual LLM outputs rather than comparing two responses. Therefore, we conduct an experiment in this setting using the validation split of HelpSteer2 as our benchmark. The human annotation saving ratio is shown in the table below:

| Model | GRM-2B | Skywork-8B | ArmoRM-8B | GPT-4o |
|--------|--------|------------|-----------|--------|
| Saving | 10.3% | 21.0% | 14.1% | 27.4% |

To the best of our knowledge, this is the only public dataset with high-quality human annotation for single-response evaluation. We will include this experiment in the camera-ready version.

> **Weakness 2 & Question 3: No comparison with other variance reduction methods, e.g. active learning**

To the best of our knowledge, Control Variates Evaluation is the first unbiased LLM evaluation method with variance reduction. There are indeed other methods to reduce human annotations, such as Active Evaluation Acquisition (AEA) [1]. However, AEA might introduce bias into the evaluation because choosing a subset of human annotation data causes a distribution shift in the evaluation dataset. In addition, AEA requires training of the neural process, while finetuning is optional in our method. We can also combine active learning and control variates evaluation to further reduce human annotations in LLM evaluation. Specifically, we can first apply active learning to select a representative subset of prompts for evaluation, and then run control variates evaluation on this subset.
The downsides of this combination are: 1) the evaluation will be biased, because strategic sampling of responses causes a distribution shift with respect to the original evaluation dataset; 2) active learning-based approaches like AEA [1] require an additional training procedure, which relies on existing human annotations.

[1] Li, Yang, et al. "Active Evaluation Acquisition for Efficient LLM Benchmarking." arXiv preprint arXiv:2410.05952 (2024).

> **Weakness 3 & Comment 1: Assessment of computational costs**

In our experiment, fine-tuning reward models, such as the 7B model, can be performed locally using four H100 GPUs with 80GB of memory each. The evaluation cost for GPT-4o is approximately $0.0035 per annotation. While the exact cost of human annotation is unknown, we believe it is significantly more expensive, by orders of magnitude. We will include this discussion in the final version.

> **Question 1: Minimum number of human annotations needed for the control variate to be reliable**

Theoretically, the reliability of Control Variates Evaluation is independent of the number of human annotations, since it is unbiased. That said, if we want to compare with purely synthetic evaluation in practice, then the minimum number of human annotations required depends on the point at which the variance of Control Variates Evaluation is lower than the square of the synthetic evaluation bias. Since this threshold is influenced by the intrinsic variance of human annotations on a given evaluation dataset, it must be determined empirically. However, as shown in Figures 4 and 5, Control Variates Evaluation with just 200 human annotations already achieves significantly lower error than synthetic evaluation across all experiments. This is a relatively small number compared to the scale of popular LLM benchmarks such as MT Bench and Chatbot Arena.
> **Question 2: Interpretation of Figures 7 and 8**

They show the human annotation saving ratio (defined on Page 4, right column, Line 180) for different LLM pairs. Factors influencing this ratio include the architecture of the response generator LLMs, the datasets used for training the generators, and whether an LLM is distilled from another LLM. Exploring these factors in detail is a focus for our future research.

> **Theoretical Claims: Assumption of the strong correlation between human and synthetic annotations**

We would like to clarify that our method works as long as the correlation is non-zero. While a strong correlation is ideal, a weak correlation can still lead to significant human sample savings: we observe from experiments (Table 1) that weak reward models can still achieve satisfactory correlation and thus good human annotation savings in Control Variates Evaluation.

> **Supplementary Material: More details on prompt templates**

We will attach the specific prompt template for completeness in the final version.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. Adding these to the revised paper would strengthen the paper and ensure clarity. That said, I retain my score and my positive assessment of the paper - best of luck :)

---

Reply to Comment 1.1.1: Comment: We are glad to know that our rebuttal improved the technical content and clarity of the paper, and we will make sure to incorporate them into the final version. Thank you again for your positive assessment of our paper!
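The break-even point discussed in the rebuttal's answer to Question 1 (Control Variates Evaluation beats purely synthetic evaluation once its variance drops below the squared synthetic bias) can be sketched numerically. The values of `bias`, `sigma2`, and `rho2` below are illustrative assumptions, not measurements from the paper.

```python
# Compare mean-squared error of purely synthetic evaluation (dominated by its
# squared bias, which does not shrink with more prompts) against Control
# Variates Evaluation (unbiased; variance shrinks with n human labels).
bias = 0.05    # assumed bias of the synthetic evaluator's win-rate estimate
sigma2 = 0.25  # assumed per-sample variance of a human preference label
rho2 = 0.5     # assumed squared human/synthetic correlation

mse_synthetic = bias ** 2                 # constant in n

def mse_cv(n):
    # Control variates MSE at n human annotations: (1 - rho^2) * sigma^2 / n.
    return (1 - rho2) * sigma2 / n

# Smallest n at which control variates beats purely synthetic evaluation:
n_star = (1 - rho2) * sigma2 / bias ** 2
print(f"break-even at ~{n_star:.0f} human annotations")
print(f"MSE at n=200: synthetic={mse_synthetic:.4f}, control variates={mse_cv(200):.6f}")
```

Under these assumed numbers the break-even is around 50 human annotations, consistent in spirit with the rebuttal's observation that 200 annotations already suffice in their experiments.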
Summary: The paper proposes Control Variates Evaluation, a method to reduce human annotation costs in evaluating large language models (LLMs) while maintaining unbiased results. By combining synthetic feedback from LLMs, the method achieves variance reduction in win rate estimation. The approach is theoretically grounded in the control variates technique and has been validated on benchmarks like Chatbot Arena and MT Bench, demonstrating human annotation savings. This general and scalable method offers a cost-effective alternative to full human evaluation without compromising reliability.

Claims And Evidence: The claims made in the paper are well-supported by clear and convincing evidence:
- The paper shows the method is unbiased and achieves variance reduction. Experimental results demonstrate that Control Variates Evaluation matches human evaluation accuracy with fewer annotations, and the mean square error converges to near zero, indicating negligible bias.
- The paper establishes the properties of the control variates method and its application to LLM evaluation.
- The paper presents human annotation saving ratios across different evaluators and benchmarks, showing consistent savings. The experimental results validate that the theoretical saving ratios match practical variance reduction, with figures demonstrating the effectiveness across various evaluators and benchmarks.

Methods And Evaluation Criteria: The method is theoretically grounded in the control variates technique from statistics, which is a well-established variance reduction method in Monte Carlo sampling. Thus, combining human annotations with synthetic feedback in a principled way makes sense for reducing annotation costs while preserving unbiasedness.

Theoretical Claims: Yes, I have thoroughly reviewed the theoretical proofs. The general proofs are correct, as they directly follow from the principles of Control Variates.
However, I have one question:
- The general application of Control Variates does not require the sampled $x$ to follow a uniform distribution. Unless, in your design, the uniform distribution is an inherent assumption in the evaluation objective (win rate): specifically, to estimate the global win rate equally weighted across all prompts, rather than reflecting preferences for a specific distribution. I would appreciate it if the authors could clarify this point.

Experimental Designs Or Analyses: I have reviewed the experimental sections. The overall experimental designs are well-structured and comprehensive. The experiments are well-aligned with the theoretical results.

Supplementary Material: I have carefully reviewed the supplementary materials, particularly the experimental results, as the majority of the detailed findings are included there.

Relation To Broader Scientific Literature: I believe Control Variates can serve as a valuable technique for LLM evaluation in the future. Although Control Variates is not a new concept, to my knowledge, this is the first time it has been introduced into the field of synthetic LLM evaluation.

Essential References Not Discussed: Given the generality of Control Variates, I believe the paper is self-explanatory and does not require further elaboration from other sources. However, it might be beneficial to cite and discuss additional papers that utilize Control Variates for different purposes, such as [R1]. [R1] Training Chain-of-Thought via Latent-Variable Inference

Other Strengths And Weaknesses:
Strengths:
- The paper presents a novel application of the control variates technique from statistics to the problem of LLM evaluation. This creative combination of an established statistical method with modern LLM evaluation challenges addresses a gap in the field. The introduction of the human annotation saving ratio as a metric provides a clear way to quantify the method's effectiveness.
- The extensive experimental validation across multiple evaluators and benchmarks effectively demonstrates the robustness and generalizability of the method. The inclusion of both pretrained and fine-tuned synthetic evaluators further enhances the practical relevance of the findings. Particularly, I was surprised to see in Figure 6 how closely the shifted Control Variates aligned with human error.
- The paper is well-structured and clearly written. The figures and tables effectively illustrate the method and results, and the appendix provides additional details for reproducibility.

Weaknesses:
- The method and the experiments focus primarily on head-to-head win rate estimation. The paper does not explore other evaluation metrics or more complex scenarios like multi-model ranking.

Other Comments Or Suggestions: Typos:
- Line 186: It should be $u_{\hat{z}}$ instead of $\hat{u}_{z}$.
- Line 601: It should be $\frac{1}{n}$ instead of $\frac{1}{n}(1-\rho^{2})$

Questions For Authors:
- Regarding the fine-tuned section, could the authors elaborate on how they ensured that the responses in the evaluation dataset are out-of-distribution with respect to the fine-tune dataset? This is crucial because if the datasets share a similar distribution, the comparison would no longer be fair.
- Can the authors discuss how to generalize Control Variates to other evaluations beyond head-to-head win rates?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
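As background for the variance expression mentioned in the reviewer's typo note, the standard control variates algebra (textbook Monte Carlo material, not quoted from the paper; $h$ denotes the human label, $s$ the synthetic label with known mean $\mu_s$) is:

```latex
\hat{\theta}_c = \bar{h} - c\,(\bar{s} - \mu_s), \qquad
\operatorname{Var}\bigl(\hat{\theta}_c\bigr)
  = \frac{1}{n}\Bigl(\sigma_h^2 - 2c\,\operatorname{Cov}(h,s) + c^2 \sigma_s^2\Bigr).
```

Minimizing over $c$ gives $c^{*} = \operatorname{Cov}(h,s)/\sigma_s^2$, for which $\operatorname{Var}(\hat{\theta}_{c^{*}}) = \frac{\sigma_h^2}{n}\,(1-\rho^2)$ with $\rho = \operatorname{Corr}(h,s)$, i.e. the $(1-\rho^2)$ variance reduction factor that underlies the human annotation saving ratio discussed throughout the reviews.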
Rebuttal 1: Rebuttal: Thank you for your careful review and constructive feedback. We address your comments and questions below.

> **Theoretical Claim: The general application of Control Variates does not require the sampled $x$ to follow a uniform distribution.**

Our method can be applied directly to the setting where $X$ follows a non-uniform distribution. We assume the prompt $x$ to be sampled uniformly from $X$ because that is what people usually do when evaluating LLMs in practice. We will add a clarification in the camera-ready version.

> **Essential References Not Discussed: [R1] Training Chain-of-Thought via Latent-Variable Inference**

We will include this paper in Section 2.3 in the camera-ready version.

> **Weaknesses: Focus primarily on head-to-head win rate estimation**

There are currently no public datasets available for testing multi-model ranking, and collecting such a dataset might require a significant amount of human effort, which is beyond the scope of our paper. However, we believe that this will be an interesting future effort, and currently our theory directly applies to the multi-model ranking setting. That said, we conduct an additional experiment in the single-response evaluation setting, where a human gives scores to a single LLM generation, instead of giving a preference over two LLM generations. We utilize the validation split of the HelpSteer2 dataset as our benchmark. This split consists of 1.04K samples, each containing a prompt, a response, and five human-annotated attributes: helpfulness, correctness, coherence, complexity, and verbosity. Each attribute is scored from 0 to 4, with higher scores indicating better performance. Our focus is on the helpfulness attribute, as it is the primary metric that reward models are typically trained to evaluate. We employ the Control Variates Evaluation method to predict the average helpfulness score.
The human annotation saving ratio is shown in the table below:

| Model | GRM-2B | Skywork-8B | ArmoRM-8B | GPT-4o |
|--------|--------|------------|-----------|--------|
| Saving | 10.3% | 21.0% | 14.1% | 27.4% |

The result above indicates the promise of Control Variates Evaluation in single-response evaluation. To the best of our knowledge, this is the only public dataset with high-quality human annotation for single-response evaluation. We will include this experiment in the camera-ready version.

> **Typos**

Thanks for pointing them out. We will fix them in the final version.

> **Question 1: How to ensure that the responses in the evaluation dataset are out-of-distribution with respect to the fine-tune dataset?**

As we discussed in Section 5.1, the evaluation dataset is out-of-distribution in the sense that the evaluated model's response is excluded from the training dataset for fine-tuning. In the final version, we will reference Section 5.1 in the paragraph "(Optional) Synthetic Evaluator Fine-Tuning" on Page 5 for added clarity.

> **Question 2: Generalize Control Variates to other evaluations beyond head-to-head win rates**

This is addressed in the "Weaknesses" section above.

---

Rebuttal Comment 1.1: Comment: I have carefully reviewed the authors' responses as well as the other reviewers' comments. Most of my concerns have been addressed, so I am raising my score from 3 to 4 and now lean toward acceptance.

---

Reply to Comment 1.1.1: Comment: Thank you for raising your score! We appreciate your feedback, and we are happy to know that our rebuttal addressed your concerns. We will reflect the changes in the final version of our paper.
On the Query Complexity of Verifier-Assisted Language Generation
Accept (poster)
Summary: This paper introduces a mathematical framework for analyzing verifier-assisted language generation and shows that process verifiers can transform intractable generation problems into tractable ones. Through theoretical analysis and experiments on synthetic grammar and code generation tasks, the authors demonstrate their approach significantly outperforms baselines in computational efficiency, accuracy, and generation diversity.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: The methods and evaluation criteria in the paper are well-aligned with the problem of verifier-assisted language generation.

Theoretical Claims:
1. Theorem 1 shows that without a verifier, certain constrained generation tasks require at least $2^{D-1}$ oracle queries in expectation. The proof constructs a scenario where the algorithm must identify a specific prefix among all possible prefixes and uses Yao's minimax lemma to establish the lower bound for randomized algorithms.
2. Theorem 2 establishes that even with simple constraints like those in the knapsack problem, constrained generation can be NP-hard without a verifier. The proof is a straightforward reduction from the knapsack problem.
3. Proposition 1 demonstrates the computational benefits of tokenwise rejection sampling with a process verifier compared to standard rejection sampling. This proof calculates expected queries for both methods on a simple example.
4. Proposition 2 shows that tokenwise rejection sampling doesn't sample from the restricted distribution. The proof provides a counterexample.

Experimental Designs Or Analyses:
1. The authors pretrain language models on synthetic data with controlled parameters, then test on out-of-distribution samples. They train lightweight verifiers on error cases and evaluate backtracking effectiveness. The design is sound, with proper controls and ablations on backtrack quota and stride length.
2. The authors evaluate their method against multiple baselines across metrics of accuracy, diversity, and query complexity. They conduct comprehensive hyperparameter searches and ablation studies.

Supplementary Material: I reviewed Appendix B.

Relation To Broader Scientific Literature:
1. Verifier-assisted generation relates to inference-time optimization methods like best-of-N sampling, and the paper provides formal analysis of when and why verifiers help, moving beyond empirical scaling laws.
2. The tokenwise rejection sampling with backtracking approach extends classical rejection sampling techniques with a process verifier, connecting to both statistical sampling methods and search algorithms like beam search.

Essential References Not Discussed: The paper covers most essential related work.

Other Strengths And Weaknesses:
Other Strengths:
1. The paper bridges theoretical analysis and practical implementation, showing both mathematical proofs of complexity benefits and empirical evidence of performance gains.
2. The tokenwise rejection sampling with backtracking algorithm is relatively simple to implement, making it accessible for practical use while still providing significant benefits.
3. The error analysis in Appendix B.1.3 offers valuable insights into why certain errors persist, which could guide future improvements.
Other Weaknesses:
1. The real-world experiments focus only on code generation (specifically test case generation), which may not fully demonstrate the method's applicability to other constrained generation tasks like mathematical reasoning or formal specification.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. Have you explored how the approach performs when constraints are "softer" or probabilistic rather than binary?
2. For the CodeLlama experiment, you focus on test case generation for simple list append functions. How do you expect the results to generalize to more complex programming tasks with deeper logical dependencies and constraints?
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review! In the following, we respond to a few points raised in your review. **1. When constraints are "softer" or probabilistic rather than binary.** There could be multiple types of constrained generation tasks with non-binary constraints. In the following, we explain our thoughts on two common types: (1) Suppose the constraints are specified as a probabilistic distribution over responses, such that better responses are assigned higher probabilities, and the task is to sample according to this distribution. The main difficulty then is understanding what are reasonable assumptions on the power of the process verifier. Namely, Proposition 2 suggests that a verifier that solely outputs a binary answer (whether there exists a completion of the prefix in the target domain) is not powerful enough. On the other hand, a simple modification of part 2 of Proposition 1 also shows that a “perfectly calibrated” process verifier (which outputs the conditional probability of the prefix according to the target distribution) can be trivially used to design a linear-time algorithm to sample from the target distribution. Finding reasonable assumptions would involve both theoretical reasoning and incorporating empirical evidence from trained verifiers. (2) Suppose the constraints are specified as scores over the responses, such that better responses are assigned higher scores, and the task is to sample responses with high scores. Under this setting, one approach to the verifier is known as reward modeling in the literature. (Our response to your second question below includes more discussion on reward modeling.) We surmise that our verifier-assisted approach can be applied to this setting as well, by either setting a cutoff for our verifier (reward model) predicted scores, or accepting a prefix according to a probability predicted by the verifier. These approaches can very likely improve the average reward of the final generated responses. 
Like case (1) above, one challenge is to maintain the calibration of the output distribution against some target distribution. **2. How do you expect the results to generalize to more complex programming tasks with deeper logical dependencies and constraints?** Thank you for raising this important question! A key component of our approach is the verifier – the success of our token-wise rejection sampling with backtracking algorithm relies on the ability of the verifier to correctly reject partially decoded sequences that are wrong. Therefore, more complex programming tasks may require larger verifiers or more verifier training data. Despite these challenges, prior works have shown strong potential for verifiers. For example, one approach to training verifiers is to “soften” them as reward models: train the verifier to predict a higher score for correct responses, and lower scores for incorrect ones. In the case of programming tasks, the RewardBench leaderboard [1] shows that researchers have developed accurate reward models that can distinguish correct code from buggy code in several common programming languages (with >95% accuracy). It may be a more challenging task to train process verifiers (Definition 3 in our paper) that can check partially generated code at arbitrary positions. However, we believe that since code is highly structured, it is promising to incorporate existing tools such as compilers/interpreters to provide additional signals for verification, or to automatically break down the programming tasks into logical steps (e.g. a simple baseline is to treat each full line as a step), and train the verifier to check at the end of each logical step. [1] Nathan Lambert et al., RewardBench: Evaluating Reward Models for Language Modeling, 2024.
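The "accept a prefix with a verifier-predicted probability" idea mentioned in point 1 above can be illustrated with a toy sketch. This is our own illustration under simplified assumptions, not the paper's algorithm; `next_token_dist` and `accept_prob` are hypothetical stand-ins for the base model and a soft verifier.

```python
import random

def soft_verifier_sampling(next_token_dist, accept_prob, length, seed=0, max_tries=1000):
    """Grow a sequence token by token; each proposed extension is kept with the
    probability a soft verifier assigns to the extended prefix (toy sketch).

    next_token_dist: prefix tuple -> list of (token, prob) pairs from the base model
    accept_prob:     prefix tuple -> acceptance probability in [0, 1]
    """
    rng = random.Random(seed)
    prefix = ()
    for _ in range(max_tries):
        if len(prefix) == length:
            break
        dist = next_token_dist(prefix)
        tok = rng.choices([t for t, _ in dist], weights=[p for _, p in dist])[0]
        # Keep the extension with the verifier-predicted probability;
        # otherwise discard the proposed token and re-sample this position.
        if rng.random() < accept_prob(prefix + (tok,)):
            prefix += (tok,)
    return prefix

# Toy constraint: never produce two consecutive 'b's. A hard-rejecting
# verifier is the special case where accept_prob is always 0 or 1.
sample = soft_verifier_sampling(
    lambda p: [("a", 0.5), ("b", 0.5)],
    lambda p: 0.0 if "bb" in "".join(p) else 1.0,
    length=8,
)
```

As the rebuttal notes, the open question is whether such acceptance rules keep the output distribution calibrated against a target distribution; this sketch makes no such guarantee.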
Summary: The paper presents several theoretical and experimental results on the query complexity of constrained decoding. First, in Section 3, it demonstrates theoretically that there exist tasks for which constrained decoding is difficult without a verifier. Then, in Section 4, it shows that for certain problems, access to a process verifier makes constrained decoding significantly easier. Finally, in Section 5, the paper presents experimental results to validate the theoretical results of Sections 3 and 4. Claims And Evidence: I feel that the theoretical results are very weak as they only show the existence of hard problems but do not touch the difficulty in general. Can the authors comment on this? Moreover, the results only consider cases where the members of the constraint set A have fixed length. In practical use cases, the target length can vary, e.g., for code syntax and JSON. There is a discrepancy between the theoretical results and the experimental results. While the theoretical results assume a perfect verifier that outputs a boolean value, the experiments leverage imperfect, trained verifiers that output a probability. On a high level, the problems studied by the theoretical and the experimental sections can be interpreted differently: the former focuses on the generation of structured outputs that can be described by symbolic constraints, while the latter targets the generation of any outputs with a trained verifier. The accuracy of the imperfect verifiers is also low, e.g., ~70% as shown in Section B.2.3. I think it is possible to construct perfect verifiers for both experiments in Section 5, e.g., leveraging SynCode (https://arxiv.org/abs/2403.01632). Why train the verifiers instead? Methods And Evaluation Criteria: The backtracking condition in Algorithm 1, i.e., Line 6, does not look correct to me. Shouldn’t it be V(x+s) = 0? Theoretical Claims: The correctness of the theoretical claims looks good to me.
As discussed in the “Claims And Evidence” section, they are just not strong enough in my opinion. Experimental Designs Or Analyses: The experimental results are not impressive either. If we compare the accuracy between Tables 1 and 2, access to a verifier with Q=1 or Q=2 can even decrease the accuracy for the same B value. I am not even sure if the results for Q=1 or Q=2 are interesting. Since we have access to the verifier, why do we only use it once or twice? Supplementary Material: I went over the appendix. The submission has no other supplementary material. Relation To Broader Scientific Literature: I think the topic studied by this paper is very important, given LLMs’ tendency to hallucinate and the promise of using constrained decoding to address these hallucinations. However, I have various concerns about the paper’s results, as discussed in the other review sections. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review! In the following, we respond to a few points raised in your review. **1. Scope of our theory, imperfect verifiers** We see one of our main theoretical contributions as being a new formalism for theoretically reasoning about algorithms for verifier-assisted language generation. Moreover, our theoretical analyses already illustrate a few practically relevant claims: the importance and power of verifiers from information-theoretical (Theorem 1) and computational (Theorem 2) perspectives, and the challenge of maintaining calibration while incorporating a verifier (Proposition 2). We agree with the reviewer that there are multiple ways in which our results should be extended in order to guide practical algorithm design—in particular, an important aspect is dealing with imperfect verifiers. The main challenge is mathematical modeling: namely, what are assumptions on the verifier that comport with empirical evidence (i.e. satisfied by trained verifiers in practice), but are also provably theoretically helpful, and conducive to designing better algorithms. We hope that our proposed theoretical framework and these initial results will motivate the community to tackle such questions. Finally, for some of these relaxations of the theoretical assumptions, our paper includes careful empirical analyses which will hopefully inspire future theory. **2. Adding experimental results with perfect verifiers** Thank you for your suggestion! We ran experiments using perfect ground-truth verifiers. We show the results for the Python test case generation task in this figure: https://ibb.co/pv5z26Wk. In particular, our trained verifier approaches the accuracy of the ground-truth verifier. In addition to perfect verifiers, we also reported experimental results with imperfect, trained verifiers with an eye towards other realistic tasks (e.g.
creative story writing), because in such settings it’s not straightforward to implement perfect verifiers. This is to illustrate that our approach of training neural network-based verifiers is applicable to more general tasks. Our Dyck grammar task can be seen as a special case of the task proposed in SynCode. We will cite this paper and discuss its relevance in our Related Works section. **3. Why not use larger Q, B; can larger Q, B decrease the accuracy?** We first note a possible misunderstanding: Table 1 and Table 2 are not directly comparable because they test different settings. In Table 1, we use a ground-truth verifier to exactly locate the first position of error, and allow the generator to backtrack once. In Table 2, we use a trained verifier. In particular, Table 2 shows that, when backtrack quota (Q) and stride (B) are small, increasing them always results in higher accuracy. When using much larger Q or B, our results in https://ibb.co/pv5z26Wk show that even with a perfect verifier, applying it too many times can decrease the accuracy. We conjecture that the generator model, CodeLlama, is imperfect, so unnecessarily backtracking and forcing the model to re-generate more tokens may increase the chance that the model makes mistakes. **4. Algorithm 1, i.e., Line 6** Yes it should be V(x+s) = 0. Thank you for catching this typo!
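For concreteness, a loose reconstruction of token-wise rejection sampling with backtracking, using the corrected condition (V(x+s) = 0 triggers a backtrack), might look as follows. This is our own simplified reading with hypothetical `step` and `verify` callables, not the paper's Algorithm 1 verbatim.

```python
import random

def generate_with_backtracking(step, verify, num_strides, quota, seed=0):
    """Toy sketch: propose strides of tokens, and backtrack (discard and
    re-sample the stride) whenever the verifier rejects, i.e. V(x+s) == 0,
    until the backtrack quota runs out.

    step:   (rng, prefix) -> list of proposed tokens (one stride)
    verify: prefix -> 1 if some valid completion exists, else 0
    """
    rng = random.Random(seed)
    x, remaining = [], quota
    strides_done = 0
    while strides_done < num_strides:
        s = step(rng, x)
        if verify(x + s) == 0 and remaining > 0:
            remaining -= 1   # reject this stride and try again
            continue
        x += s               # accept (or quota exhausted: accept anyway)
        strides_done += 1
    return x
```

The accept-anyway behaviour on quota exhaustion is one possible design choice; as the rebuttal's experiments suggest, over-aggressive backtracking with an imperfect generator can itself hurt accuracy.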
Summary: This work studies the problem of constrained autoregressive language generation from the perspective of computational complexity, showing that for even very simple autoregressive oracles (i.e. autoregressive language models) and very simple constraints, the task of constrained generation can be computationally hard, while becoming tractable if given access to a process verifier. Instead of trying to propose some new constrained decoding algorithms, this paper aims to conduct a systematic study (theoretical and empirical) covering the three most commonly leveraged constrained decoding paradigms: (1) sequence-level (rejection) sampling w/ access to a sequence-level verifier (2) token-level rejection sampling with a process verifier (that works on partial sequences) (3) token-level rejection sampling + backtracking. The paper first (i) establishes the difficulty of constrained generation due to the oracle itself being complicated, then (ii) demonstrates the exponential separation between (1) and (2) when the oracle is simple, and lastly (iii) verifies the advantages of (3) over (2), i.e. the benefit of backtracking, as well as the advantages of (2) over (1) with well-controlled synthetic experiments. The theoretical results in this paper are not ground-breaking but the arguments and ideas behind them are somewhat fundamental to the study of constrained autoregressive generation in general. In addition, Proposition 2 also makes a nice and clean argument uncovering the bias (in terms of approximate probabilistic inference) carried by some of the commonly adopted constrained generation approaches for structured output (i.e. JSON generation). Claims And Evidence: The claims are well supported by either theoretical results or empirical evidence. Methods And Evaluation Criteria: The synthetic empirical evaluation employed by this paper is very rigorous and immediately to-the-point. Theoretical Claims: Yes, I checked the (sketch) proofs of the theoretical results.
They are sound and well-written. Experimental Designs Or Analyses: Yes. Supplementary Material: I did not need to review the supplementary materials as the main text is pretty much self-contained. Relation To Broader Scientific Literature: See summary. Essential References Not Discussed: The authors should probably include references to some of the controllable text generation literature such as *Yang, Kevin, and Dan Klein. "FUDGE: Controlled text generation with future discriminators." arXiv preprint arXiv:2104.05218 (2021).* and more. Other Strengths And Weaknesses: See summary. Other Comments Or Suggestions: N/A Questions For Authors: 1. It would be nice if the authors/other people can study this problem in the settings where proper conditioning, i.e. we want to sample from $p(s) \propto \mathbf{1}(s \in A) p_{\mathcal{O}}(s)$, is desired. 2. For the models trained on Dyck grammar, as you sample from them, how many samples do you need to cover the whole support of the constraints? More specifically, do they achieve near 0 KL divergence between the ground-truth and the trained model? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your review! In the following, we respond to a few points raised in your review. **1. Additional references.** Thank you for the recommendations! Methods like FUDGE [1] are indeed related to a slight extension to our framework. In particular, the attribute predictor in FUDGE can be thought of as training a process verifier (Definition 3 in our paper) which, instead of outputting binary acceptance / rejection decisions, returns a probability of accepting each prefix. We will cite [1] and references about controllable text generation literature therein, and include a discussion along these lines. [1] Yang, Kevin, and Dan Klein. FUDGE: Controlled text generation with future discriminators. 2021. **2. Studying this problem in the settings where proper conditioning is desired.** We agree that this is an interesting and challenging setting! The main difficulty is understanding what are reasonable assumptions on the power of the process verifier. Proposition 2 suggests that a verifier that solely outputs a binary answer (whether there exists a completion of the prefix in the target domain) is not powerful enough. On the other hand, a simple modification of part 2 of Proposition 1 also shows that a “perfectly calibrated” process verifier (which outputs the conditional probability of the prefix according to the target distribution) can be trivially used to design a linear-time algorithm to sample from the target distribution. Finding reasonable assumptions would involve both theoretical reasoning and incorporating empirical evidence from trained verifiers. **3. KL divergence between the ground-truth and the trained model in Dyck grammar models.** In summary: this KL divergence is very small (meaning that the trained model is very well-calibrated against the groundtruth next-token distribution) when completing in-distribution prefixes. 
For out-of-distribution prefixes, the KL divergence is sometimes large (meaning that the calibration is worse). More specifically: we define the out-of-distribution prefixes as Dyck_OOD (above Definition 9 in Section 5.1.1 of the paper). Under this setting, a visualization of the calibration results is included in this figure: https://ibb.co/ymSGGsv2. It can be seen that in this figure, for both the (in-distribution) training set and eval set, the model-predicted log probability of a sequence is generally consistent with the groundtruth log probability. For most sequences generated by the model itself, this consistency also holds, though for some model-generated samples, the model predicts a finite log probability, but the groundtruth log probability should be negative infinity (visualized at -100), because these model-generated samples are not grammatical. For samples generated according to Dyck_OOD, the rightmost subplot of this figure shows that, in many cases, the model can be very miscalibrated. In calibrated cases, for each prefix, it takes a relatively small number of samples to cover the support for valid next tokens (because vocabulary size is small, and the next token probability is either exactly zero or lower bounded away from zero). In miscalibrated settings, it may take a large number of samples to cover the support for valid next tokens (e.g. when the model predicts an extremely small probability for some correct continuations).
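The "perfectly calibrated process verifier" observation mentioned in this rebuttal (a simple modification of part 2 of Proposition 1) admits a short sketch. `Vstar` and the two-sequence toy target below are our own illustration, not the paper's construction.

```python
import random

def calibrated_sampling(vocab, Vstar, length, seed=0):
    """Exact linear-time sampling from a target distribution, given a perfectly
    calibrated process verifier: Vstar(prefix) is the probability mass the
    target assigns to sequences beginning with `prefix` (toy sketch; assumes
    the target is supported on sequences of exactly `length` tokens).
    """
    rng = random.Random(seed)
    prefix = ()
    for _ in range(length):
        # The exact next-token conditional is proportional to Vstar(prefix + t).
        weights = [Vstar(prefix + (t,)) for t in vocab]
        prefix += (rng.choices(vocab, weights=weights)[0],)
    return prefix

# Toy target: uniform over the two strings "aa" and "ab".
TARGETS = [("a", "a"), ("a", "b")]
def Vstar(prefix):
    return sum(0.5 for s in TARGETS if s[: len(prefix)] == prefix)
```

Each step costs one vocabulary-sized sweep of verifier calls, so the total cost is linear in the sequence length; a merely binary verifier cannot support this, which is the point of Proposition 2 referenced above.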
TimeStep Master: Asymmetrical Mixture of Timestep LoRA Experts for Versatile and Efficient Diffusion Models in Vision
Accept (poster)
Summary: The paper introduces TimeStep Master (TSM), a novel approach for fine-tuning diffusion models efficiently. Unlike traditional Low-Rank Adaptation (LoRA), which applies the same tuning across all timesteps, TSM uses TimeStep LoRA experts specialized for different noise levels. It consists of two stages: fostering, where different LoRAs are trained for specific timestep intervals, and assembling, which combines these experts asymmetrically using a core-context mechanism. This method improves adaptability and achieves state-of-the-art results in domain adaptation, post-pretraining, and model distillation, demonstrating strong generalization across architectures and visual modalities. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes. I reviewed implementation details and visualization results. Relation To Broader Scientific Literature: This paper advances the field of LoRA fine-tuning by introducing the TimeStep Master (TSM) paradigm, which optimizes the application of diffusion models in tasks such as text-to-image/video generation (e.g., Flux and Sora). It underscores the necessity for efficient fine-tuning methods like LoRA to manage the extensive number of parameters involved, thereby enhancing both performance and computational efficiency. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: 1. This paper introduces a general and concise TimeStep Master paradigm with two key fine-tuning stages. The fostering stage (1-stage) applies different LoRAs to fine-tune the diffusion model at different timestep intervals. The assembling stage (2-stage) uses a novel asymmetrical mixture of TimeStep LoRA experts, via core-context collaboration of experts at multi-scale intervals. 2.
The extensive experiments are conducted on three typical and popular LoRA-related tasks of diffusion models, including domain adaptation, post-pretraining, and model distillation. Weakness: 1. Insufficient Visualization Results: The paper lacks sufficient visualization results, which are particularly critical for a study focusing on fine-tuning diffusion models. Adding more visual examples, such as qualitative comparisons, intermediate outputs, or generated results, would provide a clearer and more intuitive demonstration of the effectiveness of the proposed TSM paradigm. Visualizations should ideally be included in the main paper to ensure direct evidence of the method’s performance is presented upfront, rather than being relegated to the supplementary material. 2. The novelty is limited. While the proposed method appears to be relatively straightforward, essentially extending the original LoRA to different timesteps and assembling them. The lack of more complex or novel mechanisms may limit the perceived novelty of the approach. The authors should clarify how this simplicity contributes to the method’s practicality and scalability, or discuss potential directions for further enhancement. 3. The performance difference between TSM-Stage1 and TSM-Stage2 in Table 6 is very small. Including visual comparisons that highlight the contributions of each stage would help validate the effectiveness and necessity of the two-stage design. This would also provide a clearer understanding of how the assembling stage (Stage 2) augments the fostering stage (Stage 1). Other Comments Or Suggestions: Too many tables are included in the main paper. I suggest moving in the supplementary material or simplifying some of them to free up space for more visualization results. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer eNpx: We sincerely appreciate your thoughtful and detailed review. Your insightful comments have been invaluable in guiding us to improve our manuscript. Below we provide our point-by-point responses, and we hope our clarifications and planned enhancements address your concerns. **Question1:** : The paper lacks sufficient visualization results, which are particularly critical for a study focusing on fine-tuning diffusion models. **Answer1:** Thank you for your reminder. We realize that visualization is very important for generation tasks. Due to space constraints, we had to place some visualizations in the supplementary material, only showing a comparison example with traditional methods in the main paper teaser. We will add more visualizations to the main paper in the final version, including comparison results between TSM 1-stage and 2-stage to further demonstrate TSM's effectiveness and superiority, as well as visualization of intermediate layer outputs and generation results at different training steps to further illustrate the effectiveness of our method. --- **Question2:** The novelty is limited. While the proposed method appears to be relatively straightforward, essentially extending the original LoRA to different timesteps and assembling them. The lack of more complex or novel mechanisms may limit the perceived novelty of the approach. The authors should clarify how this simplicity contributes to the method’s practicality and scalability, or discuss potential directions for further enhancement. **Answer2:** Thank you for this perceptive comment. We appreciate the opportunity to elaborate on the innovative aspects of our approach. Despite its straightforward formulation, TSM includes several key contributions that we believe represent a significant advancement: 1. 
**Ensemble Strategy**: We cleverly ensemble LoRAs trained with different timestep intervals via a gating mechanism to leverage complementary knowledge across intervals. 2. **Comprehensive Validation**: We comprehensively validate TSM across three architectures (UNet, DiT, MM-DiT), two modalities (image/video), and three task types (domain adaptation, post-pretraining, model distillation). 3. **Model Scalability**: We verify TSM's effectiveness on pretrained models of varying scales, from PixArt (0.6B) to SD3 (2B). Furthermore, we are excited to extend our experimental evaluations to include tasks such as text-to-image generation, image colorization, and image restoration. We truly appreciate your suggestion, which has motivated us to further clarify how the simplicity of our method contributes to both its practicality and scalability. --- **Question3:** The performance difference between TSM-Stage1 and TSM-Stage2 in Table 6 is very small. **Answer3:** Thank you for your keen observation. The performance difference observed in Table 6 specifically corresponds to distillation tasks. As indicated in Tables 4 and 5, the two-stage approach shows a more pronounced improvement for domain adaptation and post-pretraining tasks. We hypothesize that the relatively modest improvement in distillation may be due to early performance saturation after the first stage, leaving less room for further gains. We appreciate your detailed attention and will provide additional discussion on this aspect in the final revision. --- **Question4:** Including visual comparisons that highlight the contributions of each stage would help validate the effectiveness and necessity of the two-stage design. This would also provide a clearer understanding of how the assembling stage (Stage 2) augments the fostering stage (Stage 1).
**Answer4:** We completely agree that visual comparisons can effectively highlight the contributions of each stage. In response, as mentioned in our answer to Question 1, we will include clear and side-by-side visual examples comparing TSM 1-stage and TSM 2-stage. This will better illustrate the overall impact and the incremental benefits brought by the second stage. Thank you for this excellent suggestion. --- Once again, we are truly grateful for your encouraging and incisive feedback. Your comments have inspired us to refine our manuscript further, and we hope that the planned revisions will enhance the clarity and impact of our work. Please do not hesitate to let us know if there are any additional details or clarifications that would be helpful. Thank you for your time and consideration.
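As a concrete, purely hypothetical illustration of the interval idea behind the fostering and assembling stages, equal-width timestep routing plus a distance-based gate could look like the sketch below. The paper's actual gate is a learned module, and all names and the gating rule here are ours, not the authors' code.

```python
import math

def route_timestep(t, num_steps=1000, n=4):
    """Map a diffusion timestep to its interval expert (equal-width intervals),
    i.e. the 'core' expert in a core-context mixture."""
    return min(t * n // num_steps, n - 1)

def gate_weights(t, centers, tau=100.0):
    """Softmax gate over context experts based on how close t is to each
    interval centre (a hand-crafted stand-in for a learned gating network)."""
    logits = [-abs(t - c) / tau for c in centers]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# With n=4 experts over 1000 timesteps, interval centres sit at 125, 375, 625, 875.
CENTERS = [125, 375, 625, 875]
```

In an assembling-stage forward pass, the core expert's LoRA update would then be combined with the context experts' updates weighted by such a gate.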
Summary: This paper introduces TimeStep Master (TSM), a diffusion fine-tuning framework using an asymmetrical mixture of timestep LoRA experts. Rather than applying a single LoRA module across all timesteps, which limits the adaptability of different noise levels in the diffusion process, TSM introduces a timestep-specific fine-tuning approach. Specifically, the authors present an asymmetric mixture of LoRA experts, dividing the entire timestep range into different intervals, then using the smallest-scale interval as the core expert and the rest as context experts. Core experts address fine-grained noise modeling while context experts are adaptively weighted based on the timesteps. Extensive experiments verify the effectiveness of TSM. Claims And Evidence: The authors build on existing evidence that diffusion models are trained to represent different data distributions at different timesteps, a concept previously framed as a multi-task learning problem. While this motivation is not entirely novel, the authors strengthen their claim by constructing extensive experiments for various diffusion fine-tuning tasks. Methods And Evaluation Criteria: The authors validate the effectiveness of TSM across various diffusion fine-tuning tasks, but the analysis of its design choices is insufficient. Recent studies have provided in-depth analyses of timestep modeling in diffusion training, including approaches that divide timesteps into finer intervals to capture inter-task relationships between denoising tasks [1, 2] and methods that enhance timestep conditioning in attention layers [3, 4]. Building on these analyses, some works have proposed more detailed fine-tuning strategies based on timesteps [5, 6]. Therefore, the authors should support their claim by constructing some experiments to confirm compatibility or logical connections with prior timestep-based training and fine-tuning approaches. [1] Park et al., Denoising Task Routing for Diffusion Models, ICLR 2024.
\ [2] Park et al., Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts, ECCV 2024. \ [3] Hatamizadeh et al., Diffit: Diffusion vision transformers for image generation, ECCV2024. \ [4] Choi et al., Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model, TMLR 2024. \ [5] Fang et al., Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising, NeurIPS 2024. \ [6] Ham et al., Diffusion Model Patching via Mixture-of-Prompts, AAAI 2025. Theoretical Claims: No theoretical claims and proofs. Experimental Designs Or Analyses: The authors use vanilla LoRA as the primary baseline to validate the effectiveness of TSM. However, since TSM incurs higher computational costs, its improved performance may be expected. To properly demonstrate the benefits of the asymmetric mixture of LoRA experts, the authors should either design an experimental setup with a similar computational cost to vanilla LoRA or conduct comparative experiments against other fine-tuning methods. One potential experiment could involve extending the ablation studies on MoE LoRA to include other timestep-based LoRA conditioning methods [4] and alternative fine-tuning approaches [5, 6] (as referenced above). Supplementary Material: The authors provide implementation details and further results, which significantly aid in understanding the paper. Relation To Broader Scientific Literature: Extensive experiments confirm that their approach aligns with observations in previous work and demonstrates effectiveness across various diffusion fine-tuning tasks. However, its superiority over existing methods remains unproven. Essential References Not Discussed: Many other timestep-based diffusion training [1, 2, 3, 4] and fine-tuning methods [5, 6] are missing. [1] Park et al., Denoising Task Routing for Diffusion Models, ICLR 2024. 
\ [2] Park et al., Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts, ECCV 2024. \ [3] Hatamizadeh et al., Diffit: Diffusion vision transformers for image generation, ECCV2024. \ [4] Choi et al., Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model, TMLR 2024. \ [5] Fang et al., Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising, NeurIPS 2024. \ [6] Ham et al., Diffusion Model Patching via Mixture-of-Prompts, AAAI 2025. Other Strengths And Weaknesses: * **Strengths**: The authors provide comprehensive experiments to verify the effectiveness of TSM. Additionally, the asymmetric mixture of LoRA experts has a strong potential for scalable diffusion fine-tuning by increasing the number of context experts. * **Weaknesses**: However, the paper lacks sufficient justification for the proposed method and comparative experiments to validate its superiority. Other Comments Or Suggestions: The claim would be more robust if the authors could validate that insights and observations from previous works are compatible with TSM. Questions For Authors: Does TSM exhibit any scaling properties by increasing the number of intervals m? Many MoE-based diffusion training methods [1, 2] analyze their scaling behavior, so it would be valuable if the authors could provide insights into this aspect. [1] Fei et al., Scaling Diffusion Transformers to 16 Billion Parameters, arXiv 2024. \ [2] Sun et al., EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing, ICLR 2025. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer zVmL: Thank you for your careful reading and analysis of our article, and for providing valuable feedback. Below are our responses to your comments: **Question1:** Recent studies have provided in-depth analyses of timestep modeling in diffusion training, including approaches that divide timesteps into finer intervals to capture inter-task relationships between denoising tasks [1, 2] and methods that enhance timestep conditioning in attention layers [3,4]. Building on these analyses, some works have proposed more detailed fine-tuning strategies based on timesteps [5,6]. **Answer1:** We sincerely appreciate your reminder. We will supplement references to these works in the final version and provide detailed comparisons: - *Denoising Task Routing for Diffusion Models [1]* introduces channel mask strategies for different timesteps during training to inject temporal priors. - *Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts [2]* employs timestep-conditioned gating mechanisms to regulate expert activation. - *Diffit: Diffusion vision transformers for image generation [3]* adapts ViT architectures for image generation. - *Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model [4]* designs multi-expert ensembles for class labels and timesteps. Above methods are typically used in pretraining. Methods like *Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising [5]* (learnable coefficients for timestep-specific fine-tuning) and *Diffusion Model Patching via Mixture-of-Prompts [6]* (gating modules with prompt tokens) focus on efficient timestep-aware fine-tuning. Compared with previous efficient timestep-aware fine-tuning methods, our TSM has a wider range of applicability (3 architectures, 2 modalities and 3 tasks, see **Table1,2 and 3** in our paper) and more advanced performance (see in **Question4**). 
--- **Question2:** The authors should support their claim by constructing some experiments to confirm compatibility or logical connections with prior timestep-based training and fine-tuning approaches. **Answer2:** Thanks for your reminder and we added comparisons with state-of-the-art fine-tuning methods in **Question4**. We will include these comparative experiments in the final version of the article. --- **Question3:** The authors should either design an experimental setup with a similar computational cost to vanilla LoRA or conduct comparative experiments against other fine-tuning methods. **Answer3:** This is an important question, and one we explore rigorously in our paper. We address this in Table 8 by comparing performance under **equal computational budgets** (e.g., `n=1, r=4, step=32k` vs. `n=4, r=4, step=8k` where `1✖️32k==4✖️8k`). Results show: - Training 4–8 experts outperforms single-expert setups under equal costs. - Performance improves with increased training steps (e.g., `step=8k` vs. `4k` for `n=1, r=4`). What's more, Table 13 further demonstrates TSM's superiority over MoE LoRA under identical budgets. --- **Question4:** Its superiority over existing methods remains unproven. However, the paper lacks sufficient justification for the proposed method and comparative experiments to validate its superiority. The claim would be more robust if the authors could validate that insights and observations from previous works are compatible with TSM. **Answer4:** Thanks for your reminder. We added comparisons with state-of-the-art fine-tuning methods and all comparative experiments use the same training and evaluation settings as the comparative methods. 1. vs. **Diffusion Model Patching via Mixture-of-Prompts, AAAI 2025. (DMP)** on LAION-5B: |Model|FID| |-|-| |SD1.5|47.18| |SD1.5+DMP|35.44| | SD1.5+TSM (1-stage)|33.45| | SD1.5+TSM (2-stage)|**32.73**| 2. vs. **Decouple-Then-Merge: Fine-Tuning Diffusion Models as Multi-Task Learning, CVPR 2025. 
(DeMe)** on MSCOCO:

|Model|FID|
|-|-|
|SD1.5|13.42|
|SD1.5+DeMe|13.06|
|SD1.5+TSM (1-stage)|11.98|
|SD1.5+TSM (2-stage)|**11.92**|

---

**Question5:** Does TSM exhibit any scaling properties by increasing the number of intervals m?

**Answer5:** This is a very important question for TSM, as we describe experimental phenomena related to scaling in our paper. We will discuss this in two aspects.
1. **Timestep intervals (n):** Table 8 shows optimal scaling at `n=4~8` under fixed budgets, with performance improving as training steps increase.
2. **Model size scaling:** Experiments in Tables 1, 4, 7, 8, 9 and 11 confirm TSM's effectiveness across model sizes (up to SD3 with 2B parameters).

---

We sincerely appreciate your insightful comments. Please do not hesitate to let us know if you require any further information or additional explanations. If you find that our revisions have satisfactorily addressed your concerns, we would be most grateful if you could kindly consider an improved score. Thank you once again for your valuable input.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response and pointing out what I missed. The comprehensive ablative and comparative experiments adequately address my concerns. Thus, I have decided to increase my score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer zVmL, Thank you very much for your thoughtful and perceptive evaluation. Your detailed feedback and recognition of our work’s merits truly motivate us. We are especially grateful for your discerning insight, which reflects both your deep understanding and high standards. Your positive assessment means a great deal to us.
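As an illustration of the interval scheme discussed in Answer5, the stage-1 routing of timesteps to LoRA experts can be sketched as below. This is a minimal sketch with hypothetical names (`expert_index`, the even-division rule); the authors' actual implementation is not shown in this thread:

```python
# Hypothetical sketch of stage-1 timestep division: the diffusion schedule
# [0, total_steps) is split evenly into n_experts intervals, and each sampled
# timestep is routed to the LoRA expert that owns its interval.

def expert_index(t: int, total_steps: int = 1000, n_experts: int = 4) -> int:
    """Return the index of the timestep-interval expert responsible for t."""
    if not 0 <= t < total_steps:
        raise ValueError("timestep out of range")
    interval = total_steps // n_experts  # assumes n_experts divides total_steps
    return min(t // interval, n_experts - 1)
```

Under this scheme, `n=4` experts on a 1000-step schedule each specialize on a 250-step interval, matching the `n=4~8` range reported as optimal in Table 8.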
Summary: This article addresses the issue of limited model performance during the fine-tuning process of diffusion models, which arises from the use of the same LoRA across different time steps. The paper proposes the TimeStep Master method, which employs different LoRAs at varying time step intervals to fine-tune the diffusion model. This allows different TimeStep LoRA experts to effectively capture varying noise levels. However, the article does not sufficiently elaborate on the scientific issues it addresses. For example, in the diffusion model, why is it necessary to learn different LoRAs during the fine-tuning process when the same set of U-Net network parameters can identify different noise levels? Besides, what issue can be illustrated by subfigure “a” of Figure 1? What does the hidden state of each block represent? According to the scientific problem proposed in the article, the model parameters should differ for different time steps, but the parameters are locked during the inference process. How can subfigure a be used to demonstrate the existence of this problem? Claims And Evidence: Subfigure “a” of Figure 1 is not clear. How can the variance changes of the hidden states of blocks at different time steps be used to demonstrate the existence of the scientific problem that this paper aims to address? Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense for the introduced problem. Theoretical Claims: “We then hypothesis that it is the low-rank characteristic of LoRA that makes it difficult to learn complex representations at different timesteps.” The article lacks necessary evidence to support the validity of this hypothesis; simply stating that the variance of the hidden states of blocks at different time steps is large is insufficient to illustrate the problem. Experimental Designs Or Analyses: The experimental data is sufficient, with comparisons made to the current state-of-the-art (SOTA).
Ablation experiments were conducted for the two important stages of the proposed method. Additionally, generalization validation was performed for tasks related to LoRA. Supplementary Material: The supplementary materials provide a detailed explanation of the implementation details of the method, as well as additional experimental results, and the content is very thorough. Relation To Broader Scientific Literature: From a performance perspective, it has improved the current performance of fine-tuning based on diffusion models and contributed to the advancement of LoRA. However, from a theoretical standpoint, the expansion of existing research theories is still insufficient, particularly in providing robust evidence for the hypothesis proposed in the article. Essential References Not Discussed: Not yet Other Strengths And Weaknesses: The article excels in the sufficiency of the experiments and the quality of writing and figures, which are significant highlights. However, it is somewhat lacking in theoretical exposition, particularly in the articulation of the scientific problem. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer whQF: We sincerely appreciate the time and care you invested in reviewing our manuscript. Your insightful comments and suggestions have been extremely valuable, and we are grateful for the opportunity to clarify these points. Below are our detailed responses: **Question1:** What does the hidden state of each block represent? **Answer1:** In Figure 1(a), the term “hidden state” refers to the output produced by a module after the input has been processed. For example, the curve labeled “Block26” represents the variable obtained after the input has passed through the first 26 modules of the model (noting that each transformer block typically comprises both an attention module and a feed-forward network module). We hope this explanation clarifies your query. **Question2:** What issue can be illustrated by subfigure “a” of Figure 1? How can subfigure a be used to demonstrate the existence of this problem? **Answer2:** In Figure 1(a), the curves show the progression of hidden states across various timesteps within each module. This visualization reveals that even within a single module, hidden state representations vary substantially over time. This significant variation underscores the challenge of using a single set of model parameters to capture such diverse representations consistently. Our observations are in line with similar findings discussed in previous works [1–8]. **Question3:** Why is it necessary to learn different LoRAs during the fine-tuning process when the same set of U-Net network parameters can identify different noise levels? **Answer3:** Addressing the challenge mentioned in **Answer2**—namely, that a single set of network parameters may struggle to adequately model the data distribution across different timesteps—it becomes a natural choice to introduce distinct parameters for various timestep intervals. 
Directly augmenting the full model with new parameters for each interval, however, would lead to prohibitively high computational costs. To balance efficiency and effectiveness, we employ LoRA (Low-Rank Adaptation) during fine-tuning, which allows us to efficiently adapt the network across multiple timestep intervals without incurring excessive cost.

**Question4:** The model parameters should differ for different time steps, but the parameters are locked during the inference process.

**Answer4:** Thank you for this excellent observation. To reconcile this, our approach incorporates a gating mechanism within the TSM 2-stage framework. Although the underlying network parameters remain fixed during the inference process, the gating mechanism dynamically adjusts the contribution (or weights) of multiple experts. This adaptive activation allows the model to effectively tailor its response to different timesteps. We have provided a detailed demonstration of the effectiveness of this expert ensemble using the 2-stage gating approach in Tables 4, 5, 6, and 11, and we further analyze the gating structure in Table 7.

**References**
[1] Multi-Architecture Multi-Expert Diffusion Models, AAAI 2024.
[2] eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers.
[3] Towards Practical Plug-and-Play Diffusion Models, CVPR 2023.
[4] Mixture of efficient diffusion experts through automatic interval and sub-network selection, ECCV 2024.
[5] Improving training efficiency of diffusion models via multi-stage framework and tailored multi-decoder architecture, CVPR 2024.
[6] Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts, ECCV 2024.
[7] Denoising Task Routing for Diffusion Models, ICLR 2024.
[8] Addressing Negative Transfer in Diffusion Models, NeurIPS 2023.

---

Thank you once again for your thoughtful and constructive feedback.
Your insightful suggestions have greatly enhanced our manuscript, and we have carefully integrated the corresponding experimental details and clarifications into the final version. We sincerely hope these revisions have addressed your concerns, and if you feel they have been satisfactorily resolved, we would be most grateful if you could consider revising your evaluation score accordingly. Please do not hesitate to let us know if you need any further explanations.
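For concreteness, the 2-stage gating mechanism described in Answer4 can be sketched as follows. This is a simplified pure-Python illustration with hypothetical names (`softmax`, `gated_delta`) and scalar weight deltas standing in for low-rank matrices; it is not the authors' implementation:

```python
# Hypothetical sketch of 2-stage ensembling: frozen timestep-interval LoRA
# experts each propose a weight delta, and a gating module produces per-
# timestep scores that softly mix those deltas at inference time.
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw gating scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gated_delta(expert_deltas, gate_scores):
    """Mix one scalar LoRA delta per frozen expert with softmax gating weights."""
    gates = softmax(gate_scores)
    return sum(g * d for g, d in zip(gates, expert_deltas))
```

With uniform gate scores the deltas are simply averaged, while a strongly peaked score effectively selects a single frozen expert; this is how fixed parameters can still behave differently across timesteps at inference, as the rebuttal argues.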
Summary: This paper introduces TimeStep Master (TSM), a method that employs multiple LoRA experts, each specialized in specific timestep regions. The authors empirically analyze the degradation caused by sharing LoRA parameters across all timesteps, which motivates their proposal of expert LoRAs tailored to distinct timestep ranges. Specifically, they construct specialized LoRAs by evenly dividing timesteps (or noise levels) and further refining them through multi-scale partitioning to capture coarse-to-fine timestep variations. These LoRA parameters are then dynamically routed in a mixture-of-experts (MoE) fashion, enabling effective ensembling. Claims And Evidence: I think most claims are clear, as the effectiveness of using specialized parameters for divided timesteps has already been demonstrated. Methods And Evaluation Criteria: The evaluation metrics and datasets appear reasonable, but the comparison is primarily against LoRA, leaving out several baselines. I believe their setup should include comparisons with multi-expert methods and MoE-like diffusion models. Theoretical Claims: N/A Experimental Designs Or Analyses: I think the training costs reported in Table 3 are somewhat exaggerated, but it is not a critical issue. Supplementary Material: I reviewed. Relation To Broader Scientific Literature: In this paper, the authors primarily focus on enhancing LoRA for efficient fine-tuning of diffusion models. Previous works on leveraging LoRA for efficient tuning have explored three main directions: (1) dataset design, (2) distillation objectives (e.g., reinforcement learning), and (3) LoRA architecture. This work falls under the third category, LoRA architecture, aiming to improve it by incorporating timestep division, MoE, multi-expert modeling, and multi-task learning—concepts that are extensively studied in the diffusion model literature. 
Notably, some closely related works that explore similar approaches but are not cited include DMP and Decouple-Then-Merge:
- Diffusion Model Patching via Mixture-of-Prompts (DMP), AAAI 2025
- Decouple-Then-Merge: Fine-Tuning Diffusion Models as Multi-Task Learning, CVPR 2025

These works share similarities with the proposed method and should be considered in the discussion. Essential References Not Discussed: As mentioned in Relation to Broader Scientific Literature, this work is closely related to diffusion timestep division. Therefore, it would be beneficial for the paper to explicitly discuss this topic. Multi-Experts [2, 3, 4, 5, 6]: These approaches employ specialized parameters to mitigate conflicts between denoising tasks across different timesteps, as discussed in [1]. Mixture-of-Experts (MoE) [7, 8, 9]: This technique is used in their method specifically for ensembling LoRA experts. Given these connections, it would be useful to compare the proposed approach with these existing methods. The key novelty of this work lies in multi-scale timestep division, which enables the model to leverage both broadly trained LoRA experts that capture overall denoising patterns and fine-grained LoRA experts that specialize in specific timesteps. This flexibility makes the method more adaptable to different granularity levels of denoising tasks. As long as the additional computational cost remains manageable, I believe this is a promising approach.
### Reference - [1] Addressing Negative Transfer in Diffusion Models, Neurips 2023, - [2] Multi-Architecture Multi-Expert Diffusion Models, AAAI 2024 - [3] eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers - [4] Towards Practical Plug-and-Play Diffusion Models, CVPR 2023 - [5] Mixture of efficient diffusion experts through automatic interval and sub-network selection, ECCV 2024 - [6] Improving training efficiency of diffusion models via multi-stage framework and tailored multi-decoder architecture, CVPR 2024. - [7] Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts, ECCV 2024 - [8] Scaling Diffusion Transformers to 16 Billion Parameters, - [9] EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing, ICLR 2025. Other Strengths And Weaknesses: ## Strength 1. The paper is clearly written and well-structured, making it easy to follow the motivation, methodology, and results. 2. The authors conduct a thorough experimental evaluation, demonstrating the effectiveness of their approach with various settings. 3. The concept of multi-scale timestep division for LoRA is an interesting and novel aspect. ## Weakness 1. Lack of discussions with timestep division in the diffusion model literature - This paper considers LoRA specialization based on timestep division as its core idea. However, the discussion on similar approaches in the diffusion model literature feels somewhat lacking. - Multi-expert methods [2, 3, 4, 5, 6] have proposed a conceptually similar approach by assigning specialized parameters to different timestep regions, thereby resolving conflicts between denoising tasks across timesteps. These works have demonstrated the effectiveness of such strategies [1], making them highly relevant to this paper’s methodology. Given the strong similarity between these approaches and the timestep-specialized LoRA concept, a more detailed discussion would be beneficial. 
- Furthermore, prior works have already explored timestep division for fine-tuning diffusion models in setups very similar to the one proposed in this paper. Notably, DMP (Diffusion Model Patching via Mixture-of-Prompts) [7] and Decouple-Then-Merge [8] explicitly adopt a strategy of dividing timesteps and merging them later, making them particularly relevant. Given the conceptual overlap, an empirical comparison with these works would provide a clearer positioning of this paper’s contributions. 2. Additional Training Costs for Multi-Expert Convergence - A major drawback of multi-expert methods is the increased training cost due to the need for each expert to converge on its respective timestep region. Since TimeStep Master (TSM) follows a similar paradigm by specializing LoRA experts for different timesteps, it inherently inherits this issue. However, the paper does not provide sufficient validation on how well the model mitigates this problem. To address this, it would be beneficial to include an analysis of training efficiency. One possible way to validate this is by presenting a training step vs. evaluation metric curve, similar to iteration-based learning curves, to show the additional computational overhead and how performance scales with training. 
### Reference - [1] Addressing Negative Transfer in Diffusion Models, NeurIPS 2023 - [2] Multi-Architecture Multi-Expert Diffusion Models, AAAI 2024 - [3] eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers - [4] Towards Practical Plug-and-Play Diffusion Models, CVPR 2023 - [5] Mixture of Efficient Diffusion Experts through Automatic Interval and Sub-Network Selection, ECCV 2024 - [6] Improving Training Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architecture, CVPR 2024 - [7] Diffusion Model Patching via Mixture-of-Prompts (DMP), AAAI 2025 - [8] Decouple-Then-Merge: Fine-Tuning Diffusion Models as Multi-Task Learning, CVPR 2025 Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Response to Reviewer 4dgm: Thank you for your thorough review of our paper and your valuable suggestions. Below are our point-by-point responses:

**Question1:** Some closely related works that explore similar approaches but are not cited include DMP and Decouple-Then-Merge.

**Answer1:** We sincerely appreciate your reminder.
1. We were previously unaware of the DMP work, which shares similarities with our method. We elaborate on this comparison in **Question4**.
2. Decouple-Then-Merge was published in CVPR 2025, with its release date preceding the ICML 2025 submission deadline. Unfortunately, we could not access this work during our submission period. However, we elaborate on this comparison in **Question4**, and we will include citations to both works in the final version.

---

**Question2:** Essential References Not Discussed: Multi-Experts [2,3,4,5,6]: These approaches employ specialized parameters to mitigate conflicts between denoising tasks across different timesteps, as discussed in [1]. Mixture-of-Experts (MoE) [7,8,9]: This technique is used in their method specifically for ensembling LoRA experts.

**Answer2:** Thank you for highlighting the need for broader discussion. We deeply appreciate your additional references. We discuss the differences between our method and these works in detail below; the discussion will be incorporated into the final version.

|Method|Limitations|TSM Advantages|
|-|-|-|
|[2] Multi-Architecture Multi-Expert Diffusion Models. [3] eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers. [6] Improving training efficiency of diffusion models via multi-stage framework and tailored multi-decoder architecture.|Require training multiple large models (storage/inference challenges)|Uses LoRA for **efficient** timestep interval tuning in 1-stage and ensembles knowledge in 2-stage|
|[4] Towards Practical Plug-and-Play Diffusion Models.|Applies LoRA only to guidance models|Directly integrates LoRA into both text encoder and diffusion models|
|[5] Mixture of efficient diffusion experts through automatic interval and sub-network selection.|Focuses on timestep redundancy for inference speed|Achieves **superior performance** via two-stage specialization and ensembling|
|[7] Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts. [8] Scaling Diffusion Transformers to 16 Billion Parameters. [9] EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing.|Need to train the model from scratch, which consumes resources.|TSM is compatible with existing diffusion models and continues to improve model performance.|

---

**Question3:** The discussion on similar approaches in the diffusion model literature feels somewhat lacking. Multi-expert methods [2,3,4,5,6] have proposed a conceptually similar approach by assigning specialized parameters to different timestep regions, thereby resolving conflicts between denoising tasks across timesteps.

**Answer3:** Thank you for your extensive research on our article. We have supplemented **Question2** with detailed discussions and will include them in the final version.

---

**Question4:** DMP [7] and Decouple-Then-Merge [8] explicitly adopt a strategy of dividing timesteps and merging them later, making them particularly relevant.

**Answer4:** We give a detailed comparison below; all experiments use the same training and evaluation settings.
1.
DMP
- Trains gating modules and prompt tokens simultaneously
- TSM uses **Two-Stage training**:
  - Stage 1: Specializes experts per timestep interval
  - Stage 2: Freezes experts and learns gating modules

LAION-5B Results (FID↓):

|Model|FID|
|-|-|
|SD 1.5|47.18|
|SD1.5 + DMP|35.44|
|SD1.5 + TSM (Stage1)|33.45|
|SD1.5 + TSM (Stage2)|**32.73**|

2. Decouple-Then-Merge (DeMe)
- Merges parameters via weighted averaging
- TSM uses **gating mechanisms** (see Table 7 for gating mechanism ablation) to preserve timestep-specific knowledge

MSCOCO Results (FID↓):

|Model|FID|
|-|-|
|SD 1.5|13.42|
|SD1.5 + DeMe|13.06|
|SD1.5 + TSM (1-stage)|11.98|
|SD1.5 + TSM (2-stage)|**11.92**|

---

**Question5:** Additional Training Costs for Multi-Expert Convergence. A major drawback of multi-expert methods is the increased training cost. Since TimeStep Master (TSM) follows a similar paradigm by specializing LoRA experts for different timesteps, it inherently inherits this issue.

**Answer5:** We rigorously **evaluated computational efficiency** in **Table 8**:
- **Equal Cost Settings**: Training multi-experts (e.g., `n=4, r=4, step=8k`) outperforms single-expert training (`n=1, r=4, step=32k`) under the same training cost (4✖️8k==1✖️32k)
- **Scaling Benefits**: Performance improves consistently with more training steps (e.g., `step=8k` vs. `step=4k` for `n=1, r=4`)

---

We sincerely thank you again for your insightful feedback. All suggestions and experimental results will be integrated into the final version. Please let us know if further clarifications are needed.

---

Rebuttal Comment 1.1: Comment: The authors' rebuttal well addressed my concerns, so I raised my score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 4dgm, Thank you very much for your thoughtful and constructive comments. We truly appreciate your keen insights and the careful consideration you gave to our work.
Your perspective has been invaluable in helping us refine our paper, and we are grateful for your clear vision and support.
Efficient ANN-SNN Conversion with Error Compensation Learning
Accept (poster)
Summary: This paper focuses on ANN-to-SNN conversion methods and proposes three techniques to mitigate conversion errors. The first technique, a clipping function, is introduced to replace the ReLU activation in ANNs, thereby improving their compatibility for conversion to SNNs. The second technique, the Dual Threshold Neuron, develops a novel pulse function and a membrane potential update rule for the IF neuron. The third technique involves membrane potential initialization, which provides a principled value for initializing the membrane potential. Experimental results demonstrate that these techniques enable better performance within fewer time steps and provide an analysis of the effectiveness of key settings. Claims And Evidence: The paper claims high accuracy and ultra-low latency for ANN-to-SNN conversion. These claims are supported by: Mathematical analysis of conversion errors (clipping, quantization, uneven activation). Experimental validation across multiple datasets and architectures (VGG-16, ResNet-18, ResNet-34). Comparative results showing superior performance over existing methods (e.g., QCFS, Opt, RTS). Methods And Evaluation Criteria: 1. The paper analyzes the causes of three types of errors in ANN-to-SNN conversion through case studies. 2. The paper examines the reasons for significant errors when the number of time steps T is small. It further designs solutions and validates them through experiments. Theoretical Claims: The paper presents a theoretical framework for ANN-SNN conversion, focusing on: 1. Mathematical modeling of conversion errors and their impact. 2. Optimal membrane potential initialization, derived to minimize error propagation. 3. Dual-threshold neuron mechanism, analyzed for improving activation consistency. Experimental Designs Or Analyses: The experiments are comprehensive and well-executed. The study: 1. Uses multiple network architectures (e.g., VGG-16, ResNet-18, ResNet-34). 2.
Evaluates accuracy under varying time steps (T=2, 4, 8, 16, etc.). 3. Conducts ablation studies on dual-threshold neurons and quantization methods. A potential improvement would be including real-world deployment results on neuromorphic hardware​. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: This work builds upon prior ANN-SNN conversion research. It advances the field by proposing error compensation learning, improving conversion accuracy at ultra-low latency. Essential References Not Discussed: The paper covers major ANN-SNN conversion studies. Other Strengths And Weaknesses: Strengths 1. The proposed method achieves state-of-the-art accuracy on CIFAR-10 and ImageNet, thereby further advancing the research in ANN-to-SNN conversion. 2. Innovative combination of threshold learning, dual-threshold neurons, and optimized initialization. 3. Energy efficiency analysis supports practical deployment. Weaknesses 1. Fixed hyperparameter choices without adaptive mechanisms – The method fixes thresholds and membrane potential initialization values, but allowing adaptive, data-driven tuning of these parameters might further improve accuracy and stability. 2. Potential numerical instability in deeper architectures – While the method is validated on ResNet-18/34, performance on very deep models (e.g., ResNet-50, Vision Transformers) is unclear. The scalability of the approach to deeper networks needs further exploration. Other Comments Or Suggestions: Expand on potential applications beyond static image classification. Questions For Authors: 1. How does the choice of the negative threshold value impact overall network stability and performance? 2. What guided the choice of initializing the membrane potential at half the threshold? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments and constructive feedback. Below, we address the concerns raised and clarify the contributions of our work.

## **Response to Weaknesses**

### 1. Fixed Hyperparameter Choices

While our method uses empirically determined hyperparameters (e.g., negative threshold $\theta'$ = -1e-3, membrane potential initialized at $\frac{\theta}{2}$), these choices are rigorously validated through ablation studies (Tables 3, 5, and 7). For instance, initializing $v^l(0)=\frac{\theta^l}{2}$ minimizes the expected conversion error (Eq. 8). We acknowledge the potential benefits of adaptive mechanisms. Future work will explore data-driven tuning (e.g., layer-wise threshold learning) to further enhance flexibility.

### 2. Scalability to Deeper Architectures

Our experiments focus on widely adopted networks (VGG-16, ResNet-18/20/34) for fair comparison with previous work. We further validate our method on the ResNet-50 architecture (on the CIFAR-10 and CIFAR-100 datasets); the results are shown in the table below. On CIFAR-10, our method achieves an accuracy of 96.05% at T=4, while on CIFAR-100 it achieves 80.51% with only 4 time steps, which is close to the accuracy of the source ANN. This demonstrates that our framework is robust even in deeper architectures. Moreover, the optimal membrane potential initialization and error compensation in the theoretical foundation (Section 3.4) also ensure consistent performance across network depths.

**Table 1. Accuracy of ResNet-50 on CIFAR-10 and CIFAR-100 datasets**

| Dataset | Architecture | ANN | T=2 | T=4 | T=8 | T=16 | T=32 |
| --------- | ------------ | ----- | ----- | ----- | ----- | ----- | ----- |
| CIFAR-10 | ResNet-50 | 96.44 | 90.37 | 96.05 | 96.25 | 96.17 | 96.24 |
| CIFAR-100 | ResNet-50 | 81.48 | 73.00 | 80.51 | 80.98 | 80.85 | 81.06 |

### 3. Applications Beyond Image Classification

Although our current experiments focus on VGG and ResNet architectures, the error compensation learning principles of adaptive thresholds and dual-threshold neurons are generalizable and adaptable. Adaptive thresholds and dual-threshold neurons can also be applied to event-driven tasks (e.g., video recognition, neuromorphic perception). We will explore dynamic data applications in subsequent work.

## **Response to Questions for Authors**

### 1. Impact of Negative Threshold Selection

The negative threshold $\theta^{\prime}$ = -1e-3 balances activation symmetry and stability. A smaller $\theta^{\prime}$ (e.g., -1e-4) reduces sensitivity to minor fluctuations, while a larger $\theta^{\prime}$ (e.g., -0.1) risks over-activation. Our experiments (Figure 3, Table 3) show $\theta^{\prime}$ = -1e-3 optimally mitigates uneven error without destabilizing training.

### 2. Membrane Potential Initialization at Half the Threshold

This choice stems from minimizing the expected conversion error (Eq. 8). Mathematically, initializing $v^l(0)=\frac{\theta^l}{2}$ achieves the least squares fit between ANN activations and SNN firing rates. Empirical results (Table 1, T=2) suggest that it reduces quantization error by 19.31% in ResNet-18.

---

Rebuttal Comment 1.1: Comment: Thanks for the author's responses. All my concerns have been resolved.
Summary: This paper proposes a new ANN-SNN Conversion framework by combining a learnable threshold clipping function, dual-threshold spiking neurons, and an optimal membrane potential initialization strategy. Claims And Evidence: Yes Methods And Evaluation Criteria: The design of dual-threshold spiking neuron effectively reduces conversion errors from both theoretical and experimental perspectives. Theoretical Claims: I have checked the theoretical claims in Sec. 3.1-3.4. Experimental Designs Or Analyses: I have checked experimental designs mentioned in Section 4. Supplementary Material: There is no supplementary material in the submission. Relation To Broader Scientific Literature: The research field of this work is related to the fast and efficient ANN-SNN Conversion learning in the SNN community. Essential References Not Discussed: Not found yet Other Strengths And Weaknesses: 1. As shown in Sec.3.1 and Sec.3.3, this paper systematically summarizes the conversion errors caused by different reasons and explores corresponding solutions. The error compensation learning strategy has demonstrated its superior performance within fewer time-steps. 2. To make the contribution of this work more convincing, it seems that the authors need to argue the difference between the dual-threshold neurons and the spiking models mentioned in [1, 2]. In addition, the specific difference between learnable threshold clipping function (with optimal membrane potential initialization) and QCFS function [3] may also need further discussion. 3. The applicability of this method to transformer-based SNNs may deserve further exploration. [1] Li C, et al. Quantization framework for fast spiking neural networks. Frontiers in Neuroscience, 2022. [2] Li Y, et al. Spiking neural networks with consistent mapping relations allow high-accuracy inference. Information Sciences, 2024. [3] Bu T, et al. Optimal ANN-SNN conversion for high-accuracy and ultra-low-latency spiking neural networks. ICLR 2022. 
Other Comments Or Suggestions: See Strengths And Weaknesses Section. Questions For Authors: See Strengths And Weaknesses Section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments and constructive feedback. Below, we address the concerns raised and clarify the contributions of our work. ### 1. Difference Between Dual-Threshold Neurons and Spiking Models in [1, 2] Thanks for your feedback. Our design is fundamentally different from the spiking models in [1, 2]. While these approaches adjust the firing threshold or employ additional bias terms to mitigate transition errors, our dual-threshold neuron explicitly distinguishes between over-release at positive membrane potentials and under-utilization at negative membrane potentials. As discussed in Sections 3.3 and 4.2, this design not only compensates for quantization errors but also addresses the problem of non-uniform errors, a challenge that becomes critical at ultra-low latency (e.g., 2 time steps). Our dual-threshold neuron is theoretically demonstrated in Section 3.3 and empirically validated in Tables 3 and 5, showing significant accuracy improvements (e.g., 19.31% improvement over ResNet-18 at T=2 on CIFAR-10), highlighting the effectiveness of our approach compared to previous models. ### 2. Difference Between Learnable Threshold Clipping and QCFS [3] The learnable threshold clipping function in our work (augmented with an optimal membrane potential initialization strategy) is significantly different from the QCFS function [3]. QCFS focuses on reducing the translation error by optimizing the activation quantization process. However, our approach employs the learnable threshold that maps directly to the SNN firing threshold. This ensures that the clipping error is minimized in a unified framework, and importantly, our initialization (as derived in Section 3.4) minimizes the expected squared translation error by setting the initial membrane potential to half the threshold. QCFS does not address this joint consideration of clipping, quantization, and non-uniform errors. 
Our approach achieves superior performance (e.g., 94.75% accuracy at T=2, compared to 75.44% for QCFS in Table 1). This adaptability is critical for ultra-low-latency SNNs, as shown in Section 4.1.

### 3. Applicability to Transformer-Based SNNs

While our current experiments focus on VGG and ResNet architectures, the principles of error compensation learning, namely the adaptive thresholds and dual-threshold dynamics, are generalizable: both can be applied to the spiking neuron models used in Transformer-based SNNs. We acknowledge that Transformer-based SNNs pose unique challenges (e.g., attention mechanisms) and will explore this direction in future work.
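To make the distinction concrete, here is a minimal, self-contained toy sketch of the two mechanisms discussed above. It is illustrative only: the quantized clipping function is a QCFS-style stand-in rather than the paper's learnable version, and the exact dual-threshold dynamics in the paper may differ from this simplified formulation.

```python
import numpy as np

def quantized_clip(x, theta, L):
    # QCFS-style quantized clipping (an illustrative stand-in for the
    # paper's learnable threshold clipping function): clip to [0, theta]
    # and quantize into L levels with rounding.
    return (theta / L) * np.clip(np.floor(x * L / theta + 0.5), 0, L)

def dual_threshold_rate(x, theta, T, v0):
    # Toy dual-threshold IF neuron with soft reset. A positive spike is
    # emitted when v >= +theta (over-release compensation); a negative
    # spike retracts earlier output when v < 0 (under-utilization
    # compensation). The paper's exact dynamics may differ.
    v, spikes = v0, []
    for _ in range(T):
        v += x                      # constant input current each step
        if v >= theta:
            v -= theta
            spikes.append(1)
        elif v < 0 and sum(spikes) > 0:
            v += theta
            spikes.append(-1)
        else:
            spikes.append(0)
    return theta * sum(spikes) / T  # firing rate ~ ANN activation

theta, T = 1.0, 4
x = 0.6
rate = dual_threshold_rate(x, theta, T, v0=theta / 2)
print(rate, quantized_clip(x, theta, T))   # prints: 0.5 0.5
```

With the half-threshold initialization `v0 = theta / 2`, the T-step firing rate matches the rounded (rather than floored) quantized activation, which is the translation-error reduction the rebuttal refers to.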
Summary: This paper proposes an efficient ANN-to-SNN conversion method that significantly reduces conversion errors and inference latency. The key contributions include:
- A learnable threshold clipping function to mitigate clipping errors.
- Dual-threshold neurons to dynamically reduce quantization errors.
- Optimized membrane potential initialization to minimize unevenness errors.

The approach achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, and ImageNet with only a few time steps, making it practical for low-power hardware applications.

Claims And Evidence: The paper claims that the proposed ANN-to-SNN conversion achieves high accuracy and ultra-low latency, with experimental results showing competitive accuracy using just two time steps (e.g., 94.75% on CIFAR-10 with ResNet-18). The claims are supported by detailed mathematical formulations and empirical evaluations on benchmark datasets.

Methods And Evaluation Criteria: The methods and evaluation criteria are well-aligned with the problem domain. The study:
- uses the CIFAR-10, CIFAR-100, and ImageNet datasets, which are standard for ANN-SNN conversion;
- compares performance with existing state-of-the-art methods, including RMP, TSC, RTS, Opt, QCFS, and SNNC-AP;
- evaluates accuracy at different time steps to assess efficiency.

Theoretical Claims: The paper derives mathematical formulations for ANN-SNN conversion errors (clipping, quantization, and unevenness). The proofs appear correct, and the derivation of the optimal membrane potential initialization (reducing conversion errors) follows a logical framework.

Experimental Designs Or Analyses: The experiments are well-structured, using widely recognized benchmarks and multiple ANN architectures. However, additional ablation studies on different network depths could further strengthen the conclusions.
Supplementary Material: None

Relation To Broader Scientific Literature: This paper builds on prior ANN-SNN conversion techniques.

Essential References Not Discussed: The paper cites relevant ANN-SNN conversion works but does not explicitly mention some recent hybrid training techniques that combine backpropagation-through-time (BPTT) with ANN-SNN conversion. Including such references would provide a more comprehensive background.

Other Strengths And Weaknesses:
Strengths
1. Novel approach combining error compensation learning with ANN-SNN conversion.
2. Achieves high accuracy at ultra-low latency (T=2).
3. Comprehensive comparison with state-of-the-art methods.
4. Strong theoretical foundation and experimental validation.

Weaknesses
1. Computational overhead of dual-threshold neurons: the introduction of dual-threshold mechanisms could increase computation, which might reduce efficiency on resource-constrained hardware. A quantitative study on computational costs would be useful.
2. Limited discussion of generalization beyond image classification: the study focuses on static image tasks, but ANN-SNN conversion is also crucial for event-driven data, speech recognition, and reinforcement learning. A discussion of how the method generalizes to such tasks would be beneficial.

Other Comments Or Suggestions: In the reference list, many entries are still in the arXiv format even though those works have been officially published. Please make sure to update them accordingly.

Questions For Authors:
1. How does the learnable threshold clipping function adapt across different network depths?
2. How does the clipping function compare to other activation function alternatives, such as threshold scaling techniques?
3. Does the dual-threshold neuron introduce additional computational overhead during inference?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments and constructive feedback. Below, we address the concerns raised and clarify the contributions of our work.

## **Response to Weaknesses**

### 1. Computational Overhead of Dual-Threshold Neurons

Thanks for your feedback. We find that the additional threshold adds little computational cost per neuron due to the event-driven nature of SNNs. As shown in Table 1, we evaluated the computational cost in terms of energy consumption and synaptic operations (SOPs) to quantify the overhead of dual-threshold neurons. The results show that dual-threshold neurons introduce almost no extra energy consumption at small time steps (T = 2 and 4), and only limited extra energy consumption at larger time steps. Therefore, our method significantly reduces energy consumption compared to standard ANNs while maintaining competitive accuracy at ultra-low latency.

**Table 1. Computational Overhead of Dual-Threshold Neurons using VGG-16 on CIFAR-100 dataset**

| | | ANN | T=2 | T=4 | T=8 | T=16 | T=32 |
| - | - | - | - | - | - | - | - |
| **without Dual-Threshold Neuron** | Energy (mJ) | 4.170 | 0.004 | 0.007 | 0.015 | 0.031 | 0.052 |
| **with Dual-Threshold Neuron** | Energy (mJ) | 4.170 | 0.004 | 0.008 | 0.017 | 0.034 | 0.068 |

### 2. Generalization Beyond Image Classification

We appreciate the suggestion to discuss how our method extends beyond static image classification tasks. Our conversion approach is designed to be broadly applicable, and the learnable threshold clipping function, dual-threshold neurons, and optimized membrane potential initialization can benefit event-driven data processing tasks, such as neuromorphic vision processing, speech recognition, and reinforcement learning. Although this work focuses on image classification benchmarks for fair comparison with prior ANN-SNN conversion methods, our approach can be extended to other spatiotemporal data.
Specifically, the dual-threshold mechanism helps mitigate quantization errors in temporally sparse spike sequences, which is beneficial for event-driven data processing. We will explore these applications in subsequent work.

### 3. Reference Formatting

We acknowledge the oversight regarding outdated references and have updated the bibliography in the revision to reflect the officially published versions of cited works where applicable.

## **Response to Questions for Authors**

### 1. How does the learnable threshold clipping function adapt across different network depths?

Thanks for your feedback. The learnable threshold clipping function is trained alongside the ANN's weights and adjusts dynamically per layer. During training, it learns to match the optimal activation statistics of each layer, ensuring smooth ANN-to-SNN conversion. For deeper networks, the function effectively scales the thresholds based on the range of the activation distributions, which helps mitigate conversion errors. Empirical results demonstrate that this adaptation enables high accuracy even for deeper models like ResNet-34 on ImageNet (e.g., for VGG-16 at T=16, our method achieves 74.15% accuracy, 23.18% higher than QCFS; for ResNet-34, we achieve 72.37% accuracy with only 16 time steps).

### 2. How does the clipping function compare to other activation function alternatives, such as threshold scaling techniques?

Thanks for your feedback. Our learnable threshold clipping function provides a more flexible and data-driven alternative to static threshold scaling. Unlike predefined threshold adjustments (e.g., RMP, Opt), our function dynamically learns the optimal threshold mappings, reducing reliance on manual heuristics. We conducted additional experiments comparing our method with existing threshold scaling approaches, such as RTS and QCFS.
Using the ResNet-18 model on the CIFAR-10 dataset, our method achieves an accuracy of 94.75% with only two time steps, 19.31% higher than QCFS, while RTS achieves an accuracy of 84.06% at T=32. The results show that our approach consistently outperforms prior methods across different datasets and network architectures while maintaining ultra-low latency.

### 3. Does the dual-threshold neuron introduce additional computational overhead during inference?

As mentioned earlier, the dual-threshold mechanism slightly increases per-neuron computation. However, since SNNs are event-driven and avoid redundant computations, the overall increase is negligible compared to conventional ANN operations. We measured the impact on inference latency and found that our method achieves faster convergence with fewer time steps (e.g., T=2 on CIFAR-10 with ResNet-18 at 94.75% accuracy). This efficiency gain compensates for the minor computational overhead. Additionally, the power consumption analysis (Table 1) suggests that our method maintains a highly efficient energy profile.
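As a supplement to the responses above, the half-threshold membrane potential initialization invoked throughout these rebuttals can be sanity-checked numerically. The sketch below is a toy model under stated assumptions, not the paper's derivation: a soft-reset IF neuron with constant per-step input, where the T-step spike count is clip(floor((T·x + v0)/theta), 0, T), and inputs x are taken uniformly over [0, theta]. A grid search over the initial potential v0 then recovers a minimizer of the mean squared conversion error near theta/2.

```python
import numpy as np

# Toy check of half-threshold membrane potential initialization:
# for a soft-reset IF neuron driven by a constant input x per step,
# the T-step spike count is clip(floor((T*x + v0)/theta), 0, T).
# We grid-search v0 to minimize the mean squared error between the
# firing rate theta*n/T and the target activation x, with x sampled
# uniformly from [0, theta] (an illustrative assumption).
theta, T = 1.0, 8
xs = np.linspace(0.0, theta, 10001)

def mse(v0):
    n = np.clip(np.floor((T * xs + v0) / theta), 0, T)
    return np.mean((theta * n / T - xs) ** 2)

v0_grid = np.linspace(0.0, theta, 101)
best_v0 = v0_grid[np.argmin([mse(v0) for v0 in v0_grid])]
print(best_v0)   # close to theta/2 = 0.5
```

Intuitively, v0 = theta/2 turns floor-quantization into round-to-nearest quantization, whose expected squared error over a uniform input is minimal.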
Zero-Shot Offline Imitation Learning via Optimal Transport
Accept (poster)
Summary: This paper proposes ZILOT, an MPC-style inference-time trajectory optimization technique that can learn from a single incomplete, state-only expert demonstration, given a dynamics model pretrained on an unlabeled state-action dataset. The paper theoretically proves that a prior method, which uses a goal recognizer with a goal-conditioned policy, encounters myopia and is suboptimal. This paper instead uses MPC, with the Sinkhorn distance between the state occupancies of the rollout policy and the expert demonstration as the cost optimization objective. The distance metric between two states is the expected number of steps from one state to the other, learned through a goal-conditioned value function. The optimal transport between occupancies is discretized into a matching between states. To determine whether expert states are reachable, another value function is learned to provide an estimate. On several testbeds, the proposed method outperforms existing baselines.

## Update After Rebuttal

I have updated my score accordingly during the author-discussion period; no further update.

Claims And Evidence: Overall, I feel the claim "zero-shot" is somewhat strange to me; I think this is one of my major concerns for this paper. In this paper, the definition of "zero-shot" seems to be "(a single) demonstration devoid of actions (which is rough and partial), such as specifying only a few checkpoints" (Sec. 1). However:

1. In one of the zero-shot prior works the authors refer to in Sec. 1, Pathak et al. [1], the definition of zero-shot is "the agent never has access to expert actions during training or for the task demonstration at inference" (as specified in the abstract). This is, however, not the case in this paper; $\mathcal{D}\_\beta$ contains actions as specified in Sec. 2.1.
2. In another paper the authors refer to in Sec. 1, Pirotta et al. [2], the authors write in their Sec. 1: "1) ...
no prior knowledge or demonstrations of the behavior to be imitated are available, and only a dataset of unsupervised trajectories is provided (which is the case for this paper); 2) ... solve any imitation task without any additional samples on top of the demonstrations, and without solving any complex RL problem (which means offline, but not necessarily without actions) ... computation needed to return the imitation policy should be minimal (which is questionable for this work)".
3. In "zero-shot" reinforcement learning [3], which Pirotta et al. [2] refer to (as Pirotta et al. did not formally define the word "zero-shot"), the definition of zero-shot is "solve any RL task in a given environment, instantly with no additional planning or learning, after an initial reward-free learning phase", which is also not satisfied by this paper.
4. The authors also use LLMs' in-context learning ability as an example of zero-shot capabilities at the beginning of this paper. However, in-context learning involves no training related to the problem, which is different from this paper, which has a world-model training stage.
5. According to the authors' definition in this paper (state-action offline dataset + state-only incomplete / goal-only trajectories), there are many other papers that fit this definition under the name of "learning from observations", such as IQLearn [4], TAILO [5], PW-DICE [6], SMODICE [7], SAIL [8], RCE [9] (which is goal-based IL), and AILOT [10]. The authors seem to have only cited SMODICE in the appendix as an example of the forward-backward framework.

Therefore, I feel the word "zero-shot" is not well-defined in this paper.

**References**

[1] D. Pathak et al. Zero-Shot Visual Imitation. In ICLR, 2018.
[2] M. Pirotta et al. Fast Imitation via Behavior Foundation Models. In ICLR, 2024.
[3] A. Touati et al. Does Zero-Shot Reinforcement Learning Exist? In 34th Offline Reinforcement Learning Workshop at Neural Information Processing Systems, 2022.
[4] D.
Garg et al. IQ-Learn: Inverse soft-Q Learning for Imitation. In NeurIPS, 2021.
[5] K. Yan et al. A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories. In NeurIPS, 2023.
[6] K. Yan et al. Offline Imitation from Observation via Primal Wasserstein State Occupancy Matching. In ICML, 2024.
[7] Y. J. Ma et al. Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching. In ICML, 2022.
[8] F. Liu et al. State Alignment-based Imitation Learning. In ICLR, 2020.
[9] B. Eysenbach et al. Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification. In NeurIPS, 2021.
[10] M. Borbin et al. Align Your Intents: Offline Imitation Learning via Optimal Transport. In ICLR, 2025.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria overall make sense for the problem. There are a few possible concerns and improvements though:

**Concerns**

In the paper, the authors try to estimate the state occupancy using $\rho^\pi_N\approx\frac{1}{N+1}\sum\_{t=0}^N\delta\_{s\_t}$. While empirically, works such as OTR [1] mentioned in the paper have demonstrated the success of this discretization, it is worth noting that the estimation of optimal transport between distributions from discrete samples is very inaccurate [2, 3]; in fact, one needs exponentially many samples with respect to the number of state dimensions to accurately estimate the Wasserstein distance between distributions. Thus, the chain of approximations from Eq. 12 to the end of "occupancy estimation" does not sound convincing enough.

**Improvements**

1. The paper did not specify how the "world model" is trained from the offline dataset, which I feel is an important part of the method (as the authors indicated in Appendix C.4, it seems to follow prior work, but the paper should still introduce the method and its implementation in this paper to be self-contained).
Moreover, the paper did not specify what a goal abstraction $\phi(\cdot)$ is and how it is implemented in the proposed method or the baselines. (Given this, it is unexplained, in the objective right above Eq. 18, why $W(\phi(s);\phi(s'))$ has the goal abstraction on both state inputs while $V(s,\phi(s'))$ only has the goal abstraction on the second state input.) This is another major concern of mine.
2. It would be better if the authors could show the results of ZILOT on more environments (Walker2d, Ant, AntMaze, Franka Kitchen, etc.), which usually accompany the HalfCheetah environment in the literature.

**References**

[1] Y. Luo et al. Optimal Transport for Offline Imitation Learning. In ICLR, 2023.
[2] J. Stanczuk et al. Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance). arXiv:2103.01678.
[3] J. Weed et al. Sharp Asymptotic and Finite-Sample Rates of Convergence of Empirical Measures in Wasserstein Distance. arXiv:1707.00087.

Theoretical Claims: The paper has one theoretical claim, in Sec. 3, about the suboptimality of the method proposed in Pathak et al. [1]. The authors claim that there exist a controllable Markov chain and a sequence of goals such that the existing method, even with an optimal solution, cannot reach all of them. The authors give a simple counterexample as a proof, and it seems correct to me. To make it more rigorous, I would suggest adding "almost surely" or "with probability 1" to the conclusion "will not reach all goals in the sequence".

**Reference**

[1] D. Pathak et al. Zero-Shot Visual Imitation. In ICLR, 2018.

Experimental Designs Or Analyses: Yes, I think the experiments are generally solid, as many details as well as visualizations are given in the appendix. The results seem reasonable and show that the proposed method outperforms several baselines; ablations are presented in Sec. 5.3. Many environment variants are tested in Tab. 2, which I feel is sufficient to demonstrate the performance of the proposed method.
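To make the finite-sample concern raised under Methods And Evaluation Criteria concrete, here is a minimal numpy sketch of the sample-based entropic OT (Sinkhorn) estimate between two empirical state occupancies. It is illustrative only: uniform empirical weights and a squared-Euclidean ground cost stand in for the paper's learned goal-conditioned metric, and all names and data are hypothetical.

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=0.5, iters=300):
    # Entropy-regularized OT between two empirical measures with
    # uniform weights; plain (non-log-domain) Sinkhorn iterations.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # ground cost
    K = np.exp(-C / eps)                                # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    u = np.ones(len(X))
    for _ in range(iters):                              # fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                     # transport plan
    return float((P * C).sum())                         # regularized cost

rng = np.random.default_rng(0)
rollout = rng.normal(0.0, 1.0, size=(64, 2))  # sampled "policy" states
expert = rng.normal(0.5, 1.0, size=(16, 2))   # sparse expert checkpoints
near = sinkhorn_cost(rollout, expert)
far = sinkhorn_cost(rollout, expert + 3.0)    # shifted expert costs more
print(near, far)
```

The cost grows as the expert samples move away from the rollout occupancy, which is the signal the planner exploits; the reviewer's point is that how faithfully this finite-sample value tracks the true occupancy distance degrades with state dimension.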
Supplementary Material: I have checked the whole appendix. I found it to be very detailed; the authors provide many extra experimental results, implementation details, and visualizations, which greatly helps the reader understand the behavior of the algorithm and increases the reproducibility of the proposed algorithm. The paper does not have supplementary material beyond the appendix.

Relation To Broader Scientific Literature: This paper is beneficial for the Reinforcement Learning (RL) / Imitation Learning (IL) community and the robotics community. It does not have significant impact for researchers outside these communities.

Essential References Not Discussed: I suggest the authors check "learning from observations" in the imitation learning literature, as mentioned in "Claims and Evidence", point 5. Currently, I feel multiple papers are missing from the literature discussion. Also, I would suggest the authors discuss the relation of this work with the ICVF paper [1], which also uses a goal-conditioned value function and aims to learn dynamics through a value function from action-free data, similar to the goal-conditioned value function in this paper.

**References**

[1] D. Ghosh et al. Reinforcement Learning from Passive Data via Latent Intentions. In ICML, 2023.

Other Strengths And Weaknesses:

**Other Strengths**

Overall, I feel the high-level idea of this paper is clearly conveyed and easy to follow. The shortcomings of the prior methods are clearly illustrated in Fig. 1 and rigorously stated afterwards in Sec. 3.

**Other Weaknesses**

The method needs to solve an optimization problem at every step, which is inefficient; as the authors note in Appendix C.3, the method only runs at 0.5 to 3Hz, which is too slow for practical use (and even slower in simulation).

Other Comments Or Suggestions: See the other parts of the review.

Questions For Authors: I have the following question for the authors:
Is the proposed method robust against dynamics mismatch between the expert demonstration and the agent? I would expect the method to be robust, given that it can work with incomplete state-only trajectories and uses the Wasserstein distance to match those states. An example of such an experiment can be found in Appendix H of the SMODICE [1] paper.

**References**

[1] Y. J. Ma et al. Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching. In ICML, 2022.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

# Definition of zero-shot

We would like to clarify our definition of “zero-shot”. _We refer to a method as “zero-shot” if it retrieves an optimal policy for unseen objectives provided at test-time, with modest compute overhead_. “Zero-shot” methods may be allowed a compute-heavy pre-training phase, which should not be informed of the downstream task. This definition aligns with [1], which is thus a method for zero-shot IL. Similarly, [2] and [3] perform zero-shot IL and RL, respectively. IQLearn, TAILO, PW-DICE, SMODICE, AILOT, and the majority of offline IL methods, would not qualify as zero-shot IL methods, as they need the objective (in the case of IL, the demonstration) to be available during pre-training.

Existing definitions do not specify what constitutes a “modest compute overhead”. In [1], “zero-shot” methods require <5 minutes to imitate a single trajectory, while non-zero-shot methods require >3 hours (fig. 2 in [1]). ZILOT takes <4 minutes to imitate a trajectory in our tasks, which is comparable to more involved FB-IL variants, and orders of magnitude faster than non-zero-shot methods, e.g., BC. We would thus argue that ZILOT performs zero-shot IL.

This definition of “zero-shot” depends on when demonstrations are provided, not on the information they contain. They may contain actions (as for FB-IL-BC), or may not. We can qualify this distinction: we _may define a method to be “learning from observations” if it imitates demonstrations that do not contain actions_. ZILOT and [2] would then belong to this category. For both, an action-labeled offline dataset (not demonstrations, but arbitrary trajectories) is provided during pre-training to convey action semantics.

We thank the reviewer for encouraging this discussion, which we hope clarifies our definitions. We will update our submission accordingly.
# Concerns - Finite sample approximation of OT

We acknowledge that our occupancy matching objective is sample-based and thus subject to approximation errors. As mentioned, for both OTR and our work, this approach performs well empirically. We hypothesize that this might be due to the adoption of entropy regularization [4], or to a low dimensionality (as defined in [5]) of the occupancy under smooth dynamics in discrete time.

# Suggested Improvements

## World Model Training

We will expand Appendix C.2 with training objectives for our practical choice of world model (TD-MPC2), and refer to it prominently in the main text.

## Goal Abstraction

We introduce the general concept of a goal abstraction in Sec. 2.1 and specify the exact functions used for each environment we evaluate in Table 5 in the Appendix (e.g., for Fetch we use the cube pose). As is standard in GC-RL, the goal abstraction is known [6]. ZILOT’s planning objective (Eq. 17) matches the sampled state occupancy to the expert goal occupancy, and thus uses a standard GCVF $V(s, g)$. Eq. 18 is used for estimating the time the expert required for the demonstration; as the demonstration only contains goals, it requires a separate value function estimating distances between pairs of goals ($W(g, g’)$). It is only used as a heuristic for selecting the part of the expert trajectory to match at each step, which is necessary for finite-horizon optimization.

## Extra Evaluation Environments

Please refer to our response to Reviewer LRC1, where we motivate our choice of environments and provide evaluations on the Walker environment.

# Other Weaknesses - Running Time

We would like to clarify that the reported planning frequencies were recorded on dated hardware (GTX 2080 TI). On modern hardware (RTX 4090) the planning frequencies are at least 2-4Hz depending on the problem size. We will update these numbers in the manuscript.
Further, we expect that the planning frequency could easily be doubled with a more efficient implementation of the Sinkhorn algorithm using JIT compilation [7] or specialized kernels [8].

# Question on Expert mismatch

As our expert demonstrations may be rough and partial, our method can imitate expert demonstrations sourced from a slightly different embodiment (e.g., the new evaluations on Walker reuse the expert demonstrations for Cheetah).

---

We hope this addresses your questions, and we are happy to elaborate on points that were inadequately covered due to character limitations.

References:
1. Pirotta et al. Fast Imitation via Behavior Foundation Models. ICLR ‘24
2. Pathak et al. Zero-Shot Visual Imitation. ICLR ‘18
3. Touati et al. Learning One Representation to Optimize All Rewards. NeurIPS ‘21
4. Genevay et al. Sample Complexity of Sinkhorn Divergences. AISTATS ‘19
5. Weed et al. Sharp Asymptotic and Finite-Sample Rates of Convergence of Empirical Measures in Wasserstein Distance.
6. Andrychowicz et al. Hindsight Experience Replay. NeurIPS ‘17
7. Cuturi et al. Optimal Transport Tools (OTT). arXiv:2201.12324
8. Feydy et al. Interpolating between Optimal Transport and MMD using Sinkhorn Divergences. AISTATS ‘19

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal; I think it addresses most of my concerns, especially the major ones such as the definition of zero-shot and the world model. I will now increase my score from 2 to 3, though I would still like to point out that the planning frequency seems quite slow for real-world tasks and remains a limitation of the proposed method given the authors' response.

---

Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for raising their score. We agree that ZILOT’s current inference speed represents a limitation. However, current trends suggest that this limitation may be resolved.
First, as noted in our earlier response, more efficient implementations of the Sinkhorn algorithm have been developed and could potentially offer up to a 2× speedup. Second, the performance improvements we observed when transitioning from older to newer hardware suggest that further 2× gains in inference speed are likely over the next 1-2 years as hardware continues to advance. Finally, policies operating at 2-10Hz are already effective as high-level controllers in robotics applications. For instance, both OpenVLA [1] and Diffusion Policy [2] operate within this range.

[1] Kim et al. OpenVLA: An Open-Source Vision-Language-Action Model. CoRL ‘24
[2] Chi et al. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. RSS ‘23
Summary: - The paper concerns zero-shot offline model-based imitation learning - A new method is proposed based on Optimal Transport to match the occupancy measure of the learning and expert policies - Experiments conducted on three tasks (Fetch, HalfCheetah, PointMaze), comparing the proposed method with existing baselines Claims And Evidence: - The main claim is that the proposed method optimizes an occupancy matching objective using Optimal Transport. In terms of methodological contribution, the approach appears novel. Using Optimal Transport for occupancy matching is a sound idea. - The paper claims to address a zero-shot imitation learning problem, but it is restricted to model-based, finite-horizon MDPs. The broader literature on imitation learning includes several efficient model-free algorithms, which limits the contribution of this work. - The experiments are limited and insufficient to fully support the main claims. Only three tasks are considered, while MuJoCo itself contains multiple tasks, yet the authors only evaluate on HalfCheetah. Methods And Evaluation Criteria: - The method of using Optimal Transport for occupancy matching is reasonable. - Besides Optimal Transport, are there alternative ways to optimize occupancy matching, such as using KL divergence? Can the authors comment on this? - The main problem formulation is restricted to finite-horizon MDPs. Can the proposed method be extended to infinite-horizon models? - The occupancy estimation relies on sampling, which can introduce high errors and impracticalities. Additionally, the method requires dynamics estimation, which demands significant data and introduces further estimation errors. - The proposed method is specifically tailored to model-based settings. Can it be extended to model-free settings, which are often more practical? - As mentioned, the experimental evaluation is **weak**—only three simple tasks are considered. Why were other MuJoCo tasks not utilized for evaluation? 
Theoretical Claims: The paper provides minimal theoretical claims and proofs. Experimental Designs Or Analyses: The experimental setup is reasonable, but as previously mentioned, the experiments themselves are weak. The authors should consider extending the study with more challenging tasks typically used in prior imitation learning research. Supplementary Material: - I have reviewed the appendix, which provides some useful information. - However, the supplementary material is unnecessarily long—Appendix E contains numerous figures, some of which do not seem essential. The authors should consider shortening this section. Relation To Broader Scientific Literature: The contributions relate to reinforcement learning, imitation learning, zero-shot learning, and optimal transport. The approach has potential applications in fields where imitation learning is valuable, such as healthcare and autonomous driving, particularly when datasets are limited Essential References Not Discussed: The paper should discuss more relevant work on model-free imitation learning and inverse reinforcement learning. These could suggest alternative approaches for handling model-free, infinite-horizon models and different ways to optimize occupancy matching. Other Strengths And Weaknesses: - The approach is sound, and the use of Optimal Transport is an interesting direction. - The primary weakness is the experimental evaluation. This undermines the credibility of the claims and makes the paper not yet ready for publication. Other Comments Or Suggestions: Please see my questions Questions For Authors: - Can the authors clarify why the proposed method is restricted to model-based settings? Could it be adapted for model-free imitation learning? - What are the main advantages of using Optimal Transport over other occupancy matching techniques, such as KL divergence? - Given that the formulation applies to finite-horizon MDPs, how could this be extended to infinite-horizon settings? 
- The occupancy estimation relies on sampling, which can introduce high errors. Have the authors considered methods to reduce estimation errors? - How does the method perform in more complex tasks beyond the three considered in the experiments? Why were additional MuJoCo tasks not included? - How does this work compare to prior model-free approaches in imitation learning and inverse reinforcement learning? - Could the authors discuss the scalability of their approach when applied to larger or more complex domains? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your assessment and valuable feedback. We first address your main concern, our empirical evaluation, and then your other questions.

# Q5 on further experiments

Our experiments are chosen to be representative of the 3 most common types of MDPs found in robotics: manipulation (Fetch), navigation (Pointmaze), and locomotion (Cheetah). Instead of including similar environments, we focused on evaluating a diverse set of tasks in each environment (see Table 1), in order to cover the full complexity of possible behaviors. We have performed evaluations on additional Mujoco tasks, i.e., Walker with the same tasks as in Cheetah, and found the results to be strongly consistent with Cheetah:

|Task|$W_{\min}$ $\downarrow$|||GoalFraction $\uparrow$|||
|-|-|-|-|-|-|-|
||Pi+Cls|MPC+Cls|ZILOT (ours)|Pi+Cls|MPC+Cls|ZILOT (ours)|
|walker-backflip|2.804±0.056|1.737±0.146|**1.273±0.205**|0.34±0.07|**0.89±0.03**|**0.92±0.06**|
|walker-backflip-running|3.039±0.292|2.444±0.189|**1.709±0.093**|0.49±0.07|0.70±0.08|**0.81±0.09**|
|walker-frontflip|2.688±0.400|1.830±0.185|**1.551±0.086**|0.57±0.16|**0.94±0.04**|**0.95±0.07**|
|walker-frontflip-running|2.597±0.265|**1.937±0.172**|**1.921±0.149**|0.55±0.03|**0.63±0.16**|**0.76±0.11**|
|walker-hop-backward|1.447±0.076|**0.872±0.032**|**0.836±0.100**|0.64±0.11|**0.78±0.06**|**0.84±0.07**|
|walker-hop-forward|0.932±0.098|0.663±0.071|**0.467±0.044**|0.95±0.05|**0.99±0.01**|**1.00±0.01**|
|walker-run-backward|1.290±0.148|**1.050±0.086**|**0.957±0.111**|**0.81±0.08**|**0.83±0.14**|**0.84±0.09**|
|walker-run-forward|1.180±0.105|0.954±0.079|**0.672±0.058**|0.86±0.07|**0.97±0.03**|**0.99±0.01**|
|walker-all|1.997±0.047|1.436±0.049|**1.174±0.061**|0.65±0.03|0.84±0.02|**0.89±0.04**|

Our evaluation is also similar to prior work on zero-shot imitation from offline data. [2] evaluates different tasks in one manipulation environment. [1] uses four embodiments, all of which are locomotion tasks.
In comparison, our evaluation is richer.

# Q1 model-free IL adaptation

To the best of our knowledge, zero-shot distribution matching under OT has not been explored with model-free methods. The model-based component allows us to accurately predict and optimize finite-horizon occupancies. A model-free variant optimizing a similar OT objective could use techniques from [1], which we found, however, to be less data efficient (Section 5.2). Thus, a model-free variant represents an interesting research direction.

# Q2 on OT vs. other distances

The main advantages of OT over f-divergences (e.g., KL) are that (1) it is more robust to empirical approximation (line 100, col 2), and (2) it can incorporate the underlying geometry of the space(s), which allows us to use an MDP-specific (learned) metric, the goal-conditioned (GC) value function in our case (line 210, col 2).

# Q3 on finite horizon planning

We acknowledge that our method relies on finite-horizon planning in practice; as we show empirically, this is sufficient to largely avoid myopic behavior. An extension would be possible leveraging occupancy estimates (e.g., as in [1]); however, these techniques did not perform well in our evaluations (Section 5.2).

# Q4 on sampling errors

Estimation errors from sampling did not seem to impact performance in our experiments, especially as the support of the future state distribution is often modest. If the learned world model is non-deterministic, one may sample multiple trajectories from it to get a better estimate of the future state distribution.

# Q6 on the comparison to IL and IRL

The large majority of methods for imitation learning and inverse RL (e.g., GAIL, IQLearn, DICE, OTR) are not zero-shot, and require demonstrations to be provided in advance, at training time. After pre-training, ZILOT is instead capable of imitating unseen trajectories. To the best of our knowledge, only two model-free approaches can achieve the same.
The first approach [2] is, however, fundamentally myopic (Section 3), and is considered as a baseline (Cls+Policy in Table 1). The second approach [1] is model-free, zero-shot, and non-myopic; however, we find it to be less data efficient and to underperform in our evaluation (Section 5.2). Thus, ZILOT distinguishes itself from existing model-free approaches because it is zero-shot, non-myopic, and data-efficient.

# Q7 on scalability

ZILOT relies on a GC value function and a world model. Both of these are well-studied objects in the field, and we expect ZILOT to scale as these components improve.

---

We hope this fully addresses your questions and comments, and we are happy to expand on any answers that had to be short due to character limitations.

References:
1. Pirotta et al. Fast Imitation via Behavior Foundation Models. ICLR '24
2. Pathak et al. Zero-Shot Visual Imitation. ICLR '18
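Since Q2 hinges on an entropic OT objective with a task-specific ground metric, a minimal sketch may make the discussion concrete. The following is an illustrative NumPy implementation of Sinkhorn iterations between two toy empirical occupancies; the Euclidean ground cost stands in for the learned goal-conditioned metric mentioned in the rebuttal, and all names and constants are invented for illustration, not taken from the paper.

```python
import numpy as np

def sinkhorn_cost(C, eps=0.5, n_iters=200):
    """Entropic OT cost between two uniform empirical measures with cost matrix C."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):         # alternating Sinkhorn projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan (rows ~ a, cols ~ b)
    return float(np.sum(P * C))

# Toy data: learner's predicted states vs. expert's goal occupancy in R^2.
rng = np.random.default_rng(0)
learner = rng.normal(0.0, 1.0, size=(50, 2))
expert = rng.normal(0.5, 1.0, size=(40, 2))
C = np.linalg.norm(learner[:, None, :] - expert[None, :, :], axis=-1)
cost = sinkhorn_cost(C)
```

In a setting like ZILOT's, such a cost would be minimized by the planner over predicted finite-horizon occupancies; a learned metric would simply replace the Euclidean cost matrix `C`.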
Summary: The problem of greediness in goal-conditioned imitation is resolved by matching goal occupancies. The proposed algorithm can learn from a single demonstration with partial observability. It is shown that minimizing the Wasserstein distance between the goal occupancies of the expert and learner is equivalent to minimizing an optimal transport objective. This is utilized to learn policies. The proposed setup is compared to existing works with thorough experiments by measuring the proportion of goals achieved by each algorithm. Claims And Evidence: - ZILOT is non-myopic. The evidence is theoretical, from a pathological example where some other algorithms might fail, and empirical, from certain environments where there are multiple goals to be achieved in a single trajectory. - ZILOT giving good offline and zero-shot performance, and the approximations being enough to learn a good policy, is demonstrated via experiments. - Ablation studies demonstrate the usefulness of some individual components. Methods And Evaluation Criteria: Yes. Thorough experimentation. Comparison to relevant work. Theoretical Claims: Not in depth. No issues from surface-level reading. Experimental Designs Or Analyses: Did not run any code. The design description in the paper and appendix seems very sound. Supplementary Material: No. Relation To Broader Scientific Literature: Zero-shot learning that is not greedy. Essential References Not Discussed: none that I am aware of Other Strengths And Weaknesses: Strengths: - Very detailed results and diagrams showing how the proposed algorithm works in comparison to others. One result in the main paper, the rest in the appendix. Other Comments Or Suggestions: - The acronym GC-RL is used without first being introduced. - I don't understand why there is a subscript 1 in the definitions of $\mathcal{D}_\beta$ and $\mathcal{D}_E$. Line 78 col 2. Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their feedback and comments. Thank you for pointing out that the acronym "GC-RL" was not introduced in the paper. We will introduce it properly given a chance to update the paper. The subscripts in the definitions of $\mathcal{D}_\beta$ and $\mathcal{D}_E$ were meant to denote that $i$ runs from $1$ to $|\mathcal{D}_\beta|$ and $|\mathcal{D}_E|$ respectively. We will update the definitions to $\mathcal{D}_\beta = (s_0^i, a_0^i, s_1^i, a_1^i, \dots)_{i=1}^{|\mathcal{D}_\beta|}$ and $\mathcal{D}_E = (g_0^i, g_1^i, \dots)_{i=1}^{|\mathcal{D}_E|}$ respectively, to make this clearer. We remain available for further discussion for the rest of the rebuttal period.

---

Rebuttal Comment 1.1: Comment: Thank you for your response in the rebuttal. It clarifies things. I will keep my rating. I have noticed the experimental evaluations, as pointed out in other reviews, but I would like to keep this rating because of the theoretical contribution.
MODA: MOdular Duplex Attention for Multimodal Perception, Cognition, and Emotion Understanding
Accept (spotlight poster)
Summary: The paper identifies the attention deficit disorder problem in SOTA MLLMs, characterized by inconsistent cross-modal attention and layer-by-layer decay of attention activation. Then, the authors introduce a linear-based attention mechanism that simultaneously conducts inner-modal refinement and inter-modal interaction. Experimental results show an improvement in fine-grained detail understanding. The evaluation setting follows Cambrian-1, MMRole, and some emotion benchmarks. Claims And Evidence: - The claimed linear complexity is not supported by any details. The paper does not provide a detailed analysis of MODA's computational cost, especially compared to simpler baselines. - While the modular masked attention and duplex alignment are well-motivated, the paper does not explore alternative designs or compare MODA to other attention mechanisms (e.g., sparse attention or low-rank attention). Methods And Evaluation Criteria: - Yes, the evaluation setting follows Cambrian-1, MMRole, and some emotion benchmarks. Theoretical Claims: - The computation method of the attention distribution is not described in detail. Experimental Designs Or Analyses: - The paper does not provide a detailed analysis of MODA's computational cost (e.g., FLOPs, memory usage, or inference time) compared to simpler baselines. Supplementary Material: - The details of the testing benchmark construction are included in the supplementary material, which illustrates the prompt used for each dataset. Relation To Broader Scientific Literature: - The paper demonstrates MODA's effectiveness across a wide range of tasks, including perception, cognition, and emotion understanding, setting new benchmarks in multimodal understanding. - The paper highlights the potential of MODA to advance multimodal understanding in real-world applications, such as human-computer interaction, robotics, and healthcare. 
- Identifies the attention deficit disorder problem for the multimodal large language model area. Essential References Not Discussed: N/A Other Strengths And Weaknesses: see above Other Comments Or Suggestions: N/A Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal:

## **Response to Reviewer cYUZ**

Thank you for your insightful comments and questions. For your reference, we summarized the main results and included the attached file in our response to Reviewer XinM.

**A1. Analysis of the linear complexity of duplex attention alignment**

**(1) Complexity analysis:** As suggested, we provide the complexity analysis of the proposed duplex attention alignment. Given a multimodal token sequence $X\in\mathbb{R}^{N\times D}$, the computation cost comes from two operations: the Gram matrix and the alignment. For the Gram matrix, the computation cost is $O(D^2N)$. For the alignment, the computation cost is $O(DN)$, which in total is $O(D^2N+DN)=O(D^2N)$. Therefore, the proposed duplex attention alignment has linear complexity in the token length, thus yielding a length-extrapolation ability towards long multimodal contexts.

**(2) Complexity comparison with other attention mechanisms:** In contrast, the baseline attention is inherently $O(DN^2)$ due to the similarity computation between each pair of tokens, a higher computational complexity since in most cases $N \gg D$, especially for multimodal tokens, whose number can be up to 128,000 (e.g., the max model length of Qwen2.5-VL).

**A2. Ablation study on the attention mechanism**

(1) As suggested, we further conduct a comparison between our MODA and other attention mechanisms, including the baseline attention and SOTA attention mechanisms. As shown below, MODA performs best among leading attention mechanisms due to its balance over multimodal tokens, while sparse and baseline attention may wrongly focus on the textual part. 
| Model | G | K | O | V |
| -------------------------- | :--: | :--: | :--: | :--: |
| MODA | 69.3 | 48.3 | 67.0 | 54.3 |
| DeepSpeed Sparse Attention | 67.8 | 47.6 | 63.3 | 48.1 |
| Multi-head Attention | 63.6 | 44.0 | 60.8 | 38.0 |

(2) Besides, as shown in Tab.1 of the manuscript, we provided an ablation study on alternative designs from three aspects: how to align attention tokens (Tab.1b), how to fuse attention tokens (Tab.1c), and how to mask attention (Tab.1d).

**A3. Computation cost:** As suggested, we compare the computation cost of MODA and other attention mechanisms in terms of FLOPs, memory, and latency below. We can observe that MODA achieves better performance on fine-grained understanding tasks (e.g., the four vision-centric benchmarks) at a slightly increased computation cost.

| Model | FLOPs | MACs | Latency | Vision-Centric |
| ---------- | :----: | :---: | :-----: | :------------: |
| MODA | 134.9T | 76.7T | 2.04s | 66.0 |
| LLaVA-NeXT | 123.3T | 61.6T | 1.87s | 56.6 |

---

Rebuttal Comment 1.1: Comment: Thanks to the response from the authors. After reading other reviews and responses, my concerns have been resolved. I am happy to see the significant margin over the latest baseline, benefiting from the novel insights on attention mechanisms. To the best of my current knowledge, the key insight might present positive impacts in the VLM area. Therefore, I lean towards a stronger acceptance and will upgrade my rating accordingly.

---

Reply to Comment 1.1.1: Comment: Dear reviewer cYUZ, Thank you for kindly recognizing our contributions to VLM and providing invaluable comments on this work. We authors greatly appreciate the efforts you have made to improve our manuscript. If accepted, we will include `cYUZ` in our acknowledgments. Best Regards, Authors of paper 10155
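The $O(DN^2)$-versus-$O(D^2N)$ contrast from A1 can be sketched in a few lines. This is an illustrative stand-in, not MODA's actual aligner: quadratic attention forms an $N \times N$ token-similarity matrix, while a Gram-matrix variant aggregates in the $D \times D$ feature space first, so its cost grows linearly in the number of tokens.

```python
import numpy as np

def quadratic_attention(X):
    # Baseline attention: N x N token-pair similarities, O(D * N^2).
    S = X @ X.T / np.sqrt(X.shape[1])
    S = np.exp(S - S.max(axis=1, keepdims=True))  # stable softmax over keys
    return (S / S.sum(axis=1, keepdims=True)) @ X

def gram_alignment(X):
    # Gram-matrix route: D x D statistics cost O(D^2 * N), linear in tokens N.
    G = X.T @ X / X.shape[0]  # Gram matrix over the feature dimension
    return X @ G              # align every token against it, O(N * D^2)

N, D = 512, 64
X = np.random.default_rng(1).standard_normal((N, D))
out_quad, out_gram = quadratic_attention(X), gram_alignment(X)
```

Both routes map an $N \times D$ sequence to an $N \times D$ output, but only the Gram-matrix route avoids materializing the $N \times N$ matrix that dominates cost when $N \gg D$.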
Summary: This paper proposes a MOdular Duplex Attention (MODA) for multimodal perception, cognition and emotion understanding. The proposed method is evaluated and the paper is well organized. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: The authors propose a novel modular and duplex attention mechanism which could be used in multimodal LLMs. Essential References Not Discussed: no Other Strengths And Weaknesses: 1. Introduction (1) Although the authors propose the phenomenon of deficit disorder attention, the discussion of its underlying causes and theoretical foundations is weak. The article focuses more on the description of the phenomenon and lacks strict mathematical definitions and theoretical explanations, which makes the concept appear to be more of an analogy than a well-proven theoretical basis. (2) only mentions the inadequacy of MLLM for advanced tasks, but does not provide a detailed compendium and comparison of current research methods and advances to address this inadequacy. 2. Related work When presenting related work, it is mostly descriptive, without clearly stating the direct correlation and difference between these studies and the methods proposed in this paper, and fails to clearly demonstrate the unique advantages of MODA in dealing with the problems of inconsistent attention and layer-by-layer attenuation. 3. Methods (1) For the specific implementation of V-Aligner and T-Aligner, only the mapping based on the basis vectors of modal space is mentioned, but the specific form of the mapping function and why the basis vectors are used are not explained in detail. There is a lack of detailed description of the fusion method of the mapped tokens, such as whether weighted summing and splicing are used, as well as the parameter estimation and optimization strategy in the fusion process. 
(2) Insufficient explanation for some formulas. For example, “||” in Equation 5. (3) Although the attention is divided into two parts, self-modal and cross-modal, how to coordinate these two parts in the overall architecture and how to deal with their interactions and information fusion are not described clearly enough. There is a lack of detailed discussion on how the modules in the overall process work together. (4) There is a lack of specific guidance and examples on how to integrate MODA into existing multimodal large-scale language models, which cannot provide effective references for actual model development and application. 4. Experimental Part (1) The description of the experimental setup is not exhaustive enough, and key settings such as data preprocessing, hyperparameter selection, and training details (e.g., hardware environment, running time) are not fully developed. (2) Despite the ablation experiments, there is a lack of detailed discussion on the individual contributions and interactions of the two modules (Duplex Attention Alignment and Modular Attention Mask) to the overall performance. (3) In the comparison with existing SOTA models, only a large amount of performance data is given, while the lack of analysis of the advantages and disadvantages of each model on different tasks makes the discussion of the reasons for the performance improvement insufficient. (4) The experimental results present more data, but fail to discuss in depth the performance differences between different tasks and metrics, potential bottlenecks, and directions for future improvement, limiting a comprehensive understanding of the advantages and limitations of the method. Other Comments Or Suggestions: no Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

## **Response to Reviewer Ack2**

Thank you for your insightful comments and questions. For your reference, we summarized the main results and included the attached file in our response to Reviewer XinM.

**A1&A2. Discussion on DDA**

(1) **Actually, DDA can be interpreted from the perspective of a multimodal token graph.** The core of DDA, the attention mechanism, builds the linkage across multimodal tokens by computing the similarity of each pair of tokens [A,B]. The attention output is yielded by a weighted sum among tokens. Therefore, the computation can be seen as a densely connected directed graph through layer-by-layer attention, and the DDA problem can be reformulated as a mislinked-graph problem, where the multimodal token flow drifts to the textual part and misses the visual tokens from the input layer. DDA therefore decreases the interaction between the visual and textual modalities, throwing away 17% of the important visual cues, which are lost by the last embedding layer of the MLLM. For the detailed diagram of the multimodal token flow, please refer to Fig.1 in the attached file.

(2) **Formula definition of DDA.** Given the visual tokens $x_v^l$ and text tokens $x_t^l$ in block $l$, the multimodal attention builds links of two kinds (*i.e.*, self-modal $x_t^l \rightarrow x_t^{l+1}, x_v^l \rightarrow x_v^{l+1}$ and cross-modal $x_t^l \rightarrow x_v^{l+1}, x_v^l \rightarrow x_t^{l+1}$), where the links are commonly implemented via pair-wise token similarity and a weighted sum. However, the modality gap between tokens decreases the magnitude of the links: as we observed, the link values of $x_v^l \rightarrow x_v^{l+1}$ and $x_t^l \rightarrow x_v^{l+1}$ decay exponentially with depth ($\alpha_{v \rightarrow v}^l \propto \gamma^l, \gamma \neq 1$). This misalignment propagates layer-wise, causing the cumulative error in cross-modal interaction to grow as $\mathbb{E}_L = \sum_l \gamma^l \epsilon_l$, where $\epsilon_l$ denotes the layer-specific alignment error. This phenomenon aligns with theoretical insights in [B], where pure attention mechanisms suffer from **rank collapse**, a critical factor exacerbating the skewed attention distribution.

(3) **MODA vs. existing methods:** To address this issue, we propose MODA, which introduces modality-aware gating and layer-wise error compensation to mitigate DDA. Unlike existing methods, MODA dynamically adjusts cross-modal interactions through modality-specific gates and compensates for alignment errors using adaptive propagation mechanisms. Further, MODA extracts the modality-specific features and introduces the Gram matrix as basis vectors for adaptive mapping. Thanks for the suggestions; we will re-organize the literature review in the introduction and related work.

[A] On the role of attention masks and layernorm in Transformers, NeurIPS, 2024
[B] Attention is not all you need: Pure attention loses rank doubly exponentially with depth, ICML, 2021

**A3-1. Mapping function in V&T-Aligner:** The mapping functions in V-Aligner and T-Aligner are designed to address modality alignment issues; we explore four alternative designs, as shown in Tab.1c of our manuscript. We explore direct replacement ($X_a$ as the original feature, $X_p$ as the mapped feature), concatenation, and element-wise addition for token fusion. The use of basis vectors enables a structured modality-space representation, improving alignment.

**A3-2. Formulas:** $||$ in Equ.5 represents the normalization operation.

**A3-3. Self-attention & cross-attention:** We modularize the influence of self- and cross-attention separately from two aspects: pulling the tokens together by alignment and correcting the focus by masking.

**A3-4. MODA for broader MLLMs:** MODA can be integrated into existing MLLMs by simply replacing each attention layer in the Transformer block. In our paper, we implement the most commonly used pre-norm type of MLLM and verify its effectiveness on a LLaMA-based MLLM. As a result, MODA can be integrated into the LLaVA series and Vicuna-based, Yi-based, and WizardLM2-based MLLMs, because they share the same architecture as the implemented one. For clarity, we have prepared torch-style pseudocode in Alg.2 of the attached file.

**A4-1. Detailed experimental setup:** As suggested, we provide all the experimental settings, including the model, data, hyperparameters, and details. Due to the character limit, please refer to Q1 of reviewer XinM.

**A4-2&A4-3&A4-4. Analysis and discussion of MODA:** We thank the reviewer for the feedback. To address this, we added a detailed analysis of the advantages and disadvantages of each model and module across the perception, cognition, and emotion tasks. MODA, as most MLLMs do, suffers from the risk of generating counterfactual responses, leading to false narratives or misrepresentations.
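The cumulative-error expression $\mathbb{E}_L = \sum_l \gamma^l \epsilon_l$ from the DDA discussion above can be illustrated numerically. In the sketch below, $\gamma$ and $\epsilon_l$ are made-up constants chosen purely for illustration; they are not values measured in the paper.

```python
import numpy as np

# Illustrative constants only: gamma < 1 models the observed exponential decay
# of visual-attention links with depth; eps_l is the per-layer alignment error.
gamma, L = 0.8, 32
eps = np.full(L, 0.05)

link_strength = gamma ** np.arange(L)       # alpha_{v->v}^l proportional to gamma^l
E_L = float(np.sum(link_strength * eps))    # cumulative error sum_l gamma^l * eps_l

# With gamma < 1 the geometric series bounds the cumulative error.
bound = eps[0] / (1.0 - gamma)
```

The monotone shrinkage of `link_strength` mirrors the layer-wise decay of the visual links, while `E_L` accumulates the per-layer misalignment across depth.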
Summary: This paper proposes a novel attention mechanism called Modular Duplex Attention (MODA) to tackle the attention inconsistency problem in Multimodal Large Language Models (MLLMs). MODA showcases outstanding performance in multimodal perception, cognition, and emotion understanding tasks. Specifically, the 34B version of MODA outperforms GPT-4V comprehensively. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make eminent sense for the problem and application at hand. Theoretical Claims: Yes, I've checked the theoretical claims' proofs. For MODA, the duplex attention alignment's math based on Gram matrix vectors is logical. The modular attention mask's equations and strategies are well-defined. The use of the normed Gram matrix in both components is sound. Overall, the proofs are correct and support the MODA mechanism. Experimental Designs Or Analyses: Yes, I have checked the soundness and validity of the experimental designs and analyses in the paper. In the ablation study, different components of MODA, such as the duplex attention alignment and modular attention mask, were systematically removed or modified. This design is valid as it helps to isolate the impact of each component on the overall performance, answering important research questions about their individual contributions. Supplementary Material: Yes, I reviewed the supplementary material. The parts I read include some visualization examples and prompts for each subtask. However, I expected to see more detailed training information about the model. Relation To Broader Scientific Literature: Prior studies have highlighted the importance of attention mechanisms in multimodal learning, but faced issues with inconsistent cross-modal attention. MODA builds on these by specifically addressing this problem. 
Essential References Not Discussed: none Other Strengths And Weaknesses: Weakness: 1. As mentioned under **Supplementary Material**, I want to know whether the model was only trained with an SFT stage or followed a two-stage training process like common models. Also, I'd like to know more details, such as the learnable parameters during the training process. Other Comments Or Suggestions: none Questions For Authors: I was wondering why MODA has such a significant improvement on emotion tasks. I'm willing to raise my score if deeper explanations and training details can be provided. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal:

## **General Response**

We sincerely appreciate all the Reviewers and the Area Chair for their time and effort in reviewing our paper. Following the valuable suggestions and insights provided in the reviews, we summarize **the additional results and evidence** included in the rebuttal based on the reviewers' suggestions:

- We provided the theoretical explanation of the deficit disorder attention problem and illustrated the diagram **[A1&A2 for Ack2 and Fig.1 of the attached file]**.
- We conducted experiments on 9 benchmark datasets to discuss the negative impact of the critical issue of MLLMs, i.e., the deficit disorder attention problem **[A1 for Mf8z and Fig.2 of the attached file]**.
- We conducted more qualitative comparisons with SOTA MLLMs **[Fig.3 of the attached file]**.
- We provided the pseudo-code for the attention score computation and MODA for clarity **[A6 for Mf8z, A3-4 for Ack2, and Alg.1 & Alg.2 of the attached file]**.
- We provided the computational complexity analysis of the proposed module and conducted comparisons among existing MLLMs and alternative attention mechanisms **[A1 & A2 & A3 for cYUZ]**.
- We provided the detailed experimental setup and illustrated more information, including training time and hardware **[A1 for XinM]**.
- We corrected the mentioned typos and carefully checked our manuscript again. Moreover, we also modified the figures and added the discussions as suggested **[Fig.4 of the attached file]**.

Additionally, we follow the guidelines of ICML 2025 and attach the figures as well as the pseudo-code at the [`anonymous link`](https://anonymous.4open.science/r/icml25_rebuttal-F06A/README.md): https://anonymous.4open.science/r/icml25_rebuttal-F06A/README.md

---

## **Response to Reviewer XinM**

Thank you for your insightful comments and questions.

**A1. Training details, including backbone, data, hyperparameters, and hardware & time**

(1) **Same setting as Cambrian-1:** To ensure a fair comparison, we adopt the same experimental setup, including the backbone and data, as Cambrian-1. Specifically, our training begins with a one-stage SFT following pretraining, which utilizes the pre-trained ViT and adaptor. During training, we unfreeze the LLM backbone to fine-tune the attention part of the LLM and fully exploit the power of MODA.

(2) **Details.** We attach a configuration table below, which outlines the backbone, data, hyperparameters, and details of the training process. We will include details about training in the revised manuscript.

| | Backbone | | Data | Param. | | | Details | |
| ------------- | --------------- | ------------------------ | ----------- | ------ | ------ | ------ | ------------ | -------- |
| **Model** | **LLM** | **Vision** | - | **lr** | **wd** | **bs** | **Hardware** | **Time** |
| MODA-8B | LLaMA3-Ins-8B | OpenAI CLIP ViT-L/14@336 | Cambrian-7M | 2e-5 | 0 | 1024 | 2x 8 A800 | 6 days |
| Cambrian-1-8B | LLaMA3-Ins-8B | 4 Vision Encoders* | Cambrian-7M | 2e-5 | 0 | 512 | 128 TPUv4 | - |
| MODA-34B | Hermes-2-Yi-34B | OpenAI CLIP ViT-L/14@336 | Cambrian-7M | 2e-5 | 0 | 2048 | 4x 8 A800 | 14 days |
| Cambrian-34B | Hermes-2-Yi-34B | 4 Vision Encoders* | Cambrian-7M | 2e-5 | 0 | 1024 | 512 TPUv4 | - |

- Backbone: Cambrian-1 uses 4 vision encoders*, including OpenAI CLIP ViT-L/14@336, SigLIP ViT-SO400M/14@384, DINOv2 ViT-L/14@518, and Open-CLIP ConvNeXt-XXL@1024. In contrast, we only use OpenAI CLIP ViT-L/14@336, as other popular MLLMs do.
- Data: We follow the same setting as Cambrian and train our MODA on Cambrian-7M.
- Hyperparameters: We follow the common setting and train the model using different learning rates for the LLM and the vision encoder.
- Hardware & time: For the 8B model, we use 2x A800 nodes to train for 6 days. For the 34B model, we use 4x A800 nodes to train for 14 days. 
---

Rebuttal Comment 1.1: Comment: The responses have addressed my concerns, and I'm willing to raise my score.

---

Reply to Comment 1.1.1: Comment: Dear reviewer XinM, Thank you for kindly providing invaluable suggestions on this work. We authors greatly appreciate the efforts you have made to improve our manuscript. We will add the training details to the revised manuscript. If accepted, we will include `XinM` in our acknowledgments. Best Regards, Authors of paper 10155
Summary: The paper identifies a critical limitation in MLLMs, where inconsistent attention across layers leads to errors in fine-grained emotion understanding (the "deficit disorder attention problem"). To address this, the authors propose MOdular Duplex Attention (MODA), which separates attention into self-modal and cross-modal components, each governed by a dedicated modulated attention mask. Extensive evaluations on various benchmark datasets demonstrate MODA's effectiveness. ## update after rebuttal Thank you to the authors for their responses. Most of my concerns have been addressed, and I am happy to raise my rating. Claims And Evidence: - The authors introduce the deficit disorder attention problem, arguing that it leads to the neglect of fine-grained details. However, this claim seems to be an overgeneralization. - Also, the visualizations of attention scores across different modalities (Figures 1(c), 2(c), and 5) may not provide convincing evidence of an inherent problem. Different modalities inherently contain varying levels of similarity between tokens. For instance, in visual inputs, adjacent tokens often exhibit higher similarity due to the spatial structure of images, naturally resulting in smaller self-attention scores. This phenomenon may not necessarily indicate a limitation but rather a characteristic of multimodal attention behavior. Methods And Evaluation Criteria: The authors claim that their proposed approach enhances fine-grained content understanding and support this with qualitative results (e.g., Figure 6). However, they do not provide a direct comparison with other MLLMs. Theoretical Claims: The primary concern from this reviewer is with Section 3, particularly regarding the clarity and alignment of the model description. - The overall approach and flow of the proposed model architecture are unclear. 
While Figure 3 is intended to illustrate the framework, it does not clearly align with the explanation in Section 3, making it difficult to understand how the components interact. - In Figure 2(a), what do the x-axis and y-axis represent? How does the visualization support their claim about attention inconsistency across layers (Lines 133–137)? Experimental Designs Or Analyses: Some details are missing from the experimental results section. - How do the authors compute the attention scores? Are these derived from a single sample or aggregated from multiple samples? Additionally, which dataset(s) were used for this analysis? - What does [M] represent in Table 1? - In Line 184, the term "the original ones" is vague. - While the paper states that the mask is split into $\mathbf{M}^m$ and $\mathbf{M}^{\bar{m}}$, it is unclear what the mask itself represents. Supplementary Material: I reviewed the supplementary material and checked the additional visualizations provided. Relation To Broader Scientific Literature: It may be beneficial for multimodal perception, cognition, and emotion understanding. Essential References Not Discussed: No major references appear to be missing. Other Strengths And Weaknesses: **Weaknesses:** - The paper requires proofreading as there are several missing details (as noted in my comments above), and some sentences are incomplete or unclear. ```(Line 204) To alleviate the collapsed attention matrix and prevent it from under-smoothed tokens. We propose a modular attention mask that chooses to store unnecessary attention values on these pseudo-attention scores.``` Other Comments Or Suggestions: It might be useful to provide more explanation of the Gram matrix, as MODA relies on it. Questions For Authors: Please check my comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

## **Response to Reviewer Mf8z**

Thank you for your insightful comments and questions. For your reference, we summarized the main results and included the attached file in our response to Reviewer XinM.

**A1. Claims on DDA**

(1) Actually, DDA is supported by evidence from two aspects: rationale and observation.

- **Rationale**: Based on graph theory, the interactions of multimodal tokens are disrupted from the input level, where the most important visual tokens are thrown away. Further, the layer-by-layer propagation introduces a coefficient of accumulated ignorance. The illustration is included in Fig.1 of the attached file.
- **Observation**: First, attention scores exhibit a significant bias toward the textual modality, as shown in Fig.2a, where visual features are underrepresented. Second, Fig.2b&2c highlight a clear layer-wise attention decay, with attention inconsistencies becoming more pronounced in deeper layers. Finally, qualitative analyses in Fig.4&6 reveal that baseline models fail to capture subtle multimodal cues, resulting in incorrect or overly generic responses.

(2) We thank the reviewer for the valuable comments and will **clarify this** in the revised manuscript accordingly.

**A2. Discussion on DDA**

(1) **Similar visual tokens lead to high attention scores.** The authors would like to clarify that the reviewer's statement, 'adjacent tokens often exhibit higher similarity due to the spatial structure of images, naturally resulting in smaller self-attention scores,' is *wrong*. In fact, similar visual tokens have high attention scores [A], according to the definition of attention, $A = \mathrm{Softmax}(QK^\top/\sqrt{d}) \propto QK^\top$. $QK^\top$ represents the similarity between tokens and is directly proportional to the attention score. Therefore, DDA suffers from imbalanced attention scores: the visual part is assigned a low attention score, leading to the neglect of important visual details. 
(2) **DDA brings negative impact in neglecting fine-grained details.** We use a grounded MLLM (SA2VA) prompted with the ground-truth answer to segment the key regions in visual content. Experimental results on 9 vision-centric perception, cognition, and emotion benchmarks revealed that the SOTA MLLM throws away 17% of the crucial visual tokens. For visualization results, please refer to Fig.2 of the attached file. (3) **DDA limits real-world applications.** The imbalanced multimodal attention introduces bias in the flow of tokens, which can lead to inadequate token fusion and yield hallucinated results [B]. As shown in Fig.1a & 1b of our original manuscript, the model failed to recognize the waiter in the image background, posing critical challenges for practical scenarios, including OCR and conversational agent tasks. [A] Attention is all you need, NIPS, 2017. [B] Mitigating modality prior-induced hallucinations in multimodal large language models via deciphering attention causality, ICLR, 2025 **A3. Comparison on fine-grained understanding:** We would like to clarify that our manuscript already *INCLUDES BOTH quantitative and qualitative direct comparisons with other leading MLLMs* in fine-grained content understanding, as presented in Tab.2, 3, 4, and Fig.4, 7. (1) **Tab.2**: We compare MODA with 6 closed-source and 6 open-source MLLMs (GPT4V, Gemini 1.5 Pro, Grok-1.5, MM-1-30B, Mini-Gemini-HD, LLaVA-NeXT, Cambrian-1) on vision-centric and OCR tasks, which rely on fine-grained cue understanding. (2) **Tab.3&4**: We compare MODA with 4 closed-source and 7 open-source MLLMs (GPT4o, Gemini 1.5 Pro, Claude 3 Opus, Qwen-VL-Max, Mini-Gemini-HD, LLaVA-NeXT, Cambrian-1, MMRole) on cognition and emotion tasks requiring contextual understanding. (3) **Fig.4&7**: We provide qualitative comparisons with the SOTA Cambrian-1.
(4) **Fig.3 of the attached file:** Additionally, we conduct further quantitative comparisons with Cambrian-1 on human-centric understanding and planning tasks. **A4. Framework and pipeline in Fig.3 and Sec. 3:** Thanks for the reminder; we include the modified pipeline and caption as Fig.4 of the attached file. Besides, we will add an overview section in Sec 3.3. **A5. Illustration of Fig.2:** The x-axis and y-axis represent the attention values and the corresponding probability. The x-axis ranges from 1e-4 to 1e-1 on a logarithmic scale. **A6. Attention score:** Following [C], we compute token-level attention scores over the entire test set and average them. The pseudocode is included as Alg.1 in the attached file. [C] Efficient streaming language models with attention sinks, ICLR, 2024 **A7&A8. Writing:** (1) [M] replaces modular masked attention with learnable tokens to control the attention distribution. (2) The term refers to the residual features before the Gram matrix mapping. (3) For the token sequence $X^m \in \mathbb{R}^{N_m \times D}$ of modality $m$, the mask controls its visibility within the entire multimodal sequence $X \in \mathbb{R}^{(N_m+N_{\bar{m}}) \times D}$.
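As a side note on the similarity-attention argument in A2: the claim that a key similar to the query receives the larger softmax weight can be checked with a generic toy sketch of scaled dot-product attention (our illustration with made-up vectors, not the MODA implementation):

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

q = np.array([[1.0, 0.0, 1.0, 0.0]])           # query token
K = np.array([[1.0, 0.0, 1.0, 0.0],            # key identical to q (high Q.K)
              [0.0, 1.0, 0.0, 1.0]])           # key orthogonal to q (Q.K = 0)
A = attention_weights(q, K)
assert A[0, 0] > A[0, 1]  # the similar key gets the larger attention weight
```

Here the raw scores are [1, 0] after scaling by √d, so the similar key receives weight e/(e+1) ≈ 0.73 versus 0.27 for the dissimilar one, consistent with the rebuttal's point that attention is monotone in token similarity.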
FedSSI: Rehearsal-Free Continual Federated Learning with Synergistic Synaptic Intelligence
Accept (spotlight poster)
Summary: This paper introduces FedSSI, a novel regularization algorithm for continual federated learning that addresses the challenges of knowledge forgetting and data heterogeneity without replay. Both empirically and theoretically, FedSSI reduces computational overhead and outperforms state-of-the-art methods. ## update after rebuttal My concerns are mainly addressed during rebuttal. Thus, I will keep my positive rating. Claims And Evidence: The claims in this paper are well-supported by clear and compelling evidence. Methods And Evaluation Criteria: FedSSI is significant in addressing the current challenges in CFL by saving resources and alleviating heterogeneity. Theoretical Claims: The theoretical claims in FedSSI are supported by clear proofs. Experimental Designs Or Analyses: The paper presents a comprehensive evaluation with sufficient baselines across various datasets and scenarios. Supplementary Material: The appendix provides the experimental settings and reports the resource cost and detailed results for each incremental task. This offers solid data support for the practicality of the method. Relation To Broader Scientific Literature: NA Essential References Not Discussed: References are high-quality and sufficient. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow, with a clear motivation and a thorough discussion of the limitations of previous work. 2. This paper is highly commendable for its pioneering exploration of CFL from the perspective of resource constraints. The research provides valuable insights and inspiration for future work. 3. The proposed method FedSSI is well-motivated and technically sound. 4. The experiment design is reasonable and comprehensive, and the analysis of results is thorough and exhaustive. Weaknesses: 1. Although the proposed method avoids data rehearsal, introducing the PSM could add storage overhead that poses challenges for edge devices.
The authors should discuss the storage costs or provide strategies to mitigate this issue. Other Comments Or Suggestions: NA Questions For Authors: 1. Generative AI is an emerging topic. Only classification tasks have been studied and evaluated in the current paper. Can FedSSI generalize well to generation tasks, e.g., using diffusion models? 2. The proposed method depends on SI. If SI has inherent limitations or does not perform well under certain conditions, how can the performance of FedSSI be ensured? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable comments. In the following, we give point-by-point responses to each comment. > **Q1. Concerns about the storage overhead caused by PSM.** **R1:** Thank you for this constructive suggestion. The PSM will be trained along with the global model on the current local task. Since this is purely local training assisted by an already converged global model, the training of the PSM is very fast (accounting for only **1/40** of the training cost per task and requiring no communication). We calculate and save the parameter contributions during the local convergence process of the PSM, which can then be locally **discarded** after its contribution has been computed. Then, each client trains on the new task with the local model and parameter contribution scores. In practice, the model for an FL task is not large, as edge clients are mostly resource-limited, and the memory demand of PSM is similar to that of existing methods like FedProx, as it operates on an extra model of the same size as the federated model. **Possible Strategy:** If an FL task involves large models and limited memory on the edge, a prerequisite can reasonably be assumed: the system's transmission capacity (possibly after optimization) is sufficient. In such a case, we can keep the PSM model on the server; it is only downloaded and updated locally to compute parameter contributions and then uploaded to the server again. > **Q2. Combination of Generative AI and FedSSI.** **R2:** Thank you for raising this concern. Generative AI has emerged as a prominent research area in recent years, as it leverages the generalization capabilities of LLMs to enhance task performance and drive significant productivity improvements. However, even in centralized learning paradigms, continual learning for Generative AI remains hindered by multiple challenges.
Unlike conventional backbone models that primarily face catastrophic forgetting, Generative AI systems more frequently encounter difficulties acquiring new knowledge. Additionally, the current deployment of Generative AI models through quantization compression on small edge devices complicates continual learning implementation under resource-constrained conditions. While continual federated learning has no published studies specifically addressing Generative AI, and FedSSI is designed for traditional CFL tasks, the growing deployment of edge devices equipped with Generative AI capabilities and their potential to collect real-world data from novel scenarios underscores the urgent need for such research. > **Q3. Dependence on SI Algorithm.** **R3:** Thank you for raising this concern. Although the FedSSI algorithm uses PSM to improve the SI algorithm, the selection of the SI algorithm itself is not arbitrary. In our empirical experiments, we analyzed a vast number of existing CFL methods and techniques, as well as traditional CL techniques. The SI algorithm was chosen as the most appropriate among them, and it has been widely recognized for its efficiency and feasibility in most scenarios. We believe that this assumption is **acceptable**, similar to how many FL studies are based on CNN and ResNet series networks without considering the feasibility of techniques on a single linear network.
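For readers unfamiliar with the SI mechanism referenced in R3: SI scores each parameter by the loss reduction attributed to its updates along the training trajectory. Below is a minimal generic sketch of the classic per-parameter contribution score from Synaptic Intelligence (Zenke et al., 2017); this is our illustrative simplification with toy numbers, not FedSSI's PSM-based variant:

```python
import numpy as np

def si_importance(grads, param_deltas, total_change, xi=0.1):
    """Classic Synaptic Intelligence importance score (Zenke et al., 2017).

    grads, param_deltas: per-step gradients and parameter updates
    total_change: theta_final - theta_initial over the whole task
    """
    # Path integral of the loss decrease credited to each parameter:
    # omega_k = sum_t ( -g_k(t) * delta_theta_k(t) )
    omega = sum(-g * d for g, d in zip(grads, param_deltas))
    # Normalize by the squared total displacement (xi avoids division by zero)
    return omega / (total_change ** 2 + xi)

# Toy trajectory: parameter 0 moves against its gradient (reducing the loss),
# parameter 1 never moves, so only parameter 0 earns a contribution score.
grads = [np.array([-1.0, 0.0]), np.array([-0.5, 0.0])]
deltas = [np.array([0.1, 0.0]), np.array([0.05, 0.0])]
total = np.array([0.15, 0.0])
scores = si_importance(grads, deltas, total)
assert scores[0] > scores[1]
```

In FedSSI's description above, these contribution scores are what each client computes during the PSM's local convergence and then keeps after discarding the PSM itself.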
Summary: This paper focuses on continual federated learning and systematically analyzes the resource consumption of existing works. The authors propose a resource-friendly method based on the SI algorithm, FedSSI, which balances local and global knowledge. Extensive experiments and analytical understanding have been done to verify the effectiveness. ## update after rebuttal After reviewing all the author rebuttal and discussions, given the quality of this paper and their thorough rebuttal, I would like to vote for the acceptance of this paper. The authors have solved my concerns regarding experimental settings and paper details. Claims And Evidence: All claims are well supported by evidence. Methods And Evaluation Criteria: Studying lightweight federated continual learning is interesting and is crucial for practical deployment in real-world applications. The proposed FedSSI is simple yet effective with abundant experiments and analysis. Theoretical Claims: The analytical understanding is easy to read and solid. Experimental Designs Or Analyses: The paper presents a comprehensive evaluation with sufficient baselines across various datasets and scenarios. Supplementary Material: I have read the supplementary materials about experiments and settings. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: References are sufficient. Other Strengths And Weaknesses: Strengths: 1. The paper is well-organized and easy to read. 2. The experiments are adequate and sufficient. 3. The research topic is interesting and may contribute to practical applications. 4. The proposed method is easy to follow and seems promising. Weaknesses: 1. Although this article demonstrates a high level of quality, there are still minor typos in its presentation. Eq. (1) is not rigorous: the left-hand side of the equal sign should be w, and the authors should unify the notation of the model w. In Table 2, the spelling of “CIFAI100” is wrong. 2.
The authors should provide more details about the data partition and task setting. It is better to expand this, especially for readers who may not be familiar with continual federated learning. Other Comments Or Suggestions: N/A Questions For Authors: While FedSSI calculates the contribution of each parameter during the training process, will its performance and cost be affected by the number of model parameters? Nowadays, LLMs are a very hot topic. Can this method still work with LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for providing us with positive comments. In the following, we give detailed responses to each review. > **Q1. Concerns about the selection of the hyperparameter $\lambda$** **R1:** Thanks a lot for raising this concern. In Table 3, $\alpha$ refers to the degree of data heterogeneity, while $\lambda$ is a control coefficient in the training process of PSM. In Proposition 1, we show that adjusting $\lambda$ can control whether the proportion of knowledge in PSM leans towards the local or global distribution (i.e., it is related to $\alpha$). When $\alpha$ has a higher value, indicating a trend towards homogeneity in distribution, clients need to focus more on local knowledge. This means that by setting a larger $\lambda$ value, PSM can rely more on local knowledge. Although we cannot directly relate $\alpha$ and $\lambda$ with a simple formula due to the complexity of the problem, even in specialized research on personalized federated learning (PFL), methods such as **Gaussian mixture modeling** are relied upon. This specific study goes beyond the resource-efficient CFL that we focus on in this manuscript, so we did not develop new mechanisms for this issue. In this paper, we can empirically and theoretically judge that there exists a positive correlation between $\alpha$ and $\lambda$, which is supported by Proposition 1 and extensive experiments conducted in Table 3. We will consider this issue in our future research work. > **Q2. Concerns about framework of FedSSI and more details about CFL** **R2:** Thank you for this helpful comment. We agree that a framework diagram could improve accessibility for readers less familiar with CFL. However, the regularization-based methods in FedSSI are inherently theoretical, making it challenging to explicitly visualize their nuanced mechanisms (e.g., PSM step or SI module) within a high-level framework. 
To address this, we have included **Algorithm 1**, which details the iterative steps of FedSSI. We appreciate your suggestion and will consider providing the framework in our final version. Moreover, we will further provide the relevant experimental details in the supplementary: we assign different numbers of tasks to various datasets. Using CIFAR-10 as an example, we set five tasks for class-incremental tasks, each covering two classes without data overlap. For domain-incremental tasks, each domain represents one task. > **Q3. Concerns about FedSSI's adaptability to heterogeneous models** **R3:** Thank you for raising this concern. FedSSI still can work under model heterogeneity, but this heterogeneity will introduce novel challenges to CFL systems that have never been addressed. FedSSI’s core mechanism utilizes a PSM Module to address data heterogeneity and employs SI for continual learning. Notably, the SI algorithm is architecture-agnostic, as it operates by quantifying parameter contributions during gradient updates. Similarly, the PSM module—an initial copy of the local model—is independent of other clients’ model architectures. However, model heterogeneity exacerbates system heterogeneity due to divergent feature representation spaces across architectures. As CFL is an emerging research area, existing studies have yet to address model heterogeneity systematically. The reviewer’s suggestion highlights a promising research direction we will prioritize in future work. --- Rebuttal Comment 1.1: Comment: The authors have well addressed my problems. Therefore, I vote for the acceptance of this paper. --- Reply to Comment 1.1.1: Comment: Sincere thanks for your response! We will further improve our manuscript later. Best of luck!
Summary: The paper introduces a continual federated learning method, FedSSI, aimed at mitigating catastrophic forgetting without rehearsal. FedSSI employs a personalized surrogate model to strike a balance between global and local knowledge during training. Experimental results show that FedSSI can outperform other baselines. Claims And Evidence: The claims in the paper are clear and supported by convincing evidence. Methods And Evaluation Criteria: The proposed method is technically sound and the evaluation is sufficient, with various settings and advanced baselines. Theoretical Claims: The theoretical analysis is explicit and easy to understand. Experimental Designs Or Analyses: The experimental designs and statistical analyses are rigorous and valid. Supplementary Material: The supplementary material is a comprehensive description of the settings and additional experiment results. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: All essential references are included. Other Strengths And Weaknesses: Pros: - It is meaningful to explore the training cost in CFL, where edge devices are often equipped with portable but weak hardware. - The paper is well-organized and easy to read. - This paper innovatively introduces the PSM and successfully addresses the issue of data heterogeneity in CFL at a low cost. - The authors have conducted extensive experimental validations across multiple datasets and CFL scenarios, demonstrating the effectiveness of FedSSI. Cons: - The hyperparameter $\lambda$ is based on data heterogeneity. An adaptive adjustment strategy could be discussed to enhance the robustness of FedSSI. - The authors could provide the framework of FedSSI for readers who are not so familiar with CFL. It would help enhance the understanding of the technique and the CFL settings. Other Comments Or Suggestions: N/A Questions For Authors: For edge devices with limited resources, each device may employ models with different architectures.
In such a situation, can FedSSI still maintain its advantage? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for this professional review. The critical comments have been addressed carefully, and responses have been given one by one. > **Q1. Minor typos in our manuscript.** **R1:** Thank you very much for this helpful comment. We are sorry for the wrong spelling in Table 2 and will correct it. We will carefully polish our manuscript to further improve the presentation. In Eq.(1), we formulate the overall optimization objective of CFL. $w^t$ denotes the converged global model, and the superscript $t$ represents the number of tasks here. > **Q2. Insufficient description of experimental settings.** **R2:** Thank you for this valuable comment. We will further provide the relevant experimental details in the supplementary: we assign different numbers of tasks to various datasets. Using CIFAR-10 as an example, we set five tasks for class-incremental tasks, each covering two classes without data overlap. For domain-incremental tasks, each domain represents one task. > **Q3. Concerns about the LLM foundation for FedSSI.** **R3:** Thanks for raising this concern. LLMs have gained significant attention for their strong performance on conventional tasks, but their high computational and communication overheads prevent deployment on edge devices. Current CFL methods, including FedSSI, mainly focus on training with traditional architectures (e.g., CNN, ResNet), and we will later consider the LLM foundation for CFL in our future research. Moreover, calculating the contribution for each parameter incurs negligible computational overhead due to the PSM module. The PSM will be trained along with the global model on the current local task. Since this is purely local training assisted by an already converged global model, the training of the PSM is very fast (accounting for only **1/40** of the training cost per task and requiring no communication). 
We calculate and save the parameter contributions during the local convergence process of the PSM, which can then be locally **discarded** after its contribution has been computed. Then, each client trains on the new task with the local model and parameter contribution scores.
Summary: The paper introduces FedSSI, a regularization-based continual federated learning (CFL) method designed to address catastrophic forgetting and data heterogeneity without requiring data rehearsal or heavy computational overhead. It identifies limitations in applying traditional regularization techniques like Synaptic Intelligence (SI) to heterogeneous data in federated learning scenarios. To overcome this, FedSSI proposes a Personalized Surrogate Model (PSM) that leverages both local and global information to calculate a surrogate loss tailored to client-specific data heterogeneity effectively. Experiments conducted across multiple benchmarks—including CIFAR10, CIFAR100, Tiny-ImageNet, Digit10, Office31, and Office-Caltech-10—demonstrate that FedSSI significantly outperforms existing methods, achieving accuracy improvements of up to 11.52% across different scenarios. Claims And Evidence: The claims made by the authors, particularly the effectiveness of FedSSI in handling data heterogeneity and mitigating catastrophic forgetting, are supported by substantial experimental evidence. Experiments cover diverse datasets, clearly demonstrating performance gains over baseline methods. The claim that FedSSI addresses limitations of traditional regularization methods (e.g., SI) is convincingly supported by experiments illustrating the superior performance of FedSSI under various levels of data heterogeneity. Methods And Evaluation Criteria: The evaluation criteria and methods proposed (such as Class-Incremental and Domain-Incremental learning tasks, along with different data heterogeneity levels using Dirichlet distribution) are appropriate for validating the method in realistic CFL scenarios. The benchmarks and comparison baselines used in the evaluation make sense and cover a comprehensive range of existing approaches in the literature. 
Theoretical Claims: The paper includes theoretical discussions about the convergence and effectiveness of the Personalized Surrogate Model (PSM). The authors provide a theoretical analysis of the personalized surrogate model's convergence. Proposition 1 and Theorem 1 are theoretically sound and based on prior established results. No specific proof issues or errors were identified upon review. Experimental Designs Or Analyses: The experimental designs are thorough, clearly defined, and valid for the CFL tasks explored. The authors have conducted comprehensive experiments across multiple datasets and scenarios, including consideration of data heterogeneity levels, and the analyses provided (e.g., comparing test accuracy and communication efficiency) are sound and convincing. Supplementary Material: The supplementary material mentioned includes appendices with additional experimental details, baseline descriptions, and hyperparameter settings. I reviewed these parts as described in the main paper, and they adequately support the primary results. Relation To Broader Scientific Literature: The paper situates itself clearly within the broader literature on Continual Federated Learning and builds explicitly on Synaptic Intelligence (SI), extending this traditional continual learning technique to federated learning settings. It also positions itself relative to recent works addressing catastrophic forgetting in federated scenarios (FedWeIT, FOT, FedCIL, etc.), clearly articulating its contributions against existing CFL approaches. Essential References Not Discussed: This paper reviewed relevant references. Other Strengths And Weaknesses: Strengths ------------- 1. The paper addresses a challenging and important problem in CFL: catastrophic forgetting without rehearsal. 2. FedSSI extends synaptic intelligence for federated learning scenarios and heterogeneous data distributions. 3. Extensive experimental validation convincingly supports the efficacy of the proposed method. 
Weakness ------------- 1. The personalized surrogate model introduces an additional local model update step, and while the computational overhead is claimed to be minimal, the practical implications of this overhead on low-resource edge devices might require further clarification. 2. The balance hyper-parameter (λ) requires careful tuning; however, the paper doesn't provide a fully automatic or adaptive mechanism for setting it dynamically in real-world deployments. Other Comments Or Suggestions: The paper is well written. No major suggestions on writing. Questions For Authors: **1. Hyperparameter Selection** How sensitive is FedSSI to the choice of λ, particularly under realistic scenarios where data distribution shifts might be unpredictable? Could an adaptive strategy for tuning λ be implemented practically? Clarifying this question would help assess the practical usability of FedSSI in dynamic scenarios. **2. Computational Overhead** While FedSSI is designed to be computationally efficient, does computing the surrogate model on resource-limited edge devices cause computational overhead? **3. Client Participation** How does FedSSI's performance scale with increasing numbers of clients? Have you evaluated its robustness or convergence speed in highly scaled FL scenarios (hundreds or thousands of clients with partial participation)? **4. Theoretical Bound on λ** Is there a theoretical guideline or bound for selecting the optimal λ based on measurable properties of client data distributions (non-IID)? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: > **Q1&Q4. Concerns about selection of the hyperparameter $\lambda$ and its theoretical bound** **R1:** Thank you for this valuable comment. We conducted relevant experiments in Table 3. In Table 3, $\alpha$ refers to the degree of data heterogeneity, while $\lambda$ is a control coefficient in the training process of PSM. In Proposition 1, we show that adjusting $\lambda$ can control whether the proportion of knowledge in PSM leans towards the local distribution or the global distribution (i.e., it is related to $\alpha$). When $\alpha$ has a higher value, indicating a trend towards homogeneity in distribution, clients need to focus more on local knowledge. This means that by setting a larger $\lambda$ value, PSM can rely more on local knowledge. Although we cannot directly relate $\alpha$ and $\lambda$ with a simple formula due to the complexity of the problem, even in specialized research on personalized federated learning (PFL), strategies such as **Gaussian mixture modeling** are relied upon. Another possible approach can be adapted from **APFL** [1], where the optimal weights are empirically determined through iterative gradient descent during optimization. However, this process may introduce additional model parameters and computational overhead. This specific study goes beyond the resource-efficient CFL that we focus on in this manuscript, so we did not develop new mechanisms for this issue. In this paper, we can empirically and theoretically judge that there exists a positive correlation between $\alpha$ and $\lambda$, which is supported by Proposition 1 and extensive experiments conducted in Table 3. We will consider this issue in our future research work. [1] Deng, Y., Kamani, M. M., and Mahdavi, M. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461, 2020. > **Q2. Concerns about the computational overhead brought by PSM.** **R2:** Thank you for raising this concern.
The computational overhead of PSM is negligible compared to the overall CFL training process. The computational overhead of the PSM module scales proportionally with the complexity of the learning tasks. For edge devices with limited computational capacity, their CFL tasks are typically simpler, so the PSM's computational demands scale down correspondingly. Specifically, the PSM will be trained along with the global model on the current local task. Since this is purely local training assisted by an already converged global model, the training of the PSM is very fast (accounting for only **1/40** of the training cost per task and requiring no communication). We calculate and save the parameter contributions during the local convergence process of the PSM, which can then be locally **discarded** after its contribution has been computed. Then, each client trains on the new task with the local model and parameter contribution scores. **We analyze this issue in Line 232 of our submitted manuscript.** > **Q3. Concerns about the scalability of FedSSI.** **R3:** Thank you for your helpful comment. We apologize that we are unable to simulate thousands of clients for scalability experiments due to hardware limitations. However, we validated the scalability of FedSSI with 100 clients, which is also a common scale in FL work. We conducted further experiments by increasing the number of clients to 100 while reducing the client selection rate to 10%.
We performed some related experiments on CIFAR10 and Digit10 ($\alpha=10.0$), with the results as follows:

| | Metric | FedAvg | FL+EWC | Re-Fed | FOT | FedSSI |
| :---------: | :-------: | :----: | :----: | :----: | :---: | :-------: |
| **CIFAR10** | $A(f)$ | 18.67 | 19.93 | 19.44 | 21.26 | **23.61** |
| | $\bar{A}$ | 45.8 | 46.38 | 44.08 | 47.02 | **47.14** |
| **Digit10** | $A(f)$ | 55.91 | 56.82 | 54.91 | 56.06 | **59.35** |
| | $\bar{A}$ | 70.37 | 70.4 | 66.24 | 69.69 | **71.27** |

Since the dataset needs to be divided into different numbers of tasks, an excessive number of clients can lead to a very small number of samples per client, making model training difficult. However, **FedSSI still maintains a leading position.**
Ab Initio Nonparametric Variable Selection for Scalable Symbolic Regression with Large $p$
Accept (poster)
Summary: This paper proposes a variable selection method for input variables related to the output. The proposed method is used for data preprocessing in symbolic regression and can improve the accuracy and speed of symbolic regression. Specifically, the authors propose a method called PAN+SR, which combines a key idea of ab initio nonparametric variable selection with SR to efficiently pre-screen large input spaces and reduce search complexity while maintaining accuracy. In the paper, the authors emphasize the importance of the FNR, as any erroneous exclusion of relevant variables will lead to the failure of subsequent symbolic regression. The authors conducted numerous experiments in the paper to demonstrate the effectiveness of the proposed method, including the improvement of regression accuracy and the robustness of the method to noise. Claims And Evidence: The supporting evidence is insufficient, as the data used in the paper is self-constructed and lacks comparison with similar methods. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand. Theoretical Claims: The paper has no theoretical proof, and its contribution lies in the design of the algorithm. However, I think the author's emphasis on the importance of a low FNR is correct. Experimental Designs Or Analyses: There are some issues with the experimental design of the paper. Firstly, the contribution of the paper lies in proposing a variable selection method, rather than a symbolic regression algorithm, yet the paper lacks comparison with other variable selection algorithms. In addition, in the noise robustness experiment, the paper only considered the presence of noise in the output variable y, which is inconsistent with reality. In practice, both input and output variables may have noise. Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper are related to correlation analysis and information theory in the broader literature. In previous studies, correlation coefficients and mutual information between variables were often used to determine whether variables were correlated. Essential References Not Discussed: The contribution of the paper lies in proposing a variable selection algorithm rather than a symbolic regression algorithm, but the author only discussed symbolic regression algorithms in Related Works and did not discuss other variable selection algorithms. Moreover, there was no comparison with other variable selection algorithms in the experiments, which is puzzling. Here are three relevant papers for the authors' reference. 1. Q. Chen, M. Zhang and B. Xue, "Feature Selection to Improve Generalization of Genetic Programming for High-Dimensional Symbolic Regression," in IEEE Transactions on Evolutionary Computation, vol. 21, no. 5, pp. 792-806, Oct. 2017, doi: 10.1109/TEVC.2017.2683489. 2. Q. Chen, B. Xue, B. Niu and M. Zhang, "Improving generalisation of genetic programming for high-dimensional symbolic regression with feature selection," 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 2016, pp. 3793-3800, doi: 10.1109/CEC.2016.7744270. 3. B. Al-Helali, Q. Chen, B. Xue and M. Zhang, "Genetic Programming for Feature Selection Based on Feature Removal Impact in High-Dimensional Symbolic Regression," in IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 3, pp. 2269-2282, June 2024, doi: 10.1109/TETCI.2024.3369407. Other Strengths And Weaknesses: The paper has done a great job, with standardized writing and clear discourse. However, there are the following suggestions for the author to further improve. 1. The paper constructed a new high-dimensional SR dataset for testing variable selection, which is good work.
However, the currently constructed dataset is still too simple. Consider constructing more challenging redundant variables, such as x3 = x1 + x2. 2. The experiments only consider adding noise to the output variable y; it is recommended that the authors also add noise to the input variables in the future. Other Comments Or Suggestions: All comments and suggestions are provided above. Questions For Authors: All comments and suggestions are provided above. Here, I would like to emphasize the two issues that concern me the most: first, the comparison with other variable selection methods, and second, adding noise to the input variables as well. Code Of Conduct: Affirmed. Overall Recommendation: 3
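Both of the stress tests suggested in this review are easy to prototype. A minimal sketch (all variable names, dimensions, and noise scales here are illustrative choices, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 500, 10

# Base predictors for a toy synthetic SR task
X = rng.uniform(0.0, 1.0, size=(n, p))

# Suggestion 1: append a redundant predictor x3 = x1 + x2 that an
# ideal variable selector should be able to discard
X = np.column_stack([X, X[:, 0] + X[:, 1]])

# Suggestion 2: add noise to the inputs as well as the output
# (noise scales are arbitrary, purely for illustration)
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2      # toy ground-truth signal
X_noisy = X + rng.normal(0.0, 0.05, size=X.shape)
y_noisy = y + rng.normal(0.0, 0.1, size=n)
```

A selector run on `X_noisy`/`y_noisy` could then be scored on whether it keeps the two signal columns while dropping the engineered redundant one.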
Rebuttal 1: Rebuttal: We appreciate the reviewers' thoughtful, constructive, and positive feedback on our work. We are glad that the motivation behind our approach was found to be clearly presented (pCrp, gjaN, pZx6), and that our focus on minimizing false negatives (FNs) in variable selection (VS) for symbolic regression (SR) was considered both important and well justified (pCrp, gjaN). We're pleased that reviewers found our method to be novel and effective (gjaN, GHAd, pZx6), and that the empirical evaluation was described as solid, convincing, and comprehensive (gjaN, GHAd, pZx6). We also appreciate the recognition of PAN+SR's novel direction to improve scalability in SR (pZx6), as well as our contribution to SR benchmarking and software development (gjaN). We'd like to take this opportunity to address your specific concerns and suggestions. We will include necessary revisions to clarify our work and strengthen the manuscript. ### Concern 1: Lack of comparison with other VS methods Contrary to the impression that no comparisons were made, our paper includes a **comprehensive evaluation of the proposed VS method (PAN) against 4 strong nonparametric VS methods**. These results are presented in **Appendix D.2** and referenced in **Section 4 paragraph 3**, where we note PAN consistently achieves the highest true positive rate (TPR) across all settings. We'll make this pointer more prominent in the revision to avoid confusion. Second, while the suggested papers share some surface-level similarities with our work and are worth citing, their focus is meaningfully different. Specifically, they use genetic programming (GP)—an SR method—as a tool for VS. In contrast, our work develops a **model-free VS** method that serves as a pre-screening step to improve the scalability and performance of SR methods, including but not limited to GP-based SR. Moreover, GP-based VS methods have several scalability and expressive power limitations. 
In addition to the number of input variables $p$, their effectiveness and tractability also depend heavily on: 1. The complexity and size of the operator set used for expression construction 2. The maximum tree depth allowed As these grow, GP's search space **expands combinatorially**, making it less scalable and less suitable for high-dimensional problems. Furthermore, the reliance on a **pre-defined operator set** makes them inherently not model-free, while our proposed approach is designed to be **modular, scalable, and model-free**. Finally, we respectfully disagree with the characterization that "the contribution of the paper lies in proposing a variable selection algorithm." Our work goes beyond this—we address scaling SR to extreme dimensions, where the synergy between VS and SR is central. Our key message, supported by comprehensive experiments, is that a carefully designed model-free VS step consistently improves downstream SR across diverse methods. ### Concern 2: Performance under correlated and/or noisy predictors (also raised by gjaN and GHAd). We also appreciate your thoughtful suggestion regarding noise and correlation in predictors. While standard practice in SR benchmarks focuses on no/low noise in the output and independence in the predictors, we agree that noisy and/or correlated inputs represent a realistic and important challenge. That said, our current experimental design already introduces several challenges **beyond existing SR benchmarks**: 1. high dimensionality 2. irrelevant features 3. high output noise (8 levels) 4. low sample sizes (4 levels) leading to $8 \cdot 4 \cdot 100 \cdot 10=\textbf{32000}$ simulation settings. Although we are not able to test every suggested setting, our work does not leave them unaddressed. First, it's worth noting that PAN+SR already **performs well on the real-world datasets** (see Figure 1), where **27/35 (77%)** of datasets have at least 1 pair of predictors with **correlation >= 0.85**. 
In addition, **motivated by your comment**, we conducted a pilot experiment using the Friedman equation (also used in [3]) under a **high correlation structure**. Predictors $x_1,\ldots,x_p$ are drawn from a multivariate Uniform(0,1) distribution with an autocorrelation structure: $\rho_{ij} = 0.9^{|i-j|}$, and the responses were generated as $$y = 10\sin(\pi x_1x_2) + 20(x_3 - 0.5)^2 + 10x_4 + 5x_5 + \varepsilon, \qquad\varepsilon \sim N(0,\sigma^2).$$ We fixed $n=1000$, $p=100$, and ran 100 trials per SNR level: | Metric | SNR=10 | SNR=5 | SNR=1 | |-|-|-|-| | TPR | 100% | 100% | 99.8% | | FPR | 15.98% | 23.26% | 33.23% | Even so, **PAN consistently identified all 5 relevant predictors**, except for 1 run under SNR=1. We plan to include the discussion of correlated structure in the paper to help motivate future research focused on more realistic simulation settings. Thank you again for your time and constructive feedback. We appreciate the opportunity to clarify our work and believe these revisions will help strengthen the paper. --- Rebuttal Comment 1.1: Comment: 1. In the comparative experiment in Appendix D.2, although the proposed algorithm has the highest TPR, its FPR is much higher than that of the other algorithms, which may not be convincing. 2. Regarding the issue of noise, what I mean is that in practice, noise exists in both the independent (input) and dependent (output) variables, not just in the dependent (output) variable. 3. One of my suggestions is to construct some more difficult datasets, such as constructing x3 = x1 + x2, where x3 can be represented by x1 and x2 and therefore can be deleted. --- Reply to Comment 1.1.1: Comment: Thank you for reading our Appendix and for the opportunity to clarify these points. We sincerely appreciate your constructive suggestion—it helped us strengthen the validation of PAN and enhance the realism of our simulation framework. 
**We hope these clarifications and additional experiments support a more favorable evaluation of our work.** ## Point 1. Asymmetric role of TPR and FPR in the context of SR First, we completely agree that in **conventional variable selection** (VS) problems, the trade-off between TPR and FPR is important, and a balance between the two is typically expected. However, our goal is to develop **VS methods specifically to support the scaling of SR methods** to large-$p$ datasets. This focus naturally leads to an asymmetric role of TPR and FPR. In particular, an effective VS method in our setting should prioritize maximizing TPR (ideally 100%), and only then aim to minimize FPR (which is secondary). This is because, in SR, the cost of excluding a relevant predictor (i.e., an FN) is much higher than that of including an irrelevant one (i.e., an FP). For example, if the true expression is $y = x_1 + x_2$ and $x_2$ is mistakenly excluded during pre-screening, **the correct expression becomes unrecoverable**. In contrast, if the selected set includes $x_1, x_2, x_3, x_4$, the correct expression remains accessible, and the irrelevant predictors can be ignored during symbolic expression search. See Lines 50-71 (left), 119-128 (left), 148-152 (left), 251-260 (right), 355-384 (left), 405-419 (right) for related discussion. Our design choice of favoring FPs over FNs reflects a conservative and robust approach: it errs on the side of inclusion when uncertainty is high, ensuring that true signals are retained. In Appendix D.2, we show that PAN has the highest TPR (or the lowest FNR) across all settings, which is exactly the desirable behavior in SR pre-screening. Second, many SR algorithms incorporate implicit regularization mechanisms—such as penalizing complexity or preferring parsimonious expressions—that help filter out spurious variables during the model construction stage. 
Thus, even if some irrelevant predictors pass through the VS stage, they are unlikely to persist in the final expression, further reducing FPR. We hope this helps clarify the key distinction between our work and conventional VS problems, and highlights our unique focus driven by the challenges of extreme-scale SR. ## Points 2 & 3 First, we fully agree that in real-world applications, noise can affect both the input (predictor) and output variables. In fact, this is reflected in the real-world datasets we evaluated, where PAN+SR demonstrates strong performance—suggesting robustness to such noise (and/or other scenarios not fully covered by our simulation) even without explicitly simulating it. We believe these real-data results offer a compelling and complementary benchmark alongside our simulations. Second, while our current simulation framework does not include noise in predictors, we have already incorporated four major challenges that are underrepresented in standard SR benchmarks: (1) high dimensionality, (2) presence of irrelevant predictors, (3) varying sample sizes (4 levels), and (4) multiple levels of output noise (8 levels), totaling 32,000 distinct settings. Extending this already comprehensive design to include input noise and/or redundancy for each of the 32,000 settings would significantly increase the computational burden and is beyond the scope of this study. However, motivated by your comment, we managed to extend the Friedman simulation used in our earlier response with **several new scenarios that directly address your Point 2 and Point 3**: 1. Baseline: Independent, noiseless, irreducible predictors 2. Noisy predictors (Point 2): Gaussian noise added to each predictor with noise variance equal to 1/5 of the signal variance 3. Duplicate predictor (Point 3): $x_6 = x_1+x_2$, where $x_1$ and $x_2$ are relevant predictors. 4. 
Correlated predictors: As in our earlier rebuttal, predictors follow an autoregressive structure with $\rho_{ij} = 0.9^{|i-j|}$ All scenarios include additive output noise with a SNR of 10. Each setting was repeated 100 times, and the average performance is summarized below: | Scenario | TPR | FPR | |-|-|-| | Baseline | 100% | 10.58% | | Noisy $X$ | 100% | 26.42% | | Duplicated $X$ | 100% | 11.11% | | Correlated $X$ | 100% | 15.98% | **PAN consistently achieves a 100% TPR**, showing robustness to (1) input noise, (2) redundancy via linear combinations, and (3) strong correlation structures in the predictors. We will include these new results in the final version of the paper, along with the corresponding code in our GitHub repository to ensure reproducibility. We sincerely thank you for this constructive suggestion, which has helped us further validate the robustness of PAN and improve the realism of our simulation framework.
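For reference, the TPR/FPR values in tables like the ones above follow the standard confusion-matrix definitions over predictor indices. A minimal sketch (the function name is ours, for illustration only):

```python
def selection_rates(selected, relevant, p):
    """True/false positive rates of a variable-selection result.

    selected: indices chosen by the screener; relevant: ground-truth
    relevant indices; p: total number of predictors.
    """
    selected, relevant = set(selected), set(relevant)
    tp = len(selected & relevant)          # relevant predictors kept
    fp = len(selected - relevant)          # irrelevant predictors kept
    tpr = tp / len(relevant)
    fpr = fp / (p - len(relevant))
    return tpr, fpr

# e.g. 5 relevant predictors out of p=100; all found, plus 16 extras
tpr, fpr = selection_rates(range(21), range(5), p=100)
```

Under the paper's asymmetric-cost argument, a screener is judged first on `tpr` (ideally 1.0) and only secondarily on `fpr`.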
Summary: This paper proposes a rank-clustering PAN strategy for screening for relevant features before running symbolic regression (SR) methods on very high-dimensional data, where the goal is to minimize the false negative rate (avoiding missing important variables). The idea is to repeatedly run BART and use the ranking of feature importance measures to cluster relevant and irrelevant features. Then, only the features identified as relevant are fed into a symbolic regression algorithm. Through extensive simulation studies, this strategy is shown to be effective in improving the performance and reducing the running time of all popular downstream SR algorithms. Claims And Evidence: The claims for the benefits of the proposed method are mostly supported by empirical evidence. It would be nice to have some theoretical insights. Methods And Evaluation Criteria: Yes. Theoretical Claims: This submission does not contain many theoretical aspects. Experimental Designs Or Analyses: I have checked the experimental setups, including the data-generating process, algorithms used, and their implementations. Existing experiments are solid, but they would further benefit from a more comprehensive design, which I elaborate on in the "Questions" section. Supplementary Material: Yes, I reviewed the code and supplementary experimental results in the SM. Relation To Broader Scientific Literature: The key contributions include: - Formalize the goal of variable selection in SR (minimizing FNR) and its distinction from standard variable selection. - Propose a strategy to conduct variable selection, which boosts existing methods, and may also inspire future development in this direction. - Add to the software and benchmark development of the SR field. Essential References Not Discussed: I haven't found such a case. Other Strengths And Weaknesses: Please see my comments and questions. 
Other Comments Or Suggestions: - While the problem considered is well motivated and the argument on limiting FNR is solid, the proposed solution seems rather heuristic. It is not clear why the authors settled on this specific solution. Some theoretical justification would be helpful in establishing this method. - The experiments are limited to i.i.d. covariates, where the importance of features can be readily assessed by ranking and clustering. However, this kind of ranking strategy might be problematic when features are correlated. It would be helpful to extend the experiments. Questions For Authors: Similar to my comments, - Is there theoretical justification for the method? Why do you pick this specific strategy? - How do the methods work for correlated features? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewers' thoughtful, constructive, and positive feedback on our work. We are glad that the motivation behind our approach was found to be clearly presented (pCrp, gjaN, pZx6), and that our focus on minimizing false negatives (FNs) in variable selection (VS) for symbolic regression (SR) was considered both important and well justified (pCrp, gjaN). We're pleased that reviewers found our method to be novel and effective (gjaN, GHAd, pZx6), and that the empirical evaluation was described as solid, convincing, and comprehensive (gjaN, GHAd, pZx6). We also appreciate the recognition of PAN+SR's novel direction to improve scalability in SR (pZx6), as well as our contribution to SR benchmarking and software development (gjaN). We'd like to take this opportunity to address your specific concerns and suggestions. We will include necessary revisions to clarify our work and strengthen the manuscript. ### Concern 1: Motivation and theoretical justification for the proposed method (PAN) We agree that the motivation for PAN can be more clearly articulated. PAN is motivated by the observation that VIP rankings, unlike raw VIP values, tend to exhibit a clear **2-group structure** separating relevant from irrelevant predictors. This **2-group structure** is obscured in raw VIPs due to their bounded nature (summing to 1), whereas the ranking scale is wider and more interpretable. In Section 4 (paragraphs 5-8) and Appendix D.1, we show that under the uniform assumption, the expected mean rank $\bar{r}_{j\cdot}$ equals $(1+p_0)/2$ for relevant predictors and $(p_0+1+p)/2$ for irrelevant ones, naturally forming 2 clusters. This led us to **frame VS as a clustering problem** over mean VIP rankings $\bar{r}_{j\cdot}$. Although not included in the paper, we tested several clustering algorithms, including HAC, $k$-means, affinity propagation, Gaussian mixture model (GMM), spectral clustering, mean shift, DBSCAN, and BIRCH. 
We found that **HAC consistently outperformed others**. A key reason is its **robustness to class imbalance**, which is intrinsic to sparse regression problems where $p_0 \ll p$. In imbalanced data, large clusters can dominate centroid positions and overshadow the density signals of smaller clusters, making centroid-based and density-based methods less suitable for this task. In contrast, HAC starts with each point as its own cluster and merges based purely on pairwise distances. This allows small clusters (i.e., relevant predictors) to remain distinguishable. We plan to add an ablation study to compare the effect of different clustering algorithms on selection accuracy to justify our design choice further. We believe these additions will strengthen the motivation behind PAN and provide useful insights into the challenges posed by class imbalance in VS. ### Concern 2: Performance under correlated predictors (also raised by pCrp and GHAd) We appreciate your suggestion regarding correlation in the predictors. While the standard practice in SR benchmarks focuses on independence among the predictors, we agree that correlated inputs represent a realistic and important challenge. That said, our current experimental design already introduces several challenges **beyond existing SR benchmarks**: 1. high dimensionality 2. irrelevant features 3. high output noise (8 levels) 4. low sample sizes (4 levels) leading to $8 \cdot 4 \cdot 100 \cdot 10=\textbf{32000}$ simulation settings. Although we are not able to test every suggested setting, our work does not leave them unaddressed. First, it's worth noting that PAN+SR already **performs well on the real-world datasets** (see Figure 1), where **27/35 (77%)** of datasets have at least 1 pair of predictors with **correlation >= 0.85**. In addition, **motivated by your comment**, we conducted a pilot experiment using the Friedman equation under a **high correlation structure**. 
Predictors $x_1,\ldots,x_p$ are drawn from a multivariate Uniform(0,1) with an autocorrelation structure: $\rho_{ij} = 0.9^{|i-j|}$, and the responses were generated as $$y = 10\sin(\pi x_1x_2) + 20(x_3 - 0.5)^2 + 10x_4 + 5x_5 + \varepsilon, \qquad\varepsilon \sim N(0,\sigma^2).$$ We fixed $n=1000$, $p=100$, and ran 100 trials per SNR level: | Metric | SNR=10 | SNR=5 | SNR=1 | |-|-|-|-| | TPR | 100% | 100% | 99.8% | | FPR | 15.98% | 23.26% | 33.23% | Even so, **PAN consistently identified all 5 relevant predictors**, except for 1 run under SNR=1. We plan to include the discussion of correlated structure in the paper to help motivate future research focused on more realistic simulation settings. Thank you again for your time, constructive feedback, and thoughtful suggestions. We appreciate the opportunity to clarify our work and believe the revisions and additional discussions will help strengthen the paper.
Summary: The authors propose a feature selection preprocessing step to enhance the performance of symbolic regression algorithms. They introduce the method and evaluate its usage on SRBench across a number of algorithms and datasets. Claims And Evidence: The authors do a good job overall of making evidence-based claims. Evidence is convincing as it is over multiple algorithms, datasets, levels of noise, across metrics, and in comparison to other feature selection strategies (appendix). Methods And Evaluation Criteria: Overall the methods make sense and the evaluation is commendable - the authors test their algorithm in combination with a number of SR algorithms and across real-world and synthetic datasets in a robust way. They also compare their feature selection strategy with others in terms of false positive/negative rates (of feature selection) with different levels of noise. The only part of the methodology that didn't make total sense to me was the use of hierarchical agglomerative clustering to partition the feature space into two groups. It seems like overkill, and it isn't fully motivated. If you're just clustering the feature ranks over many trials, can't you use a much simpler threshold finder than HAC? Theoretical Claims: There are no proofs. Experimental Designs Or Analyses: Authors appear to follow recommended benchmarking practices. There aren't statistical tests for differences but the effect sizes are reported and laid out visually. Supplementary Material: I read the supplemental part of the paper. Relation To Broader Scientific Literature: Most work in SR develops a new method and then reports its benchmark results on SRBench in comparison to previous results. The authors instead evaluate a broadly applicable preprocessing strategy that synergizes with many methods. The relevant literature is appropriately cited in symbolic regression, and reviewed in terms of feature selection as well. 
Essential References Not Discussed: There are some SR-based feature selection research papers that might be worth including, but it is ancillary. Other Strengths And Weaknesses: Overall I thought the paper was well done. One weakness, in general, is that the strategy of preferring false positives to false negatives may be less optimal on real-world data where there is a lot of cross-correlation, confounding, and redundancy in the dataset. It's a bit easier to optimize for that criterion on synthetic benchmarks of known physical systems than it might be for unknown systems. The comparison on real-world datasets mitigates some of this worry but it is worth mentioning. Other Comments Or Suggestions: - redefine PAN outside of the abstract - the authors motivate the problem by mentioning SR is NP-hard, but they might as well also note that feature selection is hard as well (http://www.jmlr.org/proceedings/papers/v40/Foster15.pdf) Questions For Authors: None apart from those mentioned thus far Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewers' thoughtful, constructive, and positive feedback on our work. We are glad that the motivation behind our approach was found to be clearly presented (pCrp, gjaN, pZx6), and that our focus on minimizing false negatives (FNs) in variable selection (VS) for symbolic regression (SR) was considered both important and well justified (pCrp, gjaN). We're pleased that reviewers found our method to be novel and effective (gjaN, GHAd, pZx6), and that the empirical evaluation was described as solid, convincing, and comprehensive (gjaN, GHAd, pZx6). We also appreciate the recognition of PAN+SR's novel direction to improve scalability in SR (pZx6), as well as our contribution to SR benchmarking and software development (gjaN). We'd like to take this opportunity to address your specific concerns and suggestions. We will include necessary revisions to clarify our work and strengthen the manuscript. ### Concern 1: Motivation behind HAC We fully agree with you that the usage of HAC is not fully motivated, and further justification is warranted in the main paper. We chose to skip this motivation primarily due to the page limit. Although not included in the paper, we tested several clustering algorithms, including HAC, $k$-means, affinity propagation, Gaussian mixture model (GMM), spectral clustering, mean shift, DBSCAN, and BIRCH. We found that **HAC consistently outperformed others**. A key reason for choosing HAC is its **robustness to class imbalance**, which is intrinsic to sparse regression problems where the number of relevant predictors ($p_0$) is much smaller than the number of irrelevant ones ($p-p_0$). In imbalanced data, large clusters can dominate centroid positions and overshadow the density signals of smaller clusters, making centroid-based and density-based methods less suitable for this task. In contrast, HAC starts with each point as its own cluster and merges based purely on pairwise distances. 
This allows small clusters (i.e., relevant predictors) to remain distinguishable. We appreciate you highlighting this gap and will revise the manuscript to include a **dedicated explanation of our motivation and intuition in the method section**. We also plan to **add an ablation study** to compare the effect of different clustering algorithms on selection accuracy to justify our design choice further. We believe these additions will strengthen the motivation behind PAN and provide useful insights into the challenges posed by class imbalance in VS. ### Concern 2: Performance under correlated structure (by pCrp and gjaN) We appreciate your insightful comment on the role of correlated, confounding, and redundant variables in real-world settings. While our strategy of favoring false positives (FPs) over false negatives (FNs) may lead to more FPs in the presence of correlated predictors, we view this as a desired property. When uncertainty is high, it is preferable to retain a broader pool of potentially relevant features rather than risk excluding true signals. Furthermore, SR algorithms typically possess implicit regularization, which helps to further filter out irrelevant variables during model construction. As you rightly noted, PAN+SR already **performs well on the real-world datasets**, where **27/35 (77%)** of datasets have at least 1 pair of predictors with **correlation >= 0.85**. In addition, **motivated by your comment**, we conducted a pilot experiment using the Friedman equation under a **high correlation structure**. 
Predictors $x_1,\ldots,x_p$ are drawn from a multivariate Uniform(0,1) distribution with an autocorrelation structure: $\rho_{ij} = 0.9^{|i-j|}$, and the responses were generated as $$y = 10\sin(\pi x_1x_2) + 20(x_3 - 0.5)^2 + 10x_4 + 5x_5 + \varepsilon, \qquad\varepsilon \sim N(0,\sigma^2).$$ We fixed $n=1000$, $p=100$, and ran 100 trials per SNR level: | Metric | SNR=10 | SNR=5 | SNR=1 | |-|-|-|-| | TPR | 100% | 100% | 99.8% | | FPR | 15.98% | 23.26% | 33.23% | Even so, **PAN consistently identified all 5 relevant predictors**, except for 1 run under SNR=1. We plan to include the discussion of correlated structure in the paper to help motivate future research focused on more realistic simulation settings. We are also more than happy to incorporate your suggestions on redefining PAN outside of the abstract and to motivate the difficulty of variable selection using the provided reference. Thank you again for your time, constructive feedback, and thoughtful suggestions. We appreciate the opportunity to clarify our work and believe the revisions and additional discussions will help strengthen the paper.
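For intuition, the screening step discussed in this thread (two-group HAC over mean VIP ranks, keeping the low-rank group) can be sketched as follows. This is a simplified stand-in using scikit-learn's agglomerative clustering, not the authors' implementation; the linkage choice and the toy rank values are our assumptions:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def screen_by_rank_clustering(mean_ranks):
    """Split predictors into two HAC clusters by mean VIP rank and
    keep the low-rank (consistently important) cluster."""
    mean_ranks = np.asarray(mean_ranks, dtype=float)
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(
        mean_ranks.reshape(-1, 1))
    # The cluster whose mean rank is smaller is treated as "relevant"
    keep = min((0, 1), key=lambda c: mean_ranks[labels == c].mean())
    return np.flatnonzero(labels == keep)

# Toy example: 5 predictors consistently ranked near the top,
# 95 irrelevant ones hovering around the middle of the rank scale
ranks = np.concatenate([np.array([2.0, 3.0, 3.5, 4.0, 5.0]),
                        np.linspace(40.0, 60.0, 95)])
selected = screen_by_rank_clustering(ranks)
```

The wide gap between the two groups on the rank scale is what makes the heavily imbalanced 5-vs-95 split recoverable, which matches the rebuttal's argument for distance-based HAC over centroid- or density-based alternatives.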
Summary: The authors are interested in the problem of discovering mathematical equations from raw data. One of the largest bottlenecks for SR methods is that it's extremely hard to scale the equation search to > 10 variables. This is because each additional variable considered combinatorially increases the search space, which makes the search less efficient. The authors hypothesize that, given a large number of variables, we can analyze certain statistical properties to cluster the variables based on their relevance and prune the irrelevant variables, which should considerably increase the search efficiency. Toward this end, the authors propose PAN+SR, a framework which modifies the BART variable selection method to select a set of relevant variables to run an off-the-shelf SR algorithm on. The authors demonstrate that, for almost all models on SRBench, preprocessing the input with PAN+SR improves performance (although there is a higher delta improvement for lower-performing models compared to higher-performing models). Claims And Evidence: Claim: PAN+SR is a general-purpose algorithm to increase SR scalability. Comments: I'm not 100% convinced that PAN+SR "solves" the scalability problem but the authors present a novel direction of improvement for SR scalability which is pretty exciting. Previous work has generally handled this scalability challenge by (1) inducing programs with neural networks (which exposes an out-of-distribution problem) `[1, 2]` and (2) by using LLMs to induce programs `[3, 4, 5]`, which are expensive to run. PAN+SR circumvents this problem by pre-selecting the variables but I'm not sure if a clean variable clustering exists in extremely noisy settings. Regardless, PAN+SR provides empirical justification that the model scales better than baseline models. Claim: PAN is the most performant pre-screening strategy. 
(Not really an explicit claim but an implicit one) Comments: I generally agree that the algorithm is well motivated but it's not completely certain whether the increased performance is a result of pre-screening in general or pre-screening specifically with PAN+SR. Some additional experiments using other variable selection methods (e.g: BART) would be extremely insightful here. `[1]`: https://proceedings.mlr.press/v139/biggio21a/biggio21a.pdf `[2]`: https://github.com/deep-symbolic-mathematics/TPSR `[3]`: https://arxiv.org/abs/2404.18400 `[4]`: https://arxiv.org/abs/2409.09359 `[5]`: https://ai-2-ase.github.io/papers/52_InceptionSR_AAAI_25.pdf Methods And Evaluation Criteria: The authors use a modified version of SRBench to simulate sampling equations with noisy irrelevant variables. These variables are drawn from the same data distribution to increase the task hardness. Overall, I found the evaluation criteria to be well motivated. One small nitpick: The data for each variable in SRBench (ground-truth) is sampled from a normally distributed random variable. Would PAN+SR's performance suffer if the empirical data is not normally distributed? Specifically, we know that physical laws tend to have very diverse data distributions `[6]`. e.g.: Newton's law and Coulomb's law have very similar equation sketches but the scale at which they operate is extremely different. Specifically, an additional experiment of PAN+SR's performance on `[6]` would be extremely helpful! Overall, I believe this paper will lead to great discussions at the conference and am in favor of **accepting** this paper. PAN+SR presents a new and refreshing direction for scaling SR methods by pre-screening the variable set using statistical heuristics that is empirically validated. `[6]`: https://arxiv.org/abs/2206.10540 Theoretical Claims: . Experimental Designs Or Analyses: The methodology is sufficiently sound. The authors utilize and describe the evaluation setup proposed in SRBench. 
Supplementary Material: . Relation To Broader Scientific Literature: . Essential References Not Discussed: . Other Strengths And Weaknesses: I generally found the paper to be polished and easy to read. I think a bit more time can be devoted (in the appendix maybe) to go through how the pre-screening strategy would work on a small example, but otherwise this was a pretty interesting read. Other Comments Or Suggestions: . Questions For Authors: . Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewers' thoughtful, constructive, and positive feedback on our work. We are glad that the motivation behind our approach was found to be clearly presented (pCrp, gjaN, pZx6), and that our focus on minimizing false negatives (FNs) in variable selection (VS) for symbolic regression (SR) was considered both important and well justified (pCrp, gjaN). We're pleased that reviewers found our method to be novel and effective (gjaN, GHAd, pZx6), and that the empirical evaluation was described as solid, convincing, and comprehensive (gjaN, GHAd, pZx6). We also appreciate the recognition of PAN+SR's novel direction to improve scalability in SR (pZx6), as well as our contribution to SR benchmarking and software development (gjaN). We'd like to take this opportunity to address your specific concerns and suggestions. We will include necessary revisions to clarify our work and strengthen the manuscript. ### Concern 1: Scalability challenge being solved by PAN+SR We agree that PAN+SR does not completely resolve the scalability challenge, but we believe it takes a significant step forward. In addition to the performance improvements presented, we'd like to share further concrete evidence, which will be included in the final paper for clarity. For context, the average values of $p$ and $p_0$ in the Feynman datasets are 186.15 and 3.65, respectively. On average, PAN reduces $p$ to 5.23 (97% reduction) in the best-case scenario (no noise), and to 63.1 (66% reduction) in the worst-case scenario (SNR=0.5). Furthermore, scalable SR methods such as TPSR [2] and DySymNet also benefit from PAN+SR, as shown in Figures 1 & 2. We chose not to include NeSymReS [1] as a baseline because TPSR [2] already employs a pre-trained NeSymReS backbone in our implementation. 
### Concern 2: Existence of a clear separation under noisy settings As you rightly pointed out, separating signal from noise becomes harder in extremely noisy settings (Figures 5 & 6 in Appendix D.1). Nonetheless, relevant predictors still tend to cluster around the low-mean cluster, which explains the consistently high true positive rate (TPR) despite heavy noise (Figure 7 in Appendix D.2). ### Concern 3: PAN's role in improving performance In Appendix D.2, we compared PAN against 4 other nonparametric VS methods on the high-dimensional Feynman database. As shown in Figures 7 and 8 of Appendix D.2, PAN consistently achieves the highest TPR across all settings, albeit at the cost of frequent false positives. Since identifying TPs is more critical in the SR pre-screening context, we believe PAN offers the most practical, effective, and safe solution. ### Concern 4: PAN's robustness to different sampling distributions for the predictors Your observation about the distributional assumptions in SRBench and the suggestion to consider [6] are also much appreciated. Indeed, each variable in the Feynman dataset is sampled from a uniform distribution, a common design choice in empirical studies aimed at ensuring even coverage of the input space to support the generalizability and robustness of the study. As [6] highlights, however, the sampling range of variables can influence SR performance. We agree with many points raised in [6] and find it to be a valuable reference for the related work section. However, we do not expect sampling range to impact the pre-screening performance of PAN+SR. This is because **tree-based methods like BART are invariant to monotonic transformations of the input features**. Thus, the pre-screening result should remain robust to variation in the sampling range. 
Additionally, while [6] enhances the Feynman dataset by adding 1-3 irrelevant variables, our experimental setup considers a **substantially more challenging high-dimensional regime**, adding $50p_0$ irrelevant variables (ranging **from 100 to 450**). We also note that while the original treatment of constants and integer-valued variables as real-valued variables does violate their physical meanings—as [6] rightly points out—this choice inadvertently increases the difficulty of the task, thereby providing a more stringent test for both pre-screening and SR modeling. For these reasons, we believe additional experiments based on [6] would not significantly change the conclusion of our study, though we acknowledge its importance and will include it in the discussion of future directions. However, motivated by other reviewers' comments, we will 1. add an **ablation study to compare the effect of different clustering algorithms**, and 2. include a **high correlation** experiment using the Friedman equation. See response to gjaN for details. We believe these additions will strengthen the motivation behind PAN and broaden our experimental evaluation. Thank you again for your time, constructive feedback, and thoughtful suggestions. We appreciate the opportunity to clarify our work and believe the revisions and additional discussions will help strengthen the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the additional clarification. I'll be maintaining my current score. Great work!
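As a toy illustration of the clustering idea described in this thread (the scores below are hypothetical, and a plain one-dimensional two-means split stands in for PAN's actual importance measure and clustering algorithm; per the rebuttal, relevant predictors fall in the low-mean cluster):

```python
import numpy as np

def low_mean_cluster(scores, iters=50):
    """Split 1-D scores into two clusters (simple two-means) and return a
    boolean mask over the low-mean cluster."""
    scores = np.asarray(scores, dtype=float)
    c_lo, c_hi = scores.min(), scores.max()
    for _ in range(iters):
        # Assign each score to the nearer centroid, then recompute centroids.
        mask = np.abs(scores - c_lo) <= np.abs(scores - c_hi)
        c_lo, c_hi = scores[mask].mean(), scores[~mask].mean()
    return mask

# Hypothetical scores: 5 relevant variables with low values, 45 irrelevant ones
# with clearly higher values.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.1, 0.02, 5), rng.normal(1.0, 0.05, 45)])
selected = low_mean_cluster(scores)
```

Variables falling in the low-mean cluster would be kept for the downstream SR run; everything else is screened out.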
Field Matching: an Electrostatic Paradigm to Generate and Transfer Data
Accept (poster)
Summary: The paper proposed a mechanism to train generative models. The main idea is to regard each data sample as a charge. The training process involves training networks to predict fields (gradients of potentials). The sampling process is done by solving an ordinary differential equation (moving data samples along fields). Claims And Evidence: Yes Methods And Evaluation Criteria: Only toy datasets like 2D point sets and MNIST are provided. Theoretical Claims: The idea sounds interesting at first sight. However, when I reviewed the training algorithm, it looked very similar to existing diffusion model designs. 1. The interpolation in Eq 19 is exactly the one used in rectified flow and flow matching (RF/FM), except for the random noise term. 2. Eq 11 and Eq 16 are used as the ground truth when training the network. Eq 11 involves a term (x - x') which is also the one used in RF/FM, except for the normalizing term. Thus the training loss is just a scaled version of RF/FM. Experimental Designs Or Analyses: Considering the similarity between the paper and flow matching, I would like to know about some ablation studies. 1. Is the noise term in Eq 19 important? In flow matching's training, we do not need the term. I still do not know if it is necessary to make the design work. 2. The results are weak. Only some visual comparisons with PFGM are provided. However, a comparison with RF/FM is necessary. Supplementary Material: The code is provided. Relation To Broader Scientific Literature: The method is inspired by fields in physics. However, the method is just a complicated version of rectified flow. It is hard for me to judge the quality of the method based on the current draft. Essential References Not Discussed: Yes Other Strengths And Weaknesses: No Other Comments Or Suggestions: After reading the paper, I have no interest in using the method in my projects. The designs are similar to rectified flows but more complicated. 
If the authors can ablate the designs, I would be more interested in the paper. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 2
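For reference, the rectified-flow / flow-matching construction that the review compares against can be sketched in a few lines (a minimal NumPy illustration with synthetic Gaussians; the constant velocity below is an analytical stand-in for a trained network):

```python
import numpy as np

# FM/RF recipe: sample pairs (x, y), interpolate x_t = (1 - t) x + t y, and
# regress a velocity field on the displacement y - x (no noise term).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(1024, 2))   # source samples
y = rng.normal(3.0, 1.0, size=(1024, 2))   # target samples
t = rng.uniform(0.0, 1.0, size=(1024, 1))

x_t = (1.0 - t) * x + t * y                # linear interpolant
target = y - x                             # regression target for the velocity

# For a constant (state-independent) model, the squared-loss minimiser is the
# mean displacement, here approximately (3, 3).
v_star = target.mean(axis=0)
```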
Rebuttal 1: Rebuttal: Dear reviewer, thank you for reviewing our paper. Below we answer your questions and comments. **(Q1) Only some visual comparisons with PFGM are provided. However, a comparison with RF/FM is necessary. Only toy datasets like 2D point sets and MNIST are provided.** It is worth noting that our main goal in the experimental section is to demonstrate a proof of concept of our method. We agree that providing additional generation experiments with more complex datasets enhances the understanding of the method's performance. Further scalability of the method is a promising avenue for future research. Nevertheless, following your request, we include $\textbf{additional experiments}$ with more complex data such as CIFAR-10. For qualitative analysis, we demonstrate our EFM's results as well as PFGM's. Please see Fig. 3, available via the anonymous link https://drive.google.com/file/d/1DTbQR_GNah7hVGGnDF822aD96iWxjR-k/view?usp=sharing. For quantitative analysis, we calculate FID/CMMD scores on the test split of the aforementioned datasets and compare our method with PFGM, DDPM, DDIM, and GLOW. Firstly, we demonstrate the quantitative performance of our method on the full Colored MNIST dataset. We see that our method outperforms the other approaches on full Colored MNIST, reaching the lowest FID/CMMD. | Metrics/Method | EFM (our) | PFGM | DDPM | DDIM | GLOW | |----------------|----------|------|------|------|------| | FID | **0.92** | 1.88 | 2.18 | 2.23 | 25.9 | | CMMD | **1.47** | 2.28 | 2.68 | 2.85 | - | Secondly, we demonstrate the quantitative performance of our method on the CIFAR-10 dataset. We see that in image generation on CIFAR-10, our performance is comparable to PFGM. *At the same time, we remind the reviewer that our method is also capable of performing data-to-data transfer while PFGM is not.* 
| Metrics/Method | EFM (our) | PFGM | DDPM | DDIM | GLOW | |----------------|-----------|------------|------|------|-------| | FID | 2.62 | **2.48** | 3.17 | 4.16 | 48.9 | | CMMD | **1.87** | 1.93 | 2.98 | 3.25 | - | In accordance with your request for a comparison with FM, we quantitatively and qualitatively compare the performance of our approach with SB-based (DDIB, $\alpha$-DSBM) approaches as well as a GAN-based one (Cycle-GAN) for unpaired data setups on Colored MNIST. Please see Fig. 6 via the aforementioned link. For quantitative analysis, we report FID and CMMD metrics on the Colored MNIST dataset for our method and the compared methods. | Metrics/Method | EFM (our) | FM | DSBM | DDIB | GAN | |----------------|-----------|-------|------|------|------| | FID | **4.45** | 19.87 | 7.21 | 8.24 | 4.57 | | CMMD | **2.37** | - | 4.02 | 4.11 | 2.45 | We see that our method demonstrates the lowest (best) FID and CMMD among the compared approaches. **(Q2) [...] rectified flow and flow matching (RF/FM) [...] the training loss is just a scaled version of RF/FM.** We think that there might be a misunderstanding of our EFM method. In fact, we do not have a lot in common with flow matching (FM), except for the fact that we learn an ODE to generate or transfer data. In particular, all the theoretical derivations and motivation totally differ. - We work in a $D+1$-dimensional space and learn a static (non-time-dependent) vector field to transfer data, while the field in FM is time-dependent and lives in $D$-dimensional space. - FM defines the interpolation $tx + (1-t)y$ between data samples $x$ and $y$ from two different data sets to regress a velocity field on $y-x$, where $t$ is sampled from a standard uniform distribution. In fact, the usage of this particular interpolant is essential for their loss construction. 
In turn, in our case, this interpolation is just a $\textit{technical}$ way to define some inter-plate points $\widetilde{x}$ between the data distributions at which to approximate the Coulomb field $E(\widetilde{x})$. In principle, we can use any other way to define the intermediate points. To illustrate this fact, we conduct the following 2-dimensional experiment with the Swiss roll dataset. In the first case, we define the inter-plate points via the interpolation $tx + (1-t)y$ and approximate the Coulomb field there. In the second case, we define a uniform cubic mesh between the plates. The results demonstrate that the performance does not depend on the interpolation. Please see Fig. 5 via the main link. Since the geometry of data distributions is quite complex in high-dimensional spaces, we use Eq. (19) as the way to define intermediate points simply to cover a bigger volume of space. **Concluding remarks.** We would be grateful if you could let us know if the explanations we gave are satisfactory. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have.
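A minimal sketch of the Monte Carlo approximation of the capacitor field at inter-plate points discussed in this rebuttal (the point-charge kernel $(x - x')/\|x - x'\|^N$ in $N$-dimensional space and the plate placement at $z = 0$ and $z = L$ follow the rebuttal's construction; the sampling and sizes here are purely illustrative):

```python
import numpy as np

def capacitor_field(query, pos, neg):
    """Monte Carlo estimate of the field at `query` points from positive
    charges on one plate and negative charges on the other, assuming an
    N-dimensional point-charge kernel (x - x') / ||x - x'||^N."""
    def contrib(charges, sign):
        diff = query[:, None, :] - charges[None, :, :]        # (Q, C, N)
        r = np.linalg.norm(diff, axis=-1, keepdims=True)      # (Q, C, 1)
        n = query.shape[-1]
        return sign * (diff / r**n).mean(axis=1)              # (Q, N)
    return contrib(pos, +1.0) + contrib(neg, -1.0)

# Plates in 3-D (2-D data lifted by a z-coordinate): positives at z = 0,
# negatives at z = L = 1; query points sampled on the mid-plane z = 0.5.
rng = np.random.default_rng(0)
pos = np.concatenate([rng.normal(size=(256, 2)), np.zeros((256, 1))], axis=1)
neg = np.concatenate([rng.normal(size=(256, 2)), np.ones((256, 1))], axis=1)
mid = np.concatenate([rng.normal(size=(64, 2)), np.full((64, 1), 0.5)], axis=1)
E = capacitor_field(mid, pos, neg)
```

At every inter-plate point, the z-component of the estimated field points from the positive plate toward the negative one, which is what the field-line transport relies on.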
Summary: The paper introduces Electrostatic Field Matching (EFM), a novel generative modeling framework inspired by the physics of an electrical capacitor. In EFM, source and target data distributions are assigned positive and negative charges on two parallel plates, and a neural network is used to learn the resulting electrostatic field. By moving samples along the field lines from one plate to the other, the method provably transforms the source distribution into the target distribution. This approach is versatile, addressing both noise-to-data and data-to-data generation tasks, and its theoretical guarantees and experimental results on toy and image datasets position it as a compelling alternative to existing diffusion and flow-based models. Claims And Evidence: The method’s performance on high-dimensional or complex tasks (real-images) is not convincingly demonstrated, and its sensitivity to hyper-parameters—such as the inter-plate distance—raises concerns about practical robustness. Methods And Evaluation Criteria: This paper primarily generates visual results, while quantitative results are lacking. Theoretical Claims: Yes, I have checked definitions and theorems in 3.1. Experimental Designs Or Analyses: The method is evaluated on three tasks: 1. Gaussian-to-Swiss Roll Experiment: We consider a 2-dimensional, zero-centered Gaussian distribution with an identity covariance matrix as P(x+), and a Swiss Roll distribution as Q(x−). 2. Image-to-Image Translation Experiment: This task involves transforming colored digit 3 into colored digit 2. 3. Image Generation Task: This involves generating 32×32 colored images of digit 2 from the MNIST dataset. Supplementary Material: Yes, I review the proof and experiment part. Relation To Broader Scientific Literature: Shaul, Neta, et al. "Flow Matching with General Discrete Paths: A Kinetic-Optimal Perspective." arXiv preprint arXiv:2412.03487 (2024). 
Essential References Not Discussed: None Other Strengths And Weaknesses: 1. The methods are well-motivated by electrostatic theory and the chosen evaluation tasks—using toy datasets and colored MNIST experiments—are standard for proof-of-concept generative modeling. 2. However, further testing on more diverse and complex datasets could strengthen the practical validation. Other Comments Or Suggestions: I would like to raise my rating if fair quantitative results and evidence that this method can be scaled are presented. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, thank you for reviewing our paper. Below we answer your questions and comments. **(Q1) [...] performance on high-dimensional or complex tasks (real-images) [...] testing on more diverse and complex datasets [...] quantitative results are lacking.** It is worth noting that our main goal in the experimental section is to demonstrate a proof of concept of our method. We agree that providing additional generation experiments with more complex datasets enhances the understanding of the method's performance. Further scalability of the method is a promising avenue for future research. Nevertheless, following your request, we include **additional experiments** with more complex data such as CIFAR-10. For qualitative analysis, we demonstrate our EFM's results as well as PFGM's performance. Please see Fig. 3, available via the anonymous link https://drive.google.com/file/d/1DTbQR_GNah7hVGGnDF822aD96iWxjR-k/view?usp=sharing. For quantitative analysis, we calculate FID and CMMD scores on the test split of the aforementioned datasets and compare our method with PFGM, DDPM, DDIM and GLOW. Firstly, we demonstrate the quantitative performance of our method on the full Colored MNIST dataset. We see that our method outperforms the other approaches on full Colored MNIST, reaching the lowest FID and CMMD. | Metrics/Method | EFM (our) | PFGM | DDPM | DDIM | GLOW | |----------------|----------|------|------|------|------| | FID | **0.92** | 1.88 | 2.18 | 2.23 | 25.9 | | CMMD | **1.47** | 2.28 | 2.68 | 2.85 | - | Secondly, we demonstrate the quantitative performance of our method on the CIFAR-10 dataset. We see that in image generation on CIFAR-10, our performance is comparable to PFGM. *At the same time, we remind the reviewer that our method is also capable of performing data-to-data transfer while PFGM is not.* 
| Metrics/Method | EFM (our) | PFGM | DDPM | DDIM | GLOW | |----------------|-----------|------------|------|------|-------| | FID | 2.62 | **2.48** | 3.17 | 4.16 | 48.9 | | CMMD | **1.87** | 1.93 | 2.98 | 3.25 | - | Additionally, we quantitatively and qualitatively compare the performance of our approach with SB-based (DDIB, $\alpha$-DSBM) approaches as well as a GAN-based one (Cycle-GAN) for unpaired data setups on Colored MNIST. Please see Fig. 6 via the aforementioned link. For quantitative analysis, we report FID and CMMD metrics on the Colored MNIST dataset for our method and the compared methods. | Metrics/Method | EFM (our) | FM | DSBM | DDIB | GAN | |----------------|-----------|-------|------|------|------| | FID | **4.45** | 19.87 | 7.21 | 8.24 | 4.57 | | CMMD | **2.37** | - | 4.02 | 4.11 | 2.45 | We see that our method demonstrates the lowest (best) FID and CMMD among the compared approaches. **(Q2) [...] sensitivity to hyper-parameters [...] The inter-plate distance** We conducted an additional experiment on the influence of $L$ on the performance (see Fig. 4 via the link). The larger the distance between the plates, the worse the approximation of the field. If $L$ is too small, the field is recoverable, but an "edge effect" appears and may influence performance. **Concluding remarks.** We would be grateful if you could let us know if the explanations we gave are satisfactory. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have.
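The edge effect mentioned in Q2 can be illustrated with a small 2-D capacitor built from discrete charges (the $1/r$ two-dimensional point-charge kernel and the specific geometry are assumptions for illustration): near the plate boundary the field acquires a lateral fringe component that vanishes at the center by symmetry.

```python
import numpy as np

def field_2d(q, pos, neg):
    """2-D capacitor field at a single query point, assuming the 2-D
    point-charge kernel (x - x') / ||x - x'||^2 (i.e. 1/r decay)."""
    def contrib(charges, sign):
        diff = q - charges                           # (C, 2)
        r2 = (diff ** 2).sum(axis=1, keepdims=True)  # squared distances
        return sign * (diff / r2).mean(axis=0)
    return contrib(pos, +1.0) + contrib(neg, -1.0)

L = 0.2
xs = np.linspace(-1.0, 1.0, 201)
pos = np.stack([xs, np.zeros_like(xs)], axis=1)      # positive plate at z = 0
neg = np.stack([xs, np.full_like(xs, L)], axis=1)    # negative plate at z = L
# Query points off the mid-plane: one at the center, one near the plate edge.
center = field_2d(np.array([0.0, L / 4]), pos, neg)
edge = field_2d(np.array([0.95, L / 4]), pos, neg)
```

At the center the lateral (x) component cancels by left-right symmetry, while near the edge it does not; that lateral tilt is the "edge effect" the rebuttal refers to.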
Summary: The paper proposes Electrostatic Field Matching (EFM), a method for generative modeling and distribution transfer based on electrostatic principles. EFM generalizes the Poisson Flow Generative Model (PFGM) by enabling mapping between arbitrary distributions. It represents source and target distributions as charged capacitor plates and learns the electrostatic field between them using a neural network. Claims And Evidence: The authors introduce a new generative approach and present proof-of-concept experiments. Their claims are supported by visual evaluations. Methods And Evaluation Criteria: The benchmarks are relevant but very limited. Only toy datasets (Swiss Roll and MNIST) are used, with no quantitative evaluations or comparisons to other methods from the literature. Theoretical Claims: I did not find any issues with the theoretical claims in the paper. The manuscript primarily follows the derivations from [1]. [1] Xu, Yilun, et al. "Poisson flow generative models." Advances in Neural Information Processing Systems 35 (2022): 16782-16795. Experimental Designs Or Analyses: Lack of reproducibility: The code in the Appendix imports functions that are not included in the provided package. The paper is presented as a proof-of-concept without quantitative experiments, providing only visual results of the method. The details of the visual experiments are given in Appendix C and source code. Supplementary Material: I verified the source code provided by the authors. Unfortunately, there are references to files that are not included in supplementary materials, making the code unable to run. Relation To Broader Scientific Literature: The proposed method extends the idea introduced in [1] by replacing the source distribution—from a known uniform distribution projected onto a (D+1)-dimensional hemisphere—with any arbitrary distribution placed on a hyperplane parallel to the target distribution. 
This concept of constructing the vector field from one distribution to another is known as a Schrödinger Bridge in diffusion model nomenclature. [1] Xu, Yilun, et al. "Poisson flow generative models." Advances in Neural Information Processing Systems 35 (2022): 16782-16795. Essential References Not Discussed: The references in this paper are relevant. Other Strengths And Weaknesses: Strengths: - The authors propose a creative method that uses Poisson Flow [1] as a Schrödinger Bridge for interpolating between two arbitrary distributions. - The authors claim that the method works with unpaired datasets. - The paper is well-written and easy to follow, with the Related Works section being particularly well-structured. Weaknesses: - There are no quantitative evaluations of the method; the authors provide only visual results. - The method is not compared to alternative approaches from the literature (e.g., SB-based: [2, 3, 4], GAN-based: [5]). - The authors use only toy datasets (Swiss Roll, MNIST). Using more complex datasets (like CIFAR or ImageNet) would help determine whether the estimated field is sensitive to the dimensionality of the data. - In line 250, the authors mention that the inference process is summarized in Algorithm 1, but the algorithm only describes the training procedure. - The source code utilizes functions not included in the provided package, making it impossible to reproduce the method. Weaknesses 1-3 limit the work to a proof-of-concept, reducing its overall contribution. [1] Xu, Yilun, et al. "Poisson flow generative models." Advances in Neural Information Processing Systems 35 (2022): 16782-16795. [2] Su, Xuan, et al. "Dual diffusion implicit bridges for image-to-image translation." arXiv preprint arXiv:2203.08382 (2022). [3] Kim, Beomsu, et al. "Unpaired image-to-image translation via neural Schrödinger bridge." arXiv preprint arXiv:2305.15086 (2023). [4] De Bortoli, Valentin, et al. 
"Schrödinger Bridge Flow for Unpaired Data Translation." Advances in Neural Information Processing Systems 37 (2024): 103384-103441. [5] Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." Proceedings of the IEEE International Conference on Computer Vision. 2017. Other Comments Or Suggestions: I wonder if, in Algorithm 1, t should be sampled from Uniform(0, 1) to ensure proper interpolation between x+ and x-. Questions For Authors: - As far as I understand, the field is estimated based on all samples from the batch. How does the batch size influence the stability of the training? Is it possible to train such a model with a batch size of 1? - Unlike PFGM, your approach does not project the Q distribution onto the (D+1)-dimensional hemisphere, but places it on the z=L hyperplane. I wonder whether samples on the periphery of the P distribution would be pushed away and land far from the Q distribution as a result? - The hyperparameter L seems crucial, but it is difficult to tune. Based on the experiments, if the data has higher dimensionality, L should be increased. Is it possible to propose a function that assigns an L value for any given number of dimensions D? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. Please find the answers to your questions below. **(Q1) [...] quantitative evaluations or comparisons [...] alternative approaches from the literature (e.g., SB-based, GAN-based). CIFAR [...]** Following your request, we include **additional experiments** with more complex data such as CIFAR-10. For qualitative analysis, we demonstrate our EFM's results as well as PFGM's performance. See Fig. 3 via the anonymous link https://drive.google.com/file/d/1DTbQR_GNah7hVGGnDF822aD96iWxjR-k/view?usp=sharing. For quantitative evaluation, we calculate FID and CMMD [1] on the test split of CIFAR-10 and compare with PFGM, DDPM, DDIM and GLOW. Please see the table with results in the [answer to YMMk](https://drive.google.com/file/d/1DTbQR_GNah7hVGGnDF822aD96iWxjR-k). We see that in image generation our performance is comparable to PFGM. *At the same time, we remind the reviewer that our method is also capable of performing data-to-data transfer while PFGM is not.* In accordance with your request for a comparison with GAN and SB-based methods, we also compare the performance of our approach with SB-based (DDIB, $\alpha$-DSBM) approaches as well as a GAN-based one (Cycle-GAN) for unpaired data setups on Colored MNIST. Please see Fig. 6 via the aforementioned link. We see that our method demonstrates the lowest (best) FID and CMMD among the compared approaches. | Metrics/Method | EFM (our) | FM | DSBM | DDIB | GAN | |----------------|-----------|-------|------|------|------| | FID | **4.45** | 19.87 | 7.21 | 8.24 | 4.57 | | CMMD | **2.37** | - | 4.02 | 4.11 | 2.45 | **(Q2) The manuscript primarily follows the derivations from [1].** We respectfully disagree. 
Our significant theoretical advancement compared to PFGM is using the following fundamental property of electric field lines in $D$-dimensional space: field lines starting from the positive charge distribution $\mathbb{P}(\cdot)$ almost surely terminate in the negative charge distribution $\mathbb{Q}(\cdot)$ (Lemma A.7). This property — *previously not considered in electrostatic generative models* — combined with field flux conservation along current tubes, formally establishes (our Theorem 3.1) that field line trajectories transport samples between $\mathbb{P}$ and $\mathbb{Q}$. This constitutes our main theoretical contribution enabling data-to-data transfer. **(Q3) In line 250, the authors mention that the inference process is summarized in Algorithm 1, but the algorithm only describes the training procedure.** Thanks for pointing that out. The inference algorithm is just a simulation of the learned ODE as described in Section 3. For the convenience of the reader, we will add a separate algorithm box with the inference procedure in the final version of the paper. **(Q4) I wonder if, in Algorithm 1, t should be sampled from Uniform(0, 1) to ensure proper interpolation between x+ and x-** The uniform sampling $t \sim \text{Uniform}(0,1)$ represents no more than a particular training volume selection strategy for interpolating between $\widetilde{\mathbf{x}}^+$ and $\widetilde{\mathbf{x}}^-$, controlling the density of training points along the interplanar axis. As demonstrated in Fig. 5 (link from Q1) for the Gauss-to-Swiss-roll experiment, both the proposed uniform training volume (Algorithm 1) and conventional cubic lattice initialization produce nearly identical electric field line configurations. **(Q5) How does the batch size influence the stability of the training? [...] batch size of 1?** Theoretically, it is possible to train our method with a batch size equal to 1. However, Monte Carlo integration has a higher variance for a small number of samples. 
Following the question, we conducted an extra experiment with different batch sizes. We demonstrate that performance almost stops increasing once the batch size exceeds 64; see Fig. 2 in the extra material (link from Q1). **(Q6) I wonder if samples that are on the periphery of the P distribution would not be pushed away and land far away from Q distribution as a result?** This scenario is precluded by Theorem 3.1, which guarantees transport between distributions $\mathbb{P}(\cdot)$ and $\mathbb{Q}(\cdot)$. While peripheral samples exhibit greater curvature compared to central trajectories due to boundary effects, this geometric difference does not prevent their convergence to $\mathbb{Q}(\cdot)$. The continuity of field lines (Lemma A.7) ensures termination on the target distribution almost surely. **(Q7) [...] source code [...] unable to run. [...] functions not included in the provided package** We will fix this in the final code version. **Concluding remarks.** We would be grateful if you could let us know if the explanations we gave are satisfactory. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. **(Q1) [...] quantitative evaluations or comparisons [...] alternative approaches from the literature (e.g., SB-based, GAN-based). CIFAR [...]** The experiments you conducted are sufficient and effectively demonstrate how your method works. They enhance the value of your paper significantly. I kindly suggest including these tables and figures in the final version of the manuscript. I have one additional question regarding the backbone used in these experiments. Specifically, do the different approaches share the same neural network architecture and have the same number of function evaluations (NFE) during inference? If they do, it would be worth highlighting. 
Otherwise, I kindly suggest adding two rows to the results table: one for the number of parameters and another for the number of network evaluations. **(Q2) The manuscript primarily follows the derivations from [1].** I was pointing out that your work has a similar theoretical background and, therefore, shares certain derivations. My intention was not to undermine the contribution of your paper. **(Q4) I wonder if, in Algorithm 1, t should be sampled from Uniform(0, 1) to ensure proper interpolation between x+ and x-** I have to admit that this part is the most confusing to me. From line 257, we know that $\tilde{x}^+ = [x^+, 0]$ and $\tilde{x}^- = [x^-, L]$, which positions the two distributions on parallel planes at a distance of $L$. If we sample $t \sim \text{Uniform}(0, L)$ with $L > 1$, then the formula $$x_t = t \tilde{x}^+ + (1 - t) \tilde{x}^- + \epsilon$$ could lead to **linear extrapolation** rather than interpolation. As a result, this might produce negative values in the time dimension, and the extrapolation could significantly increase the variance of the prior distribution. Was this intentional? **(Q5) How does the batch size influence the stability of the training? [...] batch size of 1?** I’m glad this experiment was conducted, as I find it especially valuable for the community. At a high level, we can say that SB methods trained on paired datasets (e.g., I2SB, ResShift, InDI, IR-SDE, etc.) use one positive and one negative sample to construct the ground-truth vector field. This approach works well for paired tasks such as image enhancement. However, in domain shift scenarios where the data is unpaired, the results tend to concentrate in regions where the digits appear very bold. I believe this represents plausible mass centers of clusters (medoids) within the target distribution, likely due to averaging in the vector field. When using a larger batch size, this issue disappears, as the approximations become more precise. 
**(Q6) I wonder if samples that are on the periphery of the P distribution would not be pushed away and land far away from Q distribution as a result?** Your arguments are valid when discussing the theoretical vector field. However, the goal here is to train a neural network to approximate that field. This approximation may be biased due to: (a) Monte Carlo sampling used to estimate the vector field during training, and (b) inherent imperfections in the model. My main concern is that when a sample lies at the boundary of the prior distribution, the network is trained to **push** it even further. This could create issues in later stages of the trajectory, as the space expands and the model must learn the vector field over a much broader region. This approach contrasts with most SB methods, which aim to find an optimal transport path, thereby reducing the intermediate space that needs to be modeled (e.g., https://arxiv.org/pdf/2302.00482). **Conclusion** The additional experiments have significantly enhanced the value of this submission. As a result, I believe my initial score is no longer appropriate, and I would like to raise it after I receive short answers to **Q4** and **Q6**. --- Reply to Comment 1.1.1: Comment: Dear reviewer, we are very glad that you found our additional experiments to be an effective demonstration of our methodology and that you pointed out the significance of our results. We will certainly add the new experiments, quantitative evaluations, and comparisons in the final version of our paper. **(Q1) [...] the same neural network architecture and have the same number of function evaluations (NFE) during inference? [...]** We use the same neural network architecture for PFGM, DDPM, and our approach to provide a fair comparison. Also, the majority of hyperparameters (learning rate, EMA decay, and so on) are the same as in PFGM because our codebase is based on the PFGM code. 
As for NFE, we also use the same 100 steps in the inference process for our EFM as well as for PFGM, following the configs in PFGM's code. **(Q4) [...] I wonder if, in Algorithm 1, $t$ should be sampled from Uniform(0, 1) to ensure proper interpolation between $x+$ and $x-$ [...]** We always sample $t$ from the uniform distribution Uniform(0,1). This is a typo in the text, and we will correct it in the final version of our paper. We are grateful to you for pointing it out. **(Q6) [...] I wonder if samples that are on the periphery of the P distribution would not be pushed away and land far away from Q distribution as a result? [...]** Peripheral samples generate field lines that exhibit greater curvature compared to central trajectories. While these curved paths originate from positive charges, they necessarily terminate on negative charges. Thus, the network will not push the peripheral samples further away in our framework. The only real problem with peripheral samples is the problem of limited training volume. When field lines extend beyond this volume, reliable transport cannot be guaranteed. Strictly speaking, complete coverage would require training across an infinite volume. However, since the electric field strength decays as $1/r^{D-1}$, in practice this ensures that most of the field lines remain within the finite region. Furthermore, appropriate selection of the hyperparameter $L$ can render a significant portion of field lines nearly straight (see the toy experiment in Figure 7). However, optimal training volume selection remains an open research question. Regarding Monte Carlo sampling, while a batch size of 1 is theoretically sufficient, we observe improved convergence with larger batches (Figure 2 in the supplementary materials), with performance saturating at batch size 64.
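The inference procedure discussed in this thread (simulating the learned ODE along field lines from the $z = 0$ plate to the $z = L$ plate) can be sketched as follows (a hypothetical Euler integrator; `field_fn` stands in for the trained network and is assumed to keep a positive $z$-component along the trajectory):

```python
import numpy as np

def trace_field_line(x0, field_fn, L=1.0, n_steps=100):
    """Euler simulation of the field-line ODE: lift the sample to D+1
    dimensions on the z = 0 plate and step along the field, reparametrised
    so the z-coordinate advances uniformly until it reaches z = L."""
    x = np.append(np.asarray(x0, dtype=float), 0.0)  # start on the z = 0 plate
    dz = L / n_steps
    for _ in range(n_steps):
        e = field_fn(x)
        x = x + e * (dz / e[-1])   # scale the step so z gains exactly dz
    return x[:-1]                  # drop z: the sample now sits on the z = L plate

# Toy check with a constant diagonal field in 2+1 dimensions: a sample at the
# origin lands at (1, 1).
landed = trace_field_line(np.zeros(2), lambda x: np.array([1.0, 1.0, 1.0]))
```

The z-reparametrisation mirrors the fixed-step-count inference (100 steps) mentioned above; in practice the terminal point is read off once the trajectory reaches the target plate.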
Summary: In this work, the authors propose Electrostatic Field Matching (EFM), which transforms between two distributions in the same space by placing the two distributions on two parallel plates with opposite charges, training a neural network to predict the electric field in the space between the two plates, and tracing a test charge through the field from one plate to the other. Besides the theoretical justifications, some interesting results on random generation and translation between different digits on a colored MNIST dataset are shown as a proof of concept. Claims And Evidence: While the theoretical proofs for the principles appear sound, the experimental results are limited, as acknowledged by the authors, who present them as toy examples without making broad claims. Methods And Evaluation Criteria: I have several concerns about the method. First, how can we ensure accurate approximation of the ground truth field using Monte Carlo integration? In high dimensions, field contributions from distant charges decay rapidly with distance, making nearby charges significantly more important. However, in high dimensions, Monte Carlo samples are unlikely to land near these crucial regions, potentially leading to poor sampling of the most important areas. Though I acknowledge that PFGM faces a similar challenge. Second, parallel plates with opposite charges create a dipole effect, where field lines inevitably spread into the surrounding space rather than staying confined between the plates. Some field lines initially travel backward, away from the target plate, make extensive detours, and eventually reach the target plate from behind. While PFGM can focus on one side due to symmetry, the lack of symmetry here raises questions about whether we can safely consider only the "right" side while ignoring the "wrong" side.
Even if we can focus on one side, field lines can travel far from the plates before returning—a significant issue in high dimensions where most directions are approximately perpendicular. When training samples are created by interpolating between two random points on the plates, they remain between the plates. This raises the question: how can we ensure accurate tracing of outward field lines? PFGM avoids this issue since it intentionally allows field lines to extend to infinity. Finally, there is the matter of the parameter L. The Swiss Roll experiment demonstrates L's crucial role—larger values cause more field lines to deviate sideways, negatively affecting results. While smaller L values likely have their own drawbacks, the authors should provide guidelines for determining appropriate L values. Theoretical Claims: I didn’t find any specific issues. Experimental Designs Or Analyses: Experiments on more complex datasets are desirable. It seems to me that the jump from “placing the target distribution on a charged plate surrounded by an infinite sphere and running a test charge” (PFGM) to “placing two distributions on two parallel plates with opposite charges and running a test charge” (proposed method) is not a big one, especially considering that the underlying theory is the same, so I would say this concept is in large part already proven. Thus, I think mere proof-of-concept experiments are not sufficient. Even if we are only getting simple examples, there should at least be the result of random generation on the full color-MNIST dataset, instead of just a subset featuring the same digit. In addition, some quantitative evaluations would also help.
Supplementary Material: No Relation To Broader Scientific Literature: Not evaluated Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: The authors dedicate one and a half pages to basic physics concepts that, given their elementary nature indicated by the section heading, could be condensed. This space would be better used for additional experimental results. Questions For Authors: 1. What is the effect if L is too small? 2. Is it safe to ignore field lines that start out facing the wrong direction? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, thank you for reviewing our paper. Below we answer your questions. **(Q1) [...] Experiments on more complex datasets[..]some quantitative evaluations would also help.** Following your request, we include **additional experiments** with more complex data, such as CIFAR-10 and the requested full color-MNIST dataset. For the qualitative analysis, we show our EFM's results as well as PFGM's performance. Please see Fig. 1 and Fig. 3, available via the anonymous link https://drive.google.com/file/d/1DTbQR_GNah7hVGGnDF822aD96iWxjR-k/view?usp=sharing. For the quantitative analysis, we compute FID/CMMD scores and compare with other methods. Please see the [answer to YMMk](https://openreview.net/forum?id=9dHilxylvC&noteId=1boFlYDdKN). **(Q2) [...] accurate approximation of the field using Monte Carlo integration? [...] In high dimensions, field contributions from distant charges decay rapidly with distance [...]** If we understand your question correctly, you ask about the accurate approximation of $E(\widetilde{x})$ in Eq. (11) for a given $\widetilde{x}$ via Monte Carlo sampling. Since Monte Carlo integration provides an unbiased estimate of an integral and its variance does not depend on the dimensionality, this estimate can be used for accurate approximation of the field; the more samples used in the batch for estimation, the lower the variance. We conduct **additional experiments** with different batch sizes to study their influence on our EFM's performance. Please see Fig. 2 via the main link. **(Q3) [...] dipole effect [...] Some field lines initially travel backward [...] reach the target plate from behind. [...] how can we ensure accurate tracing of outward field lines?** Indeed, each plate emits two distinct sets of field lines: one set is directed towards the second plate, and the other is oriented in the opposite direction.
Crucially, the properties of electric field lines (Lemma A.7) ensure that both sets almost surely terminate on the opposing plate. The primary distinction lies in their geometric trajectories: the forward-directed lines exhibit smaller curvature and reach the target plate more efficiently (faster) than their backward-oriented counterparts. From a practical point of view, prioritizing forward-directed trajectories is advantageous for computational efficiency while remaining theoretically sound due to Theorem 3.1, which guarantees almost sure transport from $\mathbb{P}(\cdot)$ to $\mathbb{Q}(\cdot)$. To address field lines extending beyond the plate boundary ($z > L$) before reaching $\mathbb{Q}(\widetilde{\mathbf{x}}^-)$, one may use the following natural criterion to distinguish valid termination on $\mathbb{Q}(\cdot)$ from transient crossings of $z = L$: \begin{equation} \begin{cases} E_z(z \to L^-) = E_z(z \to L^+) & \implies \text{The line goes away past the distribution}, \\ E_z(z \to L^-) \neq E_z(z \to L^+) & \implies \text{Valid termination on } \mathbb{Q}(\widetilde{\mathbf{x}}^-). \end{cases} \end{equation} This criterion derives from: 1) field continuity along current tubes (Lemma A.3, Corollary A.4); 2) boundary field behavior (Lemma A.5): near a plate, the field is determined by the charge of that plate and is directed away from the plate in the case of a positive charge (and towards the plate in the case of a negative charge). Therefore, the direction and magnitude of the field must change when we arrive at the target distribution. Thus, discontinuities in $E_z$ explicitly signal successful transitions to $\mathbb{Q}(\cdot)$, while continuity indicates a need for further integration. **(Q4) matter of parameter L [...]** We conduct an **additional experiment** on the influence of $L$ on the performance (see Fig. 4 via the link). The greater the distance between the plates, the worse the approximation of the field.
If $L$ is too small, the field is still recoverable, but an "edge effect" appears, which may affect performance. **(Q5) PFGM [...] the underlying theory is the same [...]** We respectfully disagree. Our significant theoretical advancement compared to PFGM is the use of the following fundamental property of electric field lines in $D$-dimensional space: the field lines starting from the positive charge distribution $\mathbb{P}(\cdot)$ almost surely terminate in the negative charge distribution $\mathbb{Q}(\cdot)$ (Lemma A.7). This property — *previously not considered in electrostatic generative models* — combined with field flux conservation along current tubes, formally establishes (our Theorem 3.1) that field line trajectories transport samples between $\mathbb{P}$ and $\mathbb{Q}$. This constitutes our main theoretical contribution. **(Q6) Is it safe to ignore field lines that start out facing the wrong direction?** Yes, it is safe due to our main theorem. See the answer to **Q3** for details.
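The inference procedure discussed in this thread (tracing a test charge along the learned field over a fixed number of steps, as in PFGM) can be sketched roughly as follows. This is our own illustration under stated assumptions: `field_fn` stands in for the trained network, and the Euler discretization in the augmented $z$ coordinate is a simple scheme we chose for clarity, not the authors' exact solver.

```python
import numpy as np

def trace_charge(x0, field_fn, L, n_steps=100):
    """Trace a test charge from the positive plate (z = 0) to the
    negative plate (z = L).  `field_fn(x, z)` returns the learned field
    in the augmented (x, z) space; we follow dx/dz = E_x / E_z with
    simple Euler steps in z.  Illustrative sketch only."""
    x, z = np.array(x0, dtype=float), 0.0
    dz = L / n_steps
    for _ in range(n_steps):
        E = field_fn(x, z)
        Ex, Ez = E[:-1], E[-1]
        x = x + dz * Ex / (abs(Ez) + 1e-8)  # move along the field line
        z += dz
    return x
```

With a field pointing straight at the target plate, the sample is transported without lateral motion; curved (e.g., peripheral) field lines would instead accumulate nonzero `Ex` steps along the way.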
TCP-Diffusion: A Multi-modal Diffusion Model for Global Tropical Cyclone Precipitation Forecasting with Change Awareness
Accept (poster)
Summary: This paper introduces TCP-Diffusion, a multi-modal diffusion model for tropical cyclone precipitation forecasting. It leverages an Adjacent Residual Prediction (ARP) mechanism to predict rainfall changes, integrates numerical weather prediction data, and employs an Environmentally-Aware 3D U-Net within a diffusion framework. The study evaluates the model against deep learning and numerical baselines, reporting improvements in predicting medium and heavy rainfall. ## update after rebuttal Thank you for the author's response. However, from my perspective, the quality of this paper still needs to be improved to meet ICML's standards. Therefore, I will maintain my score. Claims And Evidence: The paper’s claims are generally supported by experimental results. However, the motivation for using a diffusion model instead of other representation learning-based methods for the prediction task is not clearly justified. Methods And Evaluation Criteria: The paper’s methodological approach is generally well-structured. However, the study presents various comparative experiments, but it does not explicitly assess how well the model adapts to different TC lifecycle stages (e.g., formation, intensification, dissipation). Theoretical Claims: No Theorems are presented in this paper. Experimental Designs Or Analyses: The experimental setup follows standard practices, utilizing benchmark meteorological datasets (ERA5, ECMWF-IFS, MSWEP) and established evaluation metrics (ETS, TP-MAE). However, while ARP’s contribution is evaluated, the paper does not analyze how individual encoders (historical vs. future data) affect performance separately. Additionally, while different lead times are tested, a deeper analysis of how performance degrades over time is missing. Supplementary Material: Yes, I review the model development and extended experiments. 
Relation To Broader Scientific Literature: The paper builds upon existing work by referencing deep learning (DL) applications in meteorology, numerical weather prediction (NWP), and generative models. It draws on prior work in precipitation forecasting, particularly U-Net-based methods and generative adversarial networks (GANs), and extends recent applications of diffusion models in atmospheric prediction. Essential References Not Discussed: The paper appears to have included most of the essential references needed to understand its contributions. Other Strengths And Weaknesses: **Other Strengths:** 1. Applying diffusion models to tropical cyclone precipitation forecasting is an interesting attempt. 2. The ARP mechanism helps mitigate cumulative errors and ensures temporal consistency in forecasts. 3. Applying AI tools to climate research is a hot topic. **Other Weaknesses:** 1. The paper describes the use of physical processes only through the results of numerical weather prediction (NWP), but incorporating more TC-related information as model input is an intuitive approach rather than one inspired by physical principles. Instead of seeing a combination of existing deep learning modules for precipitation forecasting, I would prefer to see a novel deep learning design explicitly guided by meteorological physics. 2. Sensitivity to different cyclone intensities or unseen meteorological conditions is not analyzed, which could impact generalization. 3. A strong motivation for using a diffusion model over other deep learning approaches (e.g., VAEs, Transformers) for precipitation forecasting is not clearly provided. 4. I appreciate the authors' effort, but the work lacks sufficient appeal from a machine learning contribution perspective (e.g., introducing new meteorological or physics-informed machine learning approaches). As a result, it may be more suitable for a good meteorology-focused journal.
Other Comments Or Suggestions: N/A Questions For Authors: 1. Is incorporating NWP outputs in the model truly reasonable? Traditional numerical weather prediction (NWP) methods involve significant computational costs. If a new deep learning-based approach still depends on NWP results, it does not improve forecasting speed, making it difficult to apply in rapidly changing precipitation prediction scenarios. Could you clarify how this approach balances efficiency and accuracy? 2. The motivation for choosing a diffusion model over alternatives like VAEs, normalizing flows, or Transformers is not clearly explained. What specific properties of diffusion models make them particularly suited for TC precipitation forecasting? 3. Have ablation studies been conducted to evaluate the individual contributions of different encoders (historical vs. future data)? How does model performance change when certain encoders are removed or altered? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for acknowledging the contributions of our ARP mechanism. We also understand your concerns regarding the motivation behind using both NWP data and the diffusion model. Below, we provide detailed responses to these questions, and we hope they will help alleviate some of your concerns about our method. **Q1: A strong motivation for using a diffusion model over other deep learning approaches (e.g., VAEs, Transformers) for precipitation forecasting is not clearly provided.** We would like to clarify the motivation for choosing diffusion models over other representation learning-based approaches. 1. TC precipitation exhibits chaotic behavior and inherent uncertainty, which makes probabilistic forecasting more suitable than deterministic approaches. This is supported by the theoretical discussion in Atmospheric Modeling, Data Assimilation and Predictability, particularly Section 6.5, which states:"*The chaotic behavior of the atmosphere requires the replacement of single ‘deterministic’ forecasts by ‘ensembles’ of forecasts.*" Deterministic models like VAEs can be seen as single forecasts, whereas diffusion models naturally resemble ensemble forecasting by sampling multiple possible futures. This makes them well-aligned with the demands of real-world atmospheric prediction, especially under extreme conditions such as TCs. 2. Previous methods—including deterministic models like U-Net, VAEs, and Transformer-based architectures—often produce over-smoothed forecasts with limited fine-grained spatial details. We briefly discuss these issues in Lines 69–86 (left column) of the paper. In contrast, diffusion models naturally incorporate noise and denoising processes, allowing them to generate more detailed and realistic outputs. **Q2: The paper describes the use of physical processes only from the results of NWP, but incorporating more TC-related information as model input is an intuitive approach rather than one inspired by physical principles. 
(For Weaknesses 1 and 4)** It is true that our approach adopts a relatively simple yet effective strategy to integrate deep learning with NWP forecasts. However, to the best of our knowledge, this is the first work that combines deep learning with NWP data specifically for TC precipitation forecasting. While the idea may appear intuitive in hindsight, novelty must be evaluated relative to the state of the art before the idea existed. The inventive step was to have the idea in the first place; if an idea is easy to explain and obvious in hindsight, this in no way diminishes its creativity (and novelty). In our approach, we do not directly embed physical equations into the model (as PINNs do), but instead let the model learn the physical knowledge embedded in the IFS forecast. This is a simple yet practical alternative to explicitly encoding physical constraints, especially considering the complexity of modeling TC precipitation—a highly nonlinear and chaotic process. That said, we fully agree that incorporating meteorological physics in a more explicit way (e.g., through constraint-aware loss functions or hybrid physics-ML models) is a promising direction. Bridging the gap between data-driven modeling and physical interpretability is part of our long-term research vision. **Q3: Is incorporating NWP outputs in the model truly reasonable?** Reviewer Ndz5 raised a similar concern, which we responded to in Q1. For a more detailed explanation, please refer to that response. **Q4: The ablation study of individual encoders (historical vs. future data) is missing. Additionally, while different lead times are tested, a deeper analysis of how performance degrades over time is missing.** We would like to clarify that ablation experiments on the historical and future data encoders were indeed included in Table 3. Specifically, the third row in Table 3 represents our model with the ARP and multimodal encoder (M), but without the historical encoders and future data encoders.
The fourth row shows the full version of our model, which includes both historical and future encoders. The performance improvement between these two rows demonstrates the effectiveness of integrating NWP forecasts and our encoder designs. Regarding performance degradation over time, it is a well-known challenge in sequential forecasting tasks. In our setting, this issue is particularly pronounced due to the chaotic nature of atmospheric systems, where small disturbances can amplify over time and lead to substantial divergence in predictions (as described by the butterfly effect). We will provide a more comprehensive analysis in Section D.1 (Line 755) of the camera-ready version. **Q5: The performances of our method on different TC lifecycle stages and intensities.** The results are shown at https://limewire.com/d/HHv6Z#HXP582VLSv. Please refer to this link for detailed results. ***If you have more questions, We'd like to discuss them with you during the author-reviewer discussion period.***
Summary: This paper addresses two key challenges in medium-range tropical cyclone forecasting: current methods suffer from cumulative errors and lack physical consistency. A multi-modal model is proposed, equipped with an ARP mechanism that focuses on rainfall changes to reduce cumulative errors. The integration of an NWP system helps to enhance physical consistency. Claims And Evidence: The flexibility of the method is limited to some extent. Since future data from an NWP system is necessary, the system seems to stop working when such data is not accessible. Also, NWP systems generally require large-scale computation, which significantly increases the latency of the predictions. Methods And Evaluation Criteria: - The overall contribution could be limited. The idea of adjacent residual prediction is very similar to DiffCast (CVPR 2024). What is the unique contribution of your model design to TC prediction compared to this method? The idea of using multimodal meteorological factors has been explored in other works such as Pangu and Fengwu. - Lines 258-260: the encoding mechanism is a combination of concatenation and convolution operations. What is the philosophy behind such computations? Would transformers/attention mechanisms be better? Many multimodal feature fusion methods are available; justifications for the design should be clearly outlined. Yu, Demin, et al. "DiffCast: A unified framework via residual diffusion for precipitation nowcasting." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Theoretical Claims: There are no theoretical proofs relevant to this study. Experimental Designs Or Analyses: - More experiments could be conducted on diverse datasets or on similar extreme weather prediction tasks such as lightning. - It is necessary to visualize the residual prediction and show how it reduces cumulative errors.
Supplementary Material: N/A Relation To Broader Scientific Literature: This work proposes the idea of using residual information to better capture spatial and temporal information for medium-range TC precipitation forecasting. Essential References Not Discussed: The references look good to the reviewer. Other Strengths And Weaknesses: Strengths - It considers the influence of TC-related meteorological factors and the useful information from NWP model forecasts. Even though DL methods outperform traditional NWP, the results of NWP can still be useful. Weaknesses - The significance of the task and input selection is not clear to me, as mentioned in the previous review questions. Since future data from an NWP system is required, the system may fail to work when that data is not accessible. Other Comments Or Suggestions: N/A Questions For Authors: Please see my comments in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for recognizing the value of integrating our method with NWP. We also understand your concerns regarding the potential impact of using NWP data on the flexibility of our approach. These insights will serve as valuable guidance for our future research. Below, we provide responses to your questions, and we hope they will help address some of your concerns about our method. **Q1: The flexibility of the method is limited to some extent due to the use of NWP data.** We appreciate the reviewer’s concern regarding the flexibility and real-world applicability of our method. 1. Please note that our model also provides a version that does not rely on IFS forecasts. As shown in Table 3 (Row 3), our model without IFS still outperforms other deep learning baselines listed in Table 1. This is also discussed in Lines 413-418 (right column) of the manuscript. Therefore, our approach remains functional and competitive even in the absence of IFS inputs. 2. ECMWF has announced that it will make its forecast data fully open starting in October 2025 (https://www.ecmwf.int/en/about/media-centre/news/2025/ecmwf-achieve-fully-open-data-status-2025). This will significantly reduce the difficulty of accessing IFS forecasts, making it increasingly feasible to incorporate them in practice. We believe our work offers a timely exploration of how to leverage such forecasts effectively in deep learning frameworks. 3. We emphasize that IFS and deep learning are not mutually exclusive; rather, their integration can lead to more accurate and efficient forecasts. To our knowledge, this work is the first to integrate NWP forecasts with deep learning for TC precipitation prediction. While IFS captures physical laws through numerical simulation, deep learning excels at learning hidden patterns from data. Our design—which directly feeds IFS outputs into a dedicated encoder for representation learning—offers a simple yet effective fusion strategy.
This approach may serve as a useful reference for future efforts in combining NWP with data-driven models across various forecasting tasks. **Q2: The difference with DiffCast, Pangu and Fengwu.** We would like to clarify the unique contributions of our model and how they differ from prior works: 1. **Difference from DiffCast**: While both our approach and DiffCast involve residual prediction, the underlying concepts are fundamentally different. In our model, we view precipitation forecasting as an accumulative process of rainfall changes over time, where the model directly learns to generate the adjacent residual (future rainfall change) rather than the absolute future value. This is reflected in lines 102-108 (left column). In contrast, DiffCast models the residual between a deterministic forecast and the ground truth, as shown in Section 4.2 of their paper. Their diffusion module acts more as a correction mechanism to add details to the output of a deterministic backbone, rather than directly modeling rainfall evolution through temporal differences. Although DiffCast is an inspiring work, our method tackles a different formulation and learning objective. We will discuss this paper in the camera-ready version. 2. **Difference from Pangu and Fengwu**: The motivation for using multimodal meteorological inputs in our work is distinct. The prediction targets of Pangu and Fengwu are the future states of multiple variables themselves, hence multimodal inputs are a natural part of the task. In contrast, we focus solely on predicting TC rainfall, and we incorporate multimodal inputs to compensate for the limitations of using rainfall data alone, especially in capturing TC rainfall dynamics. Our design emphasizes the integration of environmental and physical information to enhance prediction quality, addressing a gap in existing regular precipitation forecasting models. 3. 
**Empirical Validation**: As shown in our ablation results (Table 3), both the proposed ARP and the multimodal input design (M) contribute significantly to model performance. This supports the value of our innovations, particularly in the context of TC precipitation forecasting. **Q3: It is necessary to visualize the residual prediction and show the contribution of how it can reduce cumulative errors.** The residual prediction visualizations are shown at this link: https://limewire.com/d/HHv6Z#HXP582VLSv. Please refer to this link for detailed results. Regarding its role in reducing cumulative errors, predicting residuals rather than absolute rainfall values allows the model to estimate smaller relative shifts at each time step. As a result, even if there are minor prediction errors at individual steps, their cumulative impact is less severe. This approach helps mitigate long-term drift and leads to more stable multi-step forecasts. ***Owing to space constraints, we would be glad to elaborate further on Methods and Evaluation Criteria (Point 2) or other questions during the author-reviewer discussion period.***
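The residual-accumulation view described in this answer (forecast the per-step change, then add it onto the previous frame) can be sketched as follows. This is our own illustration of the rollout logic, not the authors' implementation: `predict_residual` stands in for the trained network.

```python
import numpy as np

def arp_rollout(x0, predict_residual, n_steps):
    """Autoregressive rollout in the spirit of Adjacent Residual
    Prediction (ARP): at each step the model outputs the change
    delta_t = x_{t+1} - x_t, and the next frame is the previous frame
    plus that residual, so the model only needs to estimate small
    relative shifts rather than absolute rainfall fields."""
    frames = [np.array(x0, dtype=float)]
    for _ in range(n_steps):
        delta = predict_residual(frames[-1])  # predicted rainfall change
        frames.append(frames[-1] + delta)     # accumulate onto last frame
    return frames
```

Since each predicted residual is a small relative shift, per-step errors add onto a frame that is otherwise carried over intact, which is the mechanism the rebuttal credits with mitigating long-term drift.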
Summary: This paper proposes a diffusion model to do precipitation nowcasting relative to a predefined tropical cyclone. The idea is to track the location of a tropical cyclone and to do nowcasting relative to the tracked location. The proposed model also incorporates additional information for the forecasting, including IFS, tropical cyclone information, and historical reanalysis. The model achieves state-of-the-art results compared to other baselines on the new task. ## Update after rebuttal I appreciate the efforts of the authors to clarify and address my concerns. Unfortunately, I would not recommend accepting the manuscript in its current version. The manuscript lacks an appropriate comparison to standard baselines for global tropical cyclone precipitation forecasting, i.e., using global/regional weather forecasts. This comparison should have been made before the main submission and the rebuttal period. The results from the FuXi model are irrelevant since the model was trained on a different target. Moreover, as mentioned in the rebuttal, I would not consider ARP a novel contribution since many works in weather forecasting proposed this technique before. Finally, the persistence baseline still achieves a similar skill to the proposed model. Claims And Evidence: - L58-60: Using $\Delta^{t}_{x}$ is a well-known technique in weather forecasting; see, e.g., GenCast (https://doi.org/10.1038/s41586-024-08252-9). However, how is Adjacent Residual Prediction (ARP) going to reduce accumulative errors? $\Delta$ actually accumulates errors with rollout. That is why some works use continuous or direct forecasts, e.g., https://arxiv.org/pdf/2312.03876. There is also no ablation study on this claim. - Without IFS, the prediction is not better than the persistence baseline. Compare the first row in Table 1 with the third row in Table 3. I think the improvement comes from the IFS forecast itself, since this forecast includes precipitation and correlated variables as well.
Methods And Evaluation Criteria: - Concern about the practicality of the approach for real-world scenarios: As far as I know, ERA5 cannot be obtained in real time. IFS forecasts also need time to be generated. I think the experimental setting is not realistic. - I think a better evaluation would be to compare precipitation nowcasting centered around the center of the TC for the proposed model with other baselines that do forecasting without centering on the TC (i.e., this can be done using a global forecast). - Table 2, ECMWF-IFS: I think the reason why the proposed model is better than IFS is that IFS forecasts precipitation similarly to ERA5, while the proposed model was trained and evaluated against different rainfall data (MSWEP). It is also not clear whether the evaluation was done using total precipitation or just rain. Theoretical Claims: The paper doesn't include proofs. Experimental Designs Or Analyses: - I think an ablation on Adjacent Residual Prediction (ARP), i.e., $\Delta^{t}_{x}$, is currently missing in the paper. - PreDiff performs worse than a persistence baseline, while in the original paper it achieves much higher performance than a persistence baseline (see Table 1, https://arxiv.org/pdf/2307.10422). I think the baselines should be evaluated without centering, or at least they should have a positional encoding to adapt to the task. Supplementary Material: I have read the supplementary material. Relation To Broader Scientific Literature: Precipitation nowcasting is performed in an absolute sense, i.e., over a specific domain or on a global grid. The main contribution of this paper is to do nowcasting of precipitation in relation to a tropical cyclone's movement. The novelty is to track the location of the tropical cyclone and then to do prediction relative to the tracked location. Essential References Not Discussed: The paper cited the main references reasonably.
Other Strengths And Weaknesses: Strength: - It is novel to perform precipitation nowcasting relative to a tracked tropical cyclone. - The paper includes many experiments and ablation studies, which make understanding the method clearer. - The concept of the proposed method is well explained, and the paper is written concisely. Weakness: - The model relies heavily on the IFS forecast from ECMWF. - The paper argues that predicting the relative change in precipitation is better than predicting precipitation nowcasting on a global or regional scale. However, there is no experiment to support this argument. Other Comments Or Suggestions: Please check Equation 6. What type of loss function was used? Do you mean MSE? Questions For Authors: 1. Why was CNN3D chosen to handle 2D data and a transformer to handle 1D data? And why ResNet? It is not clear from the text; e.g., one could also use a transformer for 2D data. 2. Do other baselines use the same input information as the proposed model? And what about the IFS forecast, do other baselines use this information? 3. Did you report the baseline scores for the eight test sets similar to the proposed model? See Table 5. 4. Why is U-Net better for ETS-6? Is there any explanation? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for recognizing our work, including the novelty of the task itself, the comprehensiveness of our experiments, and the clarity of the manuscript. We also understand the reviewer’s concerns regarding our use of IFS data. Below are our responses to some of the issues raised, and we hope they can help alleviate some of your concerns. **Q1: The model relies a lot on the IFS forecast from ECMWF.** 1. Comparing the first row in Table 1 with the third row in Table 3 (the model version without IFS), we can observe that our method still slightly outperforms the persistence baseline overall. Furthermore, even without IFS inputs, our model achieves better performance than other deep learning-based baselines, as also mentioned in Lines 413-418 (right column) of the manuscript. 2. Our work is, to our knowledge, the first to integrate deep learning methods with NWP (specifically IFS data) for TC precipitation forecasting. As one of the contributions of this work, integrating with NWP can improve the performance of our model, which supports the effectiveness of this contribution and offers useful insights for broader weather forecasting tasks. 3. The acquisition and usage of IFS data will be further simplified. Please refer to the details in the response to Q1 of Reviewer Ndz5, point 2. **Q2: The ablation study of ARP is currently missing.** It is possible that our explanation of Table 3 was not sufficiently clear. In fact, the ablation study of the ARP mechanism is already presented in the first and second rows of Table 3. The first row corresponds to the original baseline of our method without ARP, while the second row shows the results after incorporating the ARP mechanism. The performance improvement demonstrates the effectiveness of ARP. **Q3: Using ARP is a well-known technique for weather forecasting, see e.g., GenCast.** We appreciate the reviewer for bringing GenCast (published in December 2024) to our attention.
We have carefully studied the paper and will cite and briefly discuss it in our camera-ready version. Notably, we found that the Residual Prediction in GenCast shares conceptual similarities with our ARP module, which further supports the value of residual modeling in weather-related prediction tasks. Our ARP design was inspired by the denoising process in diffusion models. Just as diffusion models iteratively refine predictions by removing noise, we view precipitation forecasting as a residual step-by-step accumulation process, as discussed in Lines 102-108 (left column) of our paper. We later found theoretical support for this idea in Chapter 6 of Atmospheric Modeling, Data Assimilation and Predictability, which further reinforced the motivation for adopting ARP in our framework. To the best of our knowledge, this is the first work to apply the ARP mechanism to the task of TC precipitation forecasting. We believe our work can provide inspiration for future research on specialized weather forecasting tasks. **Q4: Compare with global forecast model FuXi. (For the 2nd point of Methods And Evaluation Criteria)** We appreciate the reviewer’s suggestion and have supplemented additional experiments to address this point. Among existing large global forecasting models, FuXi (FuXi: a cascade machine learning forecasting system for 15-day global weather forecast) is currently the only one capable of providing global-scale precipitation predictions. However, FuXi does not publicly release official predictions for the total precipitation (TP) variable. So, we reproduced the FuXi model using the official codebase and evaluated its performance on TC precipitation. From the results, we observe that FuXi performs poorly and tends to significantly overestimate rainfall intensity. More details are shown at the link: https://limewire.com/d/HHv6Z#HXP582VLSv **Q5: Concern about the practicality of the approach for real-world scenario.
(For the 1st point of Methods And Evaluation Criteria)** Regarding ERA5, although it is a reanalysis product, near real-time access is possible through collaboration with ECMWF. For instance, Pangu-Weather also utilizes ERA5 reanalysis data and has already been adopted within ECMWF's operational forecasting system for real-time prediction. Therefore, we believe the use of ERA5 in research and applied forecasting scenarios is reasonable and increasingly practical. For IFS data, the generation time is shorter than that of ERA5 data. Moreover, since we use low-resolution IFS data, the required time is further reduced. This has been mentioned in Lines 427-428 (left column). **Q6: How is ARP going to reduce accumulative errors?** A similar concern was raised by Reviewer Ndz5, and we have addressed it in our response to their Q3. Please refer to that response for detailed clarification. ***Owing to space constraints, we would be glad to elaborate further on Experimental Designs Or Analyses (Point 2) or other questions during the author-reviewer discussion period.*** --- Rebuttal Comment 1.1: Comment: > Comparing the first row in Table 1 with the third row in Table 3 (the model version without IFS), we can observe that our method still slightly outperforms the persistence baseline overall. Furthermore, even without IFS inputs, our model achieves better performance than other deep learning-based baselines, as also mentioned in Lines 413-418 (right column) of the manuscript. The model isn't better than the persistence baselines: | Model Name | ETS-6 ↑ | ETS-24 ↑ | ETS-60 ↑ | TP MAE ↓ | | ----------------- | :---------: | :-----------: | :-----------: | :------------: | |Persistence|0.41640|**0.14530** | 0.00564|**0.44558**| |TCP-Diffusion|**0.42926**|0.14253 | **0.00589**|0.44632| In addition, it looks like the baselines are not optimized for the task, i.e., a simple U-Net can outperform both PreDiff and NowcastNet models.
> The ablation study of ARP is currently missing. Sorry, what I meant here is the second point of the weaknesses: The paper argues that predicting relative change in precipitation while centering on the TC is better than predicting precipitation nowcasting on global or regional scales without centering on the TC. However, there is no experiment to support this argument. > Novelty of ARP. As mentioned in the review, I would not consider ARP a novel contribution, since SwinVRNN (appeared 2022 and published 2023), GenCast (first appeared in 2023), Graph-EFM (NeurIPS24) and Stormer (first appeared in 2023 and then published at NeurIPS24, https://arxiv.org/abs/2312.03876v1) already used such a technique. > For the 2nd point of Methods And Evaluation Criteria. The FuXi model and many other open-sourced global weather forecast models were trained on ERA5 data (which has biases in precipitation), while the target data in this paper is different. The weather forecast baselines should have been trained on the same target data. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s willingness to engage in further discussion during the author-reviewer discussion period. Below are our detailed responses to the comments raised. **Q1: The model isn't better than the persistence baselines. In addition, it looks like the baselines are not optimized for the task, i.e., a simple U-Net performs better.** Because the performance gains of our method on ETS-6 and ETS-60 are greater than the performance gaps on the other two metrics, we believe the overall performance is marginally better. It’s worth noting that the persistence baseline achieves relatively strong performance in this specific task, as discussed in Section D.3 of the appendix (“Analysis of Persistence’s Good Performance”). Regarding the baselines, they were originally designed for regular rainfall nowcasting.
In our re-implementation, we preserved their original parameter settings as much as possible without specific tuning for TC rainfall prediction. Thus, their suboptimal performance in this task highlights a critical issue: methods designed for generic rainfall tasks may not transfer well to the TC rainfall prediction setting. This further underscores the value of developing models specifically for TC rainfall forecasting. As for U-Net, it does not consistently outperform other models. In fact, it performs well in light rainfall prediction but performs poorly in heavy rainfall prediction. Prior studies have shown that U-Net can outperform more advanced models under light rainfall. For instance, see Table 14 (BIAS-16) in PreDiff (https://arxiv.org/pdf/2307.10422). Moreover, in the paper *Skilful precipitation nowcasting using deep generative models of radar*, U-Net achieves a better CSI-2 (light rainfall) than the proposed DGMR in Fig. 1b. In Fig. 2a, U-Net also performs well for light rain. These results suggest that classical models like U-Net can still be competitive or even superior under certain conditions. We also investigated why U-Net achieved a good performance in ETS-6. U-Net’s tendency to generate averaged predictions helps minimize its MSE loss. However, light rainfall dominates in TC rainfall, covering approximately 86.3% of the area around the cyclone center (10°×10°). This causes U-Net to focus more on light rain accuracy, but at the expense of moderate and heavy rainfall prediction, explaining why it fails under heavy rain scenarios. **Q2: Second point of the weaknesses and the performance of FuXi.** We would like to clarify that our paper does **not** claim that "predicting relative change in precipitation while centering on TC is better than predicting precipitation nowcasting on global or regional scales without centering on TC," or make any similar assertions. That may have led to our initial misunderstanding of your concern.
Besides, we have now included a direct comparison with FuXi, a global weather forecast model, during the rebuttal phase. Our method outperforms FuXi not only in quantitative metrics but also in qualitative visualizations (additional visualizations are provided at https://limewire.com/d/HHv6Z#HXP582VLSv). We acknowledge that FuXi was trained on ERA5 data, which is known to have biases in precipitation. In contrast, our evaluation is based on the MSWEP V2 dataset (https://www.gloh2o.org/mswep/), which provides more accurate precipitation measurements. Therefore, our model’s better alignment with MSWEP V2 indicates that TCP-Diffusion achieves superior performance in TC rainfall prediction compared to FuXi. This also highlights the importance of developing a TC rainfall-specific model. We believe that re-training FuXi or similar large-scale models on MSWEP V2 is neither feasible nor a reasonable requirement. FuXi is capable of producing precipitation forecasts for typhoon scenarios, and directly comparing its outputs with those of TCP-Diffusion is both fair and meaningful. Moreover, re-training FuXi would require substantial computational resources, and FuXi's authors have not publicly released the training code, making such re-training practically infeasible. **Q3: Novelty of ARP**. We sincerely thank the reviewer for pointing out these recent works and for providing a valuable list of global weather forecasting methods that incorporate similar ARP-like strategies. We acknowledge that we overlooked these references due to the different problem formulations, and we will properly cite and discuss them in the camera-ready version. While the use of ARP may seem obvious in hindsight, especially from a general global forecasting perspective, to the best of our knowledge, **our work is the first to apply an ARP mechanism in TC precipitation forecasting**. **We believe that novelty must be evaluated as of the time before the idea (using ARP in TC precipitation forecasting) existed.
The inventive novelty was to have the idea in the first place. If it is easy to explain and obvious in hindsight, this in no way diminishes the creativity (and novelty) of the idea.**
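The residual step-by-step accumulation idea behind ARP, as discussed in this thread, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation (TCP-Diffusion's ARP operates inside a diffusion model); the function names, shapes, and the dummy residual model are all hypothetical: the model predicts the change between adjacent frames, and the forecast is reconstructed by accumulating residuals onto the last observed field.

```python
import numpy as np

def rollout_with_arp(last_frame, predict_residual, n_steps):
    """Adjacent-residual rollout: at each step, a model predicts the
    change between adjacent frames; the next frame is the running
    state plus that predicted residual (a step-by-step accumulation)."""
    frames = []
    state = last_frame
    for t in range(n_steps):
        delta = predict_residual(state, t)  # model outputs a change, not absolute rainfall
        state = state + delta               # accumulate the residual onto the current state
        frames.append(state)
    return frames

# Toy "model" that always predicts a uniform +1 mm change per step.
last = np.zeros((4, 4))
preds = rollout_with_arp(last, lambda s, t: np.ones_like(s), n_steps=3)
print([float(p.mean()) for p in preds])  # [1.0, 2.0, 3.0]
```

The design point at issue in the thread is only the target parameterization: learning residuals keeps each prediction step small, at the cost that per-step errors accumulate along the rollout.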
Summary: The article proposes a multimodal diffusion model that integrates data on rainfall, environment, tropical cyclone attributes, and meteorological predictions to generate precipitation due to tropical cyclones globally. Its results outperform other deep learning methods and numerical weather prediction (NWP) models. Among its contributions, the algorithm can predict precipitation globally, demonstrates that forecasting temporal changes in precipitation is more effective, and integrates multidimensional meteorological information. Claims And Evidence: The authors provide a detailed, clear, and extensive description of the prediction of precipitation changes, the diffusion model, and its solution architecture. The description of their claims is convincing. However, the submission did not include code or data. Having access to them could increase confidence in their results. The authors suggest that the code and data will be available once the article is accepted. Methods And Evaluation Criteria: The metrics are the total precipitation mean absolute error and the radially averaged power spectral density, quantifying the precipitation's accuracy, spatial structure, and realism. The paper's metrics make sense in the context of the problem. Theoretical Claims: I reviewed the concept of ARP, which is straightforward. The diffusion model is standard and generally accepted. I also checked the algorithms, on which I provide comments below. Experimental Designs Or Analyses: I verified the results and the quantitative and qualitative analysis, the precipitation frequency distribution, and the power spectral density. I did not find any issues with them. Supplementary Material: The supplementary material is rich and detailed. It includes more details on the data, model development, metric definitions, and additional experiments. I found them convincing.
Relation To Broader Scientific Literature: The article reviews the related literature, including current approaches based on numerical methods and Deep Learning. I find the review to be comprehensive. Essential References Not Discussed: Recently, on December 4, 2024, the article "Probabilistic Weather Forecasting with Machine Learning" was published. I believe the results presented in that article would complement some of the elements described in this article. Other Strengths And Weaknesses: Overall, I observe a well-formulated article with a strong experimental foundation. Elements such as ARP have been successfully leveraged. The diffusion model for precipitation prediction follows the trend of utilizing such models to determine meteorological variables. I would have liked to review the code and associated data. While the results appear promising, having access to them would have strengthened my confidence in the approach. Additionally, I include some stylistic recommendations later in my review. Other Comments Or Suggestions: *** I recommend using elements such as \text{Rainfall}_{\text{Current}} when writing text within math environments in LaTeX. *** ablation for \Delta Rainfall as opposed to total rainfall *** ablation for computing rainfall everywhere, as opposed to around the TC moving center *** X_t^h is called input data in line 138 (first column). I believe you want to call it historical *** embedding in Figure 2. *** check equation in line 203, left column *** use the same fonts for names in the text (line 206, first column) and in equation (5). Now that you are correcting that, add ',' or '.' at the end of the equations, as needed. *** change "the following pseudocode" to "the pseudocode in Algorithm 1 (or 2, as needed)"? *** include inputs/outputs and variable descriptions in the algorithms *** pathes, line 259 right column *** Predif, line 312 right column Questions For Authors: 1.
Would it be possible to provide access to at least a subset of the dataset or a simplified version of the model during the review process? 2. Have the authors considered benchmarking their model against probabilistic forecasting techniques, especially in terms of uncertainty quantification? 3. How adaptable is TCP-Diffusion to future improvements in NWP methodologies? Would retraining be necessary with every update to the NWP model, or can the framework accommodate updated inputs dynamically? 4. Could the authors clarify whether the model also improves fine-scale precipitation patterns, or does it tend to smooth out small-scale variability? 5. Could the authors provide insights into whether there is a feasible way to optimize the model's computational efficiency while maintaining its accuracy? 6. Does the model show any systematic underprediction or overprediction of extreme precipitation events? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing our work and for your valuable comments and stylistic recommendations. These suggestions will help us further improve the quality of this manuscript. We will incorporate the corresponding revisions in the camera-ready version. Below are our responses to the issues you raised. **Q1: Would it be possible to provide access to at least a subset of the dataset or a simplified version of the model during the review process?** A1: We have created an anonymous GitHub repository (https://anonymous.4open.science/r/TCP-Diffusion-ICML-review/README.md) that includes the testing code and a subset of the test data for our method. **Q2: Have the authors considered benchmarking their model against probabilistic forecasting techniques, especially in terms of uncertainty quantification?** A2: We have compared our method with PreDiff, which is also a probabilistic forecasting approach. While we have not explicitly benchmarked uncertainty quantification metrics against PreDiff at this stage, we have evaluated the stability of our predictions. As shown in Table 5, the standard deviation of our method's results is notably small, indicating that our model performs consistently across the dataset. **Q3: How adaptable is TCP-Diffusion to future improvements in NWP methodologies? Would retraining be necessary with every update to the NWP model, or can the framework accommodate updated inputs dynamically?** A3: Currently, our model does not dynamically adapt to different NWP methodologies. This is primarily because different NWP models may produce outputs with varying resolutions, spatial-temporal scales, and embedded physical assumptions. Existing offline deep learning frameworks, including ours, typically require consistent input characteristics and cannot yet generalize across heterogeneous NWP outputs without retraining. We greatly appreciate this insightful suggestion, as it highlights a direction for making our work more practically applicable.
In future research, we plan to incorporate adaptability to diverse NWP models into our framework design. **Q4: Could the authors clarify whether the model also improves fine-scale precipitation patterns, or does it tend to smooth out small-scale variability?** A4: Compared to deterministic forecasting methods, our model provides richer details in the prediction of fine-scale precipitation patterns. As illustrated in Figures 3 and 9, especially around the TC center, our method is capable of capturing the shape and structure of heavy rainfall regions. In contrast, deterministic baselines such as U-Net tend to smooth out these localized high-intensity features. This demonstrates the advantage of our probabilistic diffusion-based approach in preserving small-scale variability. **Q5: Could the authors provide insights into whether there is a feasible way to optimize the model's computational efficiency while maintaining its accuracy?** A5: As shown in Table 4, our method requires longer training time compared to some traditional deep learning models, and the inference time is also slightly higher than that of non-diffusion approaches. However, given the complexity of the TC precipitation forecasting task, we believe the computational cost of our model remains reasonable. Furthermore, our method is significantly more computationally efficient than NWP systems. In terms of model design, we adopt a hybrid architecture combining CNNs and Transformers. For high-dimensional data, we utilize CNNs, which are computationally efficient and lightweight. For lower-dimensional inputs, we use Transformers, which, though more computationally intensive, offer stronger feature extraction capabilities, allowing for a more fine-grained understanding. Additionally, we are exploring model transfer strategies, where a pre-trained large model is adapted to downstream tasks like TC precipitation forecasting.
This would allow us to fine-tune only a small portion of the parameters while leveraging the representational power of the large model. This direction represents a promising way to further enhance computational efficiency and is a focus of our future research. **Q6: Does the model show any systematic underprediction or overprediction of extreme precipitation events?** A6: We conducted a focused evaluation on extreme precipitation events, which account for approximately 6.73% of the total test samples. The results show that our model tends to slightly overpredict these events, with an average overestimation of about 0.206 mm per 3 hours. These findings provide valuable insights for future improvements. In particular, we plan to explore techniques such as weighted loss functions or targeted fine-tuning to better handle rare but high-impact precipitation extremes. Addressing this challenge is critical for enhancing the model’s reliability under severe weather conditions. ***If you have more questions, we'd like to discuss them with you during the author-reviewer discussion period.***
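The kind of conditional bias check described in A6 (average overestimation on extreme events only) can be sketched as follows. This is an illustrative toy, not the authors' evaluation code; the threshold, field values, and function name are hypothetical: select cells where the observation exceeds an extreme threshold and average the signed prediction error over them.

```python
import numpy as np

def conditional_bias(pred, obs, threshold):
    """Mean (pred - obs) over cells where the observation exceeds
    `threshold`; positive values indicate overprediction of extremes."""
    mask = obs > threshold
    if not mask.any():
        return 0.0
    return float((pred[mask] - obs[mask]).mean())

# Toy 2x2 fields (mm per 3 h): the model overshoots the single extreme cell.
obs = np.array([[0.5, 1.0], [2.0, 12.0]])
pred = np.array([[0.4, 1.1], [2.2, 12.3]])
print(conditional_bias(pred, obs, threshold=10.0))  # ~0.3 mm overestimation
```

Restricting the average to the masked cells is what makes the statistic *conditional*: an overall-mean bias near zero can coexist with a clear positive bias on the rare extreme cells.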
Towards the Causal Complete Cause of Multi-Modal Representation Learning
Accept (poster)
Summary: The paper explores causal completeness in multi-modal representation learning, addressing issues where existing methods may capture unnecessary or insufficient information. It introduces the Causal Complete Cause (C3) framework, which ensures learned representations are both sufficient (contain all necessary information) and necessary (exclude irrelevant details). The authors propose a twin-network approach using instrumental variables and counterfactual modeling to estimate and enforce C3, leading to a new regularization method (C3R). Experimental results show that this approach improves robustness and accuracy, especially in scenarios with spurious correlations and missing modalities. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I only checked the proof of Theorem 3.2. Experimental Designs Or Analyses: Yes. Supplementary Material: I partially checked the appendix for the proof of Theorem 3.2. Relation To Broader Scientific Literature: I am not aware of the literature in multi-modal representation learning. Essential References Not Discussed: I cannot comment on this with certainty because I am not aware of the literature in multi-modal representation learning. Other Strengths And Weaknesses: > Strengths 1. The paper is written well and easy to understand. 2. Even though the idea of causal sufficiency and necessity is borrowed from the literature, applying it to multi-modal learning shows promise. > Weaknesses 1. In Theorem 3.2, the term obtained after identification still contains interventional terms. This is in contrast to traditional identifiability, where the final result is a statistical estimand instead of an estimand containing interventional terms. It may be good to use another word instead of "identifiability". 2. Because the proposed method is used to improve the existing methods as shown in Table 1, what additional computational cost does the proposed method add to the existing methods? 3.
In real-world data, there is a high chance that hidden confounding exists between causal and spurious features. How does the proposed method handle such scenarios? > Minor issues: 1. It is crucial to highlight the difference from the causal sufficiency assumption used in the causality literature to avoid confusion. 2. How are the ideas related to the paper https://arxiv.org/abs/2109.03795? The ideas of causal sufficiency and necessity of representations have been studied in that paper. It is crucial to discuss this paper in the related works. Other Comments Or Suggestions: > Typos: 1. Line 78: Looks like there is a grammatical error in the sentence. 2. Line 156: Shouldn't it be "we construct a structural causal model"? 3. Line 121 (right): Should be "an SCM". Questions For Authors: Please see weaknesses above. Addressing those points is crucial for strengthening the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer eaJB's constructive feedback and the time and effort dedicated to the review process. We are also grateful for the recognition of our work and sincerely hope the following responses can address the concerns. ## Response to W1 We appreciate the suggestions and apologize for any misunderstandings. To distinguish it from traditional identifiability, we change the term to "Causal Identifiability." According to Pearl (2009) and Lee & Bareinboim (2020), causal identifiability refers to establishing the effect of intervening on a set of variables on another set, using observational or interventional distributions under given assumptions. It permits the inclusion of interventional terms if they are uniquely determined from the observational distribution (Shpitser, 2012). Based on these, Theorem 3.2 is derived. When the model satisfies local invertibility, we can uniquely recover the variable distribution from the observed data, thereby ensuring causal identifiability and estimation of $C^3$. ## Response to W2 The experiments on computational cost in Appendix H.4 (Figure 7) indicate that introducing $C^3$R increases the computational cost by less than 1.3x compared to the original. ## Response to W3 We provide an outline to explain how it handles hidden confounding: - Theoretically: The "hidden confounding" issues correspond to the relaxation of the exogeneity assumption, which requires satisfying causal identifiability of $C^3$ in an environment that contains hidden confounders. To achieve the relaxation, we introduce an instrumental variable $V$ to eliminate confounding effects, thus achieving causal identifiability regardless of whether hidden confounding exists (Section 3.2). We model $V$ using the improved self-attention mechanism, which employs alignment scores to capture causal factors $F_c$, theoretically constrains $Z$ to contain only $F_c$, and eliminates confounding effects (L220-257 and Appendix A.3).
- Methodology: Based on theoretical results, we constrain the causal completeness of the representations with the $C^3$ risk. In the causality literature, causal completeness also implies the absence of "hidden confounding" (Pearl, 2009). We propose $C^3$R to constrain it by reducing the $C^3$ risk. This is achieved through a twin network: the real-world branch uses $V$ to adjust $Z$ so that it only contains $F_c$ for accurate prediction, achieving causal sufficiency and eliminating hidden confounding; the hypothetical branch employs gradient-based counterfactual modeling to further calibrate the causal factors, constraining causal necessity and further filtering out hidden confounding (L258-307 and Appendix D.4). Tables 1-8 demonstrate its effectiveness. ## Response to Minor 1 We sincerely appreciate the suggestions and would like to provide an outline for illustration (further emphasized in Section 3.1): - Meaning of Causal Sufficiency: Our concept of causal sufficiency (Definition 3.1) is adapted from Definition 9.2.2 in Pearl (2009), i.e., “setting x would produce y in a situation where x and y are in fact absent”, which is considered the original definition of causal sufficiency. It also aligns with another formulation in the causality literature, e.g., “the absence of latent confounders”, as we utilize instrumental variables and a twin network in Sections 3.3 and 4 to eliminate confounders, aiming to satisfy causal sufficiency. - Clarification on Assumptions: Previous works rely on an exogeneity assumption, assuming there are no confounders in environments. In contrast, we relax these assumptions to account for possible confounders in practice (L80–99 and Appendix D.3). This does not alter the meaning of causal sufficiency; it means that we do not assume that there is no confounding in the environment, but instead introduce instrumental variables to eliminate the impact of confounding, thereby relaxing the assumption.
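For reference alongside the mention of Definition 9.2.2 above, the standard probabilities of causation in Pearl (2009, Ch. 9) can be written as follows. This is background from Pearl's book, not the paper's $C^3$ measure itself; $Y_x$ denotes the potential outcome of $Y$ under the intervention $do(X=x)$, and $x', y'$ denote the complementary values:

```latex
% Pearl (2009), Ch. 9: probabilities of causation.
% PN, probability of necessity (Def. 9.2.1):
\mathrm{PN} = P\big(Y_{x'} = y' \mid X = x,\; Y = y\big)
% PS, probability of sufficiency (Def. 9.2.2):
% "setting x would produce y in a situation where x and y are in fact absent"
\mathrm{PS} = P\big(Y_{x} = y \mid X = x',\; Y = y'\big)
% PNS, probability of necessity and sufficiency (Def. 9.2.3):
\mathrm{PNS} = P\big(Y_{x} = y,\; Y_{x'} = y'\big)
```

Under Pearl's exogeneity and monotonicity assumptions these quantities become identifiable from observational data, e.g., $\mathrm{PNS} = P(y \mid x) - P(y \mid x')$; the rebuttal's point is precisely that Theorems 3.2-3.3 aim to relax those two assumptions.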
## Response to Minor 2 We sincerely appreciate the suggestions and have carefully reviewed the mentioned paper. Below is a brief illustration of the differences (cited and supplemented in Section 7). Wang & Jordan aim to construct measures of non-spuriousness and disentanglement. They exploit the concepts of causal necessity and sufficiency by aligning them with non-spuriousness and “invoking” the corresponding measure in Pearl (2009) to construct their measure of non-spuriousness. We focus on modeling the concept of causal sufficiency and necessity itself in MML. We relax the exogeneity and monotonicity assumptions that previous works, including Wang & Jordan, depend on, and propose a new method to measure and constrain MML-specific causal completeness. Thus, although both works draw inspiration from Pearl (2009), they differ significantly in problem, theoretical framework, methodology, and experimental validation. ## Regarding Comments We appreciate the reviewer's suggestions and have polished the text accordingly, i.e., changing “with both” to “be both”, “conduct” to “construct”, and “a” to “an”. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. I've read the responses and I will keep my positive score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer eaJB, We sincerely appreciate your feedback, which has greatly encouraged us. We would like to express our gratitude again for the time and effort you have dedicated to reviewing our work, which helped us improve it further. Best regards, The Authors
Summary: This paper addresses the problem of multi-modal representation learning from a causal perspective. It analyzes the insufficiency and redundancy of information across multiple modalities. The authors propose a novel concept termed Causal Complete Cause ($C^3$), supported by identifiability guarantees under weaker assumptions (non-exogeneity and non-monotonicity). A twin network is introduced to estimate the $C^3$ measurement, and extensive experiments are conducted to validate the proposed method. Claims And Evidence: Yes. This paper mainly discusses the definition (explained in Sections 2 & 3.1), identifiability (explained in Section 3.2), and measurement (explained in Section 3.3) of the novel causal complete cause ($C^3$). Methods And Evaluation Criteria: Yes, the proposed method is the causal complete cause ($C^3$), which makes sense for multi-modal representation learning, and the evaluation criterion is mainly classification accuracy, which is also reasonable. Theoretical Claims: Yes, the key theoretical claims that I checked include Theorem 3.2 (Identifiability under Non-Exogeneity), Theorem 3.3 (Identifiability under Non-Monotonicity and Non-Exogeneity), and Theorem 3.4 (Modeling Instrumental Variable V in MML). - In Theorem 3.2, what do you mean by "local invertibility"? - How should we interpret the non-monotonicity in Theorem 3.3, compared to the monotonicity requirement in Theorem 3.2? What is the key modification such that monotonicity is no longer required? - In Theorem 3.4, what is the intuition behind choosing the self-attention mechanism, instead of other methods? Experimental Designs Or Analyses: Yes. The experimental design is sound and valid, and the experimental evaluation is comprehensive, covering 17 baseline methods and 6 datasets across various tasks including scene recognition, image-text classification, and segmentation.
Supplementary Material: Yes, the supplementary material includes more detailed proofs, analysis, pseudo-code, and experimental results. I went through all the contents briefly, while devoting more attention and time to the main paper. Relation To Broader Scientific Literature: This paper presents the definition, identifiability, and measurement of $C^3$ with theoretical support, without exogeneity and monotonicity assumptions. This is a relaxed and quantifiable framework, compared to previous work. Essential References Not Discussed: In general, this paper has covered most related work. It would be more comprehensive to also compare and discuss the following disentangled/causal representation learning papers, to name a few: - Schölkopf et al. "Toward causal representation learning", Proceedings of the IEEE, 2021. - Brehmer et al. "Weakly supervised causal representation learning", NeurIPS, 2022. - Ahuja et al. "Interventional causal representation learning", ICML, 2023. - Yao et al. "Multi-view causal representation learning with partial observability", ICLR, 2024. - Sun et al. "Causal representation learning from multimodal biological observations", ICLR 2025. Other Strengths And Weaknesses: **Strengths:** - This paper is well-written and clearly-organized. - The $C^3$ concept is novel and intriguing. The authors provide a new measurement backed by identifiability guarantees, along with a thorough analysis of sufficiency and necessity. - The experimental evaluation is comprehensive, covering 17 baseline methods and 6 datasets across various tasks including scene recognition, image-text classification, and segmentation. **Weaknesses and Comments:** - The example in Figure 1 remains somewhat ambiguous. The distinction between sufficient and necessary features in the image domain is unclear. Including an additional example that explicitly illustrates features that are both sufficient and necessary would be helpful.
Additionally, the figure should more clearly reflect the elements of multi-modality. - In Figure 2, both anti-causal (Y → X) and causal (X → Y) directions are discussed. However, in the absence of spurious correlations, either direction could represent the true data-generating process. The comparison between an anti-causal mechanism (true, without spurious correlation) and a causal mechanism (learned, with spurious correlation) may be misleading. - Terminology clarification: The term causal sufficiency in Section 3 is typically used in causal inference to mean the absence of latent confounders. It would be clearer if the authors explicitly clarified their intended meaning in the context of this paper. - In Theorem 3.2, the terms local invertibility and non-exogeneity are not clearly defined. Providing a brief explanation or formal definition immediately following the theorem would improve clarity. - The derivations of Equations (3) and (4) are not clearly explained. A more detailed explanation or derivation would help the reader better understand the methodology. - What is the intuition behind choosing the self-attention mechanism particularly for generating the instrumental variable V? Other Comments Or Suggestions: There are no obvious typos so far. Questions For Authors: - Can the learned representations be mapped to interpretable latent concepts, or are they inherently abstract? - How does the method perform in domains with real-world data complexities, such as missing data or measurement error? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer fkSa's constructive feedback and the time and effort dedicated to the review. We are grateful for the recognition of our work and sincerely hope the following responses can address the concerns. ## Response to W1 We sincerely appreciate the suggestions and have refined Fig.1 accordingly. The examples in Fig.1 are based on specific tasks and data conditions (L129-155). A sufficient and necessary feature can be "flat duck bill", as its presence indicates "duck" and every "duck" sample includes it. We also added the textual modality, e.g., "A duck has a flat bill and orange webbed feet standing with its wings folded", as suggested for better clarification. ## Response to W2 We agree with the point "in the absence..." but would like to kindly clarify that the SCMs in Fig.2 are conducted under different settings, aiming to illustrate potential confounding issues in MML rather than the true data-generating direction (L129-155). The left shows how an MML sample $X$ is generated based on the causal generating mechanism; the right aligns with the practical MML process, where factors are typically coupled for predicting $Y$. We apologize for any ambiguity that may have been caused by the caption and have refined it accordingly. ## Response to W3 We sincerely appreciate the suggestions and would like to clarify that our concept of causal sufficiency (Definition 3.1) is adapted from Definition 9.2.2 in Pearl (2009), i.e., “setting x would produce y where x and y are in fact absent”. It aligns with the mentioned “absence of latent confounders”, as we utilize an instrumental variable and a twin network to eliminate confounders (Sections 3.3 and 4), aiming to satisfy causal sufficiency. We further emphasized it in Section 3.1 according to the valuable suggestion. ## Response to W4, Theoretical Claims 1&2 - We have illustrated “local invertibility” in Proposition D.3 and will further elaborate on it immediately following Theorem 3.2 as suggested. 
Briefly, it states that the model can uniquely recover the distribution of the exogenous variable $s$ from the conditional distribution of $Y$ given its parents $Pa(Y)$. - We have provided a brief explanation of non-exogeneity, e.g., $P(Y_{do(Z=c)}) \neq P(Y \mid Z=c)$ in L185-191, with details in Appendix D.3. We will further refine this immediately following the theorem. - Compared to Theorem 3.2, the non-monotonicity in Theorem 3.3 allows the effect of $Z$ on $Y$ to vary in direction or intensity. The key modification is introducing $V$ for piecewise estimation through integration (Eq.5). ## Response to W5 We sincerely appreciate the suggestions and provide a brief derivation following Theorem 9.2.15 in (Pearl, 2009): $C^3(Z)$ can be decomposed into (i) when $Z\neq c$ occurs with probability $1-P(Z=c)$, its contribution reflects $C^3_{su}(Z)$ for $Z=c$; (ii) when $Z=c$ occurs with probability $P(Z=c)$, its contribution reflects $C^3_{ne}(Z)$ of $Z=\bar{c}$ on $Y$. By normalizing, we get $ C^3_{su}(Z) = \frac{C^3(Z)}{1-P(Z=c)}$ and $C^3_{ne}(Z) = \frac{C^3(Z)}{P(Z=c)}$ as Eq.3 and Eq.4. We will add it in Appendix A.1 for better understanding. ## Response to W6 & Theoretical Claim 3 The intuition is twofold: - Aligning with the modeling objectives of $V$ (L220–248): The self-attention mechanism, which dynamically measures the alignment score between modalities, can emphasize the important features for $Y$ across modalities (i.e., achieve higher scores on $F_c$) while downplaying $F_s$. Intuitively, using self-attention-based $V$ on $Z$ with distance penalties satisfies the goal that "$Z$ only contains $F_c$". - Higher efficiency: Although custom-designed networks can also achieve the above goal, the self-attention mechanism is typically more lightweight without substantially increasing model complexity. ## Response to Q1 The learned representations can "be mapped to interpretable latent concepts". 
Such a representation must both fully explain the decision (sufficiency) and be indispensable (necessity), revealing the most important factors behind predictions. For example, a causally complete representation captures all critical factors for classifying "happy", e.g., visual cues (upturned mouth corners, wide-open eyes) and audio cues (high pitch), where removing "upturned mouth corners" may result in "angry". ## Response to Q2 The results in Table 2 (missing modality) and Table 8 (data noise) show that $C^3$R achieves stable performance improvements in domains with real-world data complexities. ## Regarding Essential References Not Discussed We appreciate the suggestion and carefully reviewed the papers (cited and supplemented in Section 7). Schölkopf et al. present a review of causal inference; the remaining four mainly focus on identifiability under different settings and problems, e.g., weak supervision and partial observability. We focus on the causal sufficiency and necessity concept, aiming to explore and model it for MML, where the problem, theory, method, and experiments are all different.
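As a reader's aside, the normalization derived in the Response to W5 above (Eq.3 and Eq.4) is simple enough to sketch numerically. The function and the input values below are illustrative only, not from the paper:

```python
def c3_components(c3, p_c):
    """Split an overall C^3(Z) score into sufficiency and necessity terms,
    following the normalization in the Response to W5 above.

    c3  -- overall causal-complete-cause score C^3(Z)
    p_c -- P(Z = c), the probability that Z takes the reference value c
    """
    c3_su = c3 / (1.0 - p_c)  # Eq. 3: contribution weighted by P(Z != c)
    c3_ne = c3 / p_c          # Eq. 4: contribution weighted by P(Z = c)
    return c3_su, c3_ne

# Illustrative values (chosen as exact binary fractions, not paper numbers)
su, ne = c3_components(c3=0.375, p_c=0.25)
print(su, ne)  # 0.5 1.5
```

Note that both quantities grow as $P(Z=c)$ concentrates on one side, matching the intuition that sufficiency is judged on the $Z \neq c$ mass and necessity on the $Z = c$ mass.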
Summary: This paper proposes Causal Complete Cause Regularization (C³R), built around the C³ Risk, a metric that quantifies the likelihood that a learned representation is causally complete. A lower C³ Risk indicates that the representation satisfies both causal sufficiency and causal necessity, meaning that spurious correlations have been removed and all essential causal factors are retained. In conventional Multi-Modal Learning (MML) methods, two major issues arise. 1) Spurious correlations exist in sufficiency evaluation, leading to unreliable representations. To address this, the paper introduces the Twin Network's real-world branch, which leverages a self-attention-based instrumental variable to ensure that representations do not rely on non-causal factors, effectively mitigating spurious correlations. 2) For necessity evaluation, the challenge is that it requires observing label changes when a representation is absent, which is impossible in real-world data. The proposed method overcomes this limitation using Gradient-Based Counterfactual Modeling in the Twin Network's hypothetical-world branch. Since directly removing a representation is not feasible, the model instead adjusts gradients in the loss function to guide the representation toward a counterfactual direction, generating an approximation of counterfactual representations. Claims And Evidence: Table 1 and Table 2 demonstrate the effectiveness of applying C³ Risk to various benchmark datasets, including NYU Depth V2, SUN RGBD, FOOD 101, MVSA, and BraTS. The results show that incorporating C³R improves performance across scene recognition, image-text classification, and segmentation tasks, particularly by enhancing both average and worst-case accuracy, thereby increasing model robustness. Additionally, the ablation study in Figure 4 experimentally verifies the contribution of each component in the proposed loss function. 
The study confirms that these components play a crucial role in improving model performance by evaluating performance degradation when removing C³ Risk, the instrumental variable, and counterfactual representation modeling. The results indicate that the proposed loss function is effective in learning robust and causally complete representations. Methods And Evaluation Criteria: The proposed C³R method and evaluation criteria are well-aligned with the goal of learning causally complete representations in Multi-Modal Learning (MML). The introduction of C³ Risk effectively quantifies causal sufficiency and necessity, while the Twin Network architecture provides a structured approach to estimating counterfactual effects. The use of instrumental variables further helps disentangle causal from spurious factors, which is essential for robust multimodal learning. The evaluation strategy is comprehensive, covering six benchmark datasets spanning classification and segmentation tasks, including NYU Depth V2, SUN RGBD, FOOD 101, MVSA, and BraTS. The robustness analysis under Gaussian noise (image) and blank noise (text) strengthens the credibility of the results, ensuring that C³R improves both average and worst-case accuracy. The ablation study further confirms that each component of C³R contributes meaningfully to performance. Theoretical Claims: The proposed methods and evaluation criteria are well-aligned with the problem, as they have been quantitatively validated across various MML methods and diverse MML tasks. Experimental Designs Or Analyses: The soundness and validity of the experimental designs were verified. In Table 1, the proposed method was validated by comparing it with recent approaches across various MML benchmark datasets, including NYU Depth V2, SUN RGBD, FOOD 101, and MVSA, which cover different tasks. Additionally, the ablation study confirmed the effectiveness of each regularization term. 
Supplementary Material: Yes, I reviewed the supplementary material. I examined Section D.5 (Multi-modal Representation Learning on Synthetic Data) to understand the process of synthetic data generation. Additionally, Section E (Benchmark Datasets) provided insights into the types of datasets used in the evaluation. Lastly, Section F (Baselines) was reviewed to verify the error margins in the evaluation results. Relation To Broader Scientific Literature: The key contributions of this paper are closely related to existing research in multi-modal learning (MML). By adopting the concepts of sufficiency and necessity from Pearl (2009), it introduces a novel approach, Causal Sufficiency and Causal Necessity, which significantly advances MML. Essential References Not Discussed: Causal Mode Multiplexer: A Novel Framework for Unbiased Multispectral Pedestrian Detection, CVPR 2024 Other Strengths And Weaknesses: * Strengths: The paper effectively identifies the limitations of existing causal representation learning research and introduces a novel approach that ensures both causal sufficiency and necessity, which is highly compelling. Additionally, the use of Gradient-Based Counterfactual Modeling to disentangle causal factors and approximate counterfactual effects without generating new data is particularly interesting, as it allows for counterfactual reasoning without access to actual counterfactual samples. * Weaknesses: While the Twin Network generates counterfactual representations, there is a lack of direct validation on how well these estimated counterfactuals align with true counterfactual effects. A comparison with manually curated counterfactual examples or other counterfactual modeling approaches would enhance the credibility of the proposed method. 
Other Comments Or Suggestions: The paper claims that instrumental variables are used to mitigate spurious correlations, but there is a lack of quantitative experiments measuring how much spurious correlation has actually been reduced. A more compelling validation would include quantitative and qualitative comparisons of representations before and after applying C³R on datasets with a high presence of spurious correlations. Questions For Authors: 1. How can C³ Risk be directly verified as an indicator of causally complete representations? Specifically, what experimental approaches could be used to validate that a lower C³ Risk truly corresponds to representations that satisfy both causal sufficiency and necessity? 2. If you have conducted experiments using datasets with a high concentration of spurious correlations, could you provide details on the quantitative and qualitative comparisons of representations before and after applying C³R? 3. Are there any quantitative or qualitative methods to evaluate how similar the counterfactual representations generated through Gradient-Based Counterfactual Modeling using the Twin Network are to actual counterfactual representations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer ZspQ's feedback and the time and effort dedicated to the review. We are also grateful for the recognition of our work and sincerely hope the following responses can address the concerns. ## Response to W1 & Q3 - The true counterfactual effect involves unobserved outcomes, making direct modeling difficult (Pearl, 2009). Existing work typically adopts the "minimal change" principle to estimate counterfactuals, which is proven to align with true counterfactual effects (Galles & Pearl, 1998; Kusner et al., 2017). For instance, Chapters III–V in Wachter et al. (2017) demonstrate that by making only minor adjustments to the treatment variable while preserving the distribution of other covariates, the counterfactual samples maintain the original data's characteristics, ensuring accurate counterfactual effect estimates. Based on these results, we leverage gradient intervention to satisfy the "minimal change" principle for counterfactual effect estimation with theoretical guarantees (Appendix D.4). To assess whether the generated counterfactual data adhere to the principle for accurate estimation, we develop a distribution consistency test with the Wasserstein distance $D_w$, i.e., whether the distribution of the covariates matches that of the original data. The lower $D_w$ shown below proves that our method satisfies the principle. - According to the suggestions, we conduct a toy experiment on LCKD and NYU Depth V2 to demonstrate credibility. We follow Galles & Pearl (1998) to construct manually curated examples and select the recently proposed transport-based counterfactual modeling method (Lara et al., 2024) as another baseline. The table below shows the advantages of our method, i.e., superior accuracy and lowest computational cost. 
|Method|Acc|Calculation overhead|$D_w$| |-|-|-|-| |$C^3$R|77.6|$1\times$|0.9| |manually curated|77.4|$4.9\times$|0.7| |Transport-based|75.2|$2.3\times$|2.6| ## Response to Q1 We have conducted experiments to validate "$C^3$ risk serves as an accurate indicator": - Appendix H.5: We calculate the heatmaps of samples using $C^3$ risk in Fig.7, where high-weight elements correspond to low $C^3$ risk. If low $C^3$ risk reflects causally sufficient and necessary factors, then reducing the weights of these elements would degrade model performance. Table 7 shows that reducing the weights of low $C^3$ risk elements indeed lowers performance, confirming that $C^3$ risk is an accurate indicator. - Section 6.2: We conduct experiments on MMLSynData, which includes four types of generated data, i.e., sufficient and necessary causes (SNC), sufficient but unnecessary causes (SC), necessary but insufficient causes (NC), and spurious correlations (SP). By evaluating the correlation between the representation learned by minimizing $C^3$ risk and SNC, we can validate whether low $C^3$ risk refers to causal sufficient and necessary causes. Figures 3 & 6 indicate that $C^3$R significantly increases the correlation between representations and SNC, proving the accuracy. - Section 6.2 & Appendix H: If $C^3$ risk can be the indicator, then $C^3$R should lead to performance improvements by minimizing $C^3$ risk. Tables 1-8 show that $C^3$R achieves significant performance gains across all baselines, proving effectiveness. ## Response to Q2 & Comments We have conducted experiments on datasets with "high spurious correlations" and provide the outline below: - Quantitative (Section 6.2, Appendices D.5 and H.3): We conduct experiments on MMLSynData, which contains SNC, SC, NC, and SP as mentioned in "Response to Q1 (2)". Different degrees of $D_{sp}$ are set to control the level of spurious correlations. 
By evaluating the correlation between the learned representation and SP/SNC, we can validate the effectiveness of the corresponding model in mitigating spurious correlations/learning causal complete representations. Figures 3 and 6 show that even with high spurious correlation ($D_{sp}=0.6$), $C^3$R markedly reduces the correlation with SP (0.4 → 0.1) while significantly increasing the correlation with SNC (0.5 → 0.9), proving its advantages. - Qualitative (Appendix H.5): As in "Response to Q1", the visualization shows that $C^3$R learns causally complete representations and effectively eliminates spurious correlations. ## Regarding Essential References Not Discussed We sincerely thank the reviewer for the suggestion and carefully reviewed the paper, finding that it differs from ours in problem, theory, method, experiments, etc. (cited and added in Section 7). Kim et al. aim to address modality bias in multispectral pedestrian detection tasks, e.g., "prediction on ROTX without thermal features". They conduct SCMs for analysis and propose CMM for the tasks. Differently, we focus on the concepts of causal sufficiency and necessity and are the first to explore them in MML. We propose a new theoretical framework and a plug-and-play method to learn causal complete representations, which can also be embedded in CMM.
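To make the distribution consistency test from the Response to W1 & Q3 concrete: for two equal-size 1-D samples, the empirical Wasserstein-1 distance reduces to the mean absolute difference of sorted values. The sketch below is our own illustration with hypothetical covariate values; the equal-sample-size restriction is an assumption of this simplified check, not a claim about the authors' implementation:

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples:
    the mean absolute difference of the sorted values (a standard identity
    for equal-weight empirical distributions)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Toy 1-D covariate samples (illustrative numbers only). Counterfactuals
# generated under the "minimal change" principle should keep the covariate
# distribution close to the original (low D_w), unlike a shifted sample.
original       = [0.1, 0.4, 0.5, 0.9]
counterfactual = [0.1, 0.4, 0.75, 0.9]  # one minor perturbation
shifted        = [1.1, 1.4, 1.5, 1.9]   # large distribution shift

print(wasserstein_1d(original, counterfactual))  # 0.0625
assert wasserstein_1d(original, counterfactual) < wasserstein_1d(original, shifted)
```

A lower $D_w$ for the generated counterfactuals, as in the rebuttal's table, is then evidence that the non-treated covariates were preserved.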
Segment Anyword: Mask Prompt Inversion for Open-Set Grounded Segmentation
Accept (poster)
Summary: The authors identify an issue in VLMs / MLLMs of unstable segmentation against variations in the textual prompt. They propose a text-to-image diffusion model based test-time optimization technique combined with language-guided prompt tuning to solve this issue. Their resulting framework, tagged Segment Anyword, is evaluated across diverse tasks and datasets to show strong performance. Claims And Evidence: Inadequate evidence on how the method solves the motivating problem of how VLMs *struggle with diverse terms* in the textual prompt. Provide some simple quantitative evaluation (maybe your own benchmark using the dataset used in Figure 3) to back this claim. This will strengthen the paper. See more info in weaknesses. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, good. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: Relevant and useful topic explored. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** 1. Interesting motivation analysis 2. Tackles a difficult task of reducing VLM sensitivity to input prompts 3. Thorough evaluation to show consistent performance improvements across benchmarks. **Weaknesses** 1. Sec 2.2 / Figure 3 unclear: please provide more details regarding the visualization in the figure caption. * What is the exact model used to generate segmentations here? * In the plot, is IoU calculated against the ground truth? * What exactly does each dot correspond to? A single image with multiple captions (mean / std along caption dimension)? * Can you use a different colour for the second red? You use red for two examples * What's inside brackets? I'm assuming it's IoU against GT? * "Thus we extend each image associating with additional generated 2-5 mutated expressions (e.g. [apple pieces] → [apple pieces, apple slices, cut up apples])." - how are these generated? Templates? LLM? 2. Accomplishment of motivation. 
* How does the Figure 3 plot look if you add your model there? * Can you provide a table with average std (i.e. calculate std for each image as done currently, and then average across all images) of baseline vs your method? * If the problem identified in contributions (1) - *struggle with diverse terms* - is solved by the method, the average std (in an experiment like Figure 3) should drop for your method compared to the baseline. Is this correct? Discuss this and back this motivation with quantitative results. 3. Inference Speed (time / compute) * This method appears much slower compared to other approaches that do not use test-time optimization * Please provide a table comparing the inference time with that of baselines * Maybe you can compare to a slower, similar test-time optimization method like [1] to justify a slow speed * "Training-free" claim in abstract / intro - is this fair, since you are training parameters during inference? In fact, you have a table showing the parameters you *train*. 4. Method details missing * "we parse the text expression into Noun-Phrases (NP) and Verb-Phrases (VP) and identify each rooted noun subject (root) within the phrase." - how is this done? What algorithms / methods are used? How accurate is this? * If baselines are given this additional information (noun - verb phrases), will they perform as well? [1] Burgert, R., Ranasinghe, K., Li, X., & Ryoo, M.S. (2023). Peekaboo: Text to Image Diffusion Models are Zero-Shot Segmentors. CVPR 2023. Other Comments Or Suggestions: Consider discussing related work: * Ranasinghe, K., McKinzie, B., Ravi, S., Yang, Y., Toshev, A., & Shlens, J. (2023). Perceptual Grouping in Contrastive Vision-Language Models. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 5548-5561. * Mukhoti, J., Lin, T., Poursaeed, O., Wang, R., Shah, A., Torr, P.H., & Lim, S.N. (2022). Open Vocabulary Semantic Segmentation with Patch Aligned Contrastive Learning. 
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19413-19423. * Burgert, R., Ranasinghe, K., Li, X., & Ryoo, M.S. (2023). Peekaboo: Text to Image Diffusion Models are Zero-Shot Segmentors. CVPR 2023. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: __Q1__: "__Exact model to generate segmentations__" __R1__: We use the official implementation and pretrained checkpoint from ReLA. The final segmentation mask is produced by the pixel decoder, whose outputs are weighted by a language-supervised object activation map. __Q2__: "In plot, is IoU calculated against the ground-truth?" __R2__: Yes. __Q3__: "__Each dot means?__" __R3__: Yes, each dot corresponds to a single image paired with multiple descriptions. The mean and standard deviation are calculated along the caption dimension. __Q4__: "__Use different colour__" __R4__: We will revise the color coding in the figures. __Q5__: "__IoU inside brackets?__" __R5__: Yes, it is IoU against GT. __Q6__: "__how to generate mutated expressions?__" __R6__: We generate the mutated expressions by querying ChatGPT-4o with the template "_Generate a list of __n__ synonyms of the noun phrases in the following [sentence] and output the list separated by '&'_", where __n__ is in randint(2, 5) and _[sentence]_ is set to the text referring expression. __Q7__: "__Figure 3 plot add your model?__" __R7__: Please find link3 and link4 in our reply R10 to reviewer 4Tbz. __Q8__: "__Table with average std__" __R8__: We provide the table with average std below:

| Method          | RefCOCO+ | gRefCOCO |
|-----------------|----------|----------|
| ETRIS           | 10.958   | -        |
| ReLA            | -        | 13.217   |
| Segment Anyword | 8.021    | 4.365    |

__Q9__: "__the average std should drop for your method?__" __R9__: That's correct. We focus on test-time prompt embedding alignment. We show that our method is simple but very effective at reducing sensitivity to input variance. __Q10__: "__Slow inference speed__" __R10__: Thank you for your feedback. Please refer to our reply to reviewer jmV5. __Q11__: "__inference time comparison__" __R11__: We present a table comparing the inference time with related baselines, including CLIPasRNN and Peekaboo.

| Method            | Average Speed Per Image | Inference Steps Per Image |
|-------------------|-------------------------|---------------------------|
| Segment Anyword   | 470s                    | 1100 steps                |
| Segment Anyword_f | 28s                     | 50 steps                  |
| Peekaboo          | 150s                    | 300 steps                 |
| CLIPasRNN         | 180s                    | -                         |

All implementations were evaluated on a single NVIDIA A100 40GB GPU. As previously discussed, test-time optimization methods inherently involve a trade-off between inference speed and mask quality. Nevertheless, we demonstrate that our method can be significantly accelerated, reducing inference time from 470s to 28s, by fine-tuning the text encoder on a small number of target-domain samples and decreasing the number of inference steps from 1100 to just 50, with minimal performance degradation. __Q12__: "__Training-free overclaim?__" __R12__: Please refer to our replies R1, R2, and R3 to reviewer 4Tbz. __Q13__: "__Concerns on expression parsing__" __R13__: Please refer to our replies R4 and R5 to reviewer ixNi. __Q14__: "__baseline + additional info__" __R14__: We observe that certain baseline methods, such as GLamm, LISA, and OMG-LLaVA, already incorporate similar linguistic information into their input pipelines, for instance by integrating complete noun phrases during feature fusion, or by translating a special [SEG] token with context into segmentation masks. These models achieve impressive results but require significant training efforts. Other baseline methods focused on test-time optimization, such as CLIPasRNN, OVDiff, and Peekaboo, could theoretically leverage this linguistic information as auxiliary input; however, empirical observations suggest that their performance remains suboptimal under these conditions. In contrast, our proposed method explicitly formalizes this linguistic knowledge as prompt regularization. This strategy enables robust mining of noise-tolerant mask prompts and consequently yields refined and higher-quality segmentation masks, even without extensive training or configuration overhead. 
__Q15__: "__Discussing related works.__" __R15__: We thank the reviewer for highlighting these important works, which also aim to leverage intermediate features from large vision-language models (VLMs) for downstream segmentation tasks. Both CLIPpy and PACL assign labels by contrasting image patch embeddings with object text label embeddings. However, these methods still require training configurations. Peekaboo is closely related to ours, as both aim to learn visual concepts using off-the-shelf diffusion models. However, Peekaboo heavily relies on alpha map initialization and is sensitive to its quality. We will add this discussion to our manuscript.
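The "average std" metric reported in R8 (per-image standard deviation of IoU across caption variants, averaged over images) can be sketched in a few lines. The IoU values below are hypothetical placeholders, not numbers from the paper:

```python
import statistics

def average_std(iou_per_image):
    """Std of IoU across the caption variants of each image, then averaged
    over all images -- the 'average std' metric described in R8/R9 above."""
    stds = [statistics.pstdev(ious) for ious in iou_per_image]
    return sum(stds) / len(stds)

# Hypothetical IoU scores: one row per image, one column per mutated caption
ious = [
    [0.80, 0.82, 0.78],  # caption-stable image (low std)
    [0.60, 0.30, 0.90],  # caption-sensitive image (high std)
]
print(round(average_std(ious), 3))  # 0.131
```

A method that reduces sensitivity to caption phrasing should push this number down relative to the baseline, which is exactly the comparison the reviewer asked for in Weakness 2.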
Summary: This paper introduces Segment Anyword, a training-free framework for open-set language-grounded image segmentation. It leverages token-level cross-attention maps from a frozen diffusion model to generate mask prompts, which are then refined by the Segment Anything Model (SAM) for accurate segmentation. To address the lack of coherence and consistency in initial prompts, the authors propose Linguistic-Guided Visual Prompt Regularization, which incorporates syntactic and dependency structures to improve prompt quality. The method shows significant performance gains and strong generalization across diverse datasets without the need for extensive training or fine-tuning. Claims And Evidence: Overall, the paper presents its claims with clear and convincing evidence. The authors support their critique of existing open-set segmentation methods—namely, their reliance on extensive training and difficulty in achieving consistent object segmentation across diverse textual expressions—through both quantitative and qualitative analysis, particularly in Figure 3. To address the issue of incoherent and inconsistent prompt quality, the proposed linguistic-guided visual prompt regularization enhances alignment between text expressions and visual masks. The effectiveness of this module is demonstrated both qualitatively (Figure 6) and quantitatively (Table 6). The claim that the method is computationally lightweight compared to existing approaches is substantiated in Table 1, where the number of trainable parameters is significantly lower than that of most prior methods. However, the repeated emphasis on being "training-free" warrants scrutiny. In practice, the method updates textual embeddings for up to 1100 steps during inference, which arguably constitutes test-time adaptation or lightweight training. Additionally, the use of LoRA to fine-tune the BERT encoder further complicates the notion of being entirely training-free. 
While not the central contribution, this aspect should be more carefully framed to avoid overstating the claim. Finally, although the method effectively provides initial localization through cross-attention maps, much of the segmentation accuracy relies heavily on the use of SAM as a post-processing module. This dependency raises questions about how much of the final segmentation performance can be attributed solely to the proposed method itself. Methods And Evaluation Criteria: The proposed method is well-aligned with the problem of open-set language-grounded segmentation and effectively addresses its key challenges. The evaluations are conducted on appropriate benchmark datasets using standard metrics, and the results, along with ablation studies, support the method's validity. Theoretical Claims: The paper does not present formal theoretical claims; it is primarily empirical, focusing on experimental results and practical effectiveness. Experimental Designs Or Analyses: Most of the experiments are based on empirical results and are considered sound. Supplementary Material: I examined related work, implementation details, explanations about the baseline, and additional qualitative examples. Relation To Broader Scientific Literature: The proposed method focuses on generating masks more efficiently compared to existing methods. Being able to effectively perform segmentation for various text expressions in an open-world scenario can be highly impactful. Essential References Not Discussed: Prompt learning papers, beginning with CoOp ("Learning to Prompt for Vision-Language Models") as the seminal work, should be addressed, followed by its successors. CoOp's experimental design is empirically sound, introducing prompt learning to adapt vision-language models like CLIP for few-shot tasks, efficiently outperforming baselines like zero-shot CLIP across datasets such as ImageNet; however, it struggles with unseen classes. 
Follow-up works like "CoCoOp" (CVPR 2022) enhance this by adding input-conditional adaptability for open-world segmentation of diverse text expressions, maintaining soundness with robust testing but introducing computational complexity. Other Strengths And Weaknesses: The core framework claimed in this paper heavily borrows from the assertions in "An Image is Worth Multiple Words: Discovering Object-Level Concepts using Multi-Concept Prompt Learning," but it lacks sufficient evidence to explain what differentiates the proposed method. It directly adopts the approach of averaging cross-attention maps and merely refines the masks using SAM afterward. Given that this is a training-free method not utilized for learning, I believe a more significant approach is needed. Relatedly, I think the ablation study for the proposed method is insufficient. There’s little detail on how SAM is applied or what design choices are made in the subsequent regularization step, where deeper design exploration and ablation studies seem necessary. Overall, the explanations for figures and tables are lacking. In Figure 3, the intent is somewhat clear, but it’s unclear what the green, blue, and red colors represent. Similarly, in Table 6, terms like PL, R1, and R2 are presented without any explanation of their meaning. Other Comments Or Suggestions: entirely : off-shelf → off-the-shelf? L296 : eight → seven? Questions For Authors: As mentioned earlier, I have doubts about the novelty of this paper. Most of the ideas are derived from existing methods, and reinforcing the noun part of text descriptions with adjectives or adverbs isn’t particularly a new idea either. This concern needs to be addressed. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed feedback. We are glad that the reviewer found our paper to present clear claims supported by evidence and well-supported critiques, and our method to be computationally lightweight. __Q1__: "__training-free warrants scrutiny__" __R1__: The term “training-free” refers to the fact that our method does not require access to or processing of the training dataset. In contrast to prior approaches that demand significant computational resources to train or fine-tune on curated datasets, our method avoids such data-dependent training procedures. Instead, we perform optimization at test time, which is more suitable for real-world, open-set scenarios—where test samples often involve novel concepts that have not been seen during training. In such cases, updating prompt embeddings during inference is necessary to align them with the target objects effectively. To avoid confusion, we will revise our terminology throughout the manuscript, replacing “training-free” with “test-time optimization.” __Q2__: "__fine-tuning the BERT encoder increases model complexity__" __R2__: We demonstrate that it is possible to reduce the number of diffusion steps during test-time textual embedding updates from 1100 to as few as 50 by adapting the text encoder using a small number of samples, without accessing the full training dataset. This lightweight adaptation, achieved via LoRA, is intended purely for accelerating test-time optimization, not for conventional training or fine-tuning on a full dataset. Importantly, this step does not involve large-scale training or supervision and serves as a practical means to align the text encoder with the target domain efficiently. We emphasize that this adaptation remains consistent with our goal of avoiding full model training and supports fast, sample-efficient test-time optimization. We will add this discussion to our manuscript. 
__Q3__: "__avoid overstating claim.__" __R3__: We apologize for the confusing terminology and will replace it with "test-time optimization" for clarification. __Q4__: "__Proposed method's impact on performance.__" __R4__: We thank the reviewer for raising this issue. We present additional experiments on how different prompts influence downstream mask generation. We compare our Segment Anyword's mask prompt against pure text input, using the official implementation of LangSAM. We present both quantitative and qualitative results [link1](https://anonymous.4open.science/api/repo/ICML-Submission3702-Rebuttal/file/Imgs/Ours%20vs%20LangSAM.pdf) on the GranDf validation set. Results show that our method is very effective at improving mask prompt quality.

|                 | GranDf Val |      |
|-----------------|------------|------|
|                 | mAP        | mIoU |
| Segment Anyword | 31.3       | 67.4 |
| LangSAM         | 17.6       | 33.5 |

We will add this additional experiment to our manuscript. __Q5__: "__Discuss with CoOp and CoCoOp__" __R5__: We thank the reviewer for bringing up CoOp and CoCoOp, two important works in visual prompt learning. We agree with the reviewer's comments and will include the discussion in our manuscript. __Q6__: "__difference from MCPL__" __R6__: MCPL serves primarily as the inversion backbone in our method. However, our approach is not limited to MCPL and can be seamlessly integrated with other inversion or concept-discovery methods as well. We present qualitative results with different cross-attention sources, showing that our method is composable with various diffusion models and inversion algorithms [link2](https://anonymous.4open.science/api/repo/ICML-Submission3702-Rebuttal/file/Imgs/Ours%20with%20different%20cross%20attention%20source.pdf). We will add this discussion to our manuscript. __Q7__: "__green, blue, and red represent?__" __R7__: These colors indicate easy/medium/hard samples with respect to achieving accurate and stable segmentation, categorized by each sample's IoU mean and standard deviation. 
__Q8__: "__PL, R1, and R2 represent?__" __R8__: PL stands for prompt learning. R1 and R2 are two linguistically guided prompt regularizations. We will revise the figure and table captions to enhance clarity and avoid ambiguity. __Q9__: __Typos__ __R9__: We will correct the typos throughout the paper. __Q10__: "__novelty of this paper__" __R10__: As demonstrated through comprehensive experiments and additional results provided in [link3](https://anonymous.4open.science/api/repo/ICML-Submission3702-Rebuttal/file/Imgs/gRefCOCO_ext_Scatter_comparison.pdf) and [link4](https://anonymous.4open.science/api/repo/ICML-Submission3702-Rebuttal/file/Imgs/RefCOCO_TestB_Scatter_comparison.pdf), our method effectively improves both accuracy and stability by leveraging an off-the-shelf diffusion model to extract mask prompts via an inversion process. This approach is modular and pluggable, offering several advantages—including the ability to handle novel visual and linguistic concepts—without requiring additional supervision or complex training procedures.
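The IoU-based easy/medium/hard categorization mentioned in R7 above can be sketched in a few lines. This is an illustrative sketch only: the threshold values are hypothetical, since the rebuttal does not state the actual cutoffs used to color the figure.

```python
import statistics

# Hypothetical thresholds: the rebuttal does not state the actual cutoffs.
def categorize_sample(ious, mean_hi=0.75, mean_lo=0.5, std_max=0.1):
    """Bucket one sample by the mean and spread of IoU over repeated runs."""
    mean = statistics.mean(ious)
    std = statistics.pstdev(ious)
    if mean >= mean_hi and std <= std_max:
        return "easy"    # green: high and stable IoU
    if mean >= mean_lo:
        return "medium"  # blue: moderate accuracy or stability
    return "hard"        # red: low or unstable IoU

print(categorize_sample([0.82, 0.80, 0.85]))  # easy
print(categorize_sample([0.60, 0.70, 0.50]))  # medium
print(categorize_sample([0.20, 0.40, 0.30]))  # hard
```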
Summary: This work proposes Segment Anyword, an approach for language-guided open-set segmentation. It uses a diffusion model to create initial correspondence between words in the text prompt and points in the image, refines the point prompts based on linguistic analysis, and prompts SAM to generate the final segmentation masks. Empirical results on tasks like referring image segmentation show promising results. Claims And Evidence: - Table 1 introduces some attributes. To the reviewer, "Word-Grounding" and "Novel-Concept" are not clearly defined. It is also unclear why the most recent models like GSVA and SAM4MLLM do not satisfy these conditions. - The motivation study is limited to ReLA, a previous model not equipped with large language models. The conclusion of this study may not hold true for more recent MLLM-based models. Methods And Evaluation Criteria: - The method heavily relies on an external model (a fine-tuned Vicuna, as indicated in Appendix B.1) and a tagging method to parse the text prompt, whose parsing accuracy is not validated. In referring image segmentation, some text expressions are written by human annotators with incorrect grammar structures. How can these expressions be correctly understood when the parsing is noisy? Theoretical Claims: This work does not include theoretical claims. Experimental Designs Or Analyses: - The experiment on "open-set grounded segmentation" is very misleading. In the original work, GLaMM, GranDf is proposed for the "grounded conversation generation" task, where a model is required to generate a detailed description of the image and ground the noun phrases within the description. GLaMM does not mention the task as "open-set grounded segmentation." Furthermore, as detailed in Appendix B.2, this work "only focus on segmentation capability evaluation by using ground truth text expression as segmentation reference." 
The comparison in Table 2 is not fair and the performance of Segment Anyword cannot be considered as high, given that the text descriptions are from the ground truth. Supplementary Material: The supplementary material includes details of related work, implementation and baselines, as well as additional qualitative results. Relation To Broader Scientific Literature: This work introduces a new framework for open-set segmentation with language prompts without leveraging extensive supervision. However, the experiment designs have several flaws and even misleading results, which should be fixed before publication. The proposed method is also a bit too complicated and relies on several external models (diffusion model, parsing LLM and SAM), limiting its applicability in a wider range of tasks. Essential References Not Discussed: No concerns. Other Strengths And Weaknesses: No other concerns. Other Comments Or Suggestions: No other comments. Questions For Authors: - In the test-time cross-attention collection step, which layer(s) of the denoising UNet are considered when collecting the cross-attention values? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful questions and detailed feedback, which have significantly helped improve the quality of our manuscript. Below, we address the reviewer’s concerns point by point. __Q1__: "__Definition of "Word-Grounding" and "Novel-Concept"__" __R1__: By "word-grounding" we refer to the model’s ability to associate or align words from a sentence prompt with specific object regions in an image. By "novel-concept" we mean that the model can recognize or handle new concepts it has not encountered during training. We will amend the caption of Table 1 accordingly. __Q2__: "__Additional motivation study.__" __R2__: We thank the reviewer for raising this question. We present an additional motivational study with another state-of-the-art model, ETRIS (ICCV23) ([link1](https://anonymous.4open.science/api/repo/ICML-Submission3702-Rebuttal/file/Imgs/ETRIS-RefCOCO+_TestB-Scatter.pdf)), which focuses on aligning representations from pre-trained visual and language encoders by adding intermediate fine-tuned adapters. The result also validates our previous finding that, without test-time alignment, current open-set segmentation models suffer from input variance, resulting in unstable segmentation results. We will extend this discussion in the motivation section of our manuscript. __Q3__: "__why baseline methods fail to satisfy certain conditions.__" __R3__: Both GSVA and SAM4MLLM generate multiple object masks within a single binary segmentation map, but these masks are not explicitly associated with specific word indices. Additionally, neither model is capable of handling novel concepts, as both require training or fine-tuning on the target dataset. This limits their ability to generalize to unseen concepts during test time. We will add this extended discussion to our manuscript. 
__Q4__: "__Text parsing relies on external model__" __R4__: The primary focus of the proposed Segment Anyword is to improve the quality of automatic mask prompts without relying on complex training configurations (Page 2, Contribution 2). The external language model is merely used to parse and index object-related words for cross-attention map retrieval. Importantly, the use of a fine-tuned LLM (e.g., Vicuna) is not required—our method is compatible with state-of-the-art language models such as GPT-4o, which can provide strong reasoning capabilities out of the box. Additionally, standard NLP libraries such as NLTK and spaCy can be used for text pre-processing as well. These tools are widely adopted in prior work and known to be fast and reliable. We will add this discussion to our manuscript. __Q5__: "__Text could be noisy__" __R5__: We acknowledge that noisy inputs, including sentences with incorrect grammar, may occur in the test dataset. However, addressing noisy text parsing is not the focus of our work. Instead, our goal is to generate mask prompts that are robust to such noise, without relying on supervised training. We show an example in which our method still handles typos such as "_catstatue_" and "_kittytoi_" in [link2](https://anonymous.4open.science/api/repo/ICML-Submission3702-Rebuttal/file/Imgs/Ours%20vs%20Peekaboo.pdf). In extreme cases, each word’s cross-attention mask can be matched directly against the ground-truth segmentation masks to determine the best ranking and pairing, thereby compensating for parsing inaccuracies. We will add these additional results. __Q6__: "__Fairness of GranDf comparison__" __R6__: We thank the reviewer for raising this issue. 
To address it, we conducted an additional experiment using GLaMM-generated captions as the textual input for our method, and we report both quantitative and qualitative results in [link3](https://anonymous.4open.science/api/repo/ICML-3702-Rebuttal2/file/Ours%20GT%20Text%20vs%20GLamm%20Text.pdf).

|                   |      | GranDf Val |        |
|-------------------|------|------------|--------|
|                   | AP50 | mIoU       | Recall |
| Segment Anyword_f |      |            |        |
| w/ GT Text        | 30.2 | 65.9       | 42.4   |
| w/ GLaMM Text     | 27.1 | 62.5       | 37.7   |

In general, since GLaMM-generated text is not always accurate, it can affect the alignment of prompt embeddings in our method. Nevertheless, our approach remains competitive, as it is capable of refining and adapting the prompt embeddings—even from noisy or imprecise text—to better match the target object during test-time optimization. We will update the experiment and result. __Q7__: "__In the test-time cross-attention collection step, which layer(s) of the denoising UNet are considered when collecting the cross-attention values?__" __R7__: We use the cross-attention maps at the 16×16 resolution, averaged across all denoising time steps to obtain the final cross-attention. This setup is inherited from prior works such as MCPL and Prompt2Prompt to ensure a fair and consistent implementation for the textual embedding update. We will add this to our implementation section. --- Rebuttal Comment 1.1: Comment: The authors' response is greatly appreciated. However, some previous concerns remain. - The claim that previous methods like GSVA and SAM4MLLM cannot handle novel concepts needs to be verified with qualitative or quantitative results. - Changing the text parsing model to GPT-4o or SpaCy does not address the concern. Again, the parsing accuracy or the final performance of such alternatives should be validated. - For the comparison on GranDf, after switching captions to GLaMM-generated ones, the performance drops below GLaMM and OMG-LLaVAf shown in Table 2. 
In addition, I think other reviewers' concerns on training (or test-time optimization) complexity and inference efficiency are valid. Given the presence of these concerns, I keep my rating as "weak reject." --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the additional questions, which have significantly contributed to improving the quality of our manuscript. __Q8__: "__Verify claim__" __R8__: We present a qualitative comparison involving several novel concepts, such as *building paint*, *kittytoi*, and *brown bull*, in [link4](https://anonymous.4open.science/api/repo/ICML-3702-Rebuttal3/file/novel%20concept%20comparison.pdf). For GSVA, we use the LLaMA-7B base weights with __LLaVA-Lightning-7B-delta-v1-1__ and __SAM ViT-H__; for SAM4MLLM, we use LLaMA-LLaVA-next-8b and __efficientvit-sam-xl1__. Both methods can localize but struggle to produce accurate masks, lacking detail around parts and boundaries. Note that both GSVA and SAM4MLLM require training or fine-tuning of LLaVA and SAM on the training set, which involves considerable computational resources. In contrast, our method operates purely at test time without accessing the full training set—making it both simple and effective for novel concepts. To avoid further ambiguity, we will revise Table 1 from "*Novel Concept*" to "*Localize Novel Concept*", and update the classification accordingly. __Q9__: "__Parsing Ablation__" __R9__: We thank the reviewer for suggesting this important ablation setting. Due to time constraints, we conducted the study using 100 randomly selected image-text pairs from RefCOCO. For spaCy, we used the en_core_web_trf pipeline based on RoBERTa. We filtered tokens with POS tags "NOUN" and "ADJ", and indexed the token with the nsubj dependency label as the referred object. For GPT-4o and Vicuna-7B, we used the following prompt: Prompt: *As a NLP expert, you will be provided a caption describing an image. 
Please do pos tag the caption and identify the only one referred subject object and all adjective attributes. Your response should be in the format of "[(attribute1, attribute2, attribute3, ...), object1]"* *Conditions:* *(1) If the attribute is long, short it by picking one original word.* *(2) Please include one original word possessive source into the attributes for the subject.* Below, we present the final segmentation results. The corresponding parsing outputs have also been uploaded at [anonymous link5](https://anonymous.4open.science/api/repo/ICML-3702-Rebuttal4/file/refCOCO_testA_Spacy.json) and [anonymous link6](https://anonymous.4open.science/api/repo/ICML-3702-Rebuttal4/file/refCOCO_testA_GPT4o.json). While spaCy offers fast, offline parsing, it often misses key adjectives such as color terms like "white". In contrast, GPT-4o provides the most accurate parsing results, capturing fine-grained attributes more reliably.

| Parsing Ablation | mIoU |
|------------------|:----:|
| GPT4o            | 68.2 |
| spaCy(RoBERTa)   | 46.9 |
| Vicuna-7B        | 59.7 |

Our empirical recommendation for users is as follows: for large-scale processing where speed is a priority, static NLP libraries such as spaCy are more suitable due to their efficiency. However, for more detailed interactions involving concept learning and prompt refinement, advanced language models like GPT-4o are preferred for their superior reasoning and parsing capabilities. We will include this discussion in the revised manuscript. __Q10__: "__Performance drop using GLaMM-generated text__" __R10__: The performance drop is expected, as the text input shifts from human-annotated ground truth to generated grounded conversations. However, we observe that the performance remains competitive even with GLaMM-generated text, outperforming or matching several baseline methods that require training or fine-tuning on the entire training dataset. 
__Q11__: "__Other reviewer concerns on inference complexity and efficiency__" __R11__: The initial concern raised by jmV5 relates to the inference cost of using a diffusion model. We acknowledged that there is a trade-off between the number of steps and mask prompt quality. While this trade-off cannot be entirely eliminated, we pointed out that using HyperDiffusion could possibly reduce the number of denoising steps as future engineering work. Additionally, we addressed a related concern raised by reviewer SSnW, leading to a raised score from 3 to 4, by comparing our method to other test-time optimization baselines such as Peekaboo and CLIPasRNN. As shown in SSnW R11 and in [link5](https://anonymous.4open.science/api/repo/ICML-Submission3702-Rebuttal/file/Imgs/Ours%20vs%20Peekaboo.pdf), our method achieves stable and accurate results within a reasonable time, without additional input dependencies such as mask initialization or object prototypes.
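The spaCy-based filtering described in R9 above boils down to a simple pass over token annotations. Below is a minimal sketch of that filtering logic; it operates on pre-computed (text, POS, dependency) tuples rather than a live spaCy pipeline (so no model download is needed), and the example sentence and its tags are hypothetical.

```python
def parse_expression(tokens):
    """Extract the referred object and attribute words from tagged tokens.

    `tokens` is a list of (text, pos, dep) tuples such as a spaCy pipeline
    would produce; the referred object is the token with the `nsubj`
    dependency label, and attributes are the remaining NOUN/ADJ tokens.
    """
    subject = next((t for t, pos, dep in tokens if dep == "nsubj"), None)
    attributes = [t for t, pos, dep in tokens
                  if pos in ("NOUN", "ADJ") and dep != "nsubj"]
    return attributes, subject

# "white cat on the left sofa" with hypothetical annotations
tokens = [("white", "ADJ", "amod"), ("cat", "NOUN", "nsubj"),
          ("on", "ADP", "prep"), ("the", "DET", "det"),
          ("left", "ADJ", "amod"), ("sofa", "NOUN", "pobj")]
print(parse_expression(tokens))  # (['white', 'left', 'sofa'], 'cat')
```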
Summary: This paper introduces Segment Anyword, a novel framework for open-set grounded segmentation. The key idea is to invert the mask prompting process by leveraging a pre-trained diffusion-based text-to-image model (e.g., Stable Diffusion) to generate high-quality, grounded segmentation masks. The method achieves competitive performance on both in-distribution and out-of-distribution datasets. Claims And Evidence: The paper claims that Segment Anyword works effectively on both in-distribution and out-of-distribution datasets. This is supported by quantitative results on multiple benchmarks (GranDf, gRefCOCO, PC59, synthetic/medical datasets), showing strong performance, particularly in handling novel categories without retraining. The paper introduces positive adjective binding and negative mutual-exclusive binding as key components, which are well-supported by clear and convincing evidence in Table 6. The ablation results directly isolate and quantify their impact on segmentation performance, confirming their utility. Methods And Evaluation Criteria: Yes Theoretical Claims: The paper does not present formal theoretical claims, so no proofs are required. Experimental Designs Or Analyses: The experimental designs are sound. Supplementary Material: Yes Relation To Broader Scientific Literature: Yes Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The paper introduces a novel approach by inverting the typical mask prompting paradigm. Instead of designing prompts manually, it optimizes them at test-time, leveraging diffusion models' cross-attention layers for precise mask generation. A significant advantage is that the method requires no additional training of the diffusion model or SAM, making it resource-efficient and appealing for real-world deployment. The approach demonstrates robust results on both in-distribution and out-of-distribution datasets, confirming its effectiveness in open-set scenarios. 
Weakness: While the method smartly avoids retraining large models, it heavily depends on test-time optimization, involving multiple gradient steps to optimize textual prompts and collect cross-attention maps from a diffusion model. This inference-time overhead can be significant, especially considering the use of diffusion models that already involve multiple denoising steps. Other Comments Or Suggestions: See weakness Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable questions and insights. We are pleased that they find our proposed method novel, resource-efficient, and highly generalizable. We have made several amendments and addressed the reviewer's specific queries as detailed below: __Q1__: "__Inference time cost / Slow speed.__" __R1__: We appreciate the reviewer highlighting the concern regarding test-time computational costs. We acknowledge there is indeed a trade-off between test-time optimization speed and mask prompt quality. Specifically: 1. Previous methods typically require extensive resources and effort to train or fine-tune large vision-language models on training datasets. This approach is resource-intensive with limited generalization capabilities. 2. In contrast, our proposed test-time prompt optimization method is more efficient and better suited to real-world open-set applications, where test samples may contain novel linguistic and visual concepts. In such scenarios, textual embeddings need to be dynamically updated and aligned with the target object during testing. Consequently, this necessary trade-off between inference speed and embedding alignment cannot be entirely avoided. 3. However, we have demonstrated that it is possible to substantially reduce the number of test-time steps from 1100 to 50 by adapting the text encoder to the target domain using only a small number of samples for fine-tuning. Furthermore, future work could focus on additional engineering optimizations, such as replacing the current backbone with HyperDiffusion, to further enhance inference speed.
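As context for the cost discussion above: the expensive part is running the denoising steps themselves, while the subsequent aggregation of a word's cross-attention maps (16×16 maps averaged over denoising steps, per R7 in the earlier rebuttal) is cheap. A NumPy sketch of that aggregation step, with illustrative shapes and synthetic data rather than the paper's implementation:

```python
import numpy as np

def aggregate_cross_attention(maps, word_idx):
    """Average one word's cross-attention map over denoising steps and
    return the normalized map plus its peak as a candidate point prompt.

    `maps` has shape (steps, tokens, 16, 16); all shapes are illustrative.
    """
    avg = maps.mean(axis=0)[word_idx]                         # (16, 16)
    avg = (avg - avg.min()) / (avg.max() - avg.min() + 1e-8)  # to [0, 1]
    y, x = np.unravel_index(avg.argmax(), avg.shape)
    return avg, (int(x), int(y))

steps, tokens = 50, 8
rng = np.random.default_rng(0)
maps = rng.random((steps, tokens, 16, 16))
maps[:, 3, 5, 9] += 2.0  # token 3 attends strongly at row 5, col 9
avg, point = aggregate_cross_attention(maps, word_idx=3)
print(point)  # (9, 5)
```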
GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation
Accept (poster)
Summary: The authors propose GIVE, an innovative framework designed to enhance the performance of large language models in scientific reasoning tasks. The framework consists of three main stages: expert data observation, divergent thinking, and information synthesizing. The large language model constructs a structured reasoning path by combining its internal parameterized knowledge with external non-parametric knowledge through a knowledge graph-inspired method called veracity extrapolation. This approach reduces hallucinations by utilizing counterfactual knowledge. It significantly improves the model’s accuracy and interpretability in biomedical, commonsense, and open-domain reasoning tasks, achieving an efficient balance in integrating large language models with knowledge sources that are either limited or noisy. As a result, it delivers superior reasoning outcomes. Claims And Evidence: Firstly, the authors propose that the introduction of expert knowledge improves reasoning accuracy, as demonstrated in the experimental analysis in Section 4.7.3. However, there seems to be an issue with the experimental data. Figure 4 shows that as the expert knowledge ratio increases, the average accuracy gradually improves, eventually reaching 100%. On one hand, the authors do not specify which backbone models are used for the data shown in Figure 4, and the full experimental details are not provided. On the other hand, according to the results, an injection of around 20% expert knowledge already leads to an average accuracy of 100%, which seems contrary to common sense. It would be helpful for the authors to provide further clarification on this part. Secondly, the authors propose that GIVE can handle limited external knowledge bases, as demonstrated in Sections 4.4 and 4.5. However, in Table 4 of Section 4.5, there is no significant difference between GIVE, RAG, and ToG methods when providing 10%, 50%, and 100% scale knowledge graphs. 
It is possible that for commonsense reasoning tasks, the knowledge graph ConceptNet does not play a major role, and this argument is hard to substantiate. It is suggested that the authors provide experimental data for the 0% scenario for further analysis. Methods And Evaluation Criteria: From the main experimental results in Table 3, it can be observed that, in the biomedical dataset selected by the authors, the performance of GIVE_a+c is overall better than that of GIVE_a+c+e. This indicates that the introduction of expert knowledge may not only be ineffective but could even lead to a decrease in performance. Meanwhile, in the CommonsenseQA results shown in Table 4, the difference between GIVE_a+c and GIVE_a+c+e is within the range of 0.1 to 0.5. In the proposed framework, the introduction of expert knowledge is an important part, and it would be helpful if the authors could provide further clarification on this matter. Theoretical Claims: The paper doesn't present complex theoretical proofs but focuses on algorithmic contributions. Experimental Designs Or Analyses: Firstly, Figure 4 shows that as the expert knowledge ratio increases, the average accuracy gradually improves, eventually reaching 100%. On one hand, the authors do not specify which backbone models are used for the data presented in Figure 4, and the full experimental details are not provided. On the other hand, according to the results, an injection of around 20% expert knowledge already leads to an average accuracy of 100%, which seems contrary to common sense. It would be helpful for the authors to provide further clarification on this part. Secondly, in Table 4, regardless of whether it is GIVE, RAG, or ToG methods, there is no significant difference when providing 10%, 50%, and 100% scale knowledge graphs. It is possible that for commonsense reasoning tasks, the knowledge graph ConceptNet does not play a major role, and this argument is hard to substantiate. 
It is suggested that the authors provide experimental data for the 0% scenario for further analysis. Supplementary Material: The authors did not provide supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The key contribution is overcoming the limitations of relying solely on internal or external knowledge and fully utilizing both external knowledge bases and the model’s existing knowledge. This has been preliminarily explored in the EMNLP 2024 paper "Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models," which uses the model's existing knowledge to assess the relevance and validity of external knowledge, thereby improving answer accuracy. It is suggested that the authors include a discussion on models of this type of RALM. [1] Yu, Wenhao, et al. "Chain-of-note: Enhancing robustness in retrieval-augmented language models." arXiv preprint arXiv:2311.09210 (2023). Other Strengths And Weaknesses: Please refer to Questions For Authors Other Comments Or Suggestions: If the example provided in Figure 2 could demonstrate connecting all four sets of entities, it might be more helpful for readers to understand. Currently, it shows cross-group connections with two sets per group. Questions For Authors: 1. Discussion of RALM Models in the Context of Your Framework The key contribution of your paper involves overcoming the limitations of relying solely on internal or external knowledge, fully utilizing both external knowledge bases and the model’s existing knowledge. This concept has been explored in the EMNLP 2024 paper "Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models." Could you include a discussion on models of this type of RALM and how they relate to your framework? 
2. Role of Expert Knowledge in Cross-Group Connections The paper suggests that "The expert's cross-group connections serve as evidence, guiding the LLM to extrapolate the veracity of potential relationships among similar concepts." However, there seems to be insufficient explanation of the direct role of expert knowledge in establishing cross-group connections. Could you clarify how expert knowledge directly contributes to this process? 3. Performance of GIVE_a+c vs. GIVE_a+c+e in Biomedical Dataset In Table 3, the performance of GIVE_a+c is better than that of GIVE_a+c+e in the biomedical dataset. Can you provide an explanation as to why the introduction of expert knowledge may lead to a decrease in performance in this case? 4. Expert Knowledge Injection and Accuracy The results in Figure 4 show that injecting around 20% expert knowledge leads to an average accuracy of 100%. This seems counterintuitive and contrary to common sense. Could you provide further clarification on this, and explain why such a small amount of expert knowledge leads to such high accuracy? 5. Clarification on Backbone Models and Experimental Details Could you specify which backbone models are used for the data presented in Figure 4 and provide full experimental details? The lack of such information raises concerns about the reliability and reproducibility of the results. 6. Discrepancy Between GIVE, RAG, and ToG in Table 4 In Table 4, there seems to be no significant difference between GIVE, RAG, and ToG methods when providing 10%, 50%, and 100% scale knowledge graphs. Could you explain why the knowledge graph ConceptNet does not show a significant effect, especially for commonsense reasoning tasks? It would also be helpful if you could provide experimental data for the 0% scenario for further analysis. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We understand the reviewer's concerns about the role of expert knowledge in the reasoning process, as well as in the answer-generation process, and we appreciate your suggestions on our literature discussion and experiments. We hope the following clarifications address all the questions, and we will make sure to include these discussions in our revised manuscript. > For Claims And Evidence paragraph 1: We apologize for the confusion. To clarify, GIVE enriches the expert knowledge in the KG by extrapolating expert knowledge towards relevant concepts akin to the queried ones, as detailed in Section 3.4.3. Direct injection of expert knowledge into the question answering is $\textbf{not}$ assumed, due to the uncertain quality of the accessible knowledge base. Figure 4 shows that the performance of GIVE improves with an increased ratio of expert knowledge to total knowledge (expert plus extrapolated). A higher expert knowledge ratio ensures that the neighborhood related to the query is well-connected in the KG, offering substantial evidence for veracity extrapolation. A lower ratio indicates reliance on internal knowledge, as discussed in Section 3.3 and Open Relations in Section 3.4.3. GIVE performs well with ample expert evidence guiding reasoning, quantitatively when the expert knowledge ratio reaches about 20\%, $\textbf{not}$ when injecting solely 20\% of this knowledge. All ablation studies in Section 4.7 utilize GPT-3.5-turbo, with other details consistent with Section 4.2. > For Claims and Evidence Paragraph 2: Commonsense reasoning experiments aim to test GIVE's generalizability in three ways: (1) on a noisy, large-scale KG, (2) at varying sparsity levels, and (3) on tasks where pre-trained knowledge is extensive. This setting presents challenges, as the basic LLM already achieves about 70\% accuracy; pre-training on commonsense knowledge is inherently easier than on scientific information. In such cases, misinformation can lead to hallucination. 
GIVE improves performance by up to 3.4\% and 4.9\% over RAG and ToG, and consistently outperforms the base LLM, indicating that its reasoning process avoids hallucinations. ConceptNet is indeed important; the challenge is how to wisely use its information to further enhance the rich pre-training knowledge without causing hallucination, especially with its sparse versions. Including a 0\% scenario, as the reviewer suggested, offers more insight: GIVE_a and GIVE_a+c achieve 69.84\% and 69.36\%, respectively, due to the exclusion of expert information. Without expert information, the knowledge provided by GIVE is derived solely from the LLM's internal knowledge (Sections 3.3 and Open Relations in 3.4.3). This aligns with our observation in Figure 4, where the expert knowledge ratio is 0. The better performance in this case is attributable to richer pre-training in commonsense compared to biomedical knowledge.

> For Methods and Evaluation Criteria:

We understand the reviewer's concern about the role of expert knowledge. GIVE is designed for situations where a high-quality KG is inaccessible in domain-specific tasks. In such scenarios, GIVE does $\textbf{not}$ intend to utilize expert knowledge directly for question answering but rather as an "inspiration," illustrated by the transition from solid to dashed lines in Figure 2. The KG links some entities unrelated to the query; GIVE encourages the model to assess whether similar connections exist among other related entity pairs. In Tables 3 and 4, GIVE_a+c+e generally demonstrates improved or equivalent performance. The minor margin does not imply that expert information is trivial, but rather reflects that it is not directly solving the query. Expert knowledge (solid lines in Figure 2) is pivotal for the success of GIVE_a and GIVE_a+c, as it contributes to knowledge extrapolated from expert connections among relevant entities.
> For Essential References Not Discussed:

We will include a discussion of CoN in our revised manuscript. GIVE and CoN differ fundamentally in several aspects: (1) Motivation: GIVE is a reasoning framework that formulates a faithful thinking process by populating the expert KG triplets towards the query, whereas CoN is a robust retrieval system excluding similar yet irrelevant documents. (2) Use of internal knowledge: GIVE employs the LLM's internal knowledge for "veracity extrapolation," depicted by the transition from solid to dashed lines in Figure 2, whereas CoN uses it to create document summaries (notes) for accurate relevance analysis. (3) Use of external knowledge: All expert knowledge in GIVE is integral to its reasoning process, as shown by the solid lines in Figure 2. Meanwhile, CoN filters out irrelevant documents for question-answering. (4) GIVE is designed to reason $\textbf{beyond}$ the accessible KG, whereas CoN identifies and focuses on documents containing the essential context.

> For Other Comments Or Suggestions:

We will include all connections in the figure in our revised manuscript.
Summary: The paper introduces Graph Inspired Veracity Extrapolation, a reasoning framework that enhances LLMs by integrating parametric and non-parametric memories for more accurate reasoning with minimal external input. GIVE operates through three key steps to select relevant expert data, engage in query-specific divergent thinking, and synthesize information for final outputs. Extensive experiments show that GIVE improves LLM performance across different model sizes, allowing smaller models to outperform larger ones in scientific tasks. GIVE supports reasoning with both restricted and noisy knowledge sources and underscores the value of combining internal and external knowledge to enhance LLM reasoning capabilities for complex scientific tasks.

Claims And Evidence: The claims made in the paper are supported by experimental evidence. The authors demonstrate through extensive experiments that GIVE improves LLM performance across various sizes and domains.

Methods And Evaluation Criteria: The proposed methods make sense for improving LLM reasoning in scenarios where internal knowledge is insufficient and external knowledge is limited or noisy. The evaluation criteria, including accuracy on various reasoning tasks, are appropriate for assessing the effectiveness of the proposed framework. The use of both scientific and open-domain datasets provides a comprehensive evaluation of the method's applicability.

Theoretical Claims: No theoretical claims are provided.

Experimental Designs Or Analyses: The proposed approach is only compared with naïve RAG and GraphRAG. However, many graph-based RAG methods have recently emerged, with their code publicly available. It would be beneficial to include comparisons with these methods, such as KAG and HOLMES, to provide a more comprehensive evaluation.

Supplementary Material: I have reviewed the supplementary material, including algorithm details, additional ablation studies, efficiency analysis, and prompt examples.
Relation To Broader Scientific Literature: The key contribution of this paper is: This work proposes a structured reasoning framework that integrates internal and external knowledge, along with a veracity extrapolation method that enriches limited information by establishing provisional connections between query concepts and incorporating counterfactual reasoning to mitigate hallucinations.

Essential References Not Discussed:
1. HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering using LLMs
2. KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation

Other Strengths And Weaknesses: Weaknesses: The token usage and time consumption are significantly higher than other methods, yet the paper provides limited discussion on token efficiency, potential optimizations, and the impact on scalability and real-world applications.

Other Comments Or Suggestions:
1. The proposed approach is compared only with naïve RAG and GraphRAG. However, several recent graph-based RAG methods, with publicly available code, have been introduced. Including comparisons with methods like KAG and HOLMES would provide a more comprehensive evaluation of the approach.
2. It seems that the BioASQ dataset is highly sensitive to the retrieved knowledge. A more detailed discussion and analysis of this sensitivity would be valuable.
3. While time consumption is provided in the supplementary material, a more in-depth analysis of token consumption would be helpful.

Questions For Authors: Please refer to the suggestions and weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: > In-depth analysis of token consumption

We understand the reviewer's concern about token efficiency and provide a comparison of token consumption between GIVE and ToG on 100 random questions from each dataset. For each question, we calculate the total number of input/output tokens in the whole problem-solving process. We use tiktoken with GPT3.5-turbo for this comparison. For GIVE we use n=1, and for ToG we use D=5; the setting is the same as in Table 8 in the Appendix.

| Avg no. input tokens | PubmedQA | BioASQ | ProcessBank | CSQA/10\% ConceptNet | CSQA/50\% ConceptNet | CSQA/100\% ConceptNet |
|--------|----------|----------|----------|----------|----------|----------|
| GIVE | 14518.5 | 7970.3 | 19460.5 | 5321.1 | 7203.0 | 7398.7 |
| ToG | 12701.1 | 7010.0 | 11995.6 | 4934.8 | 6704.1 | 6679.7 |

| Avg no. output tokens | PubmedQA | BioASQ | ProcessBank | CSQA/10\% ConceptNet | CSQA/50\% ConceptNet | CSQA/100\% ConceptNet |
|--------|----------|----------|----------|----------|----------|----------|
| GIVE | 183.1 | 80.0 | 232.5 | 34.9 | 45.2 | 46.3 |
| ToG | 104.2 | 60.2 | 100.5 | 23.1 | 34.8 | 36.6 |

The observation supports the conclusion of Table 8, where GIVE\_n=1 effectively balances efficiency and accuracy in challenging scientific reasoning tasks. Specifically, in 5 out of 6 datasets, GIVE consumes around 10\% more input tokens than ToG, and generates just 80 and 20 more tokens for PubmedQA and BioASQ, respectively, while reaching a 3-fold and 5-fold increase in accuracy. The variance in token usage across datasets is due to the differing numbers of entity groups and numbers of candidate relations, leading to varying candidate connections, as depicted in Figure 6 of the appendix. GIVE also demonstrates strong generalizability in deployment on KGs with varying sizes and sparsities.

> Discussion of KAG and HOLMES

Thank you for pointing out the related works; we will add a discussion of KAG and HOLMES in our revised manuscript.
Although we acknowledge that KAG and HOLMES leverage KGs to enhance LLM output, we respectfully contend that they solve fundamentally different problems from GIVE. KAG and HOLMES are advanced $\textbf{retrieval}$ systems that retrieve and integrate accurate expert knowledge in question-answering tasks. In contrast, GIVE, as a $\textbf{reasoning}$ framework, uses irrelevant information pieces to promote divergent thinking and extrapolates knowledge to generate answers. Thus, KAG and HOLMES concentrate on extracting a quality KG from corpora or documents to aid precise information matching via a refined SemanticGraph or Graph Schema. They use the KG for more accurate information matching; here, the LLM iteratively synthesizes the responses as an answer generator. GIVE, however, guides LLMs in problem-solving like a human expert, foregoing preprocessing of irrelevant knowledge by using the structured nature of KGs to seamlessly connect expert information with a query. KG data (entities and their connections) support the construction of a reasoning chain, and the LLM solves queries by verifying the inferred links suggested by the KG, as illustrated in Figure 2. Although we attempted to include them as baselines, as suggested by the reviewer, HOLMES currently lacks an open-source implementation, and the public KAG code lacks the kag-model, rendering them unusable in our experimental setting with a query and KG as inputs. We believe that these methods would perform similarly to GraphRAG, emphasizing information retrieval and injection over reasoning.

> Discussion of sensitivity to retrieved knowledge

We appreciate your insightful discussion. To clarify, the experiments in Figure 4 consistently show that GIVE effectively uses expert knowledge not directly related to the question to create an accurate reasoning process. This is evidenced by an improvement in performance from n = 0 to n = 1 and improved results with a higher ratio of expert knowledge.
Furthermore, with the same expert knowledge ratio, BioASQ performs better as its questions are more easily connected to expert knowledge. The veracity extrapolation process links expert knowledge to the query by the extrapolated connections; a lower expert knowledge ratio means fewer such "bridges" for the model to leverage the unrelated KG information. We will incorporate this discussion into our revised manuscript.
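The per-question token accounting described in this rebuttal can be sketched as follows. This is a minimal illustration only: whitespace splitting stands in for tiktoken's GPT-3.5-turbo BPE encoding (swap in `tiktoken.encoding_for_model("gpt-3.5-turbo")` for real counts), and the example conversations are made up.

```python
def count_tokens(text: str) -> int:
    """Stand-in tokenizer: one token per whitespace-separated piece.
    Replace with a real BPE encoder (e.g., tiktoken) for actual counts."""
    return len(text.split())

def average_usage(conversations):
    """conversations: list of (input_text, output_text) pairs, one per question.
    Returns (avg input tokens, avg output tokens) over all questions."""
    n = len(conversations)
    avg_in = sum(count_tokens(i) for i, _ in conversations) / n
    avg_out = sum(count_tokens(o) for _, o in conversations) / n
    return avg_in, avg_out

# Illustrative data: two question-answering rounds.
convos = [
    ("Does aspirin reduce stroke risk? Context: ...", "Yes, evidence suggests it does."),
    ("Is gene X linked to disease Y?", "No direct link is reported."),
]
print(average_usage(convos))
```

The tables above would then come from running this accounting over 100 sampled questions per dataset for each method.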
Summary: This paper proposes a novel reasoning method based on LLMs and an external knowledge graph. The motivation of this paper is to address the reasoning quality under the situations that the knowledge graph is sparse. This is reasonable as CoT has no external knowledge, while RAG and ToG suffer from the scale of the external base knowledge. To address these challenges, the authors first transfer the queries into entities and relations formed in knowledge graphs, and then construct the entity groups by computing the cosine similarities in the embedding space. After this, the authors execute the inner-group connections to filter broader related concepts and inter-group connections to link both KG relations between groups and question-related relations. The experiments show significant improvements in biomedical QA and robustness and scalability.

## update after rebuttal

I would like to maintain the current rating.

Claims And Evidence: Yes, the claims that the authors made are intuitively correct. For example, the authors list the challenges of CoT, RAG, ToG and show why they are unworkable with a sparse knowledge base graph. The visualizations are clear to illustrate this issue with a typical example.

Methods And Evaluation Criteria: Yes, the method is reasonable. The benchmark datasets are sufficient to support the claims.

Theoretical Claims: No theoretical claim is provided.

Experimental Designs Or Analyses: The experiments are sufficient to answer five questions provided by the authors in Section 4.1.

Supplementary Material: The experiments are sufficient to answer five questions provided by the authors in Section 4.1.

Relation To Broader Scientific Literature: This work is in the LLM reasoning field. Related works include Chain-of-Thoughts (CoT), Retrieval-Augmented-Generation (RAG), GraphRAG, etc.

Essential References Not Discussed: The literature is sufficient.
Other Strengths And Weaknesses:

Other Strengths:
- The authors provide a comprehensive method for LLM reasoning that significantly improves the LLM reasoning quality when the knowledge graph is large and the information is sparse. This setting is widespread in industry.
- The paper is well-written with clear visualizations.
- The authors have conducted lots of experiments. Particularly, the authors test the performance of GIVE on knowledge graphs with scales from 135 entities to 844K, which validates GIVE's capability of retrieving on both small and large knowledge graphs.

Other Weaknesses: Overall, I do not find severe problems in this paper; the following could be some `weaknesses' that the authors or other reviewers might be concerned about:
- Compared to RAG that can retrieve any text information, graph-based retrievers (also GraphRAG, etc.) naturally require an external knowledge base graph that is of high quality and is typically not yet learned by the LLMs. This might not be a `weakness' but the common limitation of graph-based retrievers.
- GIVE requires the preprocessing on the knowledge graph, the extraction of query information, and retrieval, which take some time. Some of the processes are before inference (like preprocessing), while others (extraction and retrieval) are during inference.
- The method introduces some additional hyperparameters like m and n, increasing the difficulty of adjusting hyperparameters.

Other Comments Or Suggestions: Typos:
- line 86, 'focuses on uses' -> 'focuses on using'.
- line 192, ',The set'.

Questions For Authors:
- Since I did not find the code link, I just want to ask the authors whether they would provide the corresponding repository to the public?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: > Compared to RAG that can retrieve any text information, graph-based retrievers (also GraphRAG, etc.) naturally require an external knowledge base graph that is of high quality and is typically not yet learned by the LLMs. This might not be a `weakness' but the common limitation of graph-based retrievers.

We appreciate the recognition of our contributions and agree with the common limitation of graph-based retrievers as mentioned by the reviewer. GIVE plays a pivotal role in addressing the challenge where high-quality Knowledge Graphs (KG) are not readily available. Knowledge Graphs possess certain distinct advantages over textual corpora. The primary goal of GIVE is to bridge the gap between the limited parametric knowledge typically available and the accurate reasoning required for scientific queries by effectively populating limited expert information towards the query. Our choice of KGs as a data source is due to their structured relational knowledge, which naturally provides a reasoning pathway. This choice simplifies the task to effectively creating connections between the expert KG knowledge and the ultimate question-answering task. In such scenarios, the process of constructing a relevant entity set and retrieving the connections between different groups proves efficient due to the inherent structure of graph data. Recent studies [1] have focused on converting text into KGs, and we are optimistic that textual corpora and KGs will become mutually interchangeable with advancements in this field of research.

[1] KGGEN: EXTRACTING KNOWLEDGE GRAPHS FROM PLAIN TEXT WITH LANGUAGE MODELS.

> GIVE requires the preprocessing on the knowledge graph, the extraction of query information, and retrieval, which take some time. Some of the processes are before inference (like preprocessing), while others (extraction and retrieval) are during inference.

We appreciate the constructive feedback from the reviewer.
We agree with the reviewer's insight that the reasoning process used by GIVE demands additional inference time, particularly in the construction of entity groups and the extrapolation of veracity. To substantiate our claims, we present comprehensive experiments in Table 8 in the appendix. These experiments demonstrate that when the number of entities per group (n) is set to 1, GIVE surpasses other competing baselines in performance, while maintaining similar or even lesser execution times. Notably, this trend is more significant when applying larger and denser Knowledge Graphs (KGs). Efficiency of GIVE could be further improved by incorporating batch pruning for the veracity extrapolation process. Moreover, as elaborated in Section 5, future research could involve automating the "veracity extrapolation" procedure, for instance, by developing a corresponding process reward that can be incorporated into Reinforcement Learning (RL) post-training frameworks. We are confident that the contributions of GIVE offer substantial benefits for future endeavors in this area of research.

> The method introduces some additional hyperparameters like m and n, increasing the difficulty of adjusting hyperparameters.

We appreciate your effort in highlighting this issue. It is important to elaborate further that GIVE's only hyperparameter is the number of entities per group, denoted as \( n \). In contrast, \( m \) represents the number of entity groups, which is a characteristic inherent to the query, as explained in Sections 3.1 and 3.2. For each queried entity, an entity group is created using its analogous concepts in the Knowledge Graph (KG). The findings illustrated in Figure 4 and Table 8 substantiate that GIVE offers commendable performance even with \( n = 1 \). This facilitates GIVE in achieving an optimal balance between efficiency and accuracy without the necessity of hyperparameter tuning.
Moreover, experiments detailed in Appendix C.1 reveal that the average value of \( m \) is typically 3 or 4 across all datasets.

> Since I did not find the code link, I just want to ask the authors whether they would provide the corresponding repository to the public?

Thank you for raising this issue. We hold open-source research in high regard and are committed to including a link to the repository containing all the relevant codes and datasets in the revised manuscript.
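The entity-group construction discussed in this thread (each queried entity gathers its analogous KG concepts by embedding similarity, with n entities per group) might look roughly like the sketch below. The embeddings, entity names, and top-n selection rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def build_entity_groups(query_vecs, kg_vecs, kg_names, n=1):
    """For each queried entity, form a group of its top-n most similar KG entities."""
    groups = []
    for q in query_vecs:
        sims = np.array([cosine_sim(q, v) for v in kg_vecs])
        top = np.argsort(-sims)[:n]  # indices of the n highest similarities
        groups.append([kg_names[i] for i in top])
    return groups

# Made-up embeddings: 4 KG entities, 2 entities extracted from a query.
rng = np.random.default_rng(0)
kg_names = ["aspirin", "ibuprofen", "stroke", "fever"]
kg_vecs = rng.normal(size=(4, 8))
query_vecs = rng.normal(size=(2, 8))
print(build_entity_groups(query_vecs, kg_vecs, kg_names, n=2))
```

Here m (the number of groups) falls out of the query itself, one group per extracted entity, while n is the single tunable knob, matching the rebuttal's point.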
Summary: This paper introduces GIVE to facilitate the reasoning ability of LLMs in specific domains. GIVE extracts the relevant information of a knowledge graph and bridges it with the query by using the LLM's internal knowledge to justify the veracity of the extrapolated knowledge. GIVE also incorporates counterfactual knowledge and progressive answer generation to alleviate hallucination caused by the additional knowledge. The authors conducted extensive experiments using both domain-specific and open-domain benchmarks, utilizing KGs of various sizes and sparsities. The empirical results proved the effectiveness and generalizability of GIVE.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes

Experimental Designs Or Analyses: Yes

Supplementary Material: Yes

Relation To Broader Scientific Literature: Previous work on LLMs with external knowledge integration heavily focuses on the task of KGQA, where the gold answer or reasoning path is contained in the given KG, which is different from the research problem defined in this paper. GIVE specializes in using limited information to prompt LLM reasoning in specific domains. The authors include sufficient discussion of the previous research in the introduction, and the motivation to propose GIVE for hard tasks, where retrieval from high-quality knowledge sources is unavailable, is clear.

Essential References Not Discussed: no

Other Strengths And Weaknesses:

Strengths:
1. The idea is novel. GIVE innovates in directing the LLM to further populate the retrieved information to formulate a multi-step reasoning chain, differing from the reasoning methods on self-knowledge or the retrieval methods for gold context.
2. Experiments are comprehensive. In addition to various benchmarks, the authors include ablation studies to study the sensitivity of GIVE to parameters and the "expert knowledge ratio". These findings further support the intuition of GIVE.
3. Analysis is sufficient.
The Supp further validates the claim on the efficiency of GIVE with detailed context length and accuracy of the proposed method with different parameters, providing insights to the community.

Weaknesses:
1. From Figure 4 we see that the expert knowledge set plays an important role for GIVE. However, from Table 8, GIVE achieves better performance than ToG and RAG when using n=1 and only the affirmative knowledge set. Why can GIVE achieve good performance without using the expert knowledge set?
2. In Table 3, why does CoT harm the performance of GPT4, as CoT does not introduce new knowledge which causes hallucination, as clarified by the authors?
3. In Figure 3, there is a huge performance gap between GIVE_a and GIVE_a+c, GIVE_a+c+e. Can the authors clarify the cause of this performance gap?

Other Comments Or Suggestions: no

Questions For Authors: no

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: > From Figure 4 we see that the expert knowledge set plays an important role for GIVE. However, from Table 8, GIVE achieves better performance than ToG and RAG when using n=1 and only the affirmative knowledge set, why can GIVE achieve good performance without using the expert knowledge set?

We are grateful for the insightful discussion brought up by the reviewer. It is essential to further elucidate that GIVE employs the expert knowledge extracted from the Knowledge Graph (KG) with the purpose of enabling faithful reasoning through knowledge extrapolation, rather than being directly utilized for answer generation. As demonstrated in the experiments reflected in Figure 4, the findings indicate that the efficacy of GIVE's "veracity extrapolation" is intricately linked to the ratio of expert knowledge. This ratio evaluates the degree to which the KG is pertinent to the questions, encompassing both the entities and their interrelations. The reason for this relationship is that expert knowledge constitutes the KG's connections among the pertinent entities, as detailed in Section 3.2 and depicted by the solid lines in Figure 2 of our initial manuscript. Although this expert knowledge might not directly facilitate the answering of questions, as it is not immediately connected to the queries, it serves as a valuable source of "inspiration" for the model to execute divergent reasoning. This is a primary rationale behind proposing a framework that integrates retrieval with reasoning when dealing with external knowledge that is sparse and noisy.

> In Table 3, why does CoT harm the performance of GPT4, as CoT does not introduce new knowledge which causes hallucination, as clarified by the authors.

We appreciate your question. In Section 4.4, we critically evaluate various approaches by assessing their performance on scientific benchmarks that demand intensive reasoning.
Large language models (LLMs) lack sufficient parametric knowledge to effectively address these complex queries. This is primarily because it is impractical to extensively train these models on highly specialized domains. Our observations, derived from comparing the outcomes of GPT-3.5-turbo and GPT-4 on datasets such as PubMedQA and ProcessBank, illustrate that simply scaling up pre-training does not necessarily enhance accuracy. In such instances, the Chain-of-Thought (CoT) approach seeks to promote a logical step-by-step reasoning process that relies exclusively on the model's internal knowledge, thus resulting in fluctuating performance levels due to its inherent randomness.

> In Figure 3, there is a huge performance gap between GIVE_a and GIVE_a+c, GIVE_a+c+e. Can the authors clarify the cause of this performance gap

Thank you for raising this question. Figure 3 illustrates our experiments with TruthfulQA, an open-domain text generation task. This dataset is included to demonstrate that GIVE is effective across both domain-specific and open-domain tasks. GIVE_a achieves superior results primarily because inquiries are directed at information accuracy. Consider the example query for TruthfulQA "What is the original meaning of SOS?" In this scenario, incorporating an affirmative knowledge set naturally leads to a more coherent context, whereas integrating counter-factual information tends to result in hallucinations. Additionally, as we employed the 10\% version of ConceptNet in this experiment, the retrieved expert knowledge is not directly associated with the query. Instead, it is utilized to motivate the model towards "veracity extrapolation," which significantly contributes to the excellent performance of GIVE_a. This is elaborated upon in Section 4.6 (lines 332-338) and the solid-to-dashed-line process in Figure 2 of our original manuscript.

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal from the authors, which well resolves my concerns.
I therefore increase my score.

---

Reply to Comment 1.1.1: Comment: Thank you for appreciating our work and your continuous engagement in the discussion. Your input has been important for us to improve the paper.
Improving Memory Efficiency for Training KANs via Meta Learning
Accept (poster)
Summary: This paper proposes MetaKAN, which leverages a hypernetwork to generate the B-spline coefficients of KANs. Each edge function is associated with a learnable prompt (usually 1D), so the G+k+1 coefficients per edge are generated from a single scalar, achieving parameter reduction. They apply MetaKANs to function-fitting tasks (Meta + KANs), PDE solving, and image recognition tasks (Meta + Convolutional KANs).

## Update after rebuttal

The authors have addressed my questions. Overall this is a great paper, but 5 is a bit of a stretch, so I raised my score from 3 to 4.

Claims And Evidence: The claim about parameter saving (in conclusion) should be toned down (add "most of the time" or "for large-scale KANs") since results in Figure 2 show that sometimes KANs are more parameter efficient than MetaKANs, especially when KANs themselves are small.

Methods And Evaluation Criteria: Yes. They compare models based on model size and performance (loss or accuracy).

Theoretical Claims: Not applicable. No theoretical claims in the paper.

Experimental Designs Or Analyses: Yes.

Supplementary Material: I briefly walked through the whole SM.

Relation To Broader Scientific Literature: KANs (Liu et al.) demonstrated the superior performance of KANs over MLPs in small-scale tasks. KANs can become quite parameter-inefficient when they are scaled up. In particular, given the same shape (width and depth), KANs have (G+k+1) times more parameters than MLPs due to the modeling of the B-splines. As a result, it is urgent to reduce the number of parameters for KANs to make them practically useful. This paper presents one interesting solution via meta-learning.

Essential References Not Discussed: None

Other Strengths And Weaknesses:

**Strengths**
* The paper is nicely written and well-motivated
* The paper proposes an interesting and effective solution (probably, see my questions below) to reducing the size of KANs.
* Experiments are diverse -- functional fitting, PDE solving and image recognition.

**Weaknesses**
* Some algorithmic choices seem arbitrary
* In some cases MetaKANs appear to have more parameters than KANs. This is not well discussed and acknowledged as a limitation.

Other Comments Or Suggestions:
* Boldfaces are not consistently used in tables. For example in Table 2, the comparison on # of parameters for KAN/MetaKAN is not shown for G=5.

Questions For Authors:
* I'm not sure why depth (Sec 3.4) makes the story different. Can we simply increase the prompt dimension (usually one) to promote diversity?
* In Table 2, there are a few cases when MetaKANs have more parameters than KANs. Why is this the case? Also, boldfaces are not used consistently in Table 2.
* I would love to see this visualization: changing the prompt dimension, how does the B-spline function change? This gives us a sense of what the family of functions looks like.
* Does this meta-learning trick still work when sparsification is added to prune the network?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
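The parameter-count question above can be made concrete with a back-of-envelope calculation. The formulas below are our assumptions from the summary (each KAN edge stores G+k+1 B-spline coefficients; MetaKAN stores one d-dimensional prompt per edge plus a shared two-layer hypernetwork with an assumed hidden width), not the paper's exact accounting.

```python
def kan_params(edges, G, k):
    """Assumed KAN cost: each edge function stores G + k + 1 B-spline coefficients."""
    return edges * (G + k + 1)

def metakan_params(edges, G, k, d=1, hidden=64):
    """Assumed MetaKAN cost: one d-dim prompt per edge, plus a shared
    two-layer MLP hypernetwork (with biases) mapping prompt -> coefficients."""
    n_coeff = G + k + 1
    hyper = (d * hidden + hidden) + (hidden * n_coeff + n_coeff)
    return edges * d + hyper

# Small KAN: the fixed hypernetwork cost dominates, so MetaKAN is larger.
print(kan_params(edges=50, G=3, k=3), metakan_params(edges=50, G=3, k=3))
# Large KAN: the per-edge savings amortize the shared hypernetwork.
print(kan_params(edges=10_000, G=3, k=3), metakan_params(edges=10_000, G=3, k=3))
```

Under these assumed formulas, MetaKAN only wins once the edge count is large enough to amortize the shared hypernetwork, which is consistent with the reviewer's observation that small KANs can be more parameter efficient.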
Rebuttal 1:

Rebuttal: C1: Some algorithmic choices seem arbitrary

A1: We appreciate the reviewer's point regarding the clarity of our algorithmic choices. We acknowledge that the rationale behind certain design choices, such as the specific architecture of the meta-learner (e.g., a two-layer MLP) and the initial dimension of the learnable prompts (e.g., d=1), may not have been sufficiently stated in the original manuscript. In the revised version, we have added explanations for these choices in Section 3.2.2. We selected a two-layer MLP as the meta-learner because it balances expressive power and parameter efficiency, which is a common choice in many meta-learning and hypernetwork studies [1-3]. For the prompt dimension d=1, we chose it as the most parameter-minimal setting for experimentation. We also explored the impact of different prompt dimensions in the ablation study in Appendix C.4. The results show that increasing the dimension can improve performance but also comes with additional parameters and potential training instability. This confirms the rationale behind our initial choice of d=1 as the baseline setting, while allowing for flexibility to adjust the dimension when needed. We strive to ensure that all design choices are grounded in empirical observations or inspired by related work, and we will clarify these considerations more explicitly in the revised manuscript.

[1] HyperNetworks, ICLR 2017.
[2] Meta-weight-net, NeurIPS, 2019.
[3] Continual learning with hypernetworks, ICLR, 2020.

C2: Why MetaKANs have more parameters than KANs in Table 2.

A2: We sincerely appreciate the reviewer's careful attention to these important details. You're absolutely correct to note that MetaKANs can have more parameters in certain configurations. Please refer to A3.2 in our response to reviewer tj4z for a detailed analysis of this situation.
Besides, a potential benefit of MetaKANs is scaling up to even larger KAN models (see the results for large KANs in Table 2 and our response to Reviewer A7wa). We will also reformat all tables to highlight only cases with both MSE improvement and parameter reduction.

C3: Why does depth matter?

A3: The depth of MetaKAN motivates our design choice of using multiple meta-learners (hypernetworks) in deeper architectures. In deep networks, features become more abstract with depth—early layers capture low-level patterns, while deeper layers model higher-level semantics. Accordingly, the optimal form of activation functions may vary across layers. Using a single global hypernetwork to generate all activation functions may be overly restrictive, as it must capture diverse functional shapes through input embeddings. Instead, grouping layers and assigning a separate meta-learner to each group allows better specialization, aligning with the hierarchical structure of deep features. Our ablation study (Appendix C.4) supports this: increasing the prompt dimension from $d=1$ to $d=4$ improves accuracy but also increases variance. Using multiple meta-learners per layer group reduces this variance, especially in deeper networks, suggesting enhanced stability. We plan to further explore the theoretical basis of this effect in future work.

C4: Visualize how B-spline functions vary with prompt dimension

A4: We appreciate the reviewer's suggestion regarding visualization. In response, we conducted an experiment by varying the prompt dimension from 1 to 4 and visualized the corresponding learned B-spline functions trained on the dataset $f=\exp(\sin(x_1^2+x_2^2)+\sin(x_3^2+x_4^2))$. These visualizations reveal how increasing the embedding dimension enriches the expressivity of each spline function.
| Dimension of Embedding | MSE ($10^{-3}$) |
|------------------------|-----------------|
| 1 | 3.60 |
| 2 | 3.45 |
| 3 | 3.64 |
| 4 | 4.21 |

With a lower embedding dimension, the splines exhibit relatively simple, smooth transformations. As we increase the embedding dimension to 3 and 4, the spline structures become more complex, reflecting the model’s enhanced capacity to capture more complex functions. For the detailed visualization, please refer to the link https://github.com/icmlrebuttal25/Append C5: Combining with network pruning A5: Thank you for your interest in combining our meta-learning approach with network sparsification/pruning. Since MetaKAN focuses on generating activation parameters without altering network structure, pruning can be applied independently. For example, in symbolic regression tasks (Appendix B), we applied sparsification by removing low-activation edges and using symbolic simplification. These results show MetaKAN works well with sparsification techniques.
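To make the meta-learner design discussed in A1 concrete, here is a minimal pure-Python sketch of the idea: a two-layer MLP maps a small learnable prompt embedding (the baseline d=1 setting) to the G+k+1 B-spline coefficients of one activation function. The dimensions, initialization, and random weights below are illustrative assumptions, not our actual trained implementation.

```python
import math
import random

random.seed(0)

def make_mlp(d_in, d_hidden, d_out):
    """Return randomly initialized weights for a two-layer MLP (illustrative)."""
    w1 = [[random.gauss(0, 1 / math.sqrt(d_in)) for _ in range(d_in)]
          for _ in range(d_hidden)]
    b1 = [0.0] * d_hidden
    w2 = [[random.gauss(0, 1 / math.sqrt(d_hidden)) for _ in range(d_hidden)]
          for _ in range(d_out)]
    b2 = [0.0] * d_out
    return w1, b1, w2, b2

def meta_learner(z, params):
    """Map a prompt embedding z to one activation's spline coefficients."""
    w1, b1, w2, b2 = params
    # ReLU hidden layer
    h = [max(0.0, sum(w * x for w, x in zip(row, z)) + b)
         for row, b in zip(w1, b1)]
    # Linear output layer: one coefficient per B-spline basis function
    return [sum(w * x for w, x in zip(row, h)) + b
            for row, b in zip(w2, b2)]

G, k = 5, 3                      # grid size and spline order, as in the paper
params = make_mlp(d_in=1, d_hidden=16, d_out=G + k + 1)

# One scalar prompt per edge; nearby prompts yield related coefficient vectors,
# so all activations are drawn from a shared function family.
coeffs_edge_a = meta_learner([0.10], params)
coeffs_edge_b = meta_learner([0.11], params)
assert len(coeffs_edge_a) == G + k + 1   # 9 coefficients per activation
```

Each edge stores only its prompt scalar instead of its own 9 spline coefficients, which is where the parameter savings come from.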
Summary: In this work, the Authors combine KANs with Hypernetworks and show improved/comparable accuracy at lower parameter counts. Claims And Evidence: The claims are supported by the evidence: the proposed model is designed in a standard way, as far as hypernetworks are considered, and is tested on the datasets previously used in the field of KANs. Methods And Evaluation Criteria: Benchmark datasets here are the same as used in prior works in the field of KANs, enabling direct comparisons of the results here with prior models. Such comparisons are also provided in the paper. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is standard for the field of Hypernetworks; therefore, it is validated and justified. Supplementary Material: I have glanced through the supplementary material. The main text here is sufficient to appreciate the scope and the quality of the work. Relation To Broader Scientific Literature: The work extends two lines of research: on Hypernetworks and on KANs, combining them for the first time and showing the utility of such a combination on up-to-date models from the literature. Essential References Not Discussed: The work thoroughly discusses prior literature on hypernetworks, KANs, and benchmarks. Other Strengths And Weaknesses: Strength: the text is very well written and the experiments are well-documented. Weakness: the evaluation is only performed on the tasks already solved with KANs (which is a necessary step but may not be a sufficient step). It would be interesting to see MetaKANs solving a task that was not possible to solve with the regular KANs. Such a demonstration would strengthen the work by showing a unique capability of the proposed model. Other Comments Or Suggestions: Line 197: functions -> function Line 215: formalizes -> formalize Questions For Authors: Table 2: There are some apparent inconsistencies.
On Line 362, the text says that, at G = 5, MetaKAN achieves a lower MSE in 14 out of 16 functions with parameter reductions. None of these numbers match the table: There are 17 functions, 10 of them correspond to the parameter reduction (by the way, why is that the case?), and all of them are highlighted by the bold font implying the improvement in the MSE. Could you please explain this inconsistency? Could you please provide an example task that is intractable with KANs but tractable with MetaKAN? I’m happy to revisit my score based on responses to these questions. ________________________ Post-rebuttal: the typos are fixed and the comments are addressed. Raising my score. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: C1: Show a task solvable by MetaKAN but not by KAN A1: We thank the reviewer for this important suggestion. In fact, our experiments in Section C.1 (high-dimensional functions) and Table 4 (function fitting task) already demonstrate a key scenario where standard KANs fail while MetaKAN succeeds: For the function $f_2(x) = \sum x^2 + x^3$ at dimension $d=50$, standard KANs completely fail to converge (marked as "NA" in Table 4), while MetaKAN achieves stable performance (MSE=$1.15\times10^{-3}$). At $d=1000$, KANs show severe error amplification (MSE=$1.68\times10^{2}$), whereas MetaKAN maintains reasonable accuracy (MSE=$1.43\times10^{-1}$) with 7× fewer parameters. From the theoretical FLOPs analysis in A1 (response to Reviewer A7wa), as network size increases, the computational complexity of KAN grows rapidly with the number of edges. In contrast, MetaKAN benefits from reduced backward-pass complexity, which can result in faster training times on large-scale tasks. For instance, in the empirical comparison table provided in A1, MetaKAN not only consumes just 30% of the peak memory used by KAN, but also achieves faster training time in the largest network structure. These results highlight MetaKAN’s strong potential for scaling to even larger models: scenarios where traditional KANs may become impractical due to memory limitations and the high cost of gradient computations. C2: Line 197: functions -> function Line 215: formalizes -> formalize A2: We have corrected the typos. C3: Table 2: There are some apparent inconsistencies. On Line 362, the text says that, at G = 5, MetaKAN achieves a lower MSE in 14 out of 16 functions with parameter reductions. None of these numbers match the table: There are 17 functions, 10 of them correspond to the parameter reduction (by the way, why is that the case?), and all of them are highlighted by the bold font implying the improvement in the MSE. Could you please explain this inconsistency? A3: We apologize for the confusion in Table 2.
The inconsistencies arose because: 1. The text mistakenly stated "16 functions" (now corrected to 17). 2. For a detailed comparison of the parameter counts, we take $G=5, k=3$, and a network structure $[n_0,n_1,\dots,n_L]$. The parameter count of KAN is $\sum_{l=0}^{L-1}(n_l\times n_{l+1})\times (G+k+1) = 9\sum_{l=0}^{L-1}(n_l\times n_{l+1})$, and the parameter count of MetaKAN is $\sum_{l=0}^{L-1}(n_l\times n_{l+1})+(d_{hidden}+1)\times (G+k+1) = \sum_{l=0}^{L-1}(n_l\times n_{l+1})+9(d_{hidden}+1)$. The condition for MetaKAN to have fewer parameters than KAN is $9\sum_{l=0}^{L-1}(n_l\times n_{l+1})>\sum_{l=0}^{L-1}(n_l\times n_{l+1})+9(d_{hidden}+1)$, which implies $d_{hidden} < \frac{8}{9}\sum_{l=0}^{L-1}(n_l\times n_{l+1})-1$. So for some simple network structures (where $\sum_{l=0}^{L-1}(n_l\times n_{l+1})$ is small), there is no parameter reduction. 3. Bold font was incorrectly applied to all MSE improvements, even without parameter reduction. We have now highlighted only cases with both MSE improvement and parameter reduction. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments. Specifically, A1 provides a good example that, basically, MetaKAN works at scale, where KAN is intractable. In light of these responses, I am raising my score.
Summary: The paper proposes a novel memory optimization method for Kolmogorov-Arnold Networks known as MetaKAN. MetaKAN leverages the use of meta-learners—2-layer neural networks that generate B-spline coefficients on-the-fly—to reduce the parameter count of KANs to a level comparable to that of MLPs. Across various experiments, ranging from simple function fitting to deep learning tasks, MetaKAN consistently achieves an average parameter count reduction relative to KAN ranging from 18% to nearly 90%, while maintaining and often exceeding base KAN performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. I reviewed the justification for the theoretical parameter count of MetaKAN. Experimental Designs Or Analyses: Yes. I reviewed the experimental designs presented in Section 4 related to function fitting and various tasks using variations on the base convolutional architecture. Supplementary Material: n/a Relation To Broader Scientific Literature: MetaKAN innovates on the original KAN architecture presented in Liu et al., 2024 to address some of its inherent complexity issues. Various prior approaches have attempted to address these issues by replacing the B-splines utilized in Liu et al., 2024 with various alternatives, such as Chebyshev polynomials (SS et al., 2024), rational functions (Yang & Wang, 2024), wavelets (Bozorgasl & Chen, 2024), and radial functions (Li, 2024). MetaKAN retains the B-spline functions and reduces overall parameter count by generating B-spline coefficients via a meta-learner (inspired by Kong et al., 2020), rather than training those coefficients directly. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: - Addresses part of the problematic complexity of KAN by reducing overall parameter count to a level comparable to that of MLPs.
- Achieves performance comparable to or exceeding that of base KAN variants on a variety of tasks, including symbolic regression, partial differential equation solving, and image classification. - Method is portable to several prominent KAN variants, yielding reduced parameter counts and competitive performance. Weaknesses: - One of the primary motivations stated in the paper is to reduce the training cost of KANs relative to MLPs; however, the paper does not provide sufficient support for its claim that training costs are reduced. While a reduction in parameter count may reduce the overall training time, energy demands, etc., increased computational complexity can offset those reductions, and the paper does not provide sufficient evidence to suggest computational complexity or training time are comparable or reduced. - While the paper identifies in Figure 12 that the theoretical parameter count of MetaKAN is comparable to that of MLP, it does not specify the theoretical parameter count of the modified MetaKAN architecture set forth in Section 3.4 for use in deeper KANs. Given that the parameter count disparity between KAN and MLP is most acute in the deep network context, a discussion of such theoretical complexity is warranted. While the paper does present empirical evidence of parameter count in several deep network experiments to demonstrate the relative parameter efficiency of MetaKAN in relation to KAN, it does not include comparison results against MLP models. - The paper does not address in sufficient detail the impact of the MetaKAN innovations on computational complexity / effective computation time relative to MLP, KAN, or any of the KAN variants. Although the MetaKAN architecture reduces the overall parameter count relative to KAN, it is unclear what the impact of this is on computation time. One of the primary issues with KAN is its training time relative to MLP, and the paper does not address the impact of MetaKAN on this concern.
Other Comments Or Suggestions: - In Section 3.1.2, Figure 3, the term in the bottom left corner of the matrix uses d_out rather than n_(l+1). For consistency with the remainder of the figure and the surrounding discussion, I believe this should be n_(l+1). - In Section 3.2.2, Figure 7, the indexing notation for l is inconsistent with that of Figure 4. In particular, the union ranges from l=1 to l=L-1, which does not align with Figure 4's range from l=0 to l=L-1. I believe Figure 7 should be updated to align with Figure 4. - The parameter counts for MetaKAN provided in Table 1 vs. in Figure 12 are inconsistent. I believe the Table 1 count is accurate, as it is the parameter count of a 2-layer MLP with dimensions d_in = 1, d_hidden, d_out = G + k + 1. Questions For Authors: - What is the computation time of MetaKAN relative to KAN and MLP? You have demonstrated competitive performance with the reduced parameter count in MetaKAN, but it is unclear what the effect of these changes is on computational complexity. - Have you performed any experiments testing the performance of MetaKAN relative to MLP (for example, comparing performance with similar architecture dimensions or parameter counts)? Some discussion on the performance of MetaKAN relative to MLP would be helpful in evaluating its utility in the contexts for which testing has been conducted. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: C1: Complexity comparison and training time between KAN and MetaKAN. A1: We thank the reviewer for the insightful question. Below is a summary of the theoretical complexity. 1. Complexity analysis (for notation, see https://github.com/icmlrebuttal25/Append)

| Model | Total FLOPs |
| ------- | ----------------------------------------- |
| KAN | $F_{act} N_{nodes} + N_{edges} C_{spline} + 2N_{nodes}(K^2 + GK) + 3N_{edges}D_{spline}$ |
| MetaKAN | $F_{act} N_{nodes} + N_{edges} C_{spline} + N_{edges} C_{meta} + (2N_{edges} + 4d_{hidden}) D_{spline}$ |

2. Advantage condition: MetaKAN becomes cheaper when $N_{edges} > 4d_{hidden}$, typically when $N_{edges} \sim 10^4$ or higher. This is due to trading forward-pass overhead for lower backward complexity. In practice, with >4 layers and >500 neurons per layer ($N_{edges} \sim 10^6$), MetaKAN training time becomes comparable to KAN. For deeper networks (8+ layers, width >1000), MetaKAN can be faster due to backward efficiency gains. 3. Empirical comparison

| Hidden Width × Depth | Model | # Trainable Params | Train Time (s) | Peak Mem. (MB) |
| ------------ | ------- | ------------------ | ------------- | ------------- |
| 1024×8 | KAN | 211M | 5396 | 7508 |
| 1024×8 | MetaKAN | 8M | 5161 | 2519 |
| 512×6 | KAN | 45M | 1399 | 4108 |
| 512×6 | MetaKAN | 1.7M | 1435 | 1212 |
| 512×4 | KAN | 31M | 1006 | 2854 |
| 512×4 | MetaKAN | 1.2M | 1060 | 931 |

4. Potential: These results align well with our theoretical FLOPs analysis. Although MetaKAN introduces a modest overhead in the forward pass due to the hypernetwork, it significantly reduces the cost of the backward pass, a major bottleneck when training large models. Furthermore, MetaKAN achieves this with drastically fewer trainable parameters and significantly lower memory usage.
These advantages indicate that MetaKAN is not only efficient in current setups but also holds strong potential for scaling up to even larger KAN models, scenarios where traditional KANs may become impractical due to memory and gradient computation overhead. C2: Performance and training time comparison between MLP and MetaKAN A2: 1. For deeper structures, the theoretical parameter count of MetaKAN scales mainly with the number of edges, and the number of meta-learners adds only a small increase in parameter count.

| Name | # Params. |
| ------------------------------- | ------------------------------------------------------------ |
| MLP | $N_{edges}+\sum_{l=1}^L n_l$ |
| KAN | $N_{edges} \times (G + k + 1)$ |
| MetaKAN (single meta-learner) | $N_{edges} + (d_{\text{hidden}} + 1) \times (G + k + 1)$ |
| MetaKAN (multiple meta-learners) | $N_{edges} + C(d_{\text{hidden}} + 1) \times (G + k + 1)$ |

2. We compare the two models on two tasks: function fitting and image classification. On the function fitting task $f_1(x)=\exp\left(\sum_n(1/n)\sin^2(\pi x/2)\right)$ with $n=1000$, MLP trains faster, but its accuracy is significantly lower. MetaKAN retains the inductive bias of spline-based models, achieving both compactness and high accuracy, making it ideal for tasks requiring interpretable function representations. For image classification (MNIST), MLPs again train faster and achieve slightly better accuracy. However, MetaKAN still maintains strong performance with substantially fewer parameters.
| Task | Model | Structure | # Params | Training Time (s) | Metric | Peak Memory (MB) |
| -------------------- | ------- | ------------------ | -------- | ---------------- | ------ | --------------- |
| Function Fitting | MLP | (1000,3000,3000,1) | 12.7M | 2.53 | 0.7900 | 193.89 |
| | MetaKAN | (1000,512,512,1) | 1.6M | 12.37 | 0.0116 | 372.35 |
| MNIST Classification | MLP | (784,16000,10) | 12.7M | 282.19 | 97.41% | 195.24 |
| | MetaKAN | (784,2048,10) | 1.6M | 717.64 | 97.47% | 2012.32 |
| | MLP | (784,1800×3,10) | 7.9M | 310.55 | 98.11% | 121.33 |
| | MetaKAN | (784,512×3,10) | 0.93M | 593.61 | 97.95% | 640.75 |

3. In summary, while MetaKAN trains slower than MLPs, it retains the core strengths of KANs—strong inductive bias and interpretability—while dramatically reducing parameter count and memory usage compared to KAN. This allows MetaKAN to close the training cost gap with MLPs, offering a practical and scalable alternative that balances efficiency with symbolic inductive biases. --- Rebuttal Comment 1.1: Comment: The additional analysis and results in the rebuttal are very helpful to address my questions. I am raising the score.
Summary: This paper concerns a meta-learning approach to training Kolmogorov Arnold Networks (KANs) that enables a reduction in the number of trainable parameters in KANs that is substantially larger than that of standard deep learning models like MLPs. The meta-learning approach is quite standard; it proceeds by assuming that part of the trainable parameters are outputs of a hypernetwork. This output of the hypernetwork then couples together the activation-function-related parameters, significantly reducing the number of parameters (cubic to quadratic). Numerical benchmarks illustrate that the thus-introduced MetaKAN still achieves good performance while having significantly fewer parameters. Claims And Evidence: The authors claim that MetaKAN requires fewer parameters while achieving similar accuracy. Methods And Evaluation Criteria: The numerical tests are standard benchmarks that allow a straightforward comparison with other KANs (INR-type tasks plus standard ML classification tasks). The setup is reasonable and the results look convincing. Theoretical Claims: I noticed no substantial theoretical claims, other than parameter counts, which appear correct. Experimental Designs Or Analyses: The experiments are standard; I did not notice any serious issues. Supplementary Material: The supplementary material regarding direct functional approximations and PDEs was reviewed. The methods and comparisons are very standard; I did not notice any serious issues. Relation To Broader Scientific Literature: The paper aligns well with recent works in the literature attempting to find low-dimensional structures within DL architectures via meta-learning. I believe this is one of the first papers attempting so for KANs, as far as I am aware. Essential References Not Discussed: More broadly, especially regarding KANs' use in solving PDEs or representing solutions to PDEs, there are other meta-learning approaches that the authors can mention.
For example, there are well-known papers in the field: Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby, "Physics-Informed Neural Networks (PINNs) for Parameterized PDEs: A Metalearning Approach" Shaowu Pan, Steven L. Brunton, J. Nathan Kutz, "Neural implicit flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data" Woojin Cho, Kookjin Lee, Donsub Rim, Noseong Park, "Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks" Other Strengths And Weaknesses: Strengths The paper is well-written and propounds the major points concisely and clearly. The tests seem reasonable. Weaknesses In this reviewer's point of view, the paper does not expand sufficiently upon the implications of the specific meta-learning setup. I understand some of this is due to heuristics (i.e., hindsight: some setups work, some do not) but are there any settings where the use of an MLP arises naturally? For example, the Low-Rank PINNs meta-learning approach has some theoretical connections to conservation laws (D. Rim, G. Welper, "Low Rank Neural Representation of Entropy Solutions"). Other Comments Or Suggestions: Some suggestions: - Figure 3, which takes up significant space, was hard to decipher for this reviewer. Does the diagram on the LHS show the actual activation functions that were learned? Or are they simply for diagram representation? The cosine similarity on the RHS also appears puzzling... if the MLP pre-output hidden state is low-dimensional, it would already be clear that the output parameters feeding into the activations will also be low-rank. I suggest showing the amplitude. Does the sign in the cosine similarity matter here? If not, perhaps showing its magnitude in log-color scale would be better? Questions For Authors: - How small can you make the hypernetwork? How does the architecture constrain the weights? What is the extent to which it can be reduced?
- Is there potential in jointly reducing the coefficients alongside this MetaKAN approach? - Do the latent parameters Z represent anything physical upon training? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: C1: Why use the MLP as the meta-learner? A1: We understand the reviewer's interest in the theoretical motivation and deeper implications of employing an MLP as the hypernetwork, particularly in comparison to works like Low-Rank PINNs where architectural choices may have stronger connections to the underlying problem structure (e.g., conservation laws). We acknowledge that our paper could expand more on this aspect. Our choice of an MLP was primarily driven by several considerations: Firstly, as a universal function approximator, the MLP has been widely adopted in the meta-learning field to learn the common task distribution, as in [1-3]. Secondly, the MLP architecture is relatively simple, computationally efficient, and straightforward to implement and train. [1] HyperNetworks, ICLR 2017. [2] Meta-weight-net, NeurIPS, 2019. [3] Continual learning with hypernetworks, ICLR, 2020. C2: Clarify Figure 3 A2: We apologize for any lack of clarity in Figure 3. To clarify: the LHS does show the actual shapes of activation functions learned by MetaKAN, visualized from the weights generated by the trained hypernetwork. The RHS displays the cosine similarity between these generated spline coefficient vectors to reveal structural similarities in the learned function shapes. We can use the different colors (positive or negative sign) to group the activation functions into different classes and see the compact structure of the learned function family, with only a few function classes. C3: How small can you make the hypernetwork? How does the architecture constrain the weights? What is the extent it can be reduced? A3: We thank the reviewer for these insightful questions regarding the hypernetwork's design and its impact. These three aspects – minimum size, architectural constraints, and the extent of reduction – are closely related and central to the MetaKAN approach. The smallest effective hypernetwork is task-dependent.
On simpler tasks, small hidden dimensions (e.g., 4–16) suffice. More complex tasks typically need larger dimensions (e.g., 32–128) to maintain expressivity. The choice reflects finding a point on the efficiency-performance trade-off for the specific application. The hypernetwork architecture constrains the generated weights by acting as a shared meta-learner for all activation function coefficients. Instead of learning separate parameters, MetaKAN learns this shared rule and a unique, low-dimensional embedding Z for each function, as mentioned in our approach. The hypernetwork takes this small embedding Z as input and uses the learned rule to generate the much larger set of spline coefficients (W). Because all coefficients originate from a common function family, the hypernetwork's hidden dimension restricts the variety and complexity of coefficients that can be produced from the input embedding Z. The extent of parameter reduction achievable by MetaKAN primarily scales with the ratio of the chosen embedding dimension to the original spline coefficient dimension, as the total parameters become dominated by the embeddings in larger networks. As illustrated in the table below, for small networks (e.g., [4,2,2,1]), the hypernetwork's own size, related to its hidden dimension, significantly impacts the total count, making a compact hypernetwork essential for savings. However, for larger networks, increasing the hidden dimension (from 32 to 64) adds negligible parameters (0.1% increase) compared to the vast overall reduction (89%) achieved relative to KAN. Thus, while hypernetwork size matters for small models, the principal reduction factor in practical, larger-scale scenarios is determined by the embedding dimension.
| Structure | Model | # Param |
| ----------- | -------------------- | ------- |
| [4,2,2,1] | KAN | 117 |
| | MetaKAN, d_hidden=4 | 58 |
| | MetaKAN, d_hidden=16 | 166 |
| [784,32,10] | KAN | 228,672 |
| | MetaKAN, d_hidden=32 | 25,705 |
| | MetaKAN, d_hidden=64 | 25,993 |

C4: Improvement with KAN coefficient reduction. A4: We strongly agree that combining MetaKAN with other KAN coefficient reduction techniques holds significant potential. Our generative approach (reducing learned parameters) is largely orthogonal to methods like pruning, quantization, or coefficient sharing (reducing final coefficient complexity or count). Integrating these strategies could lead to further efficiency gains and is a promising avenue for future research. C5: Physical meaning of latent parameters Z. A5: In MetaKAN, the Z embeddings identify the specific activation functions needed per edge. These embeddings organize meaningfully in latent space: nearby Z values produce similar activation shapes. This was verified by comparing similarity matrices of Z and the generated coefficients, which showed strong alignment (as in Figure 3). Thus, Z effectively encodes functional similarity.
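The parameter-count formulas quoted in our earlier responses can be checked with a few lines of arithmetic. This sketch (our own illustrative code, using KAN $= E(G+k+1)$ and MetaKAN $= E + (d_{hidden}+1)(G+k+1)$ with $E$ the edge count) reproduces the [784,32,10] rows of the table above and the advantage condition $d_{hidden} < \frac{8}{9}E - 1$:

```python
# Self-check of the closed-form parameter counts for KAN vs. MetaKAN
# (single meta-learner, G=5, k=3), matching the [784,32,10] table rows.

def edge_count(structure):
    """E = sum_l n_l * n_{l+1} for layer widths `structure`."""
    return sum(a * b for a, b in zip(structure, structure[1:]))

def kan_params(structure, G=5, k=3):
    # One (G+k+1)-coefficient spline per edge.
    return edge_count(structure) * (G + k + 1)

def metakan_params(structure, d_hidden, G=5, k=3):
    # One embedding per edge plus a shared meta-learner of hidden width d_hidden.
    return edge_count(structure) + (d_hidden + 1) * (G + k + 1)

structure = [784, 32, 10]
assert kan_params(structure) == 228_672
assert metakan_params(structure, d_hidden=32) == 25_705
assert metakan_params(structure, d_hidden=64) == 25_993

# Advantage condition from A3 above: MetaKAN has fewer parameters than KAN
# exactly when d_hidden < (8/9) * E - 1 (for G=5, k=3).
E = edge_count(structure)
assert metakan_params(structure, 32) < kan_params(structure)
assert 32 < (8 / 9) * E - 1
```

For very small structures the meta-learner's own parameters dominate, which is why the [4,2,2,1] rows above show no reduction at larger hidden widths.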
Copilot Arena: A Platform for Code LLM Evaluation in the Wild
Accept (poster)
Summary: This paper presents EvalX, a platform for evaluating coding LLMs in real-world environments. Integrated into developers' IDEs, it collects user preferences on code completions. Unlike static benchmarks, EvalX provides real coding tasks and optimizes latency. Findings show model rankings differ from traditional evaluations, highlighting real-world coding insights. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. By collecting user preferences, the authors evaluated the models. Supplementary Material: Yes. Authors uploaded their platform source code and analysis code. Relation To Broader Scientific Literature: This work is related to the evaluation of code generation for large models. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. This work proposes a novel method for LLM code evaluation, which collects user preferences for models through a VS Code extension. 2. Compared with static code evaluation benchmarks, this method can better reflect users' preferences for LLMs in real scenarios. **Weaknesses** 1. The evaluation metric of the model is relatively simple and can only reflect user preferences. 2. Although the benchmark is multilingual, the uneven use of programming languages by users may result in inaccurate benchmarks for less common languages. Other Comments Or Suggestions: No. Questions For Authors: Can this method include evaluations of some open-source models? Doing so seems to negatively impact the user experience. How do you think you should balance user experience with incorporating more models into the evaluation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your helpful comments. We address your comments below: *[Weakness 1: The evaluation metric of the model is relatively simple and can only reflect user preferences.]* - There is an extensive list of existing literature that follows this paradigm (see the first paragraph of the related work section). Despite its simplicity, it has proven to be an effective way to have humans-in-the-loop to evaluate models. - However, the data we collect enables other forms of evaluation. For example, by utilizing the snapshots from a user’s code trajectory, we can analyze the long-term impact of each code completion. This is a direction we are actively pursuing as future work. *[Weakness 2: Although the benchmark is multilingual, the uneven use of programming languages by users may result in inaccurate benchmarks for less common languages.]* - We agree with your concern but note that we report model performance aggregating across languages, rather than performance on an individual language. - Your comment prompted us to do further analysis into the distribution across languages. We found that even when counting only languages with 50+ samples, we still have 23 programming languages. This is significant compared to previous static benchmarks (e.g., those mentioned in Table 1). Please see the following table for a more thorough distribution of our programming languages: - We propose to add this table and modify the writing. For example, in our data analysis (Section 5.1), we can explicitly discuss how the data is not evenly distributed over all 103 languages, but there is a core set of languages for which there are a substantial number of votes. We believe these changes will help reduce any misinterpretations of our results on multi-lingual programming languages.
| Vote Count | # Programming Languages |
|---------------|----------------------|
| 5 | 65 |
| 10 | 45 |
| 25 | 31 |
| 50 | 23 |
| 100 | 17 |

*[Question 1: Can this method include evaluations of some open-source models? How do you think you should balance user experience with incorporating more models into the evaluation?]* - Yes, EvalX can include open-source models! In fact, we reported results in our submission on 4 open-source models from multiple organizations (e.g., Llama 70b [1], Llama 405b [2], Qwen32b Coder [3], Codestral [4]). Since EvalX is an ongoing data collection effort, we continue to add open-source models to our platform. - However, you do raise a great point that we will clarify in our revision. We don’t anticipate being able to evaluate all models using EvalX because we do not want to impact user experience negatively. One way to operationalize this is to select models that perform well on existing benchmarks, indicating they will be usable in practical settings. This is what we did when selecting models for this work. [1] https://huggingface.co/meta-llama/Llama-3.1-70B [2] https://huggingface.co/meta-llama/Llama-3.1-405B [3] https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct [4] https://huggingface.co/mistralai/Codestral-22B-v0.1
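For readers unfamiliar with how pairwise votes like those above become a leaderboard: arena-style platforms typically fit a Bradley-Terry model, where each model gets a latent strength and P(i beats j) = sigmoid(s_i - s_j). The sketch below is a minimal illustration with made-up votes, not EvalX's actual aggregation pipeline.

```python
import math

# Minimal Bradley-Terry fit over pairwise preference votes. The vote data
# below is invented for illustration only.
votes = [  # (winner, loser) pairs from hypothetical head-to-head completions
    ("model_a", "model_b"), ("model_a", "model_b"), ("model_b", "model_a"),
    ("model_a", "model_c"), ("model_c", "model_b"), ("model_a", "model_c"),
]

models = sorted({m for pair in votes for m in pair})
scores = {m: 0.0 for m in models}  # log-strengths, initialized equal

# Gradient ascent on the Bradley-Terry log-likelihood:
# P(winner beats loser) = sigmoid(s_winner - s_loser).
for _ in range(2000):
    grad = {m: 0.0 for m in models}
    for winner, loser in votes:
        p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
        grad[winner] += 1.0 - p_win
        grad[loser] -= 1.0 - p_win
    for m in models:
        scores[m] += 0.1 * grad[m]

leaderboard = sorted(models, key=lambda m: -scores[m])
```

Because scores are fit jointly, a model's rank reflects the strength of its opponents, not just its raw win rate, which is one reason arena rankings can diverge from static benchmark orderings.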
Summary: This paper discusses EvalX, a system deployed in the wild to gather human preferences regarding code. It constructs a leaderboard based on user preferences and identifies notable differences compared to existing static benchmarks and human preference leaderboards. By analyzing EvalX’s diverse and unique data distribution, this study derives new insights into user preferences for code. ## Update after rebuttal The contribution is ok for me. This paper is acceptable. Claims And Evidence: Well supported. Methods And Evaluation Criteria: Yes. It's reasonable. Theoretical Claims: N/A Experimental Designs Or Analyses: The soundness is pretty good. Supplementary Material: All the supplementary material. Relation To Broader Scientific Literature: Provides a dataset collected in the real world. Essential References Not Discussed: No Other Strengths And Weaknesses: I have some concerns regarding the scope of this paper. While the deliverables are well-suited for the community, the main methodology appears to offer limited contribution to the ML community. Despite these concerns, I am inclined to accept the paper. Other Comments Or Suggestions: I recommend rephrasing Section 2.3, as the organization of the entire subsection is difficult to follow. Additionally, sentences like "we use offline datasets to improve chat models’ infilling capabilities" are not easy to comprehend. Could you clarify whether the offline datasets are used for tuning the models? The observation that "smaller models seem to outperform in other static benchmarks compared to our leaderboard" might be attributed to the FiM task involved in the evaluation. I suggest further clarification of the underlying causes. Questions For Authors: How do you determine whether a completion is a FiM task? Although there might be a suffix, the completion may not be related to it. Classifying it as a FiM task affects the evaluation of model performance.
How do you categorize the domain of the completion, such as frontend or backend? Is it particularly challenging to further scale the collected data in terms of increasing the data size for a specific language? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful suggestions. We address the comments below:

*[Weakness 1: While the deliverables are well-suited for the community, the main methodology appears to offer limited contribution to this ML community. Despite these concerns, I am inclined to accept the paper.]*

We thank the reviewer for advocating for accepting our work. We believe there are multiple reasons why EvalX is a timely and important contribution to the ML community. Since the final version allows an additional page, we propose to add a Section 5.3 to explicitly summarize the insights and takeaways for researchers building new coding assistants:

- **Different models excel in different settings.** Claude 3.5 Sonnet performs better at frontend/backend tasks, while Gemini and Deepseek excel at longer contexts. In contrast, changing models for different programming languages seems unnecessary. More generally, a routing approach based on the input code context is an interesting direction for future research.
- **Models should be trained and evaluated on varying code structures.** We observed a variety of tasks and code structures on our platform (e.g., FiM, docstrings, and inline comments). Future benchmarks and training schemes should explicitly account for variations in code structures.
- **Models should be trained on human preferences for code.** Given the clear gap between our leaderboard and static benchmarks, models should be trained on human preferences (of which our dataset is one option). Recent approaches have considered this, but still have significant gaps (e.g., Qwen-2.5 Coder trains on LLM-as-a-Judge preferences as a proxy for human preferences). By releasing EvalX data, our work can help to address the need for human preference data in real-world coding contexts.
*[Suggestion 1: I recommend rephrasing Section 2.3….clarify whether the offline datasets are used for tuning the models?]* - As discussed in L166 (right), we do not fine-tune models, but rather only use the dataset for evaluation and tuning our prompts (L195, left). - However, we agree that Section 2.3 can be further clarified! To this end, we will also significantly expand on existing details in Appendix A and release all corresponding code for these experiments. - Altogether, with changes to Section 2.3 writing, additional details in Appendix A, and full release of code, we believe this will reduce any confusion on our prompting methodology. *[Suggestion 2: explain FiM task relation to "smaller models seem to outperform in other static benchmarks compared to our leaderboard".]* - In Section 5.2, our analysis indicates that FiM may not be the main contributor to model performance. While direct comparisons between models with and without FiM training would be ideal, most providers don't disclose their training paradigms. Our ablation using the Deepseek API (Appendix E, Table 6) demonstrates that FiM as an input format doesn't significantly impact performance. We'll clarify this nuance in our revised text. *[Question 1: How do you determine whether a completion is a FiM task?]* - To answer this, we conducted further analysis on whether the suffix is related to the completion. We broadly categorized our suffixes as either 1) inline (in which case it is clearly related) or 2) on a new line. Suffixes on the next line could be in scope (i.e., in the same function or loop as the last line of the prefix) or out of scope. - As shown in the table below, the vast majority (~80%) of our suffixes are related to the completion. We will include this analysis in the Appendix. 
| Formatting | Scope        | Percentage |
|------------|--------------|------------|
| Newline    | Out of Scope | 20.4%      |
| Newline    | In Scope     | 22.2%      |
| Inline     | In Scope     | 57.3%      |

*[Question 2: How do you categorize the domain of the completion, such as frontend or backend?]*

We follow a multi-step process detailed in the Appendix (starting from L880), which we summarize here:

- First, we ask a model (e.g., GPT-4o-mini) to summarize all code contexts into short one-sentence descriptions.
- Next, we prompt a model (e.g., GPT-4o) to cluster all one-sentence descriptions.
- Finally, we provide the full code context and ask the model to categorize the context given the aforementioned clusters.

We additionally note that two authors of the work verified that the categorizations were sensible before scaling.

*[Question 3: Is it particularly challenging to further scale the collected data ... for a specific language?]*

- Since we are collecting data in the wild, we cannot directly control what languages we collect. However, we strive to grow the user base of EvalX to reach broader audiences, which will lead to more data in additional programming and natural languages.

---

Rebuttal Comment 1.1: Comment: I find the response satisfactory. Please incorporate the details into the accepted version. I will not change my rating.
Summary: The paper presents EvalX, a platform for evaluating coding capabilities of large language models (LLMs) in real-world settings. Unlike existing evaluations that rely on synthetic benchmarks or chat-based interactions, EvalX integrates directly into developers' VSCode environments to collect authentic user preferences on code completion pairs. Key contributions include: (1) a novel interface for comparing model outputs directly in the IDE, (2) a sampling strategy to reduce latency, (3) a prompting scheme to enable FiM code completion functionality, and (4) insights into user preferences across different programming contexts. The authors collected over 11k pairwise judgments across 10 models and found differences between their leaderboard and existing static benchmarks. Claims And Evidence: Most claims are reasonably supported, with several issues: - The position bias in the interface design is concerning - with 86% of users selecting the first completion (requiring just Tab vs. Shift+Tab). While the authors acknowledge this bias and analyze decision times (median 6s for first completion, 9s for second), they don't sufficiently address how this fundamental asymmetry might invalidate preference data. Lots of the preference data may reflect convenience rather than quality assessment, which may lower the data quality. - While the authors claim supporting 103 programming languages, the actual distribution is skewed. Python alone accounts for 6000+ samples, while many languages have minimal representation. Methods And Evaluation Criteria: The methods employed are novel and address a real need in the field. The in-the-wild evaluation approach is valuable, and the Bradley-Terry model for ranking is appropriate. 
- While optimizing for lower latency improves user experience, the sampling strategy potentially undersamples specific model pairs. Is there any way to quantify the potential impact on the reliability of specific model pair comparisons (e.g., will the lack of certain pairs reduce reliability)?

Theoretical Claims: The paper appropriately focuses on empirical findings rather than theoretical claims. The mathematical formulation of the sampling strategy (equations 1 and 2) is clearly presented, and the Bradley-Terry model application is sound.

Experimental Designs Or Analyses: The user study design shows impressive scale and ecological validity.

Supplementary Material: I reviewed the appendix, but I did not "carefully" examine the supplementary code materials in detail for reproducibility.

Relation To Broader Scientific Literature: The work makes a significant contribution by bridging static benchmarks and human preference evaluation platforms, providing super valuable human preference data (manually labelled). It extends evaluation methodologies from chat-based platforms like Chatbot Arena.

Essential References Not Discussed: I think this paper might also be related to some repo-level completion evaluation works, e.g.:
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023)
- RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024)
- REPOEXEC: Evaluate Code Generation with a Repository-Level Executable Benchmark (seems still under review)
etc. It seems hard to compare directly against these, but they may be worth mentioning.

Other Strengths And Weaknesses:
### Strengths:
- The "Snip-It" method enabling non-FiM models to perform FiM tasks is an innovative contribution beyond the evaluation platform itself.
- The analysis of different factors influencing preferences (Figure 7) provides valuable insights for future model development.
- Extensive ablation studies.
- Valuable dataset ### Weakness: - see above Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful suggestions. We address your comments below:

*[Claim + Evidence 1: The position bias in the interface design is concerning. The preference data may reflect convenience rather than quality assessment, which may lower the data quality.]*

- Since we randomize the ordering of the two generated responses (L149), the position bias affects **all** models equally. This means that the overall **quality** of the assessment is not impacted. However, although the **quality** of the assessment is not impacted, the **efficiency** of our platform is impacted. In short, since there’s a significant position bias, we need more votes to shrink the confidence intervals enough to draw reliable conclusions. Recall, we use logistic regression to estimate Bradley-Terry coefficients and bootstrap the samples to build confidence intervals (which allow us to tell whether models are statistically significantly different). Overall, despite the efficiency reduction from positional bias, EvalX collects a sufficient number of votes to yield tight confidence intervals on model comparisons.

*[Claim + Evidence 2: While the authors claim supporting 103 programming languages, the actual distribution is skewed.]*

- We agree with your concern but note that we report model performance *aggregating* across languages, rather than performance on an individual language.
- Your comment prompted us to do further analysis of the distribution across languages. We found that even accounting only for languages with over 50 votes, we still have 23 programming languages. This is significant compared to previous static benchmarks (e.g., those mentioned in Table 1). Please see the following table for a more thorough distribution of our programming languages:
- We propose to add this table and modify the writing.
For example, in our data analysis (Section 5.1), we can explicitly discuss how the data is not evenly distributed over all 103 languages, but there is a core set of languages for which there are a substantial number of votes. We believe these changes will help reduce any misinterpretations of our results on multi-lingual programming and natural languages.

| Vote Count | # Programming Languages |
|------------|-------------------------|
| 5          | 65                      |
| 10         | 45                      |
| 25         | 31                      |
| 50         | 23                      |
| 100        | 17                      |

*[Method 1: Is there any way to quantify the potential impact on the reliability of specific model pair comparisons?]*

- In the analysis conducted in the original submission, we took a few steps to mitigate this impact.
- First, when computing Bradley-Terry coefficients, we use an L2 regularization term to prevent overfitting and bias towards models that have received more votes.
- Second, we conduct a statistical bootstrap of the leaderboard so we can obtain confidence intervals around the estimated coefficients, which improves the reliability of results.
- Finally, we ensure that we have reasonable coverage across all model pairs (e.g., >150 votes per pair), so votes are not too sparse across some pairs.
- Inspired by the reviewer's suggestion, we did additional simulations using our data. We upsampled model pairs that had fewer comparisons and downsampled model pairs that had more comparisons so that the difference between the most and least voted pair is within 10%. We find that the rankings remain **identical** to those we report in the original submission. This provides additional confidence that we collected sufficient votes across all pairs.
- Overall, we appreciate this suggestion and will highlight it more prominently as a point of consideration for future evaluation platforms.

Finally, thank you for the helpful pointers to additional evaluation benchmarks!
We agree it would be challenging to include them in our comparisons, but we will definitely mention them in the related work and as opportunities for extending EvalX.
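The estimation pipeline the authors describe in this rebuttal (L2-regularized logistic regression for Bradley-Terry coefficients, plus a statistical bootstrap for confidence intervals) can be sketched roughly as follows. This is an illustrative NumPy reimplementation; the function names and hyperparameters (`l2`, `lr`, `iters`) are hypothetical choices, not the authors' actual code:

```python
import numpy as np

def bt_strengths(battles, n_models, l2=0.1, lr=2.0, iters=3000):
    """Fit Bradley-Terry strengths by gradient ascent on the
    L2-regularized log-likelihood (equivalent to logistic regression
    on winner/loser indicator features)."""
    beta = np.zeros(n_models)
    winners = np.array([w for w, _ in battles])
    losers = np.array([l for _, l in battles])
    for _ in range(iters):
        # P(winner beats loser) under the current strengths
        p = 1.0 / (1.0 + np.exp(beta[losers] - beta[winners]))
        grad = np.zeros(n_models)
        np.add.at(grad, winners, 1.0 - p)
        np.add.at(grad, losers, -(1.0 - p))
        grad -= l2 * beta  # regularization limits bias toward heavily voted models
        beta += lr * grad / len(battles)
    return beta

def bootstrap_intervals(battles, n_models, n_boot=100, alpha=0.05, seed=0):
    """Percentile bootstrap: refit on resampled battles to get CIs."""
    rng = np.random.default_rng(seed)
    battles = list(battles)
    fits = np.stack([
        bt_strengths([battles[i] for i in
                      rng.integers(len(battles), size=len(battles))], n_models)
        for _ in range(n_boot)
    ])
    return (np.percentile(fits, 100 * alpha / 2, axis=0),
            np.percentile(fits, 100 * (1 - alpha / 2), axis=0))
```

Non-overlapping bootstrap intervals between two models indicate a statistically meaningful gap, mirroring how the leaderboard described above separates models despite unevenly distributed votes.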
Summary: Authors introduce EvalX, a platform to compare the effectiveness of different LLMs for the use case of coding assistants. Their deployed platform has already collected over 11000 responses on comparisons between 10 different models. The model ranking presented from these results gives new insights on user preferences under different tasks that differ from the previous benchmarks. Claims And Evidence: Authors advocate the need for a coding-specialized benchmark beyond existing works like Chatbot Arena. While I agree with this claim, I am afraid their exclusive focus on code completion with the FIM/L2R setup fails to capture the entire spectrum of AI tools used in the IDE. Most leading developer AI assistants (Github Copilot, Cursor) today provide the LLM interface in multiple formats beyond standard L2R or FIM completion, particularly the chat functionality (where for instance users can understand a repo by including it as a context). I'm not certain of the relative usage of such features (code completion vs chat), and a discussion on this aspect can immensely benefit this paper. But I find the sole emphasis on code completion to be a limitation of this work, as this does not shed light on the effectiveness of a candidate model in the chat setting. In my assessment, this work has immense value for the research community, but it is crucial for the paper to accurately convey what this benchmark captures (code completion capabilities), and what it does not (chat-style functionalities). This could perhaps be accomplished by specifically referring to the capabilities being assessed rather than claiming holistic real-world evaluation of Code LLMs. This is only briefly expressed towards the end of the paper (Section 7) currently. Secondly, authors propose that LLM evaluation can be entirely based on human preference evaluation, but this may not necessarily be true.
Say model A gives a more readable but less performant piece of code than model B, and the user selects model A's output under EvalX - does that necessarily reflect the superiority of model A over B? User preference may be biased towards readability while ignoring aspects like efficiency (runtime and space), maintainability, and security vulnerabilities of generated code. While static benchmarks do not represent real-world use cases well enough, they can be adapted to capture these attributes of LLM-generated code, and they offer significant value in that regard, which is difficult to record via human preferences. EvalX's fine-grained evaluation (user feedback on specific completions) also ignores aspects like the long-term impact of LLMs on developer productivity (e.g. by what factor is the time required to complete a project reduced with a specific model behind a coding assistant).

Methods And Evaluation Criteria: In my assessment the proposed methods and evaluation make sense for the problem at hand.

Theoretical Claims: NA.

Experimental Designs Or Analyses:
- Given that users are less likely to vote when experiencing high latency, authors have optimized model pair sampling to reduce the observed latency, with some regularization to stay close to uniform.
- The data is skewed towards faster models this way - does that not impact the quality of the resulting dataset? And the observations you can draw from it?
- Related questions:
  - How can one quantify the impact of this model sampling scheme?
  - What'd be the difference in response/ranking rate with your model sampling vs uniform sampling?
  - How would tweaking $\tau$ impact coverage and user responsiveness?

Supplementary Material: No.

Relation To Broader Scientific Literature: Several prior works (Chatbot Arena, LiveBench, BigCodeBench) have focused on evaluation of LLMs in different contexts including for code.
The proposed EvalX benchmark attempts to instead capture real world in-IDE use cases when comparing LLMs, which addresses significant gaps in prior evaluation that relied on a fixed set of problems and the LLM's ability to solve these or were only focused on chat based use-cases. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths - Very promising and convincing approach on comparing and evaluating LLMs for developer assistance that addresses severe limitations of prior work. The tasks involved in EvalX are by design realistic and represent several diverse aspects of real-world programming use-cases. - Authors have designed effective tooling for user preference collection while ensuring several biases are mitigated e.g. top-bottom random swapping; latency masking of the slower model. - Several ethical considerations have been implemented by the authors given the potentially sensitive nature of data they're collecting from users of this study (Section 3). I'd encourage them to practice utmost care in releasing this dataset. Other Comments Or Suggestions: - Other benchmarks could be described with a little more detail to help readers understand how EvalX fits into the broader context of LLM evaluation - Lack of insights/takeaways for model training or prompting - Would you advocate specialized (perhaps smaller models) over general (large models) for developers? - I believe the paper could immensely benefit from insights or recommendations for users of coding assistants and model builders (of LLMs powering AI assistants) - Minor: - Chatbot Arena citation in Section 1 - Takeaways not clear in Section 1 Questions For Authors: - What would you assess as the primary user motivation to participate? Free generations from commercial LLMs in IDE? - Do users get to pick the candidate models A and B? - What does fidelity mean in the context of Fig 2's caption? - Section 2.3: Any data or citation to support the first sentence? 
- Notation could be clearer in Eqn 1 and 2: - What does $l$ mean? - It is perhaps easier for a reader if you use the notation $l_\text{max}$ instead of using the subscript for $F$ - as the maximum operator is applied on the observed latencies of the 2 models. - Did you mean $F$ to be the CDF of latency? $F(x) = P(X < x)$, I don't follow how the CDF of maximum latency would make sense in this context. - What is the cost incurred so far with the ~11k preferences collected by EvalX or whatever is the total number of completions offered to users? - When users opt out of data collection - do you still log model comparisons but not their pref/suff/completion code? - Line 346: How is each element of this matrix defined and what range can it take? I didn't follow how it can $\in R$ Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review and appreciation of our work. We aim to address your comments below:

*[Claim + Evidence 1: focus on code completions]*

- We agree that the paper already provides value to the research community and will be more precise about the scope of our work early on (e.g., we will revise L37 to state “... coding assistants, **specifically focusing on their ability to generate code completions**”).
- While there are many ways to interact with AI programming assistants, code completions are one of the most frequent use cases, as found by https://dl.acm.org/doi/10.1145/3597503.3608128.
- Further, EvalX naturally leads to many interesting directions for research for the broader AI and software engineering community, which we discuss in Section 7. For example, EvalX can be extended to include more interaction modes beyond code completion. Since our submission, we have already added a prompt-based editing feature. The reviewer’s suggestion of long-term impact is another direction that can be studied using data collected from EvalX.

*[Claim + Evidence 2: preferences can be biased, compared to static benchmarks which can be nuanced]*

- We agree, and believe our work is *complementary* to existing approaches and does not seek to replace them—both are necessary for a holistic view of LLM code evaluation. We discuss this in Related Work (L417), but will update the Introduction to be clearer.
- We also speculate that building evaluations in the IDE may mitigate the reviewer’s concern for biases towards readability. Since our platform is embedded in a user’s **real development environment**, users are likely to prefer “useful” generations (readability being only one aspect of usefulness) as they would when writing code with Github Copilot.

*[Experimental design 1: Does the model distribution (skewed towards latency) impact the quality of data? Impact of $\tau$?]*

- In the analysis conducted in the original submission, we took a few steps to mitigate this impact.
- First, when computing Bradley-Terry coefficients, we use an L2 regularization term to prevent overfitting and bias towards models that have received more votes.
- Second, we conduct a statistical bootstrap of the leaderboard so we can obtain confidence intervals around the estimated coefficients, improving the reliability of our results.
- Finally, we ensure that we have reasonable coverage across all model pairs (e.g., $>150$ votes per pair), so votes are not too sparse across some pairs.
- Inspired by the reviewer's suggestion, we did additional simulations using our data. We upsampled and downsampled model pairs that had fewer and more comparisons respectively so that the difference between the most and least voted pair is within 10%. We find that the rankings remain **identical** to those we report in the original submission. This provides additional confidence that we collected sufficient votes across all pairs.
- As $\tau$ increases, our coverage becomes more evenly distributed (uniform at $\tau \rightarrow \infty$).

*[Suggestion 1: More detail about other benchmarks]*

While we briefly cover metrics for each benchmark in Table 1, we will additionally provide a short summary of each benchmark in Section 4.2.

*[Suggestion 2: insights/takeaways for model training or prompting]*

Since the final version allows an additional page, we will add a new Section 5.3, which will explicitly summarize the insights and takeaways for researchers building new coding assistants. Due to character limits in the response, please see our first response to Reviewer 3 (**1UGk**) for additional details. Additionally, if the reviewer has any suggestions, we would be more than happy to incorporate them!
*Questions* (not repeated due to space):

- We believe there are three main motivations: 1) free generations, 2) overall interest in participating in ML research, and 3) GitHub Copilot or alternatives may be unavailable in some countries.
- Models are randomly selected (as discussed in L149).
- Fidelity refers to the quality of the response, largely in terms of formatting. We will clarify the caption.
- We observe this in our own data! Figure 6 shows that 65% of votes are on FiM tasks.
- $l$ is the maximum latency between models $i$ and $j$ at one point. $F_{\max}(l; i, j)$ is the CDF of these latencies: $F_{\max}(l; i, j) = P(\max(L_i, L_j) \le l)$, where $L_i$ and $L_j$ are random variables for the latency of $i$ and $j$. We agree that $l_{\max}$ is a better notation and will fix this.
- The total cost is approximately 5k USD.
- Correct, we always log model comparisons. Users are required to accept this before using the platform.
- Each element of the matrix is the win-rate between two models over all their battles. The win-rate would be $\in [0, 1]$. We will clarify this in the paper by saying $W \in \mathbb{R}^{M \times M}$ with $W_{ij} \in [0, 1]$ for all $i, j$.
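To make the two notations in this rebuttal concrete, here is a small NumPy sketch of an empirical estimate of the pairwise maximum-latency CDF and of the win-rate matrix $W$. The helper names and the 0.5 default for unobserved pairs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def f_max_empirical(lat_i, lat_j, l):
    """Empirical estimate of F_max(l; i, j) = P(max(L_i, L_j) <= l),
    given paired latency observations for models i and j."""
    m = np.maximum(np.asarray(lat_i), np.asarray(lat_j))
    return float(np.mean(m <= l))

def win_rate_matrix(battles, n_models):
    """W[i, j] = fraction of i-vs-j battles won by i (in [0, 1]).
    Pairs with no recorded battles default to 0.5 here (an assumption)."""
    wins = np.zeros((n_models, n_models))
    for winner, loser in battles:
        wins[winner, loser] += 1
    totals = wins + wins.T  # battles played between each pair
    W = np.full((n_models, n_models), 0.5)
    mask = totals > 0
    W[mask] = wins[mask] / totals[mask]
    return W
```

For instance, with paired latencies `[1.0, 2.0, 3.0]` and `[2.0, 1.0, 5.0]`, the pairwise maxima are `[2.0, 2.0, 5.0]`, so the empirical CDF at `l = 2.0` is 2/3.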
One-dimensional Path Convolution
Accept (poster)
Summary: The authors propose a lighter alternative to 2D convolutional networks. They use a topological 1D path across all the pixels based on Hilbert or Z-order paths, with more than one such path at each layer to make up for locality. Beyond the convolution layer, they have several other building blocks such as inter- and intra-path channel attention, activations, and more. They show results comparable to ResNets that have more parameters. They have implemented their layers in CUDA. Throughout the fields below, strengths, weaknesses, and questions will be marked by **(S), (W), (Q)**.

## update after rebuttal

I thank the authors for their response. The authors acknowledged some of the concerns I raised. For one concern they provided some scaling evidence, applying their method on ImageNet resolution. While not fully convinced that generally scaling is practical, I'm glad they provided this example and at the very least showed that there are ways to adjust. I maintain my opinion that this paper is novel and has significant scientific contribution, but not a method ready for actual practical use (which is fine). I think this paper should be accepted and I maintain my original favorable score.

Claims And Evidence: **(S1)** As far as I could see, the claims made are backed by existing theory, proofs, or empirical evidence.

Methods And Evaluation Criteria: **(W1)** There is no evidence of the ability to scale up. I am not convinced by the statement in lines 344-345: "We use low-resolution datasets as they amplify locality preservation issues...". In practice, many great ideas can be demonstrated on lighter data, and it is fine if an idea is not yet ripe for bigger scales. But it should be laid out that the method has not currently been shown to be of practical use. **(S2)** The results of the experiments are well specified, fairly testing various setups. **(S3)** There is an ablation study that sheds light on the need for the positional encoding and the PACA.
**(W2)** However, it feels like the ablation is not testing the core elements. An experiment that would have been good is taking an existing CNN and replacing its layers with the 1D layers. It could be that this doesn't make sense without additional building blocks, in which case these need to be considered as part of the basic layer. Another thing that would have been enlightening is, if possible, to see the effect of some of the other introduced blocks in baseline CNNs.

Theoretical Claims: Theoretical claims are backed.

Experimental Designs Or Analyses: See Methods And Evaluation Criteria.

Supplementary Material: The appendix contains the needed details, examples, and proofs. As far as I'm concerned, it covers everything that is needed.

Relation To Broader Scientific Literature: **(S4)** Details, examples, and proofs are well provided in the appendix.

Essential References Not Discussed: Couldn't find any.

Other Strengths And Weaknesses: **(S5)** This is a refreshing original idea, bringing allegedly unrelated mathematical elements into improving deep learning. It is novel and original, creative and interesting. I find that it has significant scientific value. **(W3)** Having fewer parameters in the final model is not the most important aspect of efficiency. The two more important elements are wall-clock time and the memory size of the activations. The former seems to be very similar to baselines, and the latter is not mentioned. It is important to know that there is no significant activation-memory cost, because this is the true practical obstacle to using models in different environments. The parameter count is usually important because of its correlation with the other two elements and because of its theoretical value regarding expressiveness. I think that a reduction to 1/3 of the parameters is not a super-impressive result.
It is less than an order of magnitude, and it is hard to judge whether other design choices contribute more than the main idea, perhaps ones that could also be combined with regular CNNs.

Other Comments Or Suggestions: This paper has a novel and significant contribution. This is the type of work we should see more of: out of the mainstream, daring, and using math to creatively solve problems. I think this paper should be accepted and fits the venue. I believe it has the potential of opening a door to a new line of research. I do have to say that I think the authors made a mistake branding their work as a working, useful method. There is not enough evidence to back this up. This is more of a first-step kind of work, a POC perhaps. A respectable, elegant POC, but not yet something we can genuinely offer practitioners as a method to use.

Questions For Authors: I'm wondering about striding freedom. It seems that having this fractal structure comes with a ratio by which the fractal repeats. This suggests that one cannot just choose an arbitrary stride for the 1D convolutions and still have it locally make sense. So, for example, stride 3 seems like it couldn't work. Thanks!

Code Of Conduct: Affirmed.

Overall Recommendation: 4
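As background for the Hilbert paths discussed in this review, the standard index-to-coordinate conversion for a Hilbert curve (the classic `d2xy` routine, shown here as a generic sketch rather than the authors' CUDA implementation) maps a 1D position along the curve to 2D pixel coordinates, so consecutive inputs to a 1D convolution are spatially adjacent pixels:

```python
def hilbert_d2xy(n, d):
    """Map index d in [0, n*n) along a Hilbert curve to (x, y),
    for a grid of side n (n must be a power of 2)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant when moving horizontally
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_order(n):
    """Flatten an n-by-n image into 1D Hilbert path order."""
    return [hilbert_d2xy(n, d) for d in range(n * n)]
```

Every pair of consecutive path positions differs by exactly one pixel step, which is the locality-preservation property the review refers to; a Z-order (Morton) path can be built analogously via bit interleaving, though it loses this strict adjacency at quadrant boundaries.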
Rebuttal 1: Rebuttal: We are profoundly grateful for your exceptionally insightful and supportive evaluation of our work. Such constructive affirmation not only underscores the significance of this research within the current paradigm but also inspires our dedication to advancing this direction toward practical applications. In addition, we would like to thank you for clearly marking strengths, weaknesses, and questions to facilitate our communication. We sincerely appreciate your thoughtful recognition of the novelty of this paper. To further strengthen the discussion and address your valuable concerns, we now elaborate on the limitations raised.

## Weakness

> **(W1)** There is no evidence of availability to scale up...

We conducted an additional experiment applying the path-shifting method to high-resolution images, where $s=224$ follows the common setting of ImageNet. We find that a small range of $p$ is enough to find $\mathbb{C}$ with a cardinality of 3. Following the structure of Table 2, we have

| Path    | $s$ | $\mathbb{C}^{\ast}$ |
|---------|-----|---------------------|
|         |     | {13, [10, 5], 270}  |
| Hilbert | 224 | {10, [5, 0], 270}   |
|         |     | {7, [7, 0], 90}     |
|         |     | {0, [0, 0], 0}      |
| Z-order | 224 | {32, [11, 14], 180} |
|         |     | {32, [16, 14], 0}   |

Due to space limitations, please refer to the first section in the response to reviewer z3TX for details. In terms of experiments on normal-resolution datasets and performance for downstream tasks, we indeed do not have sufficient results at the current stage and will address this limitation in our future work.

> **(W2)** ...An experiment that would have been good is taking an existing CNN and replacing its layers with the 1D layers... Another thing ...see the effect of some of the other introduced blocks in baseline CNNs.

This is a very inspirational suggestion. This paper seeks to validate the feasibility of building a CNN using only 1D kernels on space-filling shifted paths.
Hence, the current experiments, including ablation studies, focus on the PathConv model design and path selection. We admit that the current experimental design is a bit rough. Replacing layers in 2D backbones with 1D layers, as suggested, is a feasible way to identify the critical block design. We intend to draw on this approach in our next research. We express sincere appreciation for this suggestion.

> **(W3)** ...The two more important elements are wall clock time and the memory size of the activations...it is hard to judge whether some other design choices help more than the main story and maybe such that could be combined to regular CNNs.

We totally agree that memory consumption represents a key consideration with respect to efficiency and edge computing. In fact, we plan to validate the efficiency by implementing and deploying the PathConv model on hardware platforms like FPGAs. As path sampling is read-only, substantial optimization opportunities exist at the hardware level. Subsequently, memory consumption comparisons can be conducted directly on the board, closely approximating real-world deployment conditions. Regarding the next comment on whether alternative design choices may help, further exploration is indeed necessary. As your previous comment (W2) suggests, a viable approach might be to substitute layers and find the decisive block(s). We also wonder whether some different design principles would be more appropriate than adapting principles from existing 2D backbones. We will investigate this in future work. We sincerely appreciate your insight in distinguishing between theoretical significance and implementation maturity.

## Regarding the `other comments or suggestions` section

We are deeply honored by your recognition of this work's novelty and potential. Your characterization of the contribution as *out of the mainstream, daring, and mathematically creative* resonates with our original aspiration to challenge conventional assumptions.
Meanwhile, we also concur that framing this paper as a *practically useful method* in its current state is sub-optimal. We will take this advice to heart and adjust the writing style.

## Questions

> I'm wondering about striding freedom- It seems that having this fractal structure comes with a ratio by which the fractal repeats. This suggests that one cannot just choose whatever stride to take in the 1d convolutions that would still locally make sense. So for example stride 3 seems like it couldn't work. Thanks!

Your observation about stride constraints is insightful. We tend to believe that the stride size may not have a large influence. $\mathbb{C}$ guarantees that, regardless of which pixel we start from, we always obtain better locality preservation than raster scanning on at least one path. Meanwhile, there are multiple different paths: even if one path yields a spatially discontinuous sequence of pixels, the other paths can potentially avoid this phenomenon. Still, we think this is worth exploring, and we will investigate this issue in future research. Thank you very much!

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response. The authors acknowledged some of the concerns I raised. For one concern they provided some scaling evidence, applying their method on ImageNet resolution. While not fully convinced that generally scaling is practical, I'm glad they provided this example and at the very least showed that there are ways to adjust. I maintain my opinion that this paper is novel and has significant scientific contribution, but not a method ready for actual practical use (which is fine). I think this paper should be accepted and I maintain my original favorable score.

---

Reply to Comment 1.1.1: Comment: Thank you for your kind reply. We are gratified that the contribution of this paper has been recognized. The scalability for higher-resolution data and downstream tasks remains our primary concern at present.
We will endeavor to adapt this methodology for practical applications. Again, we express our sincere appreciation for your positive evaluation of this paper.
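To make the locality argument running through this thread concrete, here is a minimal Python sketch. It uses the standard Hilbert index-to-coordinate conversion (`d2xy`, as commonly documented in the literature; this is an illustration, not the paper's CUDA implementation) and compares the average Manhattan distance between consecutive pixels along a Hilbert path against raster scanning:

```python
# Illustrative sketch (standard d2xy algorithm, not the paper's code):
# why a Hilbert traversal preserves locality better than raster scanning.

def d2xy(n, d):
    """Map a 1D Hilbert index d to (x, y) on an n-by-n grid (n a power of 2)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the current quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def avg_step(path):
    """Average Manhattan distance between consecutive pixels on a path."""
    steps = [abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(path, path[1:])]
    return sum(steps) / len(steps)

n = 8
hilbert = [d2xy(n, d) for d in range(n * n)]
raster = [(i % n, i // n) for i in range(n * n)]
print(avg_step(hilbert), avg_step(raster))  # Hilbert steps are always 1; raster jumps at row ends
```

Every consecutive pair on the Hilbert path is 4-adjacent (distance 1), whereas raster scanning pays a distance of $s$ at every row wrap; the paper's point is that shifted variants of such paths can extend this advantage to the pixels the base curve sacrifices.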
Summary: The use of 1D CNN kernels can greatly reduce the model size compared against the traditional 2D design. However, the lack of locality information hinders the use of 1D kernels. This paper proposes a new traversal mechanism which can be used with Hilbert and Z-order scans for 1D CNN models. The results show the proposed 1D CNN design can achieve competitive results against the traditional 2D model with fewer parameters.

## update after rebuttal:

According to the guideline, my review after rebuttal is attached in the rebuttal comment below, "Rebuttal Comment by Reviewer 5Xg4".

Claims And Evidence: 1. It can be easily proven that the use of 1D filters can reduce the model size relative to the 2D baseline in terms of the number of parameters; this claim is shown in Tab. 1. 2. The main issue is the lost locality information. This study shows how Hilbert and Z-order curves can somewhat solve this problem, since the scanpath can still capture local features, and this claim is clearly shown in Fig. 3. 3. However, the use of the scanpath still sacrifices information near quadrant boundaries, thus shifted + rotated scanpaths are added as a supplement to fix this missing information. This main proposal is shown in Fig. 4 and Section 3.2. The claims are easy to follow; the only question is whether the results are convincing (discussed below).

Methods And Evaluation Criteria: The method is proposed to improve the 1D convolution process, which can be used for a wide range of vision tasks. The current draft is evaluated on image classification, with accuracy reported against the traditional raster scan. Accuracy on classification can somewhat show the effectiveness of the method, but it would be better to compare the method at "standard" resolution (256×256) or larger. Moreover, convolution can be used in other vision tasks; validating the method on other tasks, e.g., detection or segmentation, could be more convincing, as oriented-1D did [Kirchmeyer & Deng, 2023].
One main drawback in the evaluation is the baseline setup; please see the weaknesses section.

Theoretical Claims: The theoretical claims in this paper are easy to understand, including: 1. A 1D filter requires fewer parameters than a 2D one, Table 3. 2. Flattening the input image for a 1D filter will destroy the locality information. 3. Hilbert and Z-order scans can focus more on locality, Fig. 2. 4. There still exists information loss (wrong neighbour distances) in step 3, Fig. 3. 5. Shifting and rotating the traversal path can capture the information lost in step 4, Fig. 4.

Experimental Designs Or Analyses: The proposed method is validated on image classification, thus it is reasonable to compare the proposed method with traditional models using the widely used measure, accuracy. The experiments were conducted on CIFAR, SVHN (32×32), and ImageNet-64 (64×64). The results show the proposed method can achieve competitive results against the traditional 2D model, ResNet. This result seems "partially" convincing to me for validating the proposed method. However, I also have several questions regarding the experiments, shared in the section below.

Supplementary Material: I looked at the supplementary materials, which provide additional support for the claims shown in the paper.

Relation To Broader Scientific Literature: This paper studies an essential problem: how to organize and process the input information. This question is essential to almost any convolutional system, which can be used in various vision applications.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: I generally like the writing of this draft; the problem is introduced clearly and the solution seems easy to understand. However, a main problem in the draft concerns the related work. The references could be adequate, but the differences are not discussed in depth. I can find the related literature on oriented filters or steerable filters in the draft.
However, the comparison between this study (scanpath) and their methods is not clear; "popular deep-learning libraries do not natively support them" is not convincing to me, especially for a theory discussion. Further, the only baseline used in the experiments is the standard raster scan with ResNet; this baseline is not convincing, and I wonder why the proposed method cannot be compared against those oriented-1D or steerable solutions. One critical issue is the choice of P in this design. From Section 5.3, it seems the selection of an appropriate cardinality is important, but this also makes the whole design more complicated. And this choice could be dataset specific, otherwise it will "impede model convergence", which makes the whole design more complicated, as discussed in App. D. And choosing the right three paths is more complicated.

Other Comments Or Suggestions: Please see below.

Questions For Authors: Page 4: the "sacrificed" information lost in Hilbert and Z-order scans corresponds to the purple points in Fig. 3 (please correct me if I am wrong). If so, it might be better to highlight this main drawback in a rectangle in Fig. 3, which is solved by the proposed shifting method. In Table 4, C+ represents the doubled cardinality P=6, but the FLOP numbers are all the same; I am not sure if this is a typo, or I might be missing something in the context. Higher cardinality represents more (configurations of) input data, am I right?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback. We have summarized your concerns and will respond to each accordingly, followed by addressing your questions.

## Related works & experimental settings

> Moreover, convolution can be used in other vision tasks, validating the method on other tasks, e.g., detection or segmentation could be more convincing, like oriented-1D did [Kirchmeyer & Deng, 2023].

> However, the comparison between this study (scanpath) and their methods are not clear, "popular deep-learning libraries do not natively support them" is not convincing to me, especially for theory discussion.

> ...I wonder why the proposed method cannot be compared against those oriented-1D or steerable solutions?

We thank the reviewer for these insightful suggestions. We emphasize that this paper aims to build a vision model using only 1D kernels to maximize parameter efficiency, achieving locality preservation through space-filling path shifting and the PathConv model design. Oriented-1D, in contrast, adopts a different solution based on directional 1D kernels that require non-trivial implementations, so the proposed method is conceptually distinct from it. It is logical to compare against oriented-1D (which follows different principles) after demonstrating the viability of our method in this paper. Hence, we agree on this point and will present a comprehensive comparison in future studies. Furthermore, we fully acknowledge the absence of PathConv performance for high-resolution images. We hypothesize that hierarchical paths across different resolution levels can be an effective approach for higher-resolution images. The locality features of high-resolution images may involve many more pixels than low-resolution ones, which will probably prove challenging to capture using the current method. Once this problem is solved, we will evaluate the performance on downstream tasks, whereupon the comparison with oriented-1D will be more comprehensive.
Nevertheless, we emphasize that the conclusions of this paper hold substantial value for related research. The comparison with oriented-1D is essential and will constitute our main focus in future work. ## Choice of paths > One critical issue is the choice of P in this design. From section 5.3, it seems the selection of an appropriate cardinality is important, but this also makes the whole design more complicated. And this choice could be dataset specific, otherwise it will "impede model convergence", which makes the whole design more complicated, as discussed in App.D. And choosing the right 3 is more complicated. The impact of path selection is investigated in Section 5.3, with corresponding results presented in Table 4. While $\mathcal{H}_s/\mathcal{Z}_s$ occasionally perform better on small datasets, $\mathbb{C}^{\ast}$ can stably deliver robust performance for all datasets, especially for ImageNet-64. Therefore, we can conclude that $\mathbb{C}^{\ast}$ is the best choice to balance model efficiency and performance, independent of the dataset. Consequently, $\mathbb{C}^{\ast}$ can be universally applied to PathConv models in practice, where detailed configurations are provided in Table 2. Determining the appropriate set of paths is indeed challenging. However, this issue has already been addressed in Section 3.3 and Appendices C and D. As a result, we have $\mathbb{C}^{\ast}$ in Table 2 for different $s$. Moreover, $\mathbb{C}^{\ast}$ is dataset independent, thereby only needing to be calculated once. Hence, although determining the right three paths is complicated, we contend that this challenge has been substantially resolved. ## Question > Page 4, the "sacrificed" information lost in hilbert and Z-order are the purple points in Fig. 3, (please correct me if I am wrong). If so, it might be better to highlight this main drawback in a rectangle in Fig. 3... Yes, you are correct. The sacrificed pixels are in purple in Figure 3. 
And yes, highlighting sacrificed pixels would be more intuitive for readers. We will consider applying this idea. Thank you for your advice!

> In Table 4, C+ represents the doubled cardinality P=6, but the FLOP numbers are all the same, ... Higher cardinality represents more (configurations) input data, am I right?

Yes, you are correct. Thank you for pointing this out. This is of vital importance to us. Some fields had rounding problems. We provide the correct figures below.

PathConvS (s=32):

|P|#param.|FLOPs|
|-|-|-|
|1|3,614,912|1,144,381,120|
|3|3,615,676|1,144,966,848|
|6|3,617,248|1,145,844,096|

PathConvS (s=64):

|P|#param.|FLOPs|
|-|-|-|
|1|3,762,368|4,575,092,416|
|3|3,763,132|4,577,447,616|
|6|3,764,704|4,580,979,072|

PathConvB (s=32):

|P|#param.|FLOPs|
|-|-|-|
|1|7,820,316|2,494,394,480|
|3|7,821,274|2,495,123,760|
|6|7,823,230|2,496,214,560|

PathConvB (s=64):

|P|#param.|FLOPs|
|-|-|-|
|1|8,004,636|9,974,468,720|
|3|8,005,594|9,977,409,840|
|6|8,007,550|9,981,818,400|

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for the further explanation. The only remaining concern to me is the validation on larger-resolution data; I also saw this concern in other feedback (Reviewer kfqd). The authors claim that the choice of C* should be data independent; I agree with this, otherwise the whole idea would be very ad-hoc or hacky. However, the experiments did not show convincing results to support this claim. I also agree with reviewer kfqd that the contribution is interesting to this community, so I will maintain my rating. Further validation can be extended later, e.g., larger resolution or other downstream vision tasks.

---

Reply to Comment 1.1.1: Comment: We appreciate your reply and your evaluation of this paper. We would like to thank you again for finding the rounding problems in the tables. We are profoundly grateful to see that the contributions of this paper are considered interesting in the community.
The scalability for higher-resolution data and downstream tasks will be our main focus in the following work.
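As a back-of-the-envelope illustration of the parameter-efficiency theme in the corrected tables above (our own arithmetic on generic convolution layers, not figures taken from the paper): for the same channel configuration, a k×k 2D kernel holds k times the weights of a length-k 1D kernel.

```python
# Illustrative arithmetic (not the paper's exact layer counts): parameter
# cost of a standard 2D conv layer vs a 1D conv layer, bias terms ignored.

def conv2d_params(c_in, c_out, k):
    return c_in * c_out * k * k

def conv1d_params(c_in, c_out, k):
    return c_in * c_out * k

c_in = c_out = 64
k = 3
p2 = conv2d_params(c_in, c_out, k)
p1 = conv1d_params(c_in, c_out, k)
print(p2, p1, p2 / p1)  # the 2D layer holds exactly k = 3 times the parameters
```

This factor-of-k gap per layer is the source of the roughly 1/3 model sizes reported for PathConv against its ResNet counterparts; FLOPs, by contrast, depend on how often each kernel is applied, which is why the paper matches FLOPs rather than parameters.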
Summary: This paper proposes a path convolution (PathConv) architecture based on one-dimensional convolution, aiming to solve the problem that traditional one-dimensional convolution destroys the spatial locality of the image. By introducing the Hilbert/Z-order curve as the image traversal path and combining it with a path-shifting technique to adjust the positions of the sacrificed pixels, only three shifted paths are needed to globally retain locality better than traditional raster scanning. In addition, a lightweight path-aware channel attention (PACA) mechanism is designed to alleviate the convergence problem caused by multiple paths. Experiments show that PathConv achieves ResNet-level performance on datasets such as CIFAR-10, SVHN, and ImageNet-64 with only 1/3 of the parameters.

---

## Response to the Authors' rebuttal:

Thanks for your response. Some parts have not been addressed in good shape: 1) no experimental results for ImageNet-256 or non-square images (such as COCO) are provided; only a hierarchical-path hypothesis is proposed, which lacks actual data support; 2) the basis for selecting translation parameters (e.g., fill size) is still unclear, and no parameter sensitivity analysis is provided, which affects the reproducibility of the method; 3) the computational complexity of path selection for ultra-large images such as 1024×1024 has not been analyzed, and the scalability of the NP-hard problem is questionable. Nevertheless, the core contribution of the paper (achieving locality preservation and parameter efficiency of one-dimensional CNNs through path translation and set covering theory) has good theoretical and engineering value, and the experimental design adequately verifies the basic assumptions on CIFAR/SVHN/ImageNet-64. Hence, I tend to keep my original score unchanged.
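To give the multi-path fusion idea from the summary above a concrete shape, here is a purely illustrative toy sketch of weighting and fusing per-path features. The actual PACA design is the one defined by the paper's formula and Figure 7; the gating below (softmax over mean activations, in place of a learned gate) is our own simplification and is hypothetical throughout.

```python
# Toy sketch only: fuse P per-path channel descriptors with data-dependent
# weights, standing in for a learned path-aware channel attention (PACA is
# defined in the paper; this is NOT its implementation).
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def fuse_paths(path_feats):
    """path_feats: list of P per-channel feature vectors (already pooled)."""
    # score each path by its mean activation (a stand-in for a learned gate)
    scores = [sum(f) / len(f) for f in path_feats]
    w = softmax(scores)
    # weighted sum across paths, channel by channel
    fused = [sum(wp * f[c] for wp, f in zip(w, path_feats))
             for c in range(len(path_feats[0]))]
    return fused, w
```

The point of such a gate, as the summary notes, is that naively concatenating or averaging several path orderings can hurt convergence; a per-path weighting lets the network emphasize whichever traversal best preserves locality for the current input.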
Claims And Evidence: (1) Claim: Path shifting can effectively improve locality preservation. (1) Evidence: Table 1 shows that the ratio of sacrificed pixels ($P_{sd}$) of the Hilbert/Z-order path increases with the resolution, but the total distance may still be inferior to raster scanning; Figure 4 shows that after path shifting, the sacrificed pixels are repositioned and the locality coverage is more comprehensive. (2) Claim: Three shifted paths are sufficient to satisfy the locality constraint. (2) Evidence: The problem is modeled as a set cover problem (NP-hard) and solved via a randomized rounding algorithm. Table 2 shows that the three-path configuration ($\mathbb{C}$) covers all pixels in the experiments (Table 4). (3) Claim: PathConv outperforms traditional 2D CNNs in parameter efficiency. (3) Evidence: Table 3 shows that PathConv-S/B matches the accuracy of ResNet-18/50 with 2/3 fewer parameters, and is even slightly better in some scenarios.

Methods And Evaluation Criteria: Yes

Theoretical Claims: (1) Locality preservation theory: The recursive self-similar structure of Hilbert/Z-order curves can preserve spatial proximity at multiple scales, but the sacrificed pixels need to be redistributed through path shifting. (2) Set cover modeling: The minimum-path-number problem can be polynomially reduced to a set cover problem, and a near-optimal solution is obtained through a randomized rounding algorithm (Appendix C). (3) Attention mechanism design: PACA balances the information fusion of multi-path inputs through dynamic weighting of channel and path dependencies (formula and Figure 7).

Experimental Designs Or Analyses: Yes, the experimental setup appears well-motivated.

Supplementary Material: Yes, all of the supplementary material.
Relation To Broader Scientific Literature: This paper combines the theoretical advantages of space-filling curves and the dynamic modeling capabilities of attention mechanisms in a one-dimensional convolutional framework through path shifting and PACA mechanisms, and solves the path selection problem with the help of combinatorial optimization theory. This work fills the gap between locality preservation and parameter efficiency of full one-dimensional CNNs, and provides a new direction for the design of hardware-friendly and efficient visual models. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: (1) The path selection problem is transformed into a set covering problem, and mathematical reduction and approximate solutions are given, with sufficient theoretical support. (2) Covering multiple data sets (low/medium resolution), ablation experiments (Table 5), path configuration comparison (Table 4), verifying the necessity of each component. (3) CUDA acceleration is used to implement the path traversal layer, with a speed increase of 73 times (Table 7), ensuring the practicality of the method. Weaknesses: (1) The actual performance of high-resolution (such as 256×256) or non-square images was not tested, and only briefly mentioned in Appendix B. (2) Only ResNet and WRN are compared, lacking horizontal comparison with efficient models, such as MobileNet and EfficientNet. (3) The selection basis of specific shift parameters (such as padding size) in Table 2 is not detailed, and may depend on parameter adjustment. (4) The NP nature of set coverage may cause a sharp increase in the computational cost of path selection for larger-scale images (such as 1024×1024), which is not discussed. Other Comments Or Suggestions: (1) The experiment is extended to higher resolution (such as ImageNet-256) and non-square images to verify the generalization of the method. 
(2) Compare with other efficient models (such as MobileNetV3) to show the parameter efficiency of the proposed PathConv.

Questions For Authors: (1) Can the choice of path shift parameters (e.g., $p$) be optimized through automated search (e.g., NAS)? (2) How can spatiotemporal consistency of paths be handled for dynamic inputs (e.g., videos)? (3) What is the parallel computing potential of PathConv? Is multi-path processing limited by GPU memory bandwidth? (4) Can it be combined with Transformers (e.g., ViT) to further enhance global modeling capabilities? (5) Will the robustness of PathConv drop significantly in extremely low-parameter scenarios (e.g., edge devices)?

Ethical Review Flag: Flag this paper for an ethics review.

Ethics Expertise Needed: ['Other expertise']

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful that the motivation and proposed method of this paper have received your endorsement. We will address your concerns in the following sections.

## Performance on high-resolution/non-square images

> The actual performance of high-resolution (such as 256×256) or non-square images was not tested...

> The experiment is extended to higher resolution (such as ImageNet-256) and non-square images to verify the generalization of the method.

We fully understand the concern regarding the absence of PathConv performance for high-resolution images. We posit that hierarchical paths across different resolution levels can be an effective approach for higher-resolution images. The locality features of high-resolution images may involve many more pixels than low-resolution ones, which will probably prove challenging to capture using the current method. In this context, a ViT-like architecture may be a better solution. This is also what we will focus on following this work. The inclusion of an analysis for non-square images would indeed enhance the comprehensiveness of this paper. Performance testing on downstream tasks will be conducted in future work. Images in common object detection datasets are usually non-square (e.g., MSCOCO), thus providing a valid way to address the stated concern.

## Other backbones

> Only ResNet and WRN are compared, lacking horizontal comparison with efficient models, such as MobileNet and EfficientNet.

We agree that comparisons with efficient models would strengthen the manuscript. The core contribution of this work is demonstrating the viability of using only 1D kernels to build CNNs to achieve superior parameter efficiency. ResNets are widely adopted and chosen as a foundation for many CNN architectures (e.g., ConvNeXt). The performance of the mentioned backbones is not notably superior to ResNet. Hence, the existing results sufficiently validate the effectiveness of the proposed methods.
Furthermore, especially for EfficientNet, which uses NAS, a direct comparison risks conflating the benefits of our method with orthogonal optimizations.

## Path shifting parameters

> The selection basis of specific shift parameters (such as padding size) in Table 2 is not detailed...

The parameter spaces are provided in Section 3.3. Following these settings, the universe $\mathcal{C}$ can be determined. We then apply the randomized rounding algorithm to obtain Table 2, following the steps described in Appendix D.

## Paths for high-resolution cases

> The NP nature of set coverage may cause a sharp increase in the computational cost of path selection for larger-scale images...

We conducted an additional experiment applying the path-shifting method to high-resolution images, where $s=224$. We find that a small range of $p$ is enough to find $\mathbb{C}$ with a cardinality of 3. Following the structure of Table 2, we have

|Path|$s$|$\mathbb{C}^{\ast}$|
|-|-|-|
|||{13, [10, 5], 270}|
|Hilbert|224|{10, [5, 0], 270}|
|||{7, [7, 0], 90}|
|||{0, [0, 0], 0}|
|Z-order|224|{32, [11, 14], 180}|
|||{32, [16, 14], 0}|

Due to space limitations, please refer to the first section in the response to reviewer z3TX for details.

## Questions

1. Can the choice of path shift parameters (e.g., $p$) be optimized through automated search (e.g., NAS)?

We believe this is possible. We are also considering adding more flexibility by learning a space-filling curve dependent on the input image. Both directions are interesting to investigate in future work. We appreciate your inspiring comments.

2. How to handle spatiotemporal consistency of paths in dynamic inputs (e.g., videos)?

A straightforward way is to obtain embeddings frame by frame and employ a temporal architecture to model sequential features. Moreover, both Hilbert and Z-order paths can be extended to higher dimensions (e.g., a 3D Hilbert curve).

3. What is the parallel computing potential of PathConv? Is multi-path processing limited by GPU memory bandwidth?
Path sampling operations are read-only and thereby amenable to optimization by various parallel computing strategies. This paper provides a CUDA-based solution. If GPU memory is the bottleneck, sampling operations can be offloaded to the CPU while maintaining efficiency. Additionally, finding $\mathbb{C}$ with minimal cardinality also minimizes GPU memory requirements. Hence, we see large potential for parallelism.

4. Can it be combined with Transformer (e.g., ViT) to further enhance global modeling capabilities?

We believe this is feasible, especially for high-resolution cases, as explained in the first section.

5. Will the robustness of PathConv drop significantly in extremely low-parameter scenarios (e.g., edge devices)?

This is a question worth exploring. A feasible approach is to follow the suggestion from reviewer z3TX to plot acc./#param. and acc./FLOPs graphs and check when catastrophic performance degradation occurs, answering this question quantitatively.
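The claim that path sampling is a read-only operation can be made concrete with a minimal sketch (our illustration, not the paper's CUDA kernel): the traversal path is precomputed once as a table of flat indices, and every forward pass is then a pure gather that can run on CPU, GPU, or fixed-function hardware. The toy zigzag path below is hypothetical.

```python
# Illustrative sketch: path sampling as a precomputed, read-only gather.

def path_to_flat_indices(path, width):
    """Turn a list of (x, y) pixel coordinates into flat row-major indices."""
    return [y * width + x for x, y in path]

def sample_along_path(flat_image, flat_indices):
    """Read-only gather: reorder pixels into the 1D sequence a path defines."""
    return [flat_image[i] for i in flat_indices]

# 2x2 toy image in row-major order: pixels at (0,0),(1,0),(0,1),(1,1)
image = [10, 20, 30, 40]
zigzag = [(0, 0), (0, 1), (1, 1), (1, 0)]  # a hand-made toy path
idx = path_to_flat_indices(zigzag, width=2)
print(sample_along_path(image, idx))  # -> [10, 30, 40, 20]
```

Because `idx` never changes between forward passes and the gather writes nothing back to the image, the operation parallelizes trivially across pixels and paths, which is the basis of the parallelism argument above.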
Summary: This paper presents an enhancement to 1D convolutional networks with space-filling curves (traversal paths) for the image classification task. Space-filling curves are paths that pass through every point within a higher-dimensional discrete space. Recently, 1D convolution has successfully been applied to several domains in computer vision using space-filling curves. This paper aims at showing that the most commonly used space-filling curves in this domain (i.e., Hilbert curves and Z-order curves) lack a locality preservation property. This work proposes a method to preserve locality in Hilbert curves and Z-order curves by sampling them from a spatially expanded dimension and relocating the most distant pixels in the path via a path-shifting method that relies on an attention mechanism. Determining the minimal set of paths that fulfills the locality property constitutes an NP-hard problem; therefore, the paper uses an approximation based on heuristics, which belongs to the family of randomized rounding algorithms. The cardinality of the minimal set of paths was found to be 3 via this heuristic for images of size 32×32 and 64×64. The paper then tests and compares the approach with simple raster scans, Hilbert curves, and Z-order curves for the single-label image classification task on CIFAR-10, SVHN, and a downsampled version of ImageNet1k: ImageNet-64.

## update after rebuttal

The rebuttal addressed some of my concerns about scalability to higher resolutions, but questions remain about the significance of the accuracy results for the same budget of FLOPs and/or parameters. I, therefore, maintain my previous rating.
Claims And Evidence: In general, the claims made by the paper are clear and supported; however, it seems that the proposed method, although preserving locality, still has to rely on an attention mechanism, since the depthwise 1D convolution applied along the space-filling curve sometimes requires the model to observe different regions of the original image (see Figure 6). This calls into question the relevance of 1D convolution on this space-filling-curve representation of the data. Moreover, the paper finds traversal paths that comply with the locality constraints only for resolutions up to 64×64. As this problem is NP-hard, there is no evidence that finding such paths for practical resolutions used on ImageNet1k (224×224 or 256×256) is possible. Not to mention downstream tasks such as segmentation, which are missing from this work and which sometimes require 512×512 or full resolution.

Methods And Evaluation Criteria: The proposed metric that calculates the proportion of pixels with shorter distances to their neighbors at the same positions ($P_{sd}$) makes sense for evaluating the locality constraint. The datasets also make sense; however, their size is limited. The improvement described in the paper only encompasses a reduction in the total number of parameters with more or less the same accuracy. In many cases, this does not allow us to decide whether the approach is beneficial or not for the task. For example, in Table 3, the authors report an accuracy of 93.17 for ResNet-18 (11.68M params) on CIFAR10 and an accuracy of 92.66 for PathConvS-Z (3.62M params): a reduction by a factor of 3.5 in the number of params for a 0.5 reduction in accuracy. This is not conclusive. A better approach would have been to compare the difference in accuracy for the same number of parameters. Or better still, to make a graph of accuracy/parameters and a graph of accuracy/FLOPs! In terms of FLOPs, the proposed methods have approximately the same cost, with marginally better or worse accuracy.
In addition, the chosen baselines are not state-of-the-art; more recent models achieve better accuracy for a smaller compute budget. There is no mention of seed variability or the sensitivity of the results with respect to randomness. In Table 4, however, the locality property put forward by the authors does seem confirmed as having a significant impact on performance.

Theoretical Claims: I was not able to fully check the theoretical claim about the minimal $\mathbb{C}$ satisfying the locality constraint and the associated Appendices C and D due to lack of time. Another reviewer might have checked it. The other theoretical claims seem to be correct.

Experimental Designs Or Analyses: The paper uses depthwise separable convolution and an inverted bottleneck design in the blocks. There is also a notion of a stem layer. The paper builds those techniques on top of a ResNet architecture. Perhaps it would have been much easier to start from a modernized architecture that already integrates all of those techniques, like ConvNeXt (Liu, Zhuang, et al. "A convnet for the 2020s."), for example, or any SOTA convolutional model, and compare the performance of the approach on top of it.

Supplementary Material: Appendices A, B, and D were reviewed. The attached implementation code has a CUDA implementation for the proposed image traversal method.

Relation To Broader Scientific Literature: The article follows the line of research concerning 1D convolutional neural networks with an application to image classification.

Essential References Not Discussed: The usage of Hilbert and Z-order curves for image classification was presented in prior work like Wang, Hanyu, et al. (2022) "Neural space-filling curves." and, for point clouds in computer vision, in Wu, Xiaoyang, et al. (2024) "Point transformer v3: Simpler faster stronger.".
The current work cites those two papers and others, specifying that its contribution is an enhancement of this idea:

> This paper first presents that the positions of these sacrificing pixels can be shifted by sampling from their spatially expanded variants, inspiring us to integrate a set of carefully selected shifted paths to ensure pixels at all positions are closer to their neighbors than raster scanning for at least one path.

Other Strengths And Weaknesses: In general, the idea has a good motivation and is supported theoretically. Although not completely novel, the originality of the work lies in addressing the problem of locality in space-filling curves for 1D CNNs. The major concern, however, is the significance of the results with respect to the baseline and the lack of evidence for the usability of the method at higher resolutions.

Other Comments Or Suggestions: The style of the text is a little repetitive and sometimes incoherent, as if it had been written by a machine. For example, it is very hard for the reader to understand sentences like:

> we analyze Hilbert/Z-order paths and expose a fundamental trade-off: improved proximity for most pixels comes at the cost of excessive distances for other sacrificed ones to their neighbors.

Questions For Authors: - Since the locality-preserving curves are independent of the images, wouldn't it be more convenient to pre-calculate/find all of them up to a certain dimension beforehand and save the corresponding path traversals for later use? - From a biological perspective, how does locality in space-filling curves relate to human or mammal eye-tracking (gaze paths)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the effort behind your review and your insightful comments. The key concerns raised are addressed in the following sections. ## Scalability to higher resolutions > As this problem is NP-hard, there is no evidence that finding such paths for practical resolutions used in ImageNet1k (224x224, or 256x256) is possible. Section 3.3 and Appendix C demonstrate that finding $\mathbb{C}$ with minimal cardinality satisfying the locality constraint is NP-hard. Given a large $s$, the size of the universe $|\mathcal{C}|$ becomes too large to handle for an NP-hard problem. For instance, given $s=224$, we have $|\mathcal{C}| = 14785796$ for Hilbert paths $\mathcal{H}_s$, making it impractical to calculate $d$ (following Table 1) for every configuration, let alone to use this set as the input of an NP-hard problem. Therefore, - For $\mathcal{H}_{224}$, we restrict the parameter space of $p$ to $[1, 16]$ to obtain a much smaller $|\mathcal{C}| = 4484$. - For $\mathcal{Z}_{224}$, we only consider $p \in \{0, 32\}$ (explained in Section 3.3), resulting in $|\mathcal{C}| = 3969$. We find that the above settings of $p$ generate a sufficiently large $|\mathcal{C}|$ to find a $\mathbb{C}$ with $|\mathbb{C}|=3$ when $s=224$, aligning with the low-resolution results in this paper. We also denote such a $\mathbb{C}$ as $\mathbb{C}^{\ast}$. Following the structure of Table 2, we have

|Path|$s$|$\mathbb{C}^{\ast}$|
|-|-|-|
|||{13, [10, 5], 270}|
|Hilbert|224|{10, [5, 0], 270}|
|||{7, [7, 0], 90}|
|||{0, [0, 0], 0}|
|Z-order|224|{32, [11, 14], 180}|
|||{32, [16, 14], 0}|

Similarly, we extend Table 6 for $s=224$:

|$s$|Path|\|$\mathcal{C}$\||Rand. rounding avg. time|Rand. rounding worst case \|$\mathbb{C}^{\ast}$\||Greedy avg. time|Greedy worst case \|$\mathbb{C}^{\ast}$\||
|-|-|-|-|-|-|-|
|224|Hilbert|4484|2.3s|3|0.1s|4|
|224|Z-order|3969|2.0s|3|0.1s|4|

These results indicate that - $|\mathbb{C}^{\ast}|=3$ consistently holds for the higher resolution ($224^2$).
Hence, finding $\mathbb{C}^{\ast}$ at practical resolutions is possible with the proposed method. - The greedy algorithm is unable to consistently derive $|\mathbb{C}^{\ast}|=3$. - In the original paper, $p\in [1, s)$ may be unnecessarily large for finding a $\mathbb{C}^{\ast}$ with $|\mathbb{C}^{\ast}|=3$. ## Evaluation > ... This is not conclusive. A better approach would have been to compare the difference in accuracy for the same number of parameters. If the #param. were matched, the corresponding FLOPs would be higher than for the ResNet counterparts. In this case, even though PathConv models outperform ResNet, it would be hard to conclude anything, as PathConv would have the same #param. but require more FLOPs than ResNet. Hence, we adopt the stricter setting in which PathConv matches ResNet in FLOPs while using far fewer parameters. Additionally, the current setting also highlights the parameter efficiency of 1D kernels, which is the key motivation of this work. > ...to make a graph accuracy/parameters and a graph accuracy/FLOPs! This is an excellent suggestion for determining the optimal #param. (FLOPs)/accuracy trade-off. We will apply your suggestion in subsequent studies. We express our sincere gratitude. ## References to discuss > The usage of Hilbert and Z-order curves for image classification was presented in anterior work like Wang, Hanyu, et al. (2022) "Neural space-filling curves." and for point clouds in computer vision in Wu, Xiaoyang, et al. (2024) "Point transformer v3: Simpler faster stronger.". Neural space-filling curves are designed to learn a spatially coherent path primarily for image compression rather than classification. Point transformer v3 employs SFCs for the essential linearization of point clouds as input for transformers. Consequently, these two references are not particularly relevant to this article, as they do not share similar motivations and functions with this paper.
## Regarding your questions > Since the locality preserving curves are independent of the images, wouldn't it be more convenient to pre-calculate/find all of them up to a certain dimension beforehand and save the corresponding path traversal for ulterior use? You are absolutely correct. Actually, that's exactly how we implement `PathConv`. The paths will be only calculated once during instantiation. We will articulate this more explicitly in the main text of the manuscript. > From a biological perspective, how does locality in space filling curves relate to human or mammal eye-tracking (gaze path)? This is a very insightful question. Mammals often employ a coarse-to-fine visual search mechanism, starting with broad scanning before focusing on areas of interest [(Hochstein et al., 2002)][1]. This is similar to how space-filling curves recursively divide space and cover broad areas before filling subregions (Figure 2). [1]: Hochstein, S., & Ahissar, M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36(5), 791-804. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses to the reviews. The rebuttal addressed some of my concerns about scalability to higher resolutions, but questions remain about the significance of the accuracy results for the same budget of FLOPs and/or parameters. I, therefore, maintain my previous rating. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We appreciate your time and effort in evaluating this paper. We are pleased that the scalability of the proposed method for higher-resolution images has been acknowledged to some extent. Regarding the concerns about the experiments, we believe that your suggested approach (acc./FLOPs, acc./#param. plots) represents a viable way to address potential concerns. We value your recommendation and will implement it in our future work.
Expert Race: A Flexible Routing Strategy for Scaling Diffusion Transformer with Mixture of Experts
Accept (poster)
Summary: The authors introduce a novel Mixture-of-Experts model for Diffusion Transformers. The main novelty lies in its routing strategy, Expert Race, which allows tuning expert assignment not only by token, but also by batch element (i.e., by time-step in the diffusion process). As a consequence, the model gains the additional flexibility of employing a different number of experts depending on the stage of the diffusion process it finds itself in: arguably, this is beneficial, as at the beginning of the denoising procedure (when the image is still mostly noise) noise identification is simpler than towards the end (where the denoising procedure is almost complete, and more fine-grained details must be generated). While granting additional flexibility, this strategy comes with challenges for effective training. To address these, the paper proposes a number of adjustments to the training procedure, mainly the Router Similarity loss, to ensure expert specialisation, and a per-layer regularisation, to boost gradients in shallower model layers. Additional modifications ensure stability (dropping softmax as gating function) and consistency in predictions between training and inference (this is necessary due to the different distributions of time-steps in a batch seen by the model during these two phases). The effectiveness of the model is tested by training on ImageNet and reporting FID, CMMD, and CLIP scores of the distribution of generated images, comparing them against alternative routing strategies available in the literature. Ablation studies confirm the advantages provided by the various adaptations introduced to the training procedure. Scaling laws confirm that the improvements are sustained for large model sizes (up to ~O(1B)). Claims And Evidence: Given the results provided, I’m reasonably convinced of the validity of the method.
Methods And Evaluation Criteria: Evaluation criteria are based on commonly used metrics for image quality evaluation (FID, CMMD, CLIP), and training is done on the ImageNet dataset. Both these choices make perfect sense to me. Theoretical Claims: The only theoretical claim casts the proposed Router Similarity loss to a generalised load balance loss. I haven't checked the derivation in detail, but seems legit at a glance. Experimental Designs Or Analyses: Code wasn’t made available, so I couldn’t skim through their implementation. From what I could read, the experiments design seems solid. Supplementary Material: I have only skimmed through AppC, but I’ve checked the other sections in detail Relation To Broader Scientific Literature: The method proposed in the paper is grounded in a direct generalisation of MoE routing strategy for diffusion models, expanding both (Zoph et al., 2022, Zhou et al., 2022) Essential References Not Discussed: The references provided are in my opinion sufficient to properly contextualise this work. Other Strengths And Weaknesses: The paper is -for the most part- explained well, with the main idea illustrated clearly, and its element of novelty wrt previous work properly defined. The structure is solid, and the story reads well. Results are reasonably detailed and convincing. Other Comments Or Suggestions: - L63,C1 “flexibilities greatly boost” -> “flexibility greatly boosts” - L111,C1 “share” -> “shares” - L134,C2 “build a set with \mathcal{K} largest value in tensor” -> “builds a set with the \mathcal{K} largest value in a tensor” - L281,C2: “patchily operation ()” -> reference missing? Questions For Authors: - Q1 On training vs inference - I’m honestly having a hard time understanding how your method works at inference/generation. 
If I got it correctly, at training time the routing strategy is deciding how to allocate the various experts while looking at the *whole* batch, which includes various samples at different time-steps / noise levels. This way, the model can learn to adaptively allocate more experts to less noisy tokens, if needed. Now, at generation, the time-steps distribution within the batch (if you’re doing batch generation at all!) is clearly different: first we de-noise samples that are all at time $t=T$, then all at $t=T-1$, and so on until $t=0$. I reckon this is the mismatch you hint at in the Training-Inference Mismatch paragraph. What I don’t understand is why the choice of $\tau$ in Alg1 should help in this regard? I was expecting that some sort of explicit (learnable) dependency on $t$ was gonna be baked into the threshold directly?
 - Q2 Considerations on workload during generation - From my understanding (correct me if I’m wrong), both Token-choice and Expert-choice guarantee constant workload per time-step at generation, in the sense that, for each $t$, there is a certain fixed number of $E_i(X_j)$ functions that must be evaluated (although the distributions of tokens per expert / expert per token might vary). In your case, this is not valid anymore (Fig6 highlights this). I was wondering what repercussions this had, if any, on the total wall-clock time / memory resources necessary for generation: it’s hard to gauge this off the top of my head, as I reckon it depends on code implementation and parallelisability considerations. To help clarify what I mean, consider the extreme case where only 1 expert is invoked for each time-step between $T$ and 1, and all remaining $k-1$ expert invocations are used at the very last diffusion step $t=0$. If all expert invocations are perfectly parallel and we have infinite resources, at the end of the day the overall time for generation doesn’t change, but the last step would generally require a spike in computational demands, which might bottleneck operations and slow down generation. Does this make sense? If so, could you comment on this? Code Of Conduct: Affirmed. Overall Recommendation: 3
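To fix notation for Q2, here is a small numpy sketch of the three selection scopes (token-choice, expert-choice, and a global Expert-Race-style top-$K$) as the review describes them. Shapes and capacity handling are simplified illustrations, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
B, L, E, k = 2, 4, 4, 1        # batch, tokens per sample, experts, top-k
scores = rng.normal(size=(B, L, E))

# Token-choice: each token picks its top-k experts (fixed per-token load).
token_choice = np.argsort(scores, axis=-1)[..., -k:]               # (B, L, k)

# Expert-choice: each expert picks its top-c tokens (fixed per-expert load).
c = B * L * k // E             # capacity chosen so total assignments match
expert_choice = np.argsort(scores.reshape(-1, E), axis=0)[-c:, :]  # (c, E)

# Expert-Race style: all B*L*E (token, expert) scores compete in one
# global top-K, so expert counts per token and per sample can vary freely.
K = B * L * k
winners = np.argsort(scores.reshape(-1))[-K:]
b_idx, l_idx, e_idx = np.unravel_index(winners, (B, L, E))
```

In all three cases the total number of $E_i(X_j)$ evaluations is $K = B \cdot L \cdot k$; only the global variant lets that budget concentrate on a few tokens or time-steps, which is exactly what makes the per-step workload non-constant at generation.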
Summary: - This paper proposes an innovative MoE block integrated with a simple routing strategy and multiple regularization techniques within diffusion transformer frameworks, to concurrently optimize performance and computational efficiency. In particular, the authors broaden the routing strategy's exploration space by simultaneously considering token, batch, and expert dimensions, effectively addressing limitations inherent in existing methods that primarily emphasize either expert or token dimensions alone. Moreover, grounded in thorough empirical observations and logical justification, the authors substitute the conventional softmax routing mechanism with an identity function, complemented by a learnable threshold designed to alleviate inconsistencies arising from noise magnitude discrepancies between training and inference phases. Drawing inspiration from the Barlow Twins framework, the authors further propose an analogous loss formulation to combat mode collapse. Additionally, an auxiliary regularization branch is introduced to maintain stable output magnitudes across layers, thereby preventing gradient vanishing issues during model training. Empirical evaluations demonstrate that the proposed approach yields substantial performance gains on image generation benchmarks using the ImageNet dataset, thereby validating its effectiveness and advancing the state-of-the-art in this domain. Claims And Evidence: - In general, the claims of this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: - Evaluating the proposed methods solely on ImageNet for image generation may not be sufficient. Even when focusing exclusively on image generation (without considering other modalities), additional datasets such as COCO could offer valuable insights. Furthermore, FID is no longer considered the most suitable metric for open-domain image generation; alternative metrics like ImageReward or HPS might provide more meaningful evaluations. 
Theoretical Claims: - I have reviewed the theoretical claims presented in Section C of the supplementary material and found them to be correct. Experimental Designs Or Analyses: - Yes, I have assessed the soundness and validity of the experimental designs and analyses presented in the paper. Regarding the experimental designs, comparisons with similar approaches are limited; however, this limitation arises primarily due to the scarcity of existing methods related to diffusion models, particularly those employing DiT. Also, the analysis is adequate considering the contents of the supplementary material. Supplementary Material: - I reviewed every part of the supplementary material. Relation To Broader Scientific Literature: - From my perspective, the key contribution of this paper is introducing a simple yet efficient MoE to the DiT architecture, accompanied by theoretical analysis. Given that very few related approaches exist, this paper makes a meaningful contribution to the broader scientific literature. Essential References Not Discussed: - In general, this paper discusses related approaches adequately. Other Strengths And Weaknesses: - This paper is well-crafted and effectively articulates both the proposed methodology and the corresponding experimental outcomes. - The implementation of the approach is methodical and straightforward, facilitating its practical applicability. - Comprehensive implementation details significantly enhance the reproducibility of the research. - However, certain aspects lack sufficient discussion, such as the influence of timestep and effects at different resolutions. Other Comments Or Suggestions: - The figures are somewhat low in quality; I suggest that the authors use vector graphics for better clarity. Questions For Authors: - (1) From my perspective, the influence of timestep selection on the MoE, as well as the impact of varying input complexities (e.g., performance differences across categories), is particularly intriguing.
Although the authors provide a basic illustration and a brief subsection discussing these aspects, incorporating theoretical analyses would further enhance the contribution of this paper. - (2) I'm curious about the consistency and generalizability of the proposed method across different resolutions or datasets, such as COCO. - (3) Additionally, what performance does the method achieve when evaluated using alternative metrics like ImageReward or HPS? Code Of Conduct: Affirmed. Overall Recommendation: 3
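Regarding the Barlow-Twins-inspired loss mentioned in the summary above, a hedged sketch of what such a decorrelation penalty on router outputs could look like — this is an illustration of the idea, not the paper's exact Router Similarity loss:

```python
import numpy as np

def router_similarity_penalty(gates):
    """Barlow-Twins-style sketch: penalize off-diagonal cross-correlation
    between experts' gate patterns so that experts specialize.
    gates: (N, E) routing scores for N tokens and E experts."""
    g = gates - gates.mean(axis=0, keepdims=True)
    g = g / (g.std(axis=0, keepdims=True) + 1e-8)
    corr = (g.T @ g) / g.shape[0]              # (E, E) cross-correlation
    off_diag = corr - np.diag(np.diag(corr))
    return float((off_diag ** 2).sum())

rng = np.random.default_rng(0)
identical = np.tile(rng.normal(size=(16, 1)), (1, 4))   # collapsed experts
distinct = rng.normal(size=(16, 4))                      # diverse experts
# Collapsed routers score a much larger penalty than diverse ones.
assert router_similarity_penalty(identical) > router_similarity_penalty(distinct)
```

A penalty of this shape is minimized when experts' gating patterns are decorrelated, which is one plausible mechanism for the mode-collapse mitigation described in the summary.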
Summary: This paper introduces Race-DiT, a novel approach for applying Mixture of Experts (MoE) to diffusion transformers with a flexible routing strategy called Expert Race. The key innovation is allowing tokens and experts to compete together in a global selection process, enabling dynamic allocation of computational resources based on token complexity. The authors also propose per-layer regularization to address shallow layer learning challenges and router similarity loss to prevent mode collapse among experts. Experiments on ImageNet demonstrate significant performance gains over dense models and other MoE routing strategies. The method shows promising scaling properties with both increased expert numbers and hidden dimension splits. Overall, Race-DiT achieves better FID scores with fewer activated parameters compared to traditional dense models, showing the potential of flexible MoE routing in diffusion models. Claims And Evidence: Per-layer Regularization lacks strong theoretical evidence. While the authors identify that shallow layers contribute less to DiT's pre-norm architecture, the fundamental necessity for layer balance is inadequately justified. The ablation study shows empirical improvements but doesn't conclusively demonstrate why balancing shallow and deep layers is optimal versus simply allowing deeper layers to dominate. The visual evidence in Figure 5 shows the phenomenon clearly, but the causal relationship between layer balance and overall performance could use more rigorous analysis. Additionally, the interaction between the MoE architecture and layer dynamics needs more theoretical grounding beyond empirical results. Methods And Evaluation Criteria: The evaluation methodology generally makes sense for diffusion models. However, the computational efficiency comparison between different MoE strategies (Expert Race vs. Token/Expert Choice) is notably absent. 
Given that MoE's primary advantage is computational efficiency, understanding the overhead introduced by global token selection would be crucial. The paper focuses heavily on quality metrics (FID, CMMD, CLIP) but lacks throughput, latency, or memory usage comparisons. Additionally, while they show strong performance against dense models, evaluation against other contemporary diffusion MoE approaches like EC-DiT or Raphael is limited to the appendix rather than being central to the main evaluation. Theoretical Claims: I checked the correctness of the Analysis of Router Similarity Loss section. The derivation showing how router similarity loss extends traditional balance loss by considering pairwise expert interactions is mathematically sound. Experimental Designs Or Analyses: The experimental design is generally valid, but comparisons with other MoE-based diffusion research are somewhat limited. The internal comparisons between different MoE strategies (Expert Race vs. Token/Expert Choice) are comprehensive. However, the paper fails to clearly quantify the additional computational complexity and actual computational cost of Expert Race compared to Expert Choice or Token Choice. Since Expert Race requires flattening and processing the entire score tensor, it likely has poorer cache efficiency and higher computational overhead. This trade-off between quality improvement and computational cost is not explicitly analyzed, which is particularly concerning for an optimization technique like MoE where efficiency is a primary consideration. Supplementary Material: I reviewed sections A (Evaluation Results of More Routing Strategies) and G (Comparisons with DiT-MoE) from the supplementary material. The additional routing strategy experiments in section A provide valuable insights into how different dimensional combinations affect performance, reinforcing the main paper's conclusions about flexibility benefits. 
The DiT-MoE comparison in section G shows that Race-DiT achieves better performance with fewer parameters, though differences in architecture (GLU vs. standard MLP) make direct comparison slightly challenging. Overall, the supplementary material strengthens the paper's claims with appropriate additional experiments. Relation To Broader Scientific Literature: The paper makes a significant contribution by expanding MoE routing to consider all dimensions (B, L, E) with equal weight. Particularly valuable is connecting this approach to diffusion models' unique temporal characteristics across different denoising timesteps. By recognizing that diffusion complexity varies both spatially (across image regions) and temporally (across timesteps), the authors provide insights applicable beyond this specific implementation. This connection between model architecture design and diffusion's inherent multi-task nature advances the field's understanding of how to build more efficient generation models. Essential References Not Discussed: The paper lacks a reference to "Addressing negative transfer in diffusion models" (Go et al., NeurIPS 2023), which provides explicit evidence for diffusion's multi-task nature through demonstrated negative transfer effects across timesteps. Race-DiT repeatedly emphasizes diffusion's varying temporal complexity as motivation for flexible routing, and Go et al.'s work would significantly strengthen this foundational claim with empirical support. Other Strengths And Weaknesses: # Strengths - The model achieves impressively lower FID scores compared to dense models even when considering total parameter count, not just activated parameters, demonstrating true parameter efficiency. - The proposed Expert Race mechanism effectively addresses temporal complexity variation in diffusion models, showing marked improvement in early denoising timesteps where detail preservation is critical. 
- The authors perform extensive ablation studies that clearly isolate the contribution of each proposed component. - Router Similarity Loss shows strong theoretical grounding with clear connections to existing balance loss formulations. # Weaknesses - The dramatic change in layer dynamics after applying Per-Layer Regularization, particularly in layers 1-2, lacks detailed analysis of why these specific layers are most affected. - The paper provides limited analysis of expert specialization patterns - what specific features or timestep characteristics different experts learn to handle. - Testing is limited to ImageNet at 256×256 resolution without validation on more diverse datasets or higher resolutions. - The throughput-vs-quality tradeoff analysis is insufficient for practical deployment considerations. Other Comments Or Suggestions: The paper would benefit from a brief discussion of inference-time optimizations for Expert Race, since the global top-k selection might introduce latency challenges in production environments. Consider adding visualization of what different experts specialize in, perhaps showing attention maps or feature visualizations from different experts. Figure 6 currently shows allocation patterns but doesn't reveal what features drive those allocations. A minor typo appears in equation (7) where the summation indices could be clarified. Questions For Authors: - How does Expert Race compare to Token/Expert Choice in terms of computational overhead during both training and inference? Could you provide wall-clock time comparisons on equivalent hardware? - Have you explored whether the benefits of Per-Layer Regularization are specific to MoE architectures, or would dense DiT models also benefit from this approach? Code Of Conduct: Affirmed. Overall Recommendation: 4
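For context on the balance-loss connection discussed in this review, the traditional auxiliary balance loss that the Router Similarity loss reportedly generalizes can be sketched as follows (a standard Switch-Transformer-style formulation, shown for illustration; it is not the paper's loss):

```python
import numpy as np

def load_balance_loss(router_probs, assignments, num_experts):
    """Classic auxiliary balance loss: E * <fraction routed, mean prob>.
    router_probs: (N, E) routing probabilities; assignments: (N,) expert ids.
    Minimized (value 1.0) by a perfectly uniform router."""
    frac = np.bincount(assignments, minlength=num_experts) / len(assignments)
    mean_prob = router_probs.mean(axis=0)
    return num_experts * float(frac @ mean_prob)

E, N = 4, 64
uniform = np.full((N, E), 1.0 / E)
balanced = load_balance_loss(uniform, np.arange(N) % E, E)       # == 1.0
collapsed_probs = np.zeros((N, E))
collapsed_probs[:, 0] = 1.0
collapsed = load_balance_loss(collapsed_probs,
                              np.zeros(N, dtype=int), E)         # == 4.0
```

This scalar only penalizes per-expert load imbalance; a pairwise (correlation-based) extension additionally constrains how experts overlap, which is the generalization the derivation in the paper is said to establish.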
Pareto-frontier Entropy Search with Variational Lower Bound Maximization
Accept (poster)
Summary: This work introduces a multi-objective Bayesian optimization estimation framework that utilizes an information-theoretic acquisition function. It presents an effective approach for conducting variational inference when the continuous Pareto frontier is not fully known. By employing a mixture of distributions with different supports, the method enhances the estimation of the target distribution, demonstrating strong performance across diverse experiments. Claims And Evidence: Four claims are made by the authors: > PFEV is the first approach to continuous space MOBO that is based on a general lower bound of MI I agree, but this MI-lower-bound concept for MES was already proposed in Takeno 2022 in the single-fidelity setting, so I won't consider it a MAJOR contribution; > We newly introduce an under truncation approximation for $p(\boldsymbol{f}(x)\mid\mathcal{F}^\ast)$. Further, we define the variational distribution as a mixture of distributions with the over and the under truncation, and show how to optimize the mixture weight The claim is correct. However, I do have some concerns about the lower bound optimization part, which I will elaborate on later; > We also discuss properties and extensions of PFEV. For our MI lower bound, its relation with PI and a Monte-Carlo approximation are derived. We further discuss several extended settings such as parallel querying. I agree with most of the claim, except the part involving PI. Although the authors show that the MI lower bound is an upper bound of the PI AF, it does not explain many things. Since for acquisition functions we only care about finding the x that maximizes them, the lower bound does not disclose any intrinsic relation between the optimal x derived from the MI lower bound and the optimal x derived from PI. Therefore, while I agree with the results of Remark 3.2, I won't consider it a major contribution. > Through empirical evaluation on Gaussian process generated and benchmark functions, we demonstrate effectiveness of PFEV.
We empirically observed that PFEV shows a particular difference from existing over truncation based methods when output dimension >= 3, in which the difference of the two truncations becomes more apparent. The authors have clear evidence showing this claim. Methods And Evaluation Criteria: Overall, I believe the authors clearly presented their proposed methods and corresponding contributions. The mathematical notations are well-defined, and the experiments used to validate their proposed method follow standard practices. Theoretical Claims: The main concern is about the design for maximizing the lower bound $\max_{x, \lambda\in(0, 1]} L(x, \lambda)$. The authors did not show that this is a well-defined optimization problem (i.e., that an optimum exists). What if the maximum happens at $\lambda = 0$, in the sense that $q = q_O$? Some minor suggestions: - Line 121: "If $q(f_x\mid\mathcal{F}^\ast) = p(f_x\mid\mathcal{F}^\ast)$", should add "for $\mathcal{F}^\ast$ almost everywhere". - Equation (4): $L$ has been used for the number of objectives, please use another notation for the lower bound. - Many approximation schemes are proposed between lines 237 and 265. It is a bit hard to follow their motivations. Could you explain more about them? Experimental Designs Or Analyses: The experimental designs are standard. I do not see any concerns in this part. Supplementary Material: I did not read the supplementary material. Relation To Broader Scientific Literature: This work extends previous research on solving MES with variational inference to a multi-objective setting. It introduces a novel approach to advancing information-theoretic multi-objective Bayesian optimization, making a meaningful contribution to the community.
Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: - The work proposes a new way to solve multi-objective BO from the variational inference perspective; - The work is solid, with a thorough discussion of previous work and well-defined mathematical notations; - The experiments are solid, showing clear advantages of their proposed method. Weaknesses: - The well-definedness of the optimization problem in Equation (4); - Lack of motivation for the approximation schemes in Section 3.3. Other Comments Or Suggestions: None. Questions For Authors: My concerns are listed in the weaknesses section. There are no other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
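The point about PI in the claims section above can be seen with a two-point toy example: $g \ge f$ pointwise says nothing about the maximizers coinciding. A tiny illustrative check (the values are arbitrary):

```python
# f plays the role of a PI-style acquisition, g of a pointwise upper
# bound of f (e.g. an MI lower bound that upper-bounds PI); the numbers
# are arbitrary illustrations, not values from the paper.
f = {"x1": 1.0, "x2": 0.0}
g = {"x1": 1.0, "x2": 2.0}
assert all(g[x] >= f[x] for x in f)   # g upper-bounds f everywhere
argmax_f = max(f, key=f.get)          # "x1"
argmax_g = max(g, key=g.get)          # "x2"
assert argmax_f != argmax_g           # yet the maximizers differ
```

This is why a pointwise bound between two acquisition functions does not, by itself, relate the query points they select.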
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions. 1) About $\lambda$: We can prove $\lambda = 0$ cannot be the maximizer, and thus, the optimization problem is well-defined. This can be derived from (8). First, from the definition, we see $\theta_{MAP} \in (0,1)$ (Note that $\hat{p} \in (0,1)$). When $\lambda \to 0$, the second term of (8) goes to $\log 0$ (from the definition of $\eta_{\lambda}^{\tilde{{\cal F}}_S^*}$ shown after (4)). On the other hand, the first term is finite when $\lambda \to 0$. Therefore, if $\lambda \to 0$, we see (8) goes to $- \infty$. Further, we have already shown that (8) is concave in Appendix C. 2) Motivation of the lower bound approximation: The lower bound approximation can be written as (7). However, in (7), $p(\boldsymbol{f_x} \in {\cal A}_O^{\tilde{{\cal F}}^*_S} \mid \tilde{{\cal F}}^*)$ and $p(\boldsymbol{f_x} \in {\cal A}_{U \setminus O}^{\tilde{{\cal F}}^*_S} \mid \tilde{{\cal F}}^*)$ cannot be calculated analytically. Naive approximations of these two probabilities are $I( \tilde{\boldsymbol{f_x}} \in {\cal A}_O^{\tilde{{\cal F}}^*_S} )$ and $I( \tilde{\boldsymbol{f_x}} \in {\cal A}_{U \setminus O}^{\tilde{{\cal F}}^*_S} )$, respectively (lines 245-246), which we used in the estimator called naive MC in Figure 6. Instead of this direct one-sample estimation, we propose an approximation using prior knowledge about $p(\boldsymbol{f_x} \in {\cal A}_O^{\tilde{{\cal F}}^*_S} \mid \tilde{{\cal F}}^*)$, i.e., this can be approximated by $\hat{p}$. We will also incorporate the other comments, such as those on notation and the claims about PI. --- Rebuttal Comment 1.1: Comment: Thank you for your comments. We should have $\hat{p} \in [0, 1]$, right? In that case, the $\log 0$ term vanishes (which is equivalent to $q = q_O$). In this scenario, $\lambda = 0$ is the optimal solution. Can this method handle this situation? --- Reply to Comment 1.1.1: Comment: Thank you for your comment.
We actually have $\hat{p} \in (0,1)$ (not $[0,1]$), because of its definition $\hat{p} = Z_O^{\tilde{{\cal F}}^*_S}(\boldsymbol x) / Z_U^{\tilde{{\cal F}}^*_S}(\boldsymbol x)$. Since $\boldsymbol{f_x}$ follows a Gaussian distribution (the predictive distribution of the GPs), from the definitions, $Z_O^{\tilde{{\cal F}}^*_S}(\boldsymbol x) = p(\boldsymbol{f_x} \in {\cal A}_O^{\tilde{{\cal F}}^*_S}) \in (0,1)$ and $Z_O^{\tilde{{\cal F}}^*_S}(\boldsymbol x) < Z_U^{\tilde{{\cal F}}^*_S}(\boldsymbol x) = p(\boldsymbol{f_x} \in {\cal A}_U^{\tilde{{\cal F}}^*_S}) \in (0,1)$. Here, we used $\emptyset \neq {\cal A}_O^{\tilde{{\cal F}}^*_S} \subset {\cal A}_U^{\tilde{{\cal F}}^*_S} \subset \mathbb{R}^L$ (neither ${\cal A}_O^{\tilde{{\cal F}}^*_S}$ nor ${\cal A}_U^{\tilde{{\cal F}}^*_S}$ is the empty set or the entire output space, and ${\cal A}_O^{\tilde{{\cal F}}^*_S} \subset {\cal A}_U^{\tilde{{\cal F}}^*_S}$ holds as long as the sampled Pareto frontier set $\tilde{{\cal F}}_S^*$ is finite while the true Pareto frontier is continuous, which is our problem setting).
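The well-definedness argument above (the bound is concave in $\lambda$ and diverges to $-\infty$ as $\lambda \to 0$) means the inner maximization over $\lambda$ is a well-behaved one-dimensional concave problem. A toy sketch with a stand-in objective sharing those two properties — this is not the paper's bound (8), only an illustration with made-up constants:

```python
import math

theta = 0.3          # stand-in for theta_MAP in (0, 1)
p_hat = 0.6          # stand-in for p_hat = Z_O / Z_U in (0, 1)

def bound(lam):
    # Concave stand-in: one term stays finite on (0, 1], the other goes
    # to -inf as lam -> 0, mimicking the limiting behaviour argued above.
    return (1 - theta) * math.log(1 - lam * p_hat) + theta * math.log(lam)

# Ternary search is valid because the objective is concave in lam.
lo, hi = 1e-9, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if bound(m1) < bound(m2):
        lo = m1
    else:
        hi = m2
lam_star = (lo + hi) / 2
```

For this stand-in the optimum can be checked analytically (setting the derivative to zero gives $\lambda^\ast = \theta/\hat{p} = 0.5$), and it is indeed interior, i.e. bounded away from $\lambda = 0$.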
Summary: This paper considers a method for multi-objective Bayesian optimization based on entropy search. Many existing entropy search methods rely on a discrete approximation of the Pareto frontier (which may be continuous in many cases), which typically leads to over-truncation of the predictive distribution. The paper highlights this problem and proposes an improvement that leverages a mixture of two distributions that over/under-truncate, respectively. The paper proposes a variational lower bound, which can be optimized to jointly select the new candidate point to evaluate and the mixture parameter, and the paper gives two ways of approximating the lower bound with different bias/variance trade-offs. The authors evaluate the approach on synthetic problems. Claims And Evidence: Many of the claims are well-supported. However, there is a lack of evidence in a few areas: - For simplicity consider the case where only one sample from the GP is used (as opposed to using 10 in the paper). The over-truncation issue will be most problematic when the approximate PF has very few points. This is easily avoidable by simply increasing the population size for NSGA-II. This would result in far less truncation/discretization error. Is the over-truncation problem still an issue if one uses a finer approximation of the PF (more points from NSGA-II)? I could see there being a case made for this increasing computation cost (of NSGA-II and of the box decomposition), but no such argument is made and no evidence is provided to that end (and there are box decomposition algorithms that are often quite a bit faster (Lacour et al., 2017)). - Is over-truncation still an issue if one uses more samples from the GP? If more samples are used then the acquisition function is integrated over different samples of the (discretized) PF. It is not clear from the paper whether over-truncation is only an issue if a small number of samples are used.
Of course, even better performance would be achieved if more points on the PF were used in the discretization and more GP samples were used. Results on how much over-truncation matters as the number of points used to approximate the PF increases and the number of GP samples increases seem critical to understanding the significance of this problem. Methods And Evaluation Criteria: The synthetic problems are a good start, but they are all low-dimensional and they are all synthetic problems. It would be nice to see performance on more realistic examples and higher-dimensional problems. Theoretical Claims: The derivations looked reasonable. Experimental Designs Or Analyses: The main issues with the empirical analysis are 1) lack of hard problems (higher-dimensional and more realistic), and 2) lack of baselines. At a minimum, random search and qLogEHVI (Ament et al., 2023) should be included as baselines. In addition, why is DIRECT used to optimize the acquisition functions? Many of these are differentiable, and gradient-based optimization is typically far more effective (Daulton et al., 2020) and is widely used in modern Bayesian optimization. 5 random starting points also likely lead to poor global optimization of the acquisition functions. How are reference points chosen for evaluation and for EHVI? That should be documented, as it likely has a large effect on the performance evaluation (i.e. how different parts of the Pareto frontier contribute to the overall hypervolume). Which estimation of the lower bound is used in the experiments in the main text? Supplementary Material: Yes. In the comparison with HVKG, one-shot optimization is not used, but rather KG with a very coarse discrete set of random points is used---which would likely lead to far poorer performance and would lead to flat acquisition values. This should be updated to use one-shot optimization for the results to be meaningful.
Relation To Broader Scientific Literature: This builds on work on information theoretic methods for multi-objective BO. Essential References Not Discussed: Unexpected Improvements to Expected Improvement for Bayesian Optimization, Ament et al, 2023 Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization, Daulton et al, 2020 Other Strengths And Weaknesses: These are described above. The main issue is understanding the significance of the over truncation issue and understanding how the contribution of the work compares to the SOTA. Also, no code is provided and it appears the authors implemented all baselines by hand if they are in GPy? Using the implementations by the authors of many of the baselines in BoTorch would instill more confidence in the results. Other Comments Or Suggestions: In the abstract: - truncated -> truncated - truncation -> truncations Figure 2 is confusing, and I don't understand what is being depicted. Questions For Authors: Questions are interspersed above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive comments. 1) Population size of NSGA-II: As the reviewer mentioned, if the population size of NSGA-II increases, the approximation error between the over-truncation and the true truncation decreases. On the other hand, the continuous Pareto frontier is an $(L-1)$-dimensional set, for which the number of points required to maintain a sufficient density of approximation points increases exponentially as $L$ increases. In the linked figure ([link](https://anonymous.4open.science/r/AuthorResponse-0652/Fig28-hv.pdf)), we empirically investigate the truncation error by comparing the volumes of the under- and over-truncations. In this figure, we assume that the Pareto frontier is the simplex ($\sum_{i \in [L]} f^i = 1$ and $f^i \geq 0$), and approximation points are sampled uniformly in the simplex. We see that, when $L = 2$, the difference rapidly decreases, but for $L \geq 3$, large differences remain even when the population size is $1,000$. Note that in this example, the over-truncation is closer to the true value than the under-truncation, but in general, this depends on the shape of the true Pareto frontier, which is unknown. 2) Truncation error and sampling from the GP: In our formulation, the over-truncation corresponds to $\lambda = 0$. Therefore, the over-truncation-based lower bound is biased toward smaller values, because the bound is defined as the maximizer with respect to $\lambda$ (further, we prove that $\lambda = 0$ never becomes the maximizer in response #1 of reviewer 8xyH). This discussion holds even for the population version of $L(\boldsymbol x, \lambda)$ (i.e., before introducing the sample approximation), and therefore, increasing the number of samples from the GPs does not resolve the intrinsic bias. 2) Realistic problem setting: The linked figure ([link](https://anonymous.4open.science/r/AuthorResponse-0652/Fig29-HPO.pdf)) shows the results on hyper-parameter optimization problems of a machine learning algorithm.
These experiments were performed before the reviews were available (so the baselines are the same as in the first submission). The problem setting is class weight optimization for multi-class classification. We used LightGBM as a base model. We only focus on the optimization of the class weight parameters, and thus the input dimension $d$ of BO is equal to the class size. The objective function is the test classification accuracy of each class (i.e., $L$ is also equal to the class size). Overall, the results on the four datasets indicate that PFEV has sufficient performance compared with the other methods. 3) Higher input dimension: See response #3 of reviewer SDSC. 4) qLogNEHVI and Random: The linked figure ([link](https://anonymous.4open.science/r/AuthorResponse-0652/Fig30-31-qLogNEHVI.pdf)) shows results with qLogNEHVI and Random for GP-derived functions ($d = 3, L = 3,4,5$) and four benchmark functions. For qLogNEHVI, we used qLogNoisyExpectedHypervolumeImprovement of BoTorch. The reference point is defined by y_min - 0.1 (y_max - y_min), where y_min and y_max are vectors consisting of the minimum and the maximum of each dimension in the training $\boldsymbol{y}_i$, respectively. For qLogNEHVI, we used both gradient descent (the optimize_acqf function of BoTorch) and DIRECT for the GP-derived functions. We see that the gradient optimizer slightly improves the results, but DIRECT also has similar performance. Overall, qLogNEHVI shows good performance, while PFEV also shows comparable performance (particularly when comparing with the DIRECT version of qLogNEHVI). Although the implementations are not exactly consistent (PFEV is GPy-based and qLogNEHVI is BoTorch-based), we can see that the performance in our results is not largely different from that of the well-known package. 5) About DIRECT: See response #4 of reviewer SDSC.
Note that `5 random $\boldsymbol x$' at the end of the second paragraph of Section 6 refers to the initial points of BO (not the initial points of the acquisition function optimization). Sorry for the confusion. 6) Reference point of EHVI: In EHVI, two reference points are required, denoted v_ref and w_ref in Shah and Ghahramani 2016. The worst point vector v_ref is defined by subtracting $10^{-4}$ from the vector consisting of the minimum value of each dimension of $\boldsymbol y_i$ in the training data. On the other hand, the ideal point vector w_ref is defined by adding $1$ to the vector consisting of the maximum value of each dimension of $\boldsymbol{y}_i$. We will add an explanation in the paper. 7) Lower bound estimator: Throughout the paper, the lower bound estimator (8) is used, except for Section 6.3, in which the performance of the two estimators (8) and (6) was compared. 8) Implementation of HVKG: See response #1 of reviewer SDSC. 9) Fig. 2 illustrates examples of under- and over-truncation in a three-dimensional output space. We tried to show that large differences exist in the case of $L \geq 3$. We will also reflect other comments such as typos and references.
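The two reference-point recipes in items 4 and 6 above (the qLogNEHVI point y_min - 0.1 (y_max - y_min) and the EHVI pair v_ref/w_ref) are simple per-objective computations. A minimal NumPy sketch with a made-up toy objective matrix:

```python
import numpy as np

def qlognehvi_ref_point(Y):
    """Reference point for qLogNEHVI as described above:
    y_min - 0.1 * (y_max - y_min), per objective (Y has shape (n, L))."""
    y_min, y_max = Y.min(axis=0), Y.max(axis=0)
    return y_min - 0.1 * (y_max - y_min)

def ehvi_ref_points(Y):
    """EHVI reference points as described above: worst point
    v_ref = y_min - 1e-4, ideal point w_ref = y_max + 1 (per objective)."""
    return Y.min(axis=0) - 1e-4, Y.max(axis=0) + 1.0

# Toy training objectives: 3 observations, L = 2 objectives.
Y = np.array([[0.0, 2.0],
              [1.0, 0.0],
              [0.5, 1.0]])
print(qlognehvi_ref_point(Y))   # close to [-0.1, -0.2]
v_ref, w_ref = ehvi_ref_points(Y)
print(v_ref, w_ref)             # close to [0, 0] and [2, 3]
```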
Summary: This paper introduces Pareto-frontier Entropy Search with Variational Lower Bound Maximization (PFEV), an acquisition function for multi-objective Bayesian optimization (MOBO). It addresses a key limitation in prior information-theoretic MOBO methods, which rely on over-truncation when approximating the predictive distribution, hence leading to inaccurate approximations. Instead, PFEV models this distribution as a mixture of over- and under-truncated approximations and optimizes the mixture weight variationally to minimize approximation error. Empirical results show that PFEV outperforms existing methods, particularly in high-dimensional output spaces ($L \geq 3$), with superior relative hyper-volume performance on both Gaussian process-simulated and benchmark functions. Claims And Evidence: - The paper claims the previous approximation is crude Methods And Evaluation Criteria: - The mixture approach makes sense to me Theoretical Claims: - I have checked Appendices A, B, and C; they all seem plausible to me. Experimental Designs Or Analyses: - The synthetic validation on GP samples and more complicated real problems makes sense to me. - One slightly unconventional thing is the choice of synthetic functions to optimize, which seem not that common in MOBO. I would be interested in seeing its performance on at least one of the following commonly used benchmarks in contemporary literature: - BraninCurrin - DTLZ series Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The paper proposes a more accurate truncated Pareto frontier distribution approximation, which could be of interest to the information-theoretic acquisition function community in Multi-Objective Bayesian Optimization. Essential References Not Discussed: All the essential references seem to have been discussed.
Other Strengths And Weaknesses: Strength - The approach of learning a Pareto frontier truncated conditional distribution is interesting and, to the best of my knowledge, novel. - Extensive problem settings (decoupled setting, noisy observations, parallel queries) have been discussed. Weakness - I think one issue is that the mixture approximation, though it could improve the original approximation, can still be too crude as there is only one parameter; in practice, it could happen that some regions are overestimated and some regions are underestimated, but this cannot be faithfully characterized by a single parameter, hence its practical utility is not exactly clear. - While the approach makes sense theoretically, the practical empirical advantage slightly puzzles me, as hypervolume decomposition is known to be complicated with high objective numbers, which means all of these information-theoretic acquisition functions, which rely on hypervolume decomposition, are ideally not expected to be leveraged there. Other Comments Or Suggestions: the paper in general is very well written, I do not have many further comments. One additional comment is that it could be helpful to plot the evolution of $\lambda$ w.r.t. the BO iteration Questions For Authors: - How does the learned $\lambda$ evolve over BO iterations? A plot is crucial to validate this. - What exactly are lines 14-16 used for? It seems the BO has stopped at line 13 already. - Why can $\lambda$ not reach $0$ in the parameterization? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive comments. 1) About benchmark functions: The results on DTLZ are shown [here](https://anonymous.4open.science/r/AuthorResponse-0652/Fig26-DTLZ.pdf). Note that these were performed before the reviews opened. We see that, in DTLZ 2, 5, 6, and 7, the proposed method shows high RHV among the compared methods. On the other hand, in DTLZ 1 and 3, all the methods show high RHV (close to 1), and the differences among methods are small. In DTLZ4, EHVI and MESMO showed good performance, while the other methods showed similar performance. 2) Mixture approximation: We employ a one-parameter ($\lambda$) model, because this parameter should be estimated from a small number of samples (i.e., $K$, for which we used 10). Using a more complicated model is possible (e.g., defining $\lambda$ as a function of $\boldsymbol x$), but there is a risk of over-fitting to the small number of samples. We think that investigating the performance of more complicated variational distributions is an important direction. 3) Hyper-volume decomposition: In our current experimental setting, the hyper-volume is exactly calculated by a divide-and-conquer-based approach (QHV). Therefore, even if the output dimension increases, the accuracy of the decomposition is maintained. On the other hand, the cell-based decomposition can take a longer time for a high-dimensional output space (e.g., $L \geq 7$). Then, introducing approximate decompositions is a possible approach. The cell decomposition is performed for calculating the cumulative distribution function (CDF) of the predictive distribution. We plan to consider a pruning strategy that stops the divide-and-conquer procedure if the density $p(\boldsymbol f(\boldsymbol x))$ is sufficiently small in the (intermediately) decomposed region.
4) Transition of $\lambda$: The linked figure ([link](https://anonymous.4open.science/r/AuthorResponse-0652/Fig27-lambda-transition.pdf)) shows the transition of $\lambda$ during BO iterations (averaged over $10$ runs) for a GP-derived synthetic function ($d = 3, L = 4$). We see that $\lambda$ takes intermediate values in $(0,1]$, indicating that the under- and over-truncated distributions are indeed mixed during BO iterations. As far as we have examined so far, no increasing or decreasing tendency has been observed during the iterations. 5) Lines 14-16: Lines 14-16 in Algorithm 1 are the definition of the function `CalcPFEV', which calculates the acquisition function value. This function is used in line 10 of Algorithm 1. 6) $\lambda$ cannot reach 0: When $\lambda = 0$, the support condition of the variational distribution (2) is not satisfied, which is required to guarantee that $L(\boldsymbol{x})$ is a lower bound.
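For intuition about the exact hypervolume computation mentioned in item 3 above (QHV handles general $L$ by divide and conquer), here is a minimal exact computation for the special case $L = 2$ under maximization, where the dominated region splits into disjoint rectangles; the points and reference point are illustrative only:

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume dominated by `points` (maximization) w.r.t.
    reference point `ref`, for L = 2 objectives. Sweep in decreasing
    order of the first objective and sum the disjoint rectangles."""
    pts = sorted((p for p in points if p[0] > ref[0] and p[1] > ref[1]),
                 key=lambda p: p[0], reverse=True)
    hv, y_covered = 0.0, ref[1]
    for x, y in pts:
        if y > y_covered:                       # point extends the dominated area
            hv += (x - ref[0]) * (y - y_covered)
            y_covered = y
    return hv

# Two non-dominated points and one dominated point.
hv = hypervolume_2d([(1.0, 2.0), (2.0, 1.0), (0.5, 0.5)], ref=(0.0, 0.0))
print(hv)  # 3.0: the 2x1 rectangle plus the 1x1 rectangle above it
```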
Summary: The paper considers an information-theoretic acquisition function -- namely Pareto Frontier Entropy Search -- for multi-objective Bayesian Optimization. The authors propose a novel "variational" approximation to the predictive distribution given the Pareto frontier based on over- and under-truncation of the Pareto set. This results in an acquisition function that requires jointly optimizing over the (one-dimensional) variational parameter and the query location(s). The authors describe the implementation and computational details and discuss useful extensions. Finally, the paper demonstrates the empirical performance of the proposed approach in a number of benchmarks against common baselines. ### Update after rebuttal. My overall assessment of the paper remains the same. The authors' response to my review clarifies some questions, but it also highlights that relying on a gradient-free optimization method for the acquisition function optimization has its limits. I don't think this should make or break the paper if it's sufficiently discussed in an updated version (the additional results generated for the rebuttal should help with that); ideally providing a continuous relaxation of the indicator as discussed in the authors' response. I think this is a solid paper and think it should be accepted, but would not revolt if the majority of other reviewers feels differently. Claims And Evidence: Yes, claims are clear and demonstrated based on convincing evidence. Methods And Evaluation Criteria: Yes, the proposed method is sound and the evaluation is clear and comprehensive (modulo the discussion of higher-dimensional settings, see below) Theoretical Claims: Yes, I checked the main claims in the MT and spot-checked some of the details in the appendix. Experimental Designs Or Analyses: * Overall the experiments and conclusions drawn are solid. * The plots in the MT are all **extremely** hard to read.
Please consider moving some of the technical details in the MT into the appendix to make room for more readable plots. You may also want to select a few representative results and relegate the rest of the empirical results to the appendix. * In a similar vein as above, the comparison to HVKG in Appendix F seems potentially problematic due to the simplifications in the implementation: "we evaluate the posterior mean hypervolume after adding sampled $f_l(x)$ into the GPs only by using the pre-defined grid points in X (uniformly taken 5 points in each dimension is used"). This feels like comparing an optimized implementation of PFEV against a simplified implementation of HVKG. A comparison against the implementation in [A] would be more meaningful. [A] S. Daulton, M. Balandat, and E. Bakshy. Hypervolume knowledge gradient: A lookahead approach for multi-objective Bayesian optimization with partial information. In International Conference on Machine Learning, 2023. Supplementary Material: Yes, I reviewed the entire appendix. Relation To Broader Scientific Literature: * The key contribution is a novel variational approximation of the predictive distribution given the Pareto frontier. * This formulation is quite elegant and effectively addresses an issue with the approximation that becomes more and more problematic as the number of outcomes increases, an issue that the paper identifies and explains nicely. Essential References Not Discussed: AFAICT the paper discusses all of the essential references in this space as far as the methodology is concerned. But some of the comparisons with the EHVI-based methods do not use state-of-the-art implementations, though, as discussed in [A,B,C]. Other Strengths And Weaknesses: **Strengths** * The paper is well-written and clearly identifies the key contributions. * For instance, I really liked the use of Figure 1, this helps a lot with visualizing the over- and under-truncation and makes it a lot easier to follow the paper.
* The authors provide a comprehensive review of related work. * The empirical evaluation setup is solid, and the baselines considered are meaningful - however, some do not use state-of-the-art implementations (see other comments) * The performance of the proposed approach especially on problems with many outcomes is demonstrated clearly (albeit only for low input dimensions). * The paper has very comprehensive supplementary material that goes into detail on many relevant extensions / variants of the main contribution (parallel, decoupled, joint, noisy). **Weaknesses** * The primary weakness of the approach is that it uses gradient-free optimization of the acquisition function (in this case the DIRECT algorithm). These methods are known to perform very poorly in higher dimensions, which presumably is the reason why the largest evaluated input dimension is d=3 (d=4 in the supplementary material). This weakness / limitation of the approach is currently not discussed at all in the paper. * I would like to see some ablations of PFEV on higher-dimensional inputs compared to approaches that use gradient-based optimization of the acquisition function such as qEHVI/qNEHVI from [B]/[C] (assuming that EHVI as evaluated in the paper also used DIRECT). [B] S. Daulton, M. Balandat, and E. Bakshy. Differentiable expected hypervolume improvement for parallel multi-objective bayesian optimization. In Advances in Neural Information Processing Systems, volume 33, 2020. [C] S. Daulton, M. Balandat, and E. Bakshy. Parallel bayesian optimization of multiple noisy objectives with expected hypervolume improvement. In Advances in Neural Information Processing Systems, volume 34, 2021. Other Comments Or Suggestions: * "Note that if VDC is not satisfied, L(x) is not defined because of log 0" <- "not defined because of log 0" is a rather odd statement; this not being defined is simply b/c p would not be absolutely continuous w.r.t. q. 
* "number of samplings" -> number of samples Questions For Authors: * Your approach uses a "global" mixture distribution for the PF truncation. Is it possible / would it be advisable to have this mixture depend on the location (in the outcome space)? I'm thinking of settings in which over-truncation is more accurate in one part and under-truncation is more accurate in another part of the outcome space. * Positivity of PFES has not been clarified - what does this mean? Is this about the formulation or about the approximation / algorithm? * The results in Fig. 3 show a pattern where PFEV and MOBO-RS perform quite a bit better than PFES in d=2, but the difference is less for d=3. Is this b/c PFEV is more challenging to optimize as the dimension increases? Code Of Conduct: Affirmed. Overall Recommendation: 4
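For reference on the optimizer debated above: DIRECT is available as `scipy.optimize.direct` (SciPy >= 1.9), and since it minimizes, an acquisition function is maximized by negating it. A minimal sketch with a toy stand-in objective (not the paper's actual acquisition function):

```python
from scipy.optimize import direct

# Toy stand-in for a negated acquisition function over [0, 1]^2;
# the acquisition maximizer is at (0.3, 0.7).
def neg_acq(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

# DIRECT needs only box bounds: no initial points, no gradients.
res = direct(neg_acq, bounds=[(0.0, 1.0), (0.0, 1.0)], maxiter=1000)
print(res.x)  # close to [0.3, 0.7]
```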
Rebuttal 1: Rebuttal: Thank you for your constructive comments. 1) HVKG in Appendix F: The linked figure ([link](https://anonymous.4open.science/r/AuthorResponse-0652/Fig20-Decoupled.pdf)) shows a comparison with BoTorch HVKG (called qHypervolumeKnowledgeGradient, which is based on the well-known 'one-shot' optimization of KG) in the decoupled setting (Appendix F). Because of the long computational time, we set the number of Pareto optimal points in the one-step-ahead posterior mean to 10 in HVKG. For a fair comparison, the proposed method also sets the size of the Pareto optimal points in the sampled function (i.e., the NSGA-II population size) to 10. The number of samples (so-called fantasy points) in HVKG is also set to $10$. Although there is an implementation difference (the proposed method is a GPy-based implementation), we consider that the result shows that the performance of the proposed method is sufficient compared with an existing package. 2) qEHVI: We performed an evaluation with qLogNEHVI. See response #4 of reviewer QnYc. 3) Higher input dimension: For the FES1, 2, and 3 benchmark functions, results on the $10$-dimensional input are shown in Figure 19 of Appendix K.2. We further show the results on $d = 5, \ldots, 9$ of the same benchmark functions [here](https://anonymous.4open.science/r/AuthorResponse-0652/FES.pdf), which were performed before the reviews became open. The results indicate that the proposed method stably performs optimization similarly to the cases in the main text. Evaluating higher-input-dimension problems in realistic scenarios is part of our future work. 4) About DIRECT: We employed DIRECT because it does not require `initial points', unlike gradient descent, which requires an appropriate setting of the initial points (their number and locations). Our purpose is to focus more on the differences of the acquisition functions and to reduce the other factors affecting the performance.
On the other hand, evaluation using gradient-based approaches is also important future work because they are widely used in BO. We partially show a comparison with a baseline using gradient optimization (qLogNEHVI) in response #4 of reviewer QnYc. 5) About the mixture: As the reviewer suggested, it is possible to use a mixture depending on the location (e.g., defining $\lambda$ as a function of $\boldsymbol x$, which can be estimated by the variational lower bound maximization). We employ the simple global mixture with only one parameter because it should be estimated based on the $K$ samples, which is usually quite small ($10$ in the experiments). Introducing a more complicated variational distribution is a possible future direction. 6) Positivity of PFES: PFES calculates an approximate MI through a decomposition as a difference of entropies. On the other hand, Takeno et al. 2022 show the possible negativity of the naive application of MES to constrained BO, which has a similar formulation to PFES (the entropy-difference-based MI decomposition and truncation of a multi-dimensional predictive distribution by a box-shaped region). This suggests that PFES might also have a risk of taking a negative value, but the positivity of PFES has not been proven so far. 7) Relation of the input dimension and performance: As the reviewer suggested, there is a tendency for the differences between methods to decrease as the input dimension increases. We confirmed this tendency by the boxplot ([link](https://anonymous.4open.science/r/AuthorResponse-0652/Fig24-25-boxplot_input_dim.pdf)), created in the same way as Figure 4. According to the figure on the first page, the differences among all the methods gradually decrease (not only for the proposed method). On the second page, we further plot separately for each length-scale setting of the true objective function. We see that, particularly for small length-scale problems (which tend to be multi-modal functions), the differences become small.
We speculate that the differences were less apparent for challenging problems with high input dimensions and highly multi-modal functions. We will also reflect other comments such as plotting and references. --- Rebuttal Comment 1.1: Comment: Thanks for the response and for some of the additional results, especially on the higher-dimensional problems and the gradient-based acquisition function optimization - however, ideally there would be some results on the comparison of gradient-based and gradient-free acquisition function optimization for higher-dimensional problems, since this is where we'd expect the biggest difference (the provided results have d=3 as their highest dimension). While I agree that DIRECT does seem to perform quite reasonably, there is a sizable difference between the performance of the DIRECT- and the gradient-based version of qLogNEHVI in the [provided comparison](https://anonymous.4open.science/r/AuthorResponse-0652/Fig30-31-qLogNEHVI.pdf). Also, why is the gradient-based version missing from the results in Figure 31? Also, what computational budgets were used for both approaches / what was the wall time for the optimization in each case? > Our purpose is to focus more on differences of the acquisition functions and to reduce the other factors affecting the performance. While I am sympathetic to this argument, I also think that the performance of the acquisition function cannot be fully decoupled from the way this acquisition function can be optimized. If it does not provide any gradients, and gradient-based optimization can provide significant performance improvements (see above), then that is a limitation of the method. Overall, I continue to believe that this is a good paper around a nice idea (though not an excellent paper) - I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments and suggestions. > why is the gradient-based version missing from the results in Figure 31?
This is simply because of the time limitation of the author response period (we originally used a GPy-based implementation, but have implemented it from scratch using BoTorch). This time, we added qLogNEHVI with gradient descent to FES3 with $d = 3$, and both gradient descent and DIRECT to FES3 with $d = 10$ (FES3 was the only benchmark where the input scale is $[0,1]^d$ among the four benchmarks in Fig. 31, so it could be run first without requiring any implementation adjustments of the scale, as $[0,1]^d$ is the default of optimize_acqf in BoTorch. We will perform the others as well). We updated [Fig. 31](https://anonymous.4open.science/r/AuthorResponse-0652/Fig30-31-qLogNEHVI.pdf) (note that since some results had an inconsistency in the initial points of qLogNEHVI in the previous Fig. 31, we corrected it, though the change does not affect our discussion). In $d = 10$ (Fig. 31 (e)), DIRECT was slightly better than the gradient-based counterpart of qLogNEHVI. However, we agree that, in general, the gradient method is better for higher-dimensional problems. We do not particularly focus on high-input-dimensional problems, and our current interpretation is that DIRECT worked reasonably for the moderate-input-dimension problems that we considered in the paper. We measured the wall-clock time on the GP-derived synthetic data $(d = 3, L = 4)$, and the acquisition function optimization took 6.3 and 31.5 sec for gradient descent and DIRECT in qLogNEHVI, respectively (the time is the average over 10 runs, with the GPs having $50$ randomly selected points as a training dataset). The settings of optimize_acqf of BoTorch were num_restarts=10 and maxiter=200 (from the settings of the [qEHVI tutorial](https://botorch.org/docs/tutorials/multi_objective_bo/)), and DIRECT employed the SciPy default of 1000 maximum iterations (note that DIRECT does not have multi-restarts). We conjecture that the gradient method was faster because of early termination upon gradient convergence.
Applying a gradient method to the proposed method is important future work. The proposed acquisition function (8) is mostly differentiable, except for the indicator $I(\boldsymbol{\tilde{f_x}} \in {\cal A}_O^{\tilde{{\cal F}}^*_S})$ in $\theta_{MAP}(\boldsymbol{\tilde{f_x}})$ of (8). We consider that possible approaches are simply ignoring this term in the gradient (regarding the gradient of the indicator as 0) or using a continuously approximated gradient. In the continuous approximation of the gradient, we replace $I(\boldsymbol{\tilde{f_x}} \in {\cal A}_O^{\tilde{{\cal F}}^*_S}) \approx p(\boldsymbol{f} \in {\cal A}_O^{\tilde{{\cal F}}^*_S})$, where $\boldsymbol{f} \sim N(\boldsymbol{\tilde{f}_x}, \rho \boldsymbol{I})$, in which $\rho > 0$ is a fixed smoothing parameter. The right-hand side of the approximation is differentiable with respect to $\boldsymbol{x}$ through a decomposition similar to (9) in Appendix B (note that $\boldsymbol{\tilde{f}_x}$ is differentiable because it is generated from RFM). This can be interpreted as a counterpart of the standard CDF-based smoothing approximation of an indicator function, extended to the Pareto-dominated region. For the calculation, although the cell-based decomposition is required for ${\cal A}_O^{\tilde{{\cal F}}^*_S}$, we can reuse the cells created in line 7 of Algorithm 1. We will include all the discussion so far somewhere in the main paper or the appendix. Once again, thank you for the constructive feedback.
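The CDF-based smoothing sketched in the reply above factorizes over dimensions for a single axis-aligned cell. The following toy sketch shows the one-sided-cell case; the full Pareto-dominated region would sum such terms over the decomposition cells, and the bounds and $\rho$ here are illustrative only:

```python
import math

def gauss_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def smoothed_box_indicator(f, upper, rho):
    """Smooth approximation of I(f_l < upper_l for all l):
    P(g in cell) with g ~ N(f, rho * I), which factorizes into a
    product of 1-D Gaussian CDFs for an axis-aligned cell."""
    return math.prod(gauss_cdf((u - fl) / math.sqrt(rho))
                     for fl, u in zip(f, upper))

# As rho -> 0 the smoothed value approaches the hard indicator.
print(smoothed_box_indicator([0.0, 0.0], [1.0, 1.0], rho=1e-6))  # ~ 1.0
print(smoothed_box_indicator([2.0, 0.0], [1.0, 1.0], rho=1e-6))  # ~ 0.0
```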
Generalization Analysis for Controllable Learning
Accept (poster)
Summary: The paper establishes a statistical generalization theory for controllable learning, which enables machine learning models to dynamically adapt to task requirements during testing. This adaptability is crucial in many real-world applications of decision systems, such as recommender systems and information retrieval, where models need to accommodate diverse user preferences and changing objectives. Despite its empirical success, the generalization performance of controllable learning methods has not been comprehensively analyzed. To fill this gap, the authors propose a unified theoretical framework for controllable learning and introduce a vector-contraction inequality, which yields the tightest known generalization bounds. These bounds are largely independent of the number of task targets, except for logarithmic factors. The key mathematical technique involves the development of a novel vector-contraction inequality. The paper then derives generalization bounds for two common controllable learning methods: embedding-based methods, which adjust model inputs, and hypernetwork-based methods, which generate task-specific model parameters. Additionally, the authors present modular theoretical results for frequently used control and prediction functions, offering broad theoretical guarantees for controllable learning methods. The findings enhance the understanding of controllability in machine learning and provide new insights for further research. Claims And Evidence: I think the claims are supported by theoretical results and there is no concern in my opinion regarding the theoretical results in this paper. Methods And Evaluation Criteria: N/A. This work is entirely theoretical, which provides a new statistical/mathematical theory for the generalization of controllable machine learning. Theoretical Claims: Yes. I read the proof of Lemma 4.4 (the contraction inequality) in Appendix A.3.2, which is claimed to be the main technical tool for theoretical analysis. 
The proof looks valid. I also looked at the proof sketches for Theorems 4.5 and 4.7, though I did not check all the details of the proofs of the other lemmas in the appendix. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: This paper falls into the broader scientific area of statistical/mathematical theory for the generalization of machine learning models and methods. To this end, this paper first develops a new mathematical theorem, a vector-contraction inequality for Rademacher complexity (i.e., Lemma 4.4). It appears to me that this contribution is of independent interest and might be applied to the theoretical analysis of controllable learning by other works. Then, based on this tool, a unified theory of controllable learning is derived in Section 4, which is another potentially extendable theoretical technique. Essential References Not Discussed: This paper describes two major areas of empirical work in controllable machine learning: the embedding-based method, where specific tasks are mapped into embeddings, and the hypernetwork-based method, which generates new model parameters from given task requirements. Other Strengths And Weaknesses: Strength: **1** Clarity: This paper is clearly written, effectively conveying its main ideas. The structure is well-organized, with general theoretical results presented in Section 4 and their application to specific machine learning frameworks in Section 5. Overall, the paper is easy to follow. **2** Originality: The paper claims to be the first theoretical work on the generalization of controllable learning. Weakness: **1** Technical challenge and significance: It is not immediately clear what major technical challenges were encountered in developing the theory in this work. While the paper presents the first generalization theory for controllable learning, this alone does not sufficiently demonstrate that it tackled and resolved a significant technical difficulty.
Other Comments Or Suggestions: I believe this paper makes a valid contribution to generalization theory. However, there is room for improvement. I would suggest including a discussion on the major technical challenges in the theoretical analysis. This would clarify what has hindered previous works from establishing a generalization theory for controllable learning using conventional analytical tools. For instance, the discussion could briefly explain why the number of tasks poses difficulties for traditional analysis methods. While the discussion does not need to be long, it should explicitly highlight the obstacles that have made developing a generalization theory for controllable learning challenging and how this paper overcomes them. This would help reinforce the paper's contribution, which is particularly important for a work that claims to be the first theoretical study of this learning method. Questions For Authors: Please see my suggestion regarding major technical novelty in the above section **Comments or Suggestions**. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper. The following are our responses to the Questions: **1. Response to Suggestions.** The difficulty of theoretical analysis for controllable learning lies in two aspects. First, the development of controllable learning is driven by real-world applications, and the methods developed are very different. It is difficult to establish a unified theoretical framework to cover all these typical methods. The lack of a unified framework is the most intuitive and primary obstacle to using existing analytical tools to establish effective theoretical bounds for controllable learning. In addition, controllable learning needs to dynamically and adaptively respond to task requirements and may involve many completely different task targets, which increases the challenge of explicitly reflecting these factors in generalization analysis. Moreover, an effective unified theoretical framework should also be extensible and provide interfaces that can cover a wider variety of methods that may be developed in the future. The definition of the controllable learning function class and the modular theoretical results provided for commonly used control functions and prediction functions in controllable learning ensure the establishment of a unified theoretical framework. Second, there is the challenge of reducing the dependency of the bounds on the number of task targets $c$.
The analysis of the number of task targets in the bounds can be traced back to a basic bound with a linear dependency on $c$, which comes from the following inequality: $$\mathbb{E}\left[\sup_{\boldsymbol{f} \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^c \epsilon_{ij} f_j\left(\boldsymbol{x}_{i}\right)\right] \leq c \max_j \mathbb{E}\left[\sup_{f_j} \frac{1}{n} \sum_{i=1}^n \epsilon_{ij} f_j\left(\boldsymbol{x}_{i}\right) \right].$$ The dependency of the bounds on $c$ can be improved to square-root or logarithmic by preserving the coupling among different components reflected by the constraint, i.e., $\|\boldsymbol{w}\| \leq \Lambda$. However, each task target in controllable learning can be completely different, hence the coupling relationship between the weights of the functions corresponding to each task target needs to be decoupled. Hence, we introduce the projection operator. We found that the square-root dependency of the bound on $c$ is inevitable for $\ell_2$-Lipschitz losses, which essentially comes from the $\sqrt{c}$ factor in the radius of the empirical $\ell_2$ cover of the projection function class, but the Lipschitz continuity of the loss function with respect to the $\ell_\infty$ norm can eliminate it. Thus, tight bounds with no dependency on $c$, up to logarithmic terms, can be derived. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. The rebuttal addressed my concerns and I updated my rating accordingly. Best, reviewer --- Reply to Comment 1.1.1: Comment: Dear Reviewer 9TGj, Thank you for your insightful feedback, for your constructive comments, and for updating your score. Best regards, Authors
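For readers less familiar with contraction arguments, the trade-off described above can be sketched with the classical $\ell_2$ vector-contraction inequality of Maurer (2016) — an illustrative standard result, not the paper's Lemma 4.4:

```latex
% Illustrative only: the classical l2 vector-contraction (Maurer, 2016),
% not the paper's Lemma 4.4. If each h_i is L-Lipschitz w.r.t. the l2 norm:
\mathbb{E}_{\epsilon}\left[\sup_{\boldsymbol{f} \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^n \epsilon_i\, h_i\big(\boldsymbol{f}(\boldsymbol{x}_i)\big)\right]
\;\leq\;
\sqrt{2}\, L\,
\mathbb{E}_{\epsilon}\left[\sup_{\boldsymbol{f} \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^c \epsilon_{ij}\, f_j(\boldsymbol{x}_i)\right].
```

Bounding the right-hand side through an empirical $\ell_2$ cover whose radius grows like $\sqrt{c}$ is what leaves the square-root dependency the rebuttal mentions; assuming Lipschitz continuity with respect to the $\ell_\infty$ norm instead is what reduces the cost to logarithmic factors.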
Summary: This paper investigates the theoretical bounds of controllable learning methods, highlighting the need for a deeper understanding of controllable learning methods from a generalization perspective. This paper first gives a formal definition of the general function classes of controllable learning, then develops a novel vector-contraction inequality and derives tight generalization bounds for the general function classes of controllable learning. In addition, this paper analyzes two typical controllable learning methods and derives corresponding generalization bounds for specific methods. These theoretical results reveal the impact of different manipulation functions and control functions on the generalization bounds. ## update after rebuttal Claims And Evidence: This paper investigates the generalization of controllable learning methods for the first time, establishes a theoretical framework for controllable learning, and derives a series of tight generalization bounds. These theoretical results provide strong evidence for the claim that different manipulation and control functions will affect the constants in the generalization bounds. Methods And Evaluation Criteria: Yes. The proposed theoretical analysis method can produce tight theoretical bounds. Theoretical Claims: Yes. I have reviewed the proofs and the theoretical claims are correct. Experimental Designs Or Analyses: Yes. The analyses in this paper are complete and all the analyses are valid. Supplementary Material: Yes. I have reviewed the detailed proofs of the theoretical results in the supplementary material. Relation To Broader Scientific Literature: Previous research on controllability mainly focused on methods in information retrieval and recommender systems, and the related research lacked theoretical analysis.
This work is an important step towards theoretical understanding of the controllability of machine learning from a generalization perspective, and it also explains why controllable learning methods have good generalization properties. Essential References Not Discussed: All the essential related works have been adequately discussed. Other Strengths And Weaknesses: Strengths: 1. This paper is the first work to theoretically analyze controllability. It not only provides a basic framework for the theoretical analysis of controllable learning methods, but also provides some general analysis methods and theoretical tools applicable to typical controllable learning methods, which is a key contribution to a deeper understanding of controllability in machine learning. 2. This paper introduces many valuable theoretical results and techniques, among which the proposed novel vector-contraction inequality may have potential applications in a wider range of problem settings. In addition, the capacity-based theoretical results on deep neural networks can also provide deeper insights into understanding the role of these models in controllable learning methods. 3. The theoretical results in this paper are all rigorously proved, and a lot of analysis and explanations make the theoretical results easy to understand. The assumptions corresponding to the theoretical results are reasonable and consistent with the practical situations, and the argumentation process is logically coherent. Weaknesses: 1. I noticed that the theoretical results in Section 5 are specific to the weighted Hamming loss, but the multi-target bipartite ranking loss is also analyzed in Section 4. Can the theoretical results in Section 5 be directly applied to the multi-target bipartite ranking loss? 2. 
Although the theoretical results presented in this paper are important, there seems to be a lack of discussion on how to use these theoretical results to guide the design of controllable learning methods in real scenarios. Other Comments Or Suggestions: Typos: Line 421, right column, "explanation the models" --> "explanation for the models". Questions For Authors: 1. Can these theoretical results provide effective guidance for model settings of different modules of controllable learning methods in practical situations? 2. The assumptions corresponding to the proposed vector-contraction inequality seem to be general. Can this lemma be applied to theoretical analysis of other problem settings? Or can it be applied to other loss functions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper. The following are our responses to the Questions: **1. Response to the Weakness 1.** The theoretical results in Section 5 can be applied to the multi-target bipartite ranking loss. When the specific controllable learning method involves the multi-target bipartite ranking loss, we only need to replace the corresponding constants in Theorem 4.7 (instead of Theorem 4.5), as in the proof process in Subsection 5.3. **2. Response to the Weakness 2 and Question 1.** Controllable learning often involves the selection of control functions and prediction functions corresponding to task targets. Although increasing the capacity of the model can enhance the representation ability of controllable learning, capacity should not be increased blindly. Our theoretical results show that for embedding-based controllable learning methods, the control function should use a large-capacity model, and for hypernetwork-based controllable learning methods, the main model used for classification (rather than the hypernetwork) should use a large-capacity model. These theoretical results are consistent with practical methods and can serve as a general guide for the structural design of controllable learning models. **3. Response to the Question 2.** The proposed vector-contraction inequality only assumes that the loss is Lipschitz continuous with respect to the $\ell_\infty$ norm. In addition, many loss functions in other problem settings also satisfy this assumption, such as multi-class classification [1], [2], multi-label learning [3], clustering [4], etc., which means that our theoretical results can also be extended to these problems and shows that the assumption on the loss function does not lose generality. [1] Maksim Lapin, Matthias Hein, Bernt Schiele. "Top-k Multiclass SVM", NIPS 2015.
[2] Yunwen Lei, Ürün Dogan, Ding-Xuan Zhou, Marius Kloft. "Data-dependent generalization bounds for multi-class classification", IEEE TIT 2019. [3] Yi-Fan Zhang, Min-Ling Zhang. "Generalization Analysis for Multi-Label Learning", ICML 2024. [4] Shaojie Li, Yong Liu. "Sharper Generalization Bounds for Clustering", ICML 2021.
Summary: This paper analyzes the generalization of controllable learning methods to understand controllability in trustworthy machine learning better. It establishes a unified and practical framework for the theoretical study of controllable learning methods and proposes a novel vector-contraction inequality that can derive tight generalization bounds for them. In addition, it derives generalization bounds for two typical controllable learning methods: embedding-based and hypernetwork methods. These results reveal the impact of different manipulation methods based on inputs and control functions. Claims And Evidence: This paper mainly proposes two claims: 1. The generalization analysis of controllable learning needs to address two key challenges, i.e., establishing the relationship between generalization bound and controllability and reducing the dependency of generalization bounds on the number of task targets. 2. Different manipulation methods based on the input and control function will lead to significant differences in the constant $A$ of the generalization bounds in Theorems 4.5 and 4.7. The theoretical results in this paper provide clear and convincing evidence for these two claims. Methods And Evaluation Criteria: Although specific evaluation metrics are not available to measure the generalization performance of controllable learning methods, the two metrics analyzed in this paper are commonly used in practice. Theoretical Claims: I have checked most of the theoretical proofs, and the proofs' correctness supports the claims made in the main paper. Experimental Designs Or Analyses: This is a purely theoretical paper. All the analyses in the Remarks in this paper are reasonable and valid. These analyses are well structured and well support the claims of this paper. Supplementary Material: I have reviewed all the supplementary materials, which mainly include complete proofs of the theoretical results in the main paper.
Relation To Broader Scientific Literature: Although existing controllable learning methods have demonstrated empirical success, theoretical research on controllable learning methods is completely unexplored. This paper lays the foundation for theoretically understanding the success of controllable learning methods from the perspective of generalization. Essential References Not Discussed: This paper adequately discusses the related works. Other Strengths And Weaknesses: Strengths: 1) This paper provides a theoretical foundation for understanding the generalization performance of controllable learning methods and for further analyzing a wider range of controllable learning methods. The proposed unified theoretical framework covers two typical controllable learning methods well, and the derived generalization bound has a weak dependency on the number of task targets, which is consistent with the practical situation, that is, it provides a theoretical guarantee that proper controllable learning methods can handle multiple task targets well. 2) This paper proposes a novel vector-contraction inequality specific to controllable learning. This inequality can induce a tight generalization bound with a logarithmic dependency on the number of task targets. It decouples the associations between multiple task targets, consistent with the potentially significant differences between task targets in practice and the losses corresponding to each task target in the weighted Hamming loss. 3) This paper provides generalization analysis for two typical controllable learning methods. The modular decomposition of theoretical results on specific controllable learning methods enhances readability and provides theoretical tools and preliminary results for further analysis in the future. It also provides an interface for subsequent expansion of modular theoretical results on more models. The theoretical techniques of deep models in this paper may be of independent interest. 
4) This paper is well written, well structured, and easy to follow. The motivations and contributions are stated clearly. Each theoretical result is explained and analyzed in detail, making the theoretical results easy to understand. In addition, the generalization bounds of specific controllable learning methods are consistent with the selection of models in practice, which explains the empirical success of existing controllable learning methods. Weaknesses: The theoretical tools and intermediate steps involved in the proofs of theoretical results in the main paper are vague. Although detailed proofs are provided in the appendix, the lack of theoretical details in the main paper may prevent the theoretical contributions from being presented at their best. Other Comments Or Suggestions: I suggest providing proof sketches in the main paper to show the proof process and key theoretical techniques. Questions For Authors: Although specific evaluation metrics for the generalization performance of controllable learning are currently lacking, the question is whether the theoretical results obtained from the two losses involved in this paper are valid without loss of generality, that is, whether the theoretical results obtained from the relevant assumptions can be extended to potentially more evaluation metrics that will appear in the future. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper. The following are our responses to the Questions: **1. Response to the Weakness and Suggestion.** All proof sketches and detailed proofs of the theoretical results in this paper have been moved to the appendix due to the limitation of the paper length. We will add all proof sketches to the main body of the paper in the revised version to better demonstrate key theoretical techniques and improve the readability of the paper. **2. Response to the Question.** Although this paper only studies two loss functions, namely weighted Hamming loss and multi-target bipartite ranking loss, in fact our theoretical results are not limited to these two forms of loss. Note that our theoretical results only assume that the loss is Lipschitz continuous with respect to the $\ell_\infty$ norm. This assumption is relatively mild and easy to satisfy. It only requires that the derivative of the loss function is bounded, which can also be guaranteed to some extent by the boundedness of the loss function. Hence, for potential evaluation metrics in the future, it is only necessary to verify that they are Lipschitz continuous with respect to the $\ell_\infty$ norm, and the theoretical results here are also applicable to them, differing only in the Lipschitz constant $\mu$, as we showed in Proposition 4.3.
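As a concrete check of this assumption — an illustrative example of ours, not one taken from the paper — softmax cross-entropy is $\ell_\infty$-Lipschitz with constant $\mu = 2$:

```latex
% Illustrative example (not from the paper): softmax cross-entropy
% satisfies the l_infty-Lipschitz assumption with mu = 2.
\ell_y(\boldsymbol{u}) = -\log \frac{e^{u_y}}{\sum_{j=1}^{c} e^{u_j}},
\qquad
\nabla \ell_y(\boldsymbol{u}) = \operatorname{softmax}(\boldsymbol{u}) - \boldsymbol{e}_y .
% Since ||softmax(u)||_1 = 1 and ||e_y||_1 = 1, the gradient's l1 norm
% is at most 2; by the mean value theorem and Hölder's inequality:
\|\nabla \ell_y(\boldsymbol{u})\|_1 \leq 2
\quad \Longrightarrow \quad
|\ell_y(\boldsymbol{u}) - \ell_y(\boldsymbol{v})|
\leq 2\, \|\boldsymbol{u} - \boldsymbol{v}\|_\infty .
```

The same bounded-gradient argument is what one would verify for any future evaluation metric before applying the bounds.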
Summary: This work focuses on the generalization analysis of the controllable learning scenario. It establishes a unified theoretical framework and develops a novel vector-contraction inequality for controllable learning based on the Lipschitz continuity of loss functions. The authors first formalize the function class of controllable learning framework, and derive a general Rademacher complexity upper bound for the loss function space based on the Lipschitz continuity and the Rademacher complexity of the controllable learning function class. Based on the general analysis, the authors give the Rademacher complexities and the generalization analyses for Hamming loss and ranking loss that is used in controllable learning. For some specific learning methods, i.e., embedding-based methods and hypernetwork-based methods, the authors give the analyses of the empirical Rademacher complexities respectively. In addition, they also give the Rademacher complexity of the class of feedforward neural networks, graph neural networks and transformers. Claims And Evidence: This paper mainly considers the generalization ability of the controllable learning scenario. The authors use the empirical Rademacher complexity as the fundamental tool for generalization analysis, which is quite standard. I went through the proofs of the main theorems in this paper, and I think that the proofs are convincing and correct. Methods And Evaluation Criteria: The proposed theoretical results make sense for the problem. This paper studies the Hamming loss and ranking loss for controllable learning, which is widely adopted for this problem. The authors study the generalization ability of these two loss functions based on the Rademacher complexity, which is also a standard tool that is suitable for this problem. Theoretical Claims: I have roughly checked the correctness of two main theorems, and I think the proofs are correct. 
For Theorems 4.5 and 4.7, the proofs basically follow the standard Rademacher analysis for multi-label learning in [Zhang and Zhang, 2024], published in ICML 2024. I think the proofs of Theorems 4.5 and 4.7 are correct. For the theorems in Section 5, which give the Rademacher complexity upper bounds for some specific models, e.g., feedforward neural networks, graph neural networks, and transformers, the proofs basically follow the upper bound of the operator norm of each component in deep learning and the Lipschitz continuity of activation functions, which is also quite standard, as in [Neyshabur et al., 2015; Bartlett et al., 2017; Trauger and Tewari, 2024]. Experimental Designs Or Analyses: This paper focuses on theoretical understanding, and hence no experiment is involved. Supplementary Material: I went through the whole appendix, which gives the detailed proofs of the theoretical results in the main paper. Relation To Broader Scientific Literature: I think that the key contributions of this paper are important. Since many algorithms for controllable learning have been proposed, this paper first establishes a unified framework to analyze the generalization ability of controllable learning algorithms. It gives a formal definition of the function class of controllable learning, and applies the empirical Rademacher complexity to derive the generalization bound, which verifies the effectiveness of existing controllable learning algorithms theoretically. Essential References Not Discussed: I don’t think that the authors missed any important related works. The authors cite most theoretical works on the generalization analysis of machine learning algorithms. Most of the cited works are state-of-the-art results in this area. Other Strengths And Weaknesses: The detailed strengths of this work are as follows: 1. This paper is well-written and easy to follow. The problems are clearly stated and the key contributions make sense. 2. The originality of this paper is good.
The authors adapt the theoretical tools used to analyze the generalization ability of multi-label learning to the controllable learning scenario. This work gives a theoretical understanding of existing controllable learning algorithms, and hence has a relatively significant impact on this area. Despite the strengths above, this work has the following weaknesses: 1. The formulation of controllable learning in Section 3 is a little bit confusing. It could be better if some examples were provided to further explain the function class in Eqn. (2). Other Comments Or Suggestions: The clarity of this paper would be further improved if the authors were to give some real-world examples of the controllable learning scenario. Questions For Authors: What is the relationship between controllable learning and multi-label learning? It seems that, from Section 3, controllable learning is a special application of multi-label learning. Can the theoretical results generalize to the general multi-label learning scenario? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments and active interest in helping us improve the quality of the paper. The following are our responses to the Questions: **1. Response to Weakness.** We will add relevant explanations after Eqn. (2). For embedding-based controllable learning methods, the task requirement $\boldsymbol{z}$ can be editable user profiles, the control function $\psi$ often uses Transformers, and the nonlinear mapping $\zeta$ induced by the classifier can use FNNs. For hypernetwork-based controllable learning methods, the task requirement can be a task indicator, the model corresponding to the control function is a hypernetwork, and the nonlinear mapping $\zeta$ induced by the classifier can use GCN-based structures or Transformer-based models. **2. Response to Suggestion.** We provide two real-world examples of controllable learning scenarios as follows. 1. A news aggregator adjusts article rankings in real time using user-specified rules (e.g., exclude gaming content today) to promote diverse topics. The control function modifies inputs/parameters to enforce filters while keeping recommendations relevant. 2. A trading algorithm adapts portfolio strategies based on real-time goals (e.g., protect capital during market drops) to minimize losses and maximize risk-adjusted returns. The control function tunes parameters dynamically to balance objectives without retraining. These examples show how our framework enables systems to adapt inputs/parameters at test time to meet evolving task requirements (e.g., content preferences, market conditions) while ensuring generalization guarantees for targets like diversity and risk management. **3. Response to Question.** From the perspective of the model's output, the outputs of both controllable learning and multi-label learning can be expressed as vector-valued outputs. From the perspective of the model itself and the input, they are different.
Controllable learning places more emphasis on task requirements, i.e., the learner can adaptively adjust to dynamically respond to the requirements of different task targets, while multi-label learning does not involve task requirements corresponding to different task targets, so controllable learning is not a special case of multi-label learning. However, when the control function $\psi$ in controllable learning is $\emptyset$, controllable learning will degenerate into a specific multi-label method, i.e., multi-label learning based on the label-specific representation learning strategy. In this case, the relevant theoretical results can be generalized to multi-label learning scenarios.
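To make the distinction between the two method families discussed in this rebuttal concrete, here is a minimal NumPy sketch; all shapes, linear layers, and names are invented for illustration (the actual methods use Transformers, hypernetworks, and far richer architectures):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

# Embedding-based style: the control function psi maps the task
# requirement z to an embedding that modifies the INPUT of the predictor.
def embedding_based(x, z, W_psi, W_out):
    e = relu(W_psi @ z)                    # psi(z): task embedding
    return W_out @ np.concatenate([x, e])  # predict on the modified input

# Hypernetwork-based style: the control function generates the
# PARAMETERS of the prediction model from the task requirement z.
def hypernetwork_based(x, z, W_hyper, out_dim):
    W = (W_hyper @ z).reshape(out_dim, x.size)  # generated weight matrix
    return W @ x

d_x, d_z, d_e, c = 4, 3, 2, 5  # toy dimensions, chosen arbitrarily
x, z = rng.normal(size=d_x), rng.normal(size=d_z)
W_psi = rng.normal(size=(d_e, d_z))
W_out = rng.normal(size=(c, d_x + d_e))
W_hyper = rng.normal(size=(c * d_x, d_z))

print(embedding_based(x, z, W_psi, W_out).shape)   # (5,)
print(hypernetwork_based(x, z, W_hyper, c).shape)  # (5,)
```

Both functions produce a $c$-dimensional vector-valued output (one score per task target), which is why the outputs of controllable learning and multi-label learning look alike even though the role of the task requirement $\boldsymbol{z}$ differs.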
RZ-NAS: Enhancing LLM-guided Neural Architecture Search via Reflective Zero-Cost Strategy
Accept (poster)
Summary: This paper studies LLMs in NAS from a novel perspective. It begins by addressing the challenges in existing LLM-in-NAS strategies and explores the potential of combining LLM’s code- and text-level comprehension with zero-cost computation proxies. The proposed approach introduces the reflective module within LLM-based zero-cost NAS to improve the process of generating random populations. This method allows for architecture generation using humanoid reflections and eliminates training-related time costs. Extensive experimental results show the effectiveness across multiple search spaces and benchmarks. ## update after rebuttal The authors have addressed my concerns, and I would like to retain my positive score. Claims And Evidence: The author focuses on LLM-based zero-cost search methods and presents experimental comparisons to highlight their superiority over traditional NAS and LLM-in-NAS methods. It also explores various zero-cost proxies, demonstrating the generalizability of the approach across multiple benchmarks. Additionally, the paper performs an ablation study, confirming the effectiveness of each component. Methods And Evaluation Criteria: The analysis of the issue is thorough, and the explanation of each step of the proposed method is detailed and well-reasoned. In Section 3, the author illustrated the interaction pipeline, the construction of the framework, and each prompt module, with a detailed illustration of how each module in the algorithm is built. The evaluation criteria are robust. In Section 4, the author conducted extensive experiments and provided in-depth analysis, comparing the proposed method not only with traditional NAS methods but also with LLM-in-NAS approaches. Theoretical Claims: Yes, the author provides theoretical support in Sections 2 and 3. Experimental Designs Or Analyses: Yes.
The author compares different existing methods, including traditional NAS, training-free NAS, and LLM-in-NAS methods, across multiple tasks (e.g., image classification and object detection) on different benchmarks like CIFAR-10, CIFAR-100, and ImageNet. Additionally, it conducts ablation experiments to demonstrate the effectiveness of different modules within the framework. Supplementary Material: Yes, the author provides the code repository and the proposed method is reproducible. Relation To Broader Scientific Literature: This paper introduces related work in Zero-Cost NAS and LLMs, comparing the most relevant previous studies across key aspects. In the literature review, the authors provide a comprehensive analysis of previous works on LLM-in-NAS, evolutionary NAS algorithms, LLM-in-evolutionary methods, and zero-cost proxies. While previous studies mainly focused on enhancing NAS performance through LLM’s text-level or code-level understanding, this paper attempts to overcome these limitations by exploiting the LLM’s understanding capabilities. By comparing factors such as search efficiency and the role of LLMs in NAS, the authors address the existing challenges in current LLM-to-NAS methods. Essential References Not Discussed: Key related works are all discussed. Other Strengths And Weaknesses: Strengths: The paper is thoughtfully structured and presents a clear narrative. It begins by establishing the background and outlining the problem being addressed. The writing is direct, making it easy to follow the progression of the work. The innovation of the proposed method and its difference from others is clearly shown in Table 1. The methodology is detailed, offering in-depth explanations of the RZ-NAS framework and its various components. This allows for a clear understanding of how the framework operates and its individual elements.
The author highlights the significant improvements brought by combining LLMs with Zero-Cost NAS proxies and shows the broad applicability of the proposed framework, demonstrating its flexibility and its potential for generalization across various Zero-Cost strategies. Weaknesses: The authors should provide more theoretical explanations or literature reviews on current reflection modules. How are these modules utilized in other LLM-driven algorithms? What sets them apart from the reflection mechanism in RZ-NAS? Additionally, it would be helpful to explain why the reflection module works from a theoretical or mechanistic perspective. While the paper provides ablation analysis to highlight the importance of mutation parts using LLMs, the strategy of Zero-Cost NAS also plays a critical role in the proposed method. The authors could enhance the paper by providing additional performance analysis without Zero-Cost proxies, along with a discussion on potential directions for further improving the method’s performance. RZ-NAS is based on the framework of evolutionary algorithms. LLM-driven GE [1] and other similar methods utilize an LLM to directly understand and modify evolutionary code. Why did you choose not to directly modify the code of the architecture guided by LLM? It would be better to give more explanations for better understanding the contribution of RZ-NAS. [1] LLM Guided Evolution - The Automation of Models Advancing Models, GECCO '24 Other Comments Or Suggestions: see Strengths And Weaknesses Questions For Authors: How do you ensure that the generated architecture can be converted into valid and qualified architectures? Have you encountered any issues where the generated architectures introduce bugs? Could you clarify the process used to restrict the generation of architectures to ensure that they are compilable and meet the necessary criteria? The results in the table of test and validation accuracy on NAS-Bench-201 appear a bit unclear.
Do the results in bold indicate the best performance compared to the existing LLM-based and zero-cost methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough and constructive comments! We truly value your kind words about our contributions. We hope our responses address your concerns and enhance your view of our work! If there are any additional comments or questions, you can post them in the next stage. **1. More theoretical explanations or literature reviews on current reflection modules** Thank you for your insightful suggestions. Reviewer yPSd raised a similar question. Due to the character limit, the detailed responses, including a literature review and theoretical explanations of the LLM reflection module, can be found in the 1st and 4th responses to Reviewer yPSd. **2. Discussion of the case without Zero-Cost proxies** We use zero-cost proxies because of the heavy time cost of NAS search, which otherwise requires full training cycles to evaluate candidate architectures. Our proposed LLM-in-NAS framework is designed to be flexible and can also use validation accuracy (instead of proxies). **3. Why not directly modify the architecture code guided by the LLM** The main reasons for not directly modifying the architecture code are two-fold. ***Complex Search Spaces*** Previous works directly modify architecture code with an LLM in simple search spaces (e.g., EvoPrompting’s chain CNNs on MNIST), while our task is based on standard NAS search spaces (e.g., DARTS, NAS-Bench), involving multi-module combinations and cross-file dependencies (e.g., the coupling between the Genotype and Network classes). ***Genotype Representation*** For both the macro and micro search spaces, we use genotype strings instead. This aligns with previous NAS frameworks like DARTS and ENAS that also use genotypes. The genotype representation ensures that the network code remains unchanged, avoiding structural bias from LLM-generated code. Additionally, we can limit LLM errors to fixable issues and filter out invalid structures via regex-validated syntax. **4. 
How to ensure generated architectures are valid and functional** We provide the input and output of the architecture in JSON format in the in-context examples. Both of our search spaces (micro and macro) use formatted input and output. Previous LLM-related work has shown that enforcing formatted input and output in in-context examples significantly improves the results [1]. Additionally, since the candidate operators in the search space are fixed, the LLM is constrained to selecting operators that exist within the search space during the mutation process. This is specifically explained in the `Define Search Space` section of the system prompt in the Appendix, where it states, "I should generate an optimal architecture by mutating an operation from this file"—as all operators are defined in the `operator.py` file. [1] Mind Your Format: Towards Consistent Evaluation of In-Context Learning Improvements **5. Any issues where the generated architectures introduce bugs?** Yes, we do encounter issues where the LLM’s output has problems. For example, sometimes the output may have missing operators, causing the architecture to be invalid in the validation module. However, such cases are very rare: in about 100 iterations, we typically encounter around 2 such architectures. Since we impose strict input and output format constraints, the LLM is able to understand the architecture's format. When an architecture is invalid, the exception for that iteration is passed to the reflection module as a parameter, and the architecture is not added to the population. In the next iteration, we randomly select a new architecture for mutation, reflect on the error from the previous invalid architecture, and guide a better mutation. **6. Clarify the process used to restrict the generation of architectures** After the LLM generates an architecture, we connect it to a validation module that checks the architecture. 
This module verifies whether the operators belong to the search space and checks the input and output channels. For the micro search space, it checks the number of cells and edges. For the macro search space, it checks the number of layers. If the architecture is invalid, it is not added to the population. The validation module then outputs an exception, which is passed to the reflection module. The reflection module analyzes the error and provides a reflection result. A random architecture is then selected from the population for the next iteration, starting the mutation process guided by the LLM. **7. The NAS-Bench-201 accuracy results in the table are unclear.** Sorry for the confusion. We have rechecked the data and confirmed that the results in bold indicate the best method compared to the existing LLM-based and zero-cost methods. The revised version will present the accuracy results clearly and include an explanation in the footnote. We have also revalidated all experimental data and confirmed the consistency of the results. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns, and I appreciate their efforts in providing a discussion on the case without relying on zero-cost proxies. They have also clarified the experimental results on NAS-Bench-201 and explained how to ensure that the generated architectures are valid and functional. I am generally positive about this work, as it explores the use of LLMs in Neural Architecture Search (NAS) from a novel perspective. The paper begins by identifying the limitations of existing LLM-in-NAS approaches and investigates the potential of integrating large language models’ code- and text-level understanding with zero-cost performance proxies. --- Reply to Comment 1.1.1: Comment: Many thanks for your helpful review and positive feedback. We are glad to hear that our response has addressed your concerns. Your evaluation and support of our work are greatly appreciated! 
We are committed to integrating the clarifications you suggested into the revised version. Thank you again for your detailed review and consideration.
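For concreteness, the validation step described in responses 5 and 6 of the rebuttal above (operator-membership and edge-count checks, with invalid candidates reporting an exception to the reflection module) could be sketched roughly as follows. This is a hypothetical sketch, not the authors' implementation; it assumes a cell genotype given as a list of operator names over the standard NAS-Bench-201 vocabulary (5 candidate operators, 6 edges per cell):

```python
# Hedged sketch of an architecture-validation step: confirm every operator in
# an LLM-proposed genotype belongs to the fixed search space and that the
# genotype has the expected number of edges; invalid candidates return an
# error message that would be forwarded to the reflection module.
SEARCH_SPACE_OPS = {"none", "skip_connect", "nor_conv_1x1",
                    "nor_conv_3x3", "avg_pool_3x3"}

def validate_genotype(genotype, num_edges=6):
    """Return (is_valid, error_message) for a candidate cell genotype."""
    if len(genotype) != num_edges:
        return False, f"expected {num_edges} edges, got {len(genotype)}"
    for op in genotype:
        if op not in SEARCH_SPACE_OPS:
            return False, f"unknown operator: {op!r}"
    return True, ""

ok, err = validate_genotype(["skip_connect", "nor_conv_3x3", "conv_7x7",
                             "avg_pool_3x3", "none", "nor_conv_1x1"])
# ok is False and err names the out-of-space operator "conv_7x7"; per the
# rebuttal's description, such a message would seed the next reflection
# instead of the architecture joining the population.
```

In the rebuttal's workflow, a `False` result means the candidate is discarded and the error string becomes input to the reflection module for the next mutation round.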
Summary: This paper rethinks the role of LLMs in NAS. It does not directly apply an LLM to NAS architecture optimization but instead optimizes the process of random population generation, utilizing the code- and text-level understanding of the LLM. The novelty lies in the proposed reflective strategy that achieves architecture generation with humanoid reflections and training-free metrics to address some key challenges in current LLM-to-NAS works, achieving good results on multiple search spaces and benchmarks. Claims And Evidence: The paper claims the effectiveness of the reflective module in LLM-to-NAS search methods, and gives empirical evidence to illustrate the superiority of the reflective module and its generalizability on multiple tasks. It compares the efficiency of the search process with and without the reflective module, showing that the reflective module leads to a significant reduction in the computational resources required to find optimal neural architectures. Methods And Evaluation Criteria: Overall, the evaluation approach is solid, which can be seen from the following aspects. 1. Comparing the method against established NAS proxies on relevant benchmarks like NAS-Bench-201, CIFAR-10, CIFAR-100, and ImageNet. 2. Comprehensive ablation experiments. 3. Inclusion of diverse tasks, such as image classification and object detection. 4. The use of GPT-4o for mutation generation and temperature sampling ensures diverse output. 5. Consistency with prior work, such as ZiCo and Zen-NAS, ensures fair comparison. Theoretical Claims: Yes. The paper primarily focuses on empirical results and provides insights through experimental evaluations. While there are no explicit theoretical claims for assessment, it would be beneficial to explain more about the theoretical foundations of the proposed method. Experimental Designs Or Analyses: Yes. The paper displays the results on image classification and detection tasks. 
The author conducted experiments based on widely adopted NAS benchmarks (CIFAR-10, CIFAR-100, and ImageNet) and search spaces (DARTS, NAS-Bench-201, MobileNet) and compared with existing NAS and LLM-to-NAS methods. Supplementary Material: Yes. The paper provides detailed prompts and a code repository along with instructions for running the proposed method. Relation To Broader Scientific Literature: This paper builds on existing research in Zero-Cost NAS and LLMs, offering a novel integration of both to optimize the architecture search process. The authors tried to address existing challenges in current LLM-to-NAS methods, such as slow search efficiency and underutilization of LLM capabilities. By leveraging evolutionary algorithms and Zero-Cost proxies, the paper provides a more comprehensive and effective framework for diverse NAS tasks. The approach combines the strengths of Zero-Cost proxies with LLMs' text- and code-level understanding, improving the efficiency of architecture exploration. The paper would benefit from a more extensive literature review, particularly on current reflection techniques from experimental and theoretical aspects, as the reflection technique has been developed in prompt engineering and code generation [1,2]. [1] ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution, NeurIPS 2024. [2] Large Language Model-Aided Evolutionary Search for Constrained Multiobjective Optimization. Essential References Not Discussed: N/A Other Strengths And Weaknesses: S1. It makes a novel, structured, general framework for applying LLMs to NAS tasks. The integration of LLMs with Zero-Cost NAS proxies addresses several challenges in traditional NAS and LLM-to-NAS methods. The innovative combination of these two techniques creates a more efficient framework for architecture search. S2. The author introduces a comprehensive framework that combines LLMs’ text- and code-level understanding. 
The paper compares this method with past LLM-to-NAS and traditional NAS approaches from multiple perspectives, demonstrating its versatility and effectiveness. S3. The experiments are extensive and detailed. The authors test the effectiveness of RZ-NAS across multiple NAS tasks and ablate the importance of using the reflection modules and the necessity of other components. W1. The paper utilizes the zero-cost NAS strategy, which may be limited to image classification or detection tasks. For other downstream tasks, such as graph node classification, that lack zero-cost proxy computation, does the proposed LLM-to-NAS strategy work without training-free computation? W2. It is suggested to explain the reflection module in LLM-based strategies in more detail. In addition, the discussion of reflection modules in previous research (e.g., evolutionary algorithms, NAS-related work) could be expanded. Other Comments Or Suggestions: I would appreciate the authors' response on my main concerns listed above. Questions For Authors: 1. Could the proposed method be applied to other NAS methods beyond image-related tasks? 2. As mentioned in the Relation To Broader Scientific Literature, the authors should provide more explanations of current reflection modules. Reflection modules appear to be used in language model development. What is the primary distinction compared to RZ-NAS? Further analysis and discussion on this would be valuable. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate the positive feedback and your recognition of our contributions! We will respond to your valuable comments and detailed suggestions! **1. More literature review on current reflection techniques** In the revised version, we will include a more complete literature review of existing reflection research, along with expanded theoretical insights in Section 2 ("Related Work"). The self-reflection mechanism in large language models (LLMs) refers to the model's ability to review its own outputs or reasoning process to identify and correct errors. *Reflexion*[1] first proposed a framework to strengthen LLM agents through language feedback. It introduced a "Generate-Evaluate-Reflect" loop, where an Actor generates a trajectory, an Evaluator computes rewards, and Self-Reflection provides natural language feedback to guide subsequent decisions. Reflexion outperformed previous GPT-4 results across multiple coding benchmarks. In the field of evolutionary computation, *ReEvo*[2] combined evolutionary search with human-like reflection to generate heuristic algorithms. Moreover, the reflection mechanism greatly improved model performance on tasks like multiple-choice reasoning[3]. **2. Detail the LLM-based reflection module** Thank you for this comment. Reviewer oqYW raised a similar question. Due to the character limit, the detailed responses, including the reflection mechanism and the input/output of the LLM reflection module, can be found in the first response to Reviewer oqYW. **3. Compare the reflection module of RZ-NAS to previous work** Unlike previous work (e.g., *Reflexion*) that used feedback mechanisms to improve closed tasks like code generation and question answering, RZ-NAS deals with a more open-ended search task. 
**Thus, we are the first to apply the reflection mechanism to NAS-related work, creating two types of reflection**: one is the internal reflection mechanism in the LLM's system prompt, which guides the LLM to discover potentially better solutions via the iteration process; the second is the external reflection module, providing reflection suggestions for each mutation. **4. More discussion on the theoretical analysis of the reflection module** In terms of theory, while the theoretical support for the reflection mechanism's effectiveness remains somewhat of a black box, there are related directions that can provide insight into it. For example, in the context of LLMs, the combination of reinforcement learning with human feedback (RLHF) allows the model to adjust based on feedback during the generation of outputs, thus simulating a process of reflection and improvement[4]. Our feedback model similarly simulates human-like reflection, providing mutation evaluations of the model's behavior during the iterative process, which can correct potential errors in the generated architecture and accelerate the learning of the architecture generation strategy through reflection. **5. Can the method be applied to NAS tasks beyond image-related tasks?** Yes. We further conducted experiments based on Graph NAS. Since the zc proxy is designed for image-based evaluation, in the Graph NAS scenario we can still use our proposed LLM-in-NAS framework with the evaluation metric changed to node classification accuracy. The evolutionary algorithm (EA)-based Graph NAS method combined with the LLM-in-NAS framework achieves superior results on widely used graph datasets. 
| Method | Search Strategy | Cora | CiteSeer | PubMed | CS | Physics |
| -------------------- | --------------- | --------------- | --------------- | --------------- | ---------------- | ---------------- |
| GCN | Manual | 83.1 ± 0.40 | 70.3 ± 0.70 | 79.0 ± 0.50 | 80.8 ± 0.10 | 96.38 ± 0.07 |
| GAT | Manual | 83.0 ± 0.70 | 72.8 ± 0.70 | 79.2 ± 0.30 | 81.3 ± 0.30 | 96.37 ± 0.20 |
| GraphNAS | RL | 84.2 ± 1.00 | 73.1 ± 0.90 | 79.6 ± 0.40 | 92.54 ± 0.09 | 93.87 ± 0.06 |
| AutoGraph | EA | 83.8 ± 0.50 | 74.4 ± 0.60 | 79.2 ± 0.30 | 92.54 ± 0.09 | 94.02 ± 0.12 |
| **Ours (AutoGraph)** | **EA** | **84.8 ± 0.20** | **75.0 ± 0.40** | **80.6 ± 0.28** | **93.14 ± 0.19** | **94.62 ± 0.10** |
| DFG-NAS | EA | 85.2 ± 0.20 | 75.3 ± 0.30 | 81.1 ± 0.30 | 93.44 ± 0.06 | 94.50 ± 0.21 |
| **Ours (DFG-NAS)** | **EA** | **85.8 ± 0.30** | **75.7 ± 0.70** | **83.1 ± 0.30** | **94.14 ± 0.06** | **94.95 ± 0.19** |

[1] Reflexion: Language Agents with Verbal Reinforcement Learning [2] ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution [3] Self-Reflection in LLM Agents: Effects on Problem-Solving Performance [4] Reinforcement Learning from Reflective Feedback (RLRF): Aligning and Improving LLMs via Fine-Grained Self-Reflection --- Rebuttal Comment 1.1: Comment: Thanks for the author responses, which have addressed my concerns. After carefully reading all the reviews as well as the detailed author responses, I still recommend accepting this paper for the following reasons: 1) Novel LLM-in-NAS framework. The paper is the first to combine the LLM’s code- and text-level comprehension with the reflection mechanism and zero-cost evaluation proxies. 2) Generalizability. The proposed LLM-in-NAS framework achieves excellent performance under different search spaces, zero-cost proxies, datasets, and search tasks. 3) Simple and Efficient. The proposed method is simple yet effective. Moreover, the search overhead is much smaller than existing LLM-in-NAS and traditional NAS methods. 
Reviewer oqYW and Reviewer TuuS raised some comments about clarity. The author gives detailed responses to each comment, and the clarity can be enhanced in the revised version. Overall, I believe the strengths of this paper outweigh the weaknesses and the work contributes to the NAS community, which could inspire researchers to further explore the potential of LLMs for neural architecture search. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed and thoughtful review. We appreciate your positive feedback and are glad that our responses have addressed your concerns. We will improve the clarity of the paper in the revised version as you suggested. We will refine the illustrations and add the additional experiments suggested to further enhance the understanding of our methodology and results. Thank you again for your detailed review and consideration.
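As one concrete illustration of the structured prompting repeatedly discussed in these rebuttals (a JSON-formatted user prompt carrying the current architecture, the zc proxy type, and the pre-mutation score), a hypothetical payload builder might look like the following. Every field name here is an illustrative assumption, not taken from the paper:

```python
import json

def build_mutation_prompt(genotype, proxy_name, proxy_score, reflection=""):
    """Assemble a JSON-formatted user prompt for one LLM-guided mutation step.
    All field names are hypothetical stand-ins for illustration."""
    payload = {
        "current_architecture": genotype,
        "zc_proxy": proxy_name,
        "pre_mutation_score": proxy_score,
        "reflection": reflection,
        "instruction": "Mutate exactly one operation to raise the proxy score.",
    }
    return json.dumps(payload, indent=2)

# A single step: the LLM receives the architecture, the proxy context, and
# the previous round's reflection in one machine-parseable payload.
prompt = build_mutation_prompt(
    ["skip_connect", "nor_conv_3x3", "avg_pool_3x3"],
    "ZiCo", 1532.7,
    reflection="Replacing a conv with pooling lowered the score last round.")
```

Keeping the payload machine-parseable on both sides (with the reply requested in the same JSON shape) is what lets a downstream validator reject malformed mutations, as the rebuttals describe.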
Summary: This paper proposes Reflective Zero-cost NAS (RZ-NAS), a Large-Language-Model-to-Neural-Architecture-Search (LLM-to-NAS) method that integrates evolutionary search with LLM guidance and zero-cost (zc) proxies. The approach leverages LLM reflection to iteratively refine architecture mutations, improving search efficiency. Empirical results demonstrate that RZ-NAS performs well on both micro and macro search spaces, and can outperform selecting architectures based solely on zc proxy scores. Claims And Evidence: The claims are mostly supported by the reported results, which show that: 1) **zc proxies serve as valid signals for LLM reflection** and offer a much lighter alternative to accuracy-based evaluation, which requires full neural architecture training. 2) **LLM guidance improves evolutionary search**, providing meaningful refinements beyond random mutation and simply selecting the highest zc-scoring architecture. However, the paper has **serious issues regarding clarity and depth of discussion**, making it difficult to fully verify the reported results. This lack of transparency weakens the strength of the claims. I will elaborate on these issues in the later sections. Methods And Evaluation Criteria: The overall method and evaluation criteria are mostly reasonable and align with the goals of improving LLM-to-NAS efficiency with zc proxies. The proposed method is clear from a high-level perspective, and the evaluation criteria of accuracy and correlation are standard in NAS. However, the clarity issues leave the methods abstract and black-box: - Reflection Mechanism Detail: The paper introduces a reflective module meant to help the LLM improve its mutation process based on feedback (e.g., zc scores and exceptions). However, it only provides a high-level description of this component. 
For instance, while it's clear that the LLM uses “humanoid reflections” to guide future mutations, the paper doesn’t detail how the reflection is processed, what specific signals trigger a change in strategy, or how this feedback loop is quantitatively integrated into the mutation process. - ZC Proxy Integration: The framework leverages several zc proxies to evaluate architectures without full training. However, it is unclear how the LLM understands the information of these zc proxies based on the text and code guides, how the LLM considers the existing zc scores to decide the next mutation, and what changes arise when different types of zc proxies are provided to the LLM. The whole process is a black box, represented by a black-box `GENERATINGMUTATION` function in Algorithm 1. While it is understandable that interpreting the decision of an LLM can be hard, it is expected to provide at least **some levels of interpretation** of an LLM's decision. From another perspective, the paper does not even provide **any output** of the LLM except the performance of the final selected architectures. Only the input prompts are clearly outlined. **It is expected for the authors to provide at least a few interactions with LLMs, including the full conversations, and how the LLM responds to the given zc scores or exceptions**. Given that the code is not provided, the authors are expected to provide far more details of the algorithm than the current version. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design suffers from even more severe clarity issues than the methodology. Below is a non-exhaustive list of important details that are expected but missing: - The **mutation action space** provided to the LLM. - The **number of architectures evaluated** during the search process. - The **evolutionary population size**. - The **error rate when combining with each zc proxy**, currently only briefly mentioned in the appendix of the ablation studies. 
- **Initialization details**—does the search start from scratch, or are some architectures collected before the first mutation? - How the **`VALIDATE` function in Algorithm 1** is implemented. - The **query costs of LLM**—are they significantly higher than local zc proxy computations? - The paper states that **five different temperature settings** were tested for the LLM, but only one result is reported—where are the other four? - **Implementation details of the baselines**—how were they run, and were zc proxy results obtained from the same number of evaluated architectures? - **Details on search spaces and benchmarks**—the paper even does not specify which are **micro vs. macro search spaces**. Given that so many critical details are missing, the lack of clarity makes the paper feel **incomplete**. Supplementary Material: Yes, I reviewed every page of the appendix. Relation To Broader Scientific Literature: This paper contributes to the broader scientific literature by combining two emerging directions in NAS: **LLM-to-NAS** and **zero-cost NAS**. By integrating these two approaches, RZ-NAS seeks to improve search efficiency while maintaining competitive performance. However, the clarity issues in the paper make it difficult to fully understand its **precise innovation in combining these two fields**. While it is clear that the method involves LLM-guided mutations and zero-cost proxy evaluations, the **lack of details on how these components interact** makes it hard to determine whether RZ-NAS introduces a fundamentally new strategy or primarily refines existing ideas. The missing implementation details further limit the ability to compare this work to prior research effectively. 
Essential References Not Discussed: To the best of my knowledge, no missing references are "essential" to this paper, although additional existing zc proxies [1], and the discussion that zc proxies are often inconsistent and unreliable [2,3] (given that the paper uses them to replace accuracy), could be considered for inclusion. [1] NAS-Bench-Suite-Zero: Accelerating Research on Zero Cost Proxies. NeurIPS Datasets and Benchmarks Track 2022. [2] A Deeper Look at Zero-Cost Proxies for Lightweight NAS. ICLR Blog Track 2022. [3] Robustifying and Boosting Training-Free Neural Architecture Search. ICLR 2024. Other Strengths And Weaknesses: ## Strengths - While the idea of combining LLMs and zc proxies is simple and intuitive, this paper is the first to provide a concrete implementation. - The explanation of the input prompt design is detailed. - Some ablation studies are provided to help understand the effects of different components. ## Weaknesses - Severe Clarity Issues. As explained above. - Lack of In-Depth Analysis of LLM Behavior. As explained above. - Dependency on LLM Quality and Neglect of LLM Costs. Table 6 shows that the performance of RZ-NAS heavily depends on the quality of LLMs, which can introduce high query costs. This raises the question of whether this is worthwhile in terms of the cost-performance tradeoff: would it be more economical to simply let zc proxies search over more architectures? The paper does not account for LLM inference costs, making comparisons with pure zc-proxy baselines unfair. - Dependency on Robust zc Proxies. The paper implicitly assumes the zc proxies are always safe and robust enough to be used as reward signals for the LLM, but this is not always true (see Essential References Not Discussed for reference). The authors are expected to conduct experiments on benchmarks where zc proxies generally perform poorly (e.g., TransNAS-Bench-101-micro, TransNAS-Bench-101-macro). - Reliance on Correct Prompt Engineering. 
The results hinge on carefully designed prompts (system instructions, reflection steps, code examples). Poorly formed prompts could yield LLM hallucinations or invalid code. The paper provides some template details, but real-world usage might require ongoing prompt tuning. If the authors can demonstrate that RZ-NAS is not highly sensitive to prompt wording, I am willing to remove this concern. Other Comments Or Suggestions: I would suggest the authors significantly improve the clarity and fill in the necessary details to make RZ-NAS **more complete**. To better understand the **contribution of LLM during the search process** (and also, the performance over different search budgets), rather than just its effect on the final selected architecture, the authors may consider **visualizing the accuracy and zc scores of each searched architecture over time**. A comparison with purely zc proxy-based baselines would help assess how LLM-guided mutations impact search dynamics. This could be effectively presented as a **line graph on NAS-Bench-201**, where ground-truth accuracies are available. ### Non-Exhaustive List of Typos and Notational Issues - Section 3.1 Problem Formulation: - The search space is first defined as $S$, but later, architectures are denoted as $a^* \in A$, leading to inconsistency. - The zc score is initially defined as $ O(a_k)$, but in Equation (1), the function is written with four parameters, which are not explained. - Algorithm 1: - The maximal depth $L$ is defined but never used. - The evolutionary population size is first denoted as $N$ but appears as $E$ on line 9. - On line 15, $z$ is undefined—does this refer to the zc score obtained on line 7? The authors are encouraged to carefully revise their manuscript to improve consistency and readability before submission. Questions For Authors: I understand that many clarity issues have already been raised in the above sections. 
However, I have one additional question regarding the decision rationale in Algorithm 1. In **line 3**, the architecture selected for mutation is **randomly chosen** from the population, rather than selecting the existing highest zc score architecture. In most evolutionary algorithms for NAS, selecting the architecture with the highest proxy score is a more common strategy, as it prioritizes refining the most promising candidates. Could the authors clarify the rationale behind this design choice? Additionally, what would be the performance impact if **line 3 were modified to always select the architecture with the highest zc score** instead of choosing randomly? Code Of Conduct: Affirmed. Overall Recommendation: 3
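To make the question above concrete, the random-selection loop at issue can be sketched roughly as follows. This is a hedged reading of Algorithm 1 as described in the paper and reviews, not the authors' code; `llm_mutate`, `validate`, `zc_score`, and `reflect` are hypothetical callables:

```python
import random

def rz_nas_loop(population, iterations, llm_mutate, validate, zc_score, reflect):
    """Sketch of the evolutionary loop: a parent is drawn uniformly at random
    (the 'line 3' choice questioned above), mutated by the LLM under the
    current reflection, validated, scored by a zero-cost proxy, and appended
    to the population; the best-scoring pair is returned at the end."""
    reflection = ""
    for _ in range(iterations):
        parent, _ = random.choice(population)   # random, not argmax-by-score
        child = llm_mutate(parent, reflection)
        ok, err = validate(child)
        if not ok:
            reflection = reflect(err)           # exception feeds the reflection
            continue                            # invalid child never joins
        score = zc_score(child)
        reflection = reflect(score)
        population.append((child, score))
    return max(population, key=lambda pair: pair[1])
```

Swapping `random.choice` for a greedy `max(population, key=...)` on the parent line would give the variant the reviewer asks about; the rebuttal below argues the random variant preserves population diversity.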
Rebuttal 1: Rebuttal: Thank you for recognizing our contributions and for the detailed feedback. We hope our responses address your concerns. **URL** https://anonymous.4open.science/r/rebuttal-ECC4/icml_rebuttal.png **1. Reflection mechanism** Thank you for this comment. Due to the character limit, detailed responses can be found in the first response to Reviewer oqYW. **2. ZC Proxy Integration** The system prompt defines the model's task as optimizing architectures for higher ZC proxy scores during mutation. The compute-ZC-proxy module provides a code block and a proxy description for model understanding. The user prompt includes a JSON list with the current architecture, proxy type, and pre-mutation score. The LLM is guided by structured prompts, I/O formats, and in-context examples. Reflections from prior outputs further refine the process. To avoid hallucinations, calculations are performed by code, not by the LLM. **3. Experiments** **Mutation action space:** The action space is defined by the operators in the system prompt. **Number of architectures evaluated:** We perform 1500 evolution rounds, far fewer than ZiCo’s 100k iterations, selecting one architecture per iteration. **Population size:** For NAS-Bench-201 and CIFAR-10, the population size is 100. For CIFAR-100, ImageNet, and COCO, the population size is 256. **Error rate:** We compare five zero-cost proxies in Tables 1 and 2. For ImageNet, we follow ZiCo’s setup with 450M, 600M, and 1000M FLOPs budgets, selecting the best proxies for display. **Initialization:** We initialize the random population from scratch. **VALIDATE:** This function checks LLM-generated architectures for valid operators, channels, and structure. Invalid architectures are excluded, and exceptions are passed to the reflection module for analysis. **Query costs:** Each iteration outputs a mutated architecture, taking 0.8-2.0 seconds. With far fewer search iterations than zero-cost proxy baselines, our method costs about 40 minutes on CIFAR-10, roughly comparable to ZiCo’s 0.03 GPU days. 
**Implementation of baselines** We evaluated 1500 architectures, which differs from the setups of previous zc proxy methods. **Search spaces and benchmarks**: Micro: For NAS-Bench-201, CIFAR-10, and CIFAR-100, we used a cell-based search space. Macro: For the ImageNet task, we used the MobileNet search space, consistent with previous zero-cost proxy methods (e.g., ZiCo, Zen-NAS). For object detection, we stack operators to construct the backbone, following the same search space and network construction as MAE-DET. **Temperature**: The temperature is sampled uniformly from the domain rather than being a fixed value. The comparison with different fixed temperatures is in Table 1 of **URL**. **4. Reflection literature** Due to the character limit, detailed responses can be found in the first response to Reviewer yPSd. **5. References** We agree that [1] you mentioned provides context for our setup, and the limitations discussed in [2, 3] are highly relevant. We’ll add them in the revision. **6. LLM expense** Our prompt length is around 2,500 tokens. On CIFAR-10, we spend about $75 for one zc-proxy run using GPT-4o. This expense is acceptable given GPT-4o’s superior reasoning capabilities. Furthermore, more cost-effective open-source models with competitive performance have emerged since our submission. We will integrate them into future work to improve cost-efficiency. **7. Dependency on robust zc proxies** We conducted experiments on TransNAS-Bench-101-Micro, which show the robustness of our method in Table 2 of **URL**. **8. Reliance on prompt engineering** We used GPT-4o to rephrase the prompts and show RZ-NAS’s robustness after rephrasing on CIFAR-10 (in Table 3 of **URL**). Previous research confirms LLMs are insensitive to phrasing changes in structured tasks when examples, I/O formats, and specific context remain consistent[4]. [4] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer **9. Typos and notational issues** Thank you for pointing out these issues. 
S defines the set of operations, and A is the set of all possible architectures generated from S. We revised the equation; see the third response to Reviewer oqYW. **10. Visualize acc and zc scores** We visualize the scores and show our effectiveness on NAS-Bench-201 in Figure 1 of **URL**. **11. Random selection** Our random selection differs from traditional evolutionary strategies but is based on two key ideas, showing competitive results in Table 4 of **URL**. First, we follow the selection design in previous zero-cost NAS (e.g., Zen-NAS), which uses random selection to boost search-space coverage. Our framework supports multiple zc proxies, aligning with existing methods. Second, random sampling maintains population diversity and avoids premature convergence. LLM-guided mutations steer low-scoring candidates toward better solutions. Unlike traditional algorithms, which may lead to inefficiency, LLM mutations intelligently explore architectures, maintaining diversity and accelerating convergence. This aligns with prior LLM-in-NAS works like ELM and EvoPrompting. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing such detailed rebuttals and line-by-line responses. Most of my concerns have been addressed, and I encourage the authors to incorporate these clarifications into the revised paper. That said, I would like to highlight a few remaining concerns and raise some new questions prompted by the rebuttal: - **Understanding of ZC Proxies.** My original question aimed to probe *how* the LLM interprets ZC proxy scores — that is, whether the LLM truly understands the *semantics* of these scores, or whether it simply treats them as black-box rewards where “higher is better.” To phrase it more intuitively: do you observe that the LLM behaves differently depending on the *type* of ZC proxy used, not just the numerical values? This is what I meant by **some levels of interpretation**. 
A possible way to test this might be: you provide the LLM with scores from Proxy A, but describe them as if they came from Proxy B. If this substitution does not impact the results, it may suggest the LLM is not interpreting the scores meaningfully, but rather optimizing toward higher values regardless of their origin. - **Error Rates for Different Proxies.** In Table 3 of **URL**, I noticed that different proxies lead to noticeably different error rates. More powerful proxies (e.g., ZiCo) seem to result in lower error rates. Could the authors provide further insight or interpretation regarding this observation? - **LLM Cost.** Thank you for sharing the cost-related details. While I personally remain cautious about the trade-off between LLM cost and performance improvement, I encourage the authors to document these costs more thoroughly in the revision. This would help practitioners better assess the cost-effectiveness of integrating LLMs into their own pipelines. I will re-evaluate my recommendation after your response. **Update after reading the reply**: I thank the authors again for their detailed responses. However, I still have concerns about the interpretation of the zero-cost proxy experiments. For instance, the score type should remain consistent rather than the description type. Additionally, the performance of the ZiCo + Synflow score appears quite poor, making it resemble ZiCo + GraSP more than expected. That said, I am still not fully satisfied with the interpretation of the LLM. However, I acknowledge that the authors have made significant efforts to improve the clarity of the paper compared to the initial version. Based on their responses, I am inclined toward a weak acceptance. However, I fully understand the concerns raised by reviewer oqYW. I will raise my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your thorough feedback and for acknowledging the detailed rebuttals. 
Below, we address your remaining concerns and questions, and we will revise the manuscript to ensure clarity and completeness. **1. Understanding of ZC Proxies** Thank you for your insightful question about the LLM's interpretation of zc proxy scores. Following your suggestion, we conducted an experiment using the ZiCo proxy in the system prompt while providing the LLM with actual Synflow scores. This experiment, performed on NAS-Bench-201, showed a clear performance gap, indicating that the LLM's interpretation is influenced not just by the numerical values but also by the contextual understanding of the ZC proxy.

| Method | CIFAR10 (valid) | CIFAR10 (test) | CIFAR100 (valid) | CIFAR100 (test) | ImageNet (valid) | ImageNet (test) |
| ---------------------------- | --------------- | -------------- | ---------------- | --------------- | ---------------- | --------------- |
| RZ-NAS(ZiCo) | 91.45 ± 0.10 | 94.24 ± 0.12 | 73.35 ± 0.14 | 73.30 ± 0.21 | 46.53 ± 0.24 | 46.24 ± 0.23 |
| RZ-NAS(ZiCo + Synflow Score) | 90.39 ± 1.38 | 92.88 ± 2.15 | 70.77 ± 2.01 | 69.98 ± 1.93 | 42.83 ± 1.58 | 43.12 ± 2.10 |

**2. Error Rates for Different Proxies** The error rate differences across zero-cost proxies are due to their distinct methods for estimating architectural performance. Proxies like ZiCo and Zen-NAS capture network expressiveness better, leading to scores that align closely with actual accuracy and thus lower error rates. In contrast, other proxies fail to capture this, resulting in higher error rates. To further illustrate these differences, the table below shows the correlation between proxy scores and test accuracy, highlighting how each proxy aligns with true performance. **Table 1**: Correlation coefficients between various zero-cost proxies and test accuracy on NAS-Bench-201 (KT and SPR represent Kendall’s τ and Spearman’s ρ, respectively). The best results are in bold.
| Method | CIFAR10 (KT) | CIFAR10 (SPR) | CIFAR100 (KT) | CIFAR100 (SPR) | ImageNet (KT) | ImageNet (SPR) |
|--------------------|------------|-------------|-------------|--------------|-------------|--------------|
| GraSP | 0.37 | 0.54 | 0.36 | 0.51 | 0.40 | 0.56 |
| **Ours(GraSP)** | **0.43** | **0.57** | **0.41** | **0.55** | **0.44** | **0.59** |
| Synflow | 0.54 | 0.73 | 0.57 | 0.76 | 0.56 | 0.75 |
| **Ours(Synflow)** | **0.56** | **0.73** | **0.60** | **0.79** | **0.58** | **0.78** |
| Zen-Score | 0.29 | 0.38 | 0.28 | 0.36 | 0.29 | 0.40 |
| **Ours(Zen-Score)**| **0.37** | **0.54** | **0.36** | **0.51** | **0.40** | **0.56** |
| ZiCo | 0.61 | 0.80 | 0.61 | 0.81 | 0.60 | 0.79 |
| **Ours(ZiCo)** | **0.63** | **0.82** | **0.63** | **0.84** | **0.64** | **0.81** |

**3. LLM Cost** We agree that documenting the costs is essential for evaluating cost-effectiveness. For our pipeline, we use the GPT-4o model (5 USD / 1M input tokens and 15 USD / 1M output tokens), with around 2300-2600 input tokens and 150-200 output tokens per call. We run 1500 iterations for one proxy on one search space. Therefore, the total cost per proxy is around \$75. We recognize that while LLMs enhance NAS performance, their expense may not always be justified in cost-constrained scenarios. To mitigate this, we propose the following: - First, we will try more cost-effective models (e.g., DeepSeek-R1, Open-R1) and include these considerations in the revision. These efficient models can achieve performance comparable to GPT-4o at a fraction of the cost. We believe integrating them will significantly reduce the cost of LLM-in-NAS and provide a promising direction for future work. - Second, we will explore more efficient LLM-in-NAS frameworks to reduce the LLM inference cost of each iteration. We really appreciate your detailed and valuable suggestions, and we will add these additional analyses and clarifications to our revision.
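For reference, the rank-correlation metrics reported in Table 1 (KT and SPR) can be computed with a short stdlib-only sketch; the score and accuracy lists below are made-up placeholders, not the actual experimental data:

```python
def ranks(xs):
    """Rank each value from 1..n (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (no ties)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(x, y):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

proxy_scores = [3.1, 4.7, 2.2, 5.9, 4.1, 3.8]     # hypothetical zc proxy scores
test_accs = [91.2, 93.5, 90.1, 94.3, 92.1, 92.8]  # hypothetical test accuracies
print(f"KT: {kendall_tau(proxy_scores, test_accs):.2f}, "
      f"SPR: {spearman_rho(proxy_scores, test_accs):.2f}")
```

`scipy.stats.kendalltau` and `scipy.stats.spearmanr` give the same results (with proper tie handling) and are the usual choice in practice.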
Summary: The paper presents an LLM-guided NAS framework that leverages Zero-Cost proxies for evaluating neural architectures without full training. The approach employs a reflective strategy in which the LLM mutates candidate architectures and, via a reflection module that uses training-free proxy scores (e.g., GraSP, Synflow, Gradnorm), provides feedback to guide subsequent mutations. The method is evaluated across multiple NAS benchmarks and claims to achieve competitive or superior performance while significantly reducing search cost. Claims And Evidence: - Although empirical gains and computational benefits are reported, further evidence is needed on how the architectures evolve over time using the LLM reflection module. In particular, more experiments showing the utility of this module in guiding the mutation process would strengthen the claims. - It is also unclear how the initial architecture (F0) is chosen, raising the possibility that a very good prior might skew the results. Methods And Evaluation Criteria: - The methods section is not easy to follow. The description of the mutation process and reflection module is difficult to follow and requires multiple readings. More streamlined and organized presentation would help readers grasp the key details. - Equation 1 seems to have an issue; taking an arg max over architectures does not make sense given that the expectation is over all architectures. - Definitions of the Zero-Cost proxies (e.g., GraSP, Synflow, Gradnorm, Zen-NAS, ZiCo) are relegated to the appendix; integrating these into the main text would improve clarity. Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: - A more detailed qualitative analysis is needed to illustrate how the architectures evolve over time—from the initial seed (F0) to the final designs. 
For example, visualizations showing the progression from the initial design (F0) through subsequent iterations could provide valuable insights into the diversity of the architectures chosen and effectiveness of the proposed method. - More details are required on the reflection module: for instance, the paper should explain how the LLM uses the scores and the architectures to output a reflection, and clarify the role of "line 15" in Algorithm 1. - The selection of the initial architecture F0 is not clearly discussed. It is important to know whether F0 is randomly chosen or based on a prior, as a very good initial architecture could bias the results. Supplementary Material: I checked the appendix. Relation To Broader Scientific Literature: This work introduces a novel reflective module to guide the mutation process. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths - Easy inference-based method using LLMs for NAS leveraging Zero-Cost proxies (e.g., GraSP, Synflow, Gradnorm). Weaknesses - Poor writing in several sections makes the paper hard to follow. Insufficient detail in the related works section and definitions of key concepts, which are either vague or relegated to the appendix. - Lack of qualitative evaluations and clarity on how the reflection module improves mutation. Other Comments Or Suggestions: - Related Works section needs more detail. A clearer explanation of Zero-Cost NAS would help, particularly how proxy metrics estimate model performance without full training. The statement "They mimic natural selection by iteratively selecting, mutating, and recombining candidate architectures based on proxy scores." is vague—specify the types of proxies used and their correlation with final performance. Similarly, Section 2.2 is superficial. For example, "Hao et al. (2024) demonstrates how LLMs serve as surrogate models, significantly reducing computational costs." 
lacks details—briefly explain the mechanism rather than just stating the benefit. Adding these refinements would improve clarity. - Some statements are not clearly explained. For example, the paper mentions, "Some researchers proposed the hypothesis that induction heads might constitute the mechanism for the actual majority of all in-context learning in large transformer models." This claim needs further elaboration and context. - “To avoid the issue of prompt leakage, restrictions should be specified in the system prompt of not disclosing any information defined in the example.” Could the authors provide more insights on this? - The search space ‘S’ is chosen tailored to a downstream task such as the MobileNet search space. It is not clear whether ‘S’ can be defined more generally. Questions For Authors: Please see above. Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks a lot for your insightful comments and detailed suggestions! We hope our responses address your concerns and enhance your view of our work! **1. Detailed explanation about the reflection mechanism** The reflection mechanism includes two modules: internal system reflection in the system prompt (e.g., `Reflect to generate better mutation`) and an external LLM reflection module (e.g., `Ask for suggestion...`). The external module takes the current iteration's architecture, zc proxy score, and potential exceptions as input, and outputs dynamic, structured reflection suggestions. These two modules form a closed-loop feedback system: the external module provides explicit optimization directions, while the internal module enables implicit semantic reasoning. Their efficacy is validated in the ablation studies. The role of `line 15` in Algorithm 1 refers to the external LLM reflection module. Next, we provide an example of the iterative reflection under the NAS-Bench-201 search space and the GradNorm zc proxy. ***Input of the randomly selected architecture*** `{"arch":"|nor_conv_1x1~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|","type":"Gradnorm","score":3.5}` ***Output after LLM mutation*** `{"arch":"|nor_conv_1x1~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_3x3~0|none~1|skip_connect~2|","type":"Gradnorm","score":4.2,"exception":""}` ***Output of the LLM reflection module*** `"Try replacing one of the skip_connect operations with 3x3 Conv to increase the expressiveness of the cell. Convolution operations like 3x3 Conv can capture more spatial features compared to skip connections, potentially improving the architecture's performance."` Our architecture `Genotype` might exceed the limit of `MAX_LAYERS`. The exception will be output as `"Error: out of the limit of MAX_LAYERS."` ***Output of the reflection module with an exception*** `"The "MAX_LAYERS" error indicates that the architecture exceeds the allowed number of layers.
You can reduce the number of layers in the architecture. For example, consider reducing one of the ResK1K5K1 layers to a simpler operation like ConvK1BNRELU or reduce the number of channels to stay within the limit."` **2. Initial architecture F0** The initial population is randomly generated and the zc score is calculated for each architecture in the candidate pool. This process aligns with the evolutionary strategy algorithms used in zero-cost NAS (e.g., Zen-NAS, ZiCo), and we have not modified this process. **3. Issue with Equation 1** Thank you for highlighting the issue with Equation 1. The revised equation is: $a^* = \arg\max_{a \in A} O(a, T, S, D, M)$ Here, $a \in A$ is the architecture being evaluated, and $O(a, T, S, D, M)$ is the Zero-Cost proxy score for architecture $a$, given task $T$, search space $S$, dataset $D$, and model $M$. The expectation was unnecessary and has been removed, as our method deterministically evaluates architectures using $O$. We will update the equation and surrounding text in the revised manuscript to reflect this correction. **4. Related work needs more detail** We thank the reviewer for helping improve the paper's clarity and readability. We agree that adding a clearer explanation of Zero-Cost NAS and the definitions of zc proxies in the main text is necessary. In the revised version, we will include more detailed descriptions of them. Moreover, we will give more explanation of Hao et al. (2024), which you mentioned: it encodes architectures as natural language via prompt engineering and uses LLMs to predict proxy scores, reducing computational costs. **5. Some statements should be clearly explained** Thanks for the valuable suggestions. The in-context examples store successful cases as key-value pairs, which help the induction heads of the transformer identify successful architectural patterns stored in the context examples. We will further elaborate on these statements in the revised version. **6. More insights on prompt leakage** We add restrictions in the system prompt to prevent prompt leakage, because during our experiments there were cases where in-context examples were output as mutation results. To address this, we included "do not disclose examples" in the system prompt. This approach has been studied in prompt engineering and open-source prompt libraries to address the issue of prompt leakage [1]. [1] https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-prompt-leak **7. The search space S** The search space S isn’t task-specific. In NAS, it adapts to datasets and evaluation operators. For fair comparison, we use the same search space as prior zero-cost NAS methods. Validating NAS algorithms across different search spaces proves their generalization. Therefore, S can be more generally defined as the search space, as it may change based on the evaluation setup, but we ensure consistency with prior benchmarks. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their clarifications. I kindly request that these be incorporated into the revised manuscript for completeness. In addition, I encourage the authors to strengthen the qualitative analysis by including a comprehensive flowchart or diagram that illustrates the evolution of the architecture across several downstream tasks. In particular, it would be helpful to visualize how the LLM modifies the architecture over time and how the feedback mechanism contributes to this process. I suggest providing a detailed comparison between the final learned architecture and the initial architecture (F₀) across a few representative tasks. Highlighting the key differences and improvements introduced during the evolution process would offer greater insight into the effectiveness of the proposed method. It would also be beneficial to include additional quantitative experiments to better demonstrate the utility of the feedback mechanism.
For instance, in Figure 4, the performance difference between RZ-NAS and the version without the reflection module appears relatively small, despite the authors' claim of a significant impact. If the feedback is indeed crucial, I would expect a more substantial drop in performance without the reflection module (in Fig. 4). As it stands, the details surrounding how the architecture evolves and the specific contribution of the reflection module remain somewhat unclear. Lastly, I recommend refining the manuscript to improve its readability. Enhancing the clarity and organization of the writing would significantly improve the accessibility of the paper to a broader audience. In light of the above, I will maintain my original scores. However, I believe that by incorporating this feedback, the paper can be substantially strengthened. **Update after reading the response:** I thank the authors for their detailed explanation. Accordingly, I will be increasing my score. I look forward to seeing these improvements incorporated into the final version for greater clarity and a more thorough analysis. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and insightful comments! We truly value your kind words about our contributions. We hope our responses will help clarify any remaining concerns and strengthen your perspective on our work! URL: https://anonymous.4open.science/r/rebuttal-ECC4/workflow_EA_process.png **1. Visualize how the LLM modifies the architecture over time** Thank you for your valuable suggestion. First, we would like to clarify that our method follows the traditional evolutionary algorithm process used in neural architecture search (NAS). **In each iteration, we randomly select an architecture from the population to mutate**, rather than continuously evolving a single architecture throughout the entire search. This approach is designed to maintain population diversity, which is crucial for exploring better architectural possibilities.
As a result, visualizing the architecture evolution across all iterations is challenging, since the selection process is random and we cannot guarantee that the same architecture will be selected for mutation in every iteration. Therefore, to better illustrate our mutation process, we visualize the mutation and reflection module within a single iteration in the figure at **URL**. Figure explanation: The upper sub-figure illustrates the iteration process. In each iteration, we randomly select an architecture from the architecture population. The LLM generates a mutated architecture based on the LLM prompt template and the selected architecture. The mutated part of the architecture is highlighted in the figure. After generating the mutated architecture, we validate whether the architecture is valid (e.g., checking if the layer length is within the allowed limits) and compute the zero-cost (zc) score. Then, the LLM reflection module generates suggestions for the current architecture (blue box) to design better mutations. The reflection suggestion and mutation result of this iteration will serve as an example for the next iteration's reflection generation. The lower sub-figure demonstrates how the population is updated. All architectures, including both the original and the mutated candidates, are first sorted by their zc scores. The architecture with the lowest score is then removed to maintain a fixed population size. For example, the original architecture 2 is removed, and the mutated architecture is added to the population. **2. Quantitative Analysis of the Feedback Mechanism** We sincerely appreciate your insightful feedback. We understand your concern regarding the performance with and without the reflection module. To better demonstrate the impact of the reflection module, we expand our evaluation by including performance results across all search spaces (beyond just NAS-Bench-201 and DARTS in Figure 4).
This comprehensive analysis will offer a clearer and more convincing picture of the reflection module’s consistent contribution. Here, we present the results for the "w/o reflection" condition across three different search spaces.

***Accuracy on NAS-Bench-201 Search Space***

| **Method** | **CIFAR10** | **CIFAR100** | **ImageNet** |
| --------------------- | ------------ | ------------ | ------------ |
| RZ-NAS | 94.24 ± 0.12 | 73.30 ± 0.21 | 46.24 ± 0.23 |
| w/o Reflection Module | 93.16 ± 0.54 | 70.90 ± 0.68 | 44.56 ± 0.44 |

***Error Rate on DARTS Search Space***

| **Method** | **CIFAR10** | **CIFAR100** |
| --------------------- | ----------- | ------------ |
| RZ-NAS | 2.41 ± 0.13 | 17.49 ± 0.08 |
| w/o Reflection Module | 3.78 ± 0.78 | 19.11 ± 0.94 |

***Error Rate on MobileNet Search Space under various FLOP budgets***

| **Method** | **450M** (Top-1) | **600M** (Top-1) | **1000M** (Top-1) |
| --------------------- | ---------------- | ---------------- | ----------------- |
| RZ-NAS | 21.0 | 19.9 | 18.7 |
| w/o Reflection Module | 23.9 | 21.0 | 20.3 |

While the gap is relatively small compared to the other conditions in Figure 4, the reflection module remains crucial for our algorithm to compete with other NAS methods and achieve superior performance within the LLM-in-NAS space. This is because the reflection module is indeed one of our key contributions, but it works alongside another key contribution of our work: letting the LLM understand the search tasks and architectures at both the text and code levels, which helps maintain strong baseline performance even without the reflection module. The impact of the text and code modules is also visible in Figure 4. We will add these results in our revision to strengthen the overall presentation of our work. **3. Refine the manuscript** Thank you for your valuable suggestion regarding the readability of our manuscript.
We will carefully review the manuscript and make revisions to improve its structure and flow, ensuring that it is more comprehensible.
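As an aside for readers, the single-iteration mutate/score/reflect loop and the population update described in this reply can be sketched roughly as follows. This is an illustrative sketch, not the authors' code: `llm_mutate`, `llm_reflect`, and `zc_score` are hypothetical stand-ins for the LLM calls and the zero-cost proxy evaluation.

```python
import random

def search_step(population, scores, llm_mutate, llm_reflect, zc_score, example):
    """One iteration of the LLM-guided evolutionary search described above:
    randomly pick an architecture, let the LLM mutate it (conditioned on the
    previous iteration's example), score the mutant with a zero-cost proxy,
    collect a reflection suggestion, then evict the lowest-scoring member so
    the population size stays fixed."""
    parent = random.choice(population)
    mutated = llm_mutate(parent, example)     # LLM proposes a mutated architecture
    score = zc_score(mutated)                 # training-free proxy evaluation
    reflection = llm_reflect(mutated, score)  # suggestion for the next iteration
    population.append(mutated)
    scores[mutated] = score
    population.sort(key=lambda a: scores[a], reverse=True)
    worst = population.pop()                  # drop the lowest zc score
    del scores[worst]
    return mutated, reflection                # serves as the next example

# Toy usage with stub "LLM" functions:
pop = ["arch_a", "arch_b", "arch_c"]
scores = {"arch_a": 1.0, "arch_b": 2.0, "arch_c": 3.0}
mutated, note = search_step(
    pop, scores,
    llm_mutate=lambda p, ex: p + "_mut",
    llm_reflect=lambda a, s: "try replacing skip_connect with nor_conv_3x3",
    zc_score=lambda a: 10.0,
    example=None,
)
```

The random parent selection mirrors the diversity argument in the reply: any member may be mutated, while the sort-and-evict step keeps only the strongest candidates.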
Cut out and Replay: A Simple yet Versatile Strategy for Multi-Label Online Continual Learning
Accept (poster)
Summary: This paper tackles Multi-Label Online Continual Learning (MOCL) from a novel perspective. Unlike existing approaches that focus on challenges like class imbalance and missing labels, this paper first analyzes the localization capabilities inherent in pre-trained models and introduces a CUT-out-and-Experience-Replay (CUTER) strategy. CUTER works by extracting, segmenting, and replaying image regions corresponding to individual labels. Through this approach, it addresses three key challenges simultaneously: catastrophic forgetting, missing labels, and class imbalance. The method's effectiveness is validated through comprehensive experimental results. ## update after rebuttal After reading the authors' responses, I have decided to keep my original score. Claims And Evidence: **Three important claims made in this paper:** 1. Modern pretrained models possess innate localization capacity, among which multi-crop contrastive learning methods demonstrate the best localization potential. 2. The proposed regularization term on the feature-constructed graph serves to consolidate and further enhance the model's localization capacity. 3. The proposed CUTER strategy can simultaneously address catastrophic forgetting, missing labels, and class imbalance. **Evidence** For Claim 1, the paper provides a comprehensive analysis with clear visualization in Section 2.1, complemented by detailed quantitative comparison results across different pre-trained models in Table 7 (Appendix E.2). This systematic evaluation offers valuable insights into model selection for the proposed approach. For Claim 2, theoretical derivation and analysis are presented in Section 2.3, supported by mathematical proofs and derivations in Appendix D.2. In conclusion, this paper establishes a strong theoretical connection between localization ability and the spectral norm of the feature-constructed graph.
That said, it is also worth noting that the mechanism by which the proposed regularization improves model performance relies on additional assumptions. For Claim 3, the comparison in Figure 4, Figure 5 and experimental results in Section 3 can demonstrate the method's effectiveness. The ablation studies in Section 3.3 are also informative. Methods And Evaluation Criteria: **Method** This paper introduces CUT-out-and-Experience-Replay (CUTER), which uniquely addresses MOCL challenges by extracting, segmenting, and replaying label-specific image regions. By focusing on foreground object extraction, CUTER presents a novel direction that distinctly differs from conventional MOCL approaches [1,2,3], demonstrating notable innovation within this research domain. The incorporation of nuclear norm regularization on the feature-constructed graph is noteworthy, providing both theoretical guarantees and a reasonable solution to the problem. **Evaluations** The method is evaluated on three established multi-label benchmarks using standard metrics (mAP, CF1, OF1). The comprehensive experiments, including ablation studies and comparative analyses with recent methods, convincingly validate CUTER's effectiveness both as a standalone approach and as a plug-in component. To further strengthen the evaluation, the authors could explore additional incremental settings with larger base class sets, similar to prior MOCL works [1,2,3]. [1] Knowledge restore and transfer for multi-label class-incremental learning. ICCV 2023. [2] Confidence self-calibration for multi-label class-incremental learning. ECCV 2024. [3] Optimizing class distribution in memory for multi-label online continual learning. Theoretical Claims: The paper's key theoretical analysis examines the relationship between the Fiedler value and noise matrix norm, grounded in assumptions about ideal feature-constructed graphs. I have checked most of their proofs.
Though not overly complex, they support the claims made in the main text and are clearly demonstrated. Experimental Designs Or Analyses: **Experimental Designs** As I stated in **Evaluations** , the experimental settings and evaluation protocols align well with established practices in related prior works. I did not find any inappropriate experimental design choices. **Analyses** The analyses for the three claims mentioned above are well structured with either empirical or theoretical verifications. A suggestion is that the ablation study could be further enriched by including specific runtime comparisons to quantitatively showcase the computational benefits of the proposed re-balanced sampling strategy, considering that Figure 8 only presents the combined running time of all components, which cannot reflect the claimed reduced computational cost. Supplementary Material: I have reviewed the supplementary material on the codes. Relation To Broader Scientific Literature: The paper's main contributions lie in advancing multi-label learning and its integration with online continual learning. Essential References Not Discussed: I cannot think of essential references that are missing from this paper. Other Strengths And Weaknesses: **Strengths** 1. Multi-Label Online Continual Learning is an important and compelling research area. Furthermore, object region identification represents a fundamental challenge within this domain. 2. The analysis of localization capabilities across pre-trained models merits deeper exploration. The evaluation of these capabilities could be expanded beyond NCut and MCut metrics, as these represent just one category among many unsupervised and weakly-supervised object detection and segmentation approaches. A broader assessment incorporating diverse evaluation metrics would provide more comprehensive insights. 3. 
This paper presents an intuitive and straightforward approach that distinctly differentiates itself from existing work focused on class imbalance or missing labels. While taking this different direction, CUTER appears to address the fundamental challenges of MOCL more directly compared with strategies like pseudo labeling or rebalanced sampling, offering a more essential solution to the core problems in this domain. 4. The paper's use of spectral theorem analysis to examine localization capabilities across pre-trained models offers valuable insights. This theoretical foundation naturally leads to the intuitive development of low-rank regularization on the feature adjacency matrix, creating a cohesive analytical framework. **Weaknesses** 1. Some related work in continual object detection shares quite similar intuition with this submission. Including comparisons and analysis with these works would better highlight this paper's unique challenges and innovations. 2. The analysis of different regularization terms warrants inclusion in the main text, as it constitutes a significant component of the paper's contribution. Other Comments Or Suggestions: Some typos exist like "superiority of proposed our method" in line 39. Questions For Authors: See the previous parts. Code Of Conduct: Affirmed. Overall Recommendation: 4
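For readers unfamiliar with the spectral quantities discussed in this review: the Fiedler value is the second-smallest eigenvalue of the graph Laplacian L = D - A, and near-zero values signal that the graph splits cleanly into two clusters (e.g., foreground vs. background features). A minimal numpy sketch on a toy adjacency matrix, not the paper's actual feature graph:

```python
import numpy as np

def fiedler_value(adj):
    """Second-smallest eigenvalue of the unnormalized graph Laplacian L = D - A."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(laplacian)[1]  # eigvalsh returns ascending eigenvalues

# Two tightly connected triangles joined by one weak edge (weight 0.01):
adj = np.array([
    [0.0,  1.0, 1.0, 0.01, 0.0, 0.0],
    [1.0,  0.0, 1.0, 0.0,  0.0, 0.0],
    [1.0,  1.0, 0.0, 0.0,  0.0, 0.0],
    [0.01, 0.0, 0.0, 0.0,  1.0, 1.0],
    [0.0,  0.0, 0.0, 1.0,  0.0, 1.0],
    [0.0,  0.0, 0.0, 1.0,  1.0, 0.0],
])
print(fiedler_value(adj))  # close to 0: the graph separates into two clusters
```

Removing the weak bridge entirely disconnects the graph and drives the Fiedler value to exactly zero, which is the intuition behind using it as a localization/clusterability signal.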
Rebuttal 1: Rebuttal: _Respected Reviewer Nhoa,_ we first thank you for your valuable and insightful feedback, and for recognizing the analysis and advantages of our proposed method. Below, we address your concerns in a point-by-point manner. Q: **Experiments on other incremental settings should be included.** A: We appreciate this suggestion. We have conducted additional experiments on different incremental settings following protocols established in previous works like KRT and OCDM. Specifically, we evaluated our approach on class-incremental scenarios with varying numbers of classes per stage: _MSCOCO B40-C10:_ In this setting, we first train a base model on 40 classes, then incrementally add the remaining 40 classes over 4 sessions (10 classes per session).

| COCO (mAP) | 1-40 | 1-50 | 1-60 | 1-70 | 1-80 |
| :----:| :----: | :----: | :----: | :----: | :----: |
| ER | 69.8 | 54.2 | 50.7 | 44.8 | 36.4 |
| PRS | 69.3 | 56.5 | 52.0 | 44.7 | 39.8 |
| OCDM | 65.8 | 52.1 | 50.4 | 48.2 | 37.3 |
| KRT | 68.4 | 57.3 | 52.1 | 46.5 | 40.0 |
| CUTER| 69.0 | 59.6 | 57.4 | 54.7 | 50.8 |

_NUSWIDE B41-C5:_ Here, we start with 41 base classes and incrementally learn 8 sessions with 5 new classes each.

| NUSWIDE (mAP) | 1-41 | 1-46 | 1-51 | 1-56 | 1-61 | 1-66 | 1-71 | 1-76 | 1-81 |
| :----:| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| ER | 64.4 | 51.2 | 47.3 | 40.0 | 35.8 | 31.3 | 28.8 | 29.5 | 24.6 |
| PRS | 61.2 | 50.6 | 44.9 | 37.0 | 36.7 | 30.2 | 31.4 | 32.0 | 26.3 |
| OCDM | 64.5 | 50.8 | 43.7 | 36.5 | 33.4 | 32.9 | 31.8 | 30.8 | 30.4 |
| KRT | 62.3 | 52.8 | 48.5 | 34.6 | 35.8 | 33.4 | 31.9 | 31.0 | 29.4 |
| CUTER| 63.0 | 54.3 | 50.0 | 42.3 | 40.8 | 38.1 | 36.8 | 35.4 | 32.8 |

These results demonstrate that our CUTER method consistently outperforms existing approaches across different incremental settings, particularly in later stages where catastrophic forgetting is most pronounced.
Q: **Method in continual object detection should be compared and discussed.** A: Thank you for your insightful comment regarding the similarities between our proposed cut-out and replay framework and continual object detection. We have conducted a preliminary literature search on this topic and will include a thorough comparison in the related works section, especially with reference [A], which adopts an approach similar to our CUTER framework. However, we would like to clarify that methods from continual object detection cannot be directly applied to MOCL (Multi-Label Online Continual Learning). This is because these methods fundamentally rely on labeled bounding boxes, which, even if partially missing, enable the training of an additional detection head. Such a strategy is clearly challenging to implement in the MOCL setting, where such annotations are unavailable. To validate this point, we conducted a comparative experiment on the VOC dataset between our method and a DETR detection head using the box replay approach from [A] (the initial boxes were obtained by NCut). The results, shown in the table below, demonstrate the advantages of our approach in the MOCL context where bounding box annotations are not accessible.

| | Avg mAP | Avg CF1 | Avg OF1 |
| :----:| :----: | :----: | :----: |
| ABR | 71.51 | 62.52 | 65.49 |
| CUTER | 82.07 | 72.19 | 75.27 |

[A] Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection. ICCV 2023. Q: **The analysis of different regularization terms warrants inclusion in the main text.** A: We agree this analysis provides important insights into our method. In the revised version, we'll include a concise comparison of regularization terms in the main text, explaining why nuclear norm regularization outperforms sparse and smooth approaches for preserving localization capabilities in MOCL. This addition will strengthen the theoretical justification for our design choices while maintaining the paper's focus.
Q: **Ablation study could be enriched by including runtime comparisons between other methods and the proposed re-balanced sampling strategy.** A: We have enriched our ablation study by including runtime comparisons between our proposed re-balanced sampling strategy and other methods. For a fair comparison, we omit the cut-out process and measure throughput in terms of samples processed per second: | Implemented on RTX4090 | ER (random sampling) | PRS | OCDM | Ours | | :----:| :----: | :----: | :----: | :----: | | # samples per second | 138.6 | 76.3 | 51.6 | 90.9 | Our proposed method delivers a strong throughput of 90.9 samples per second, which is considerably more efficient than both OCDM and PRS. Q: **Typos.** A: We apologize for any confusion or inconvenience this may have caused. In the revised version, we will carefully proofread the text to ensure that no grammatical mistakes or typos remain.
Summary: The paper tackles Multi-Label Online Continual Learning (MOCL) problem through a novel two-step approach. First, it identifies object-specific regions corresponding to labeled samples within each learning phase. Then, it selectively replays these regions, effectively circumventing the challenging missing label issue. The method also mitigates potential class imbalance problems in MOCL by transforming multi-label classification into single-label tasks. Experimental results across multiple image benchmarks validate its effectiveness. Claims And Evidence: **Important claims made in this paper:** 1. Multi-crop consistency pre-training enhances innate localization capabilities, whereas reconstruction-based training tends to promote feature sharing during recovery, which may hinder the effectiveness of spectral clustering-based localization approaches. 2. The Fiedler value of the feature graph has a direct upper bound determined by the norm of the perturbation matrix. For claim 1, authors substantiate their assertions by examining the correlation between the average Fiedler values of features on the VOC dataset and zero-shot detection performance across different pre-trained models. Additional analysis in Appendix D.1 further corroborates these findings. For claim 2, the authors provide a rigorous theoretical analysis to substantiate their claim. Methods And Evaluation Criteria: This paper addresses the Multi-Label Online Continual Learning (MOCL) problem through a novel two-step approach. First, it identifies object-specific regions corresponding to labeled samples within each learning phase. Then, it selectively replays these regions, effectively circumventing the challenging missing label issue. The proposed method is evaluated using standard protocols on multiple visual classification datasets, achieving state-of-the-art performance compared to existing MOCL and MLCIL methods. 
The authors further validate the effectiveness of each component and demonstrate its plug-and-play capabilities through additional experimental studies. Theoretical Claims: The authors establish a theoretical relationship between a graph's Fiedler value and the norm of the noise component in its graph Laplacian. Upon examination of the proof of Theorem 2.3 in the Appendix, the mathematical derivation and reasoning appear to be rigorous. Experimental Designs Or Analyses: The experimental methodology appears sound, as the authors follow standard evaluation practices on visual classification benchmarks, enabling fair comparisons with MOCL and MLCIL methods. The empirical validation is comprehensive, including well-structured ablation studies examining the memory buffer's data distribution and demonstrating how the regularization effectively preserves the pre-trained model's inherent localization capabilities. One limitation is that, compared to other MOCL papers that evaluate multiple continual learning scenarios, this work primarily focuses on cases with a similar number of classes across incremental stages. Including additional experiments with varying numbers of classes per stage would provide a more comprehensive evaluation of the method's robustness. Supplementary Material: N/A Relation To Broader Scientific Literature: There is nothing in particular worth mentioning. Essential References Not Discussed: See the weaknesses Other Strengths And Weaknesses: Strengths: 1. The topic of learning from online continual data streams is significant and represents a more realistic extension of traditional continual learning scenarios. 2. The proposed method demonstrates strong empirical performance, with comprehensive ablation studies effectively illustrating the function of each component. 3.
The proposed regularization term is intuitive, and analyzing MOCL model degradation during continual learning from a spectral graph theory perspective offers an interesting theoretical insight. Weaknesses: 1. Additional spectral clustering literature citations would help better contextualize how the regularization term functions. Moreover, since this work employs a detect-then-learn approach, it is worth discussing whether other efficient detection methods could be integrated into the proposed MOCL framework. 2. Additional explanation is needed to clarify why the proposed regularization terms outperform conventional approaches like smoothing or sparse regularizations. Other Comments Or Suggestions: NA Questions For Authors: What about the model's performance in an offline setting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: _Respected Reviewer NiGx,_ we thank you for your valuable and insightful feedback. Below, we address your concerns in a point-by-point manner. Q: **Additional experiments with varying numbers of classes per stage should be included.** A: We appreciate this suggestion. We have followed the settings in previous works like KRT and OCDM to provide class-incremental settings with varying numbers of classes per stage. Due to the limited space, please refer to our response to Reviewer Nhoa's Q1 for complete tables. _MSCOCO B40-C10:_ In this setting, we first train a base model on 40 classes, then incrementally add the remaining 40 classes over 4 sessions (10 classes per session). | COCO (mAP) | 1-40 | 1-50 | 1-60 | 1-70 | 1-80 | | :----:| :----: | :----: | :----: | :----: | :----: | | PRS | 69.3 | 56.5 | 52.0 | 44.7 | 39.8 | | KRT | 68.4 | 57.3 | 52.1 | 46.5 | 40.0 | | CUTER| 69.0 | 59.6 | 57.4 | 54.7 | 50.8 | _NUSWIDE B41-C5:_ Here, we start with 41 base classes and incrementally learn 8 sessions with 5 new classes each. | NUSWIDE (mAP) | 1-41 | 1-46 | 1-51 | 1-56 | 1-61 | 1-66 | 1-71 | 1-76 | 1-81 | | :----:| :----: | :----: | :----: | :----: | :----: |:----: |:----: |:----: |:----: | | PRS | 61.2 | 50.6 | 44.9 | 37.0 | 36.7 | 30.2 | 31.4 | 32.0 | 26.3 | | KRT | 62.3 | 52.8 | 48.5 | 34.6 | 35.8 | 33.4 | 31.9 | 31.0 | 29.4 | | CUTER| 63.0 | 54.3 | 50.0 | 42.3 | 40.8 | 38.1 | 36.8 | 35.4 | 32.8 | These results demonstrate that our CUTER method consistently outperforms existing approaches across different incremental settings, particularly in later stages where catastrophic forgetting is most pronounced. Q: **Additional spectral clustering literature should be discussed and cited.** A: We appreciate this suggestion.
In the revised version, we will expand our related work to include seminal spectral clustering works such as Shi and Malik (2000) on normalized cuts, Von Luxburg's (2007) tutorial on spectral clustering fundamentals, and more recent advancements like Tang et al. (2018) on robust spectral clustering. We'll also discuss image segmentation applications of spectral clustering beyond MCut, including works like LSC by Li and Chen (CVPR 2015) and the analysis of limitations of these spectral-based segmentation methods by Boaz Nadler (NIPS 2006). Q: **Could other detection methods be integrated into the MOCL framework?** A: Yes, our CUTER framework is designed with modularity in mind, allowing for the integration of various detection methods beyond MCut. However, we want to emphasize that in the online continual learning scenario, MCut's advantage of requiring no training is significant. While other detection methods like LOST, TokenCut, or attention-based approaches could theoretically be integrated, they would need to overcome these online learning constraints. Additionally, methods like SAM could be a viable choice, but without additional prompts, SAM typically produces many more segmentation masks than required for MOCL objectives. Its segmentations tend to be relatively fragmented because SAM's results are not inherently class-oriented or semantically guided. This would introduce additional challenges in establishing the correct correspondence between regions and labels. We greatly appreciate the reviewer's suggestions and will carefully discuss these considerations in a future version of our work.
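As background for the spectral machinery discussed in this thread (MCut, Fiedler values), a minimal training-free bipartition via the Fiedler vector of the normalized graph Laplacian can be sketched as follows. The affinity matrix `W` is a toy stand-in for patch-feature similarities, not the paper's actual pipeline:

```python
import numpy as np

def fiedler_bipartition(W):
    """Bipartition a graph given its affinity matrix W using the sign of
    the Fiedler vector (2nd eigenvector of the normalized Laplacian)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    return vals[1], vecs[:, 1] > 0                     # (Fiedler value, cut)

# Two tightly connected triads of "patches" joined by one weak edge.
W = np.array([[0.00, 1.00, 1.00, 0.01, 0.00, 0.00],
              [1.00, 0.00, 1.00, 0.00, 0.00, 0.00],
              [1.00, 1.00, 0.00, 0.00, 0.00, 0.00],
              [0.01, 0.00, 0.00, 0.00, 1.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 0.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 1.00, 0.00]])
lam2, mask = fiedler_bipartition(W)

# A weak cross-cluster link yields a small Fiedler value and a clean cut.
assert lam2 < 0.1
assert (mask[:3] == mask[0]).all() and (mask[3:] == mask[3]).all()
assert mask[0] != mask[3]
```

This illustrates the quantity the reviews refer to: the smaller the Fiedler value, the more cleanly the feature graph separates into regions.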
Q: **Additional explanation is needed to clarify why the proposed regularization terms work and outperform conventional approaches like smoothing or sparse regularizations.** A: To clarify why nuclear norm regularization outperforms conventional approaches: While all regularization methods in Table 3 are established in graph structure learning, they differ fundamentally in their effects on representation learning for our task. Nuclear norm regularization promotes low-rank structure in the graph Laplacian, better preserving the intrinsic manifold structure of ViT features while removing the noise $\epsilon$, as established in Theorem 2.3. In contrast, sparse regularization, though effective for denoising, penalizes hypernode patches with naturally high cross-node similarity, disrupting inherent structural properties of ViT parameters and compromising classification capacity (Table 3). Similarly, smooth regularization forces excessive feature similarity across nodes, impairing the spectral clustering processes essential for effective localization. Q: **The model's performance in the offline setting should be included.** A: Although the offline setting is not the primary goal of our proposed method, we provide a simple comparison with KRT (a method designed for the offline setting) on MSCOCO: | | Avg mAP | Last mAP | Last CF1 | Last OF1 | | :----:| :----: | :----: | :----: | :----: | | KRT | 75.7 | 69.3 | 63.9 | 64.7 | | CUTER | 84.1 | 76.5 | 70.3 | 73.4 | These results suggest that CUTER's design principles are broadly applicable across different multi-label classification settings.
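The nuclear norm penalty discussed in this thread can be illustrated in a few lines. This toy example is our own sketch (not the paper's loss, which applies the penalty to Laplacians derived from learned features): since a graph Laplacian is positive semidefinite, its nuclear norm equals its trace, so nonnegative noise edges strictly inflate it, and minimizing the norm pushes noisy edges toward zero:

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian D - W."""
    return np.diag(W.sum(axis=1)) - W

def nuclear_norm(M):
    """Sum of singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

# A cleanly clustered graph: two disconnected pairs of nodes.
W_clean = np.array([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

# The same graph corrupted by a nonnegative symmetric noise perturbation.
rng = np.random.default_rng(0)
E = np.abs(rng.normal(0.0, 0.3, (4, 4)))
E = (E + E.T) / 2
np.fill_diagonal(E, 0.0)
W_noisy = W_clean + E

# Noise strictly inflates the Laplacian's nuclear norm, so penalizing
# the norm suppresses noisy cross-cluster edges.
assert nuclear_norm(laplacian(W_noisy)) > nuclear_norm(laplacian(W_clean))
```

The clean Laplacian is also rank-deficient (one zero eigenvalue per connected component), which is the low-rank structure the penalty encourages.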
Summary: In this work, the authors concentrate on Multi-Label Online Continual Learning (MOCL), focusing on three main challenges: catastrophic forgetting, missing labels, and imbalanced class distributions. They introduce CUTER (CUT-out-and-Experience-Replay), a method that identifies and utilizes label-specific regions in images. By first evaluating pre-trained models' localization abilities and then implementing a region-based experience replay strategy, CUTER provides targeted supervision signals. Experimental results across multiple image benchmarks validate its effectiveness. Claims And Evidence: **Claims** 1. CUTER effectively addresses multiple MOCL challenges - catastrophic forgetting, missing labels, and class imbalance - while improving model performance. 2. The method achieves competitive results and can be readily integrated with existing approaches as a complementary component. 3. The model's performance benefits from the proposed localization-based regularization strategy. **Evidence** Regarding the paper's claims: While CUTER's approach to addressing multiple MOCL challenges through region localization before learning and replay follows logically from its design, the paper would benefit from a more thorough discussion of how the accuracy and reliability of this localization process are ensured. The paper effectively supports its second and third claims through both visual evidence (Figure 5) and quantitative results (Tables 3 and 5), demonstrating CUTER's performance advantages and successful integration with existing methods. Methods And Evaluation Criteria: This paper addresses MOCL through label-specific feature learning, introducing a novel label-attentional mechanism to identify and replay label-specific image regions. Leveraging state-of-the-art pre-trained models, the approach incorporates a regularization technique based on spectral graph theory principles commonly used in unsupervised segmentation methods.
The method represents an innovative departure from existing MOCL approaches (which typically focus on memory buffer sampling or pseudo labeling techniques) and demonstrates both novelty and comprehensive design. The evaluation methodology aligns with established practices, utilizing multi-label benchmarks and standard metrics (mAP, CF1, OF1). The authors provide thorough empirical validation through ablation studies demonstrating the effectiveness of individual components, as well as results showing successful integration with existing methods. The experimental results effectively support the paper's claims. Theoretical Claims: The paper's key theoretical contribution centers on analyzing the model's localization capability through its relationship to the adjacency matrix constructed from learned features. After reviewing the proof of Theorem 2.3, I find the mathematical reasoning to be logically sound. Experimental Designs Or Analyses: The evaluation methodology aligns with established practices in multi-label learning and MOCL, utilizing multi-label benchmarks and standard metrics (mAP, CF1, OF1). Additionally, the authors present an interesting connection between pre-trained models' MOCL capabilities and their derived feature characteristics. The experimental results, particularly the analysis of different regularization methods and subsequent investigations, provide empirical support for their findings. Supplementary Material: I did not review the supplementary materials. Relation To Broader Scientific Literature: As the authors state in the concluding paragraph, this research contributes to advancing machine learning methodology. While acknowledging that the work may have broader societal implications, an in-depth discussion of specific impacts falls outside this paper's technical scope. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths 1. The proposed method offers a novel approach to the problem.
The authors present a clear methodology that directly addresses label-region correspondence - a challenging yet fundamental issue in the field. This approach stands out from existing MOCL methods and recent works on multi-label classification with missing labels. 2. The proposed regularization term is well-motivated and theoretically validated. The analysis raises an interesting question about whether similar metrics, such as the averaged Fiedler value, could be applied to other domains like pre-training task design or dataset evaluation. 3. The method demonstrates strong empirical performance across multiple datasets. Through comprehensive ablation studies and informative visualizations, the authors effectively illustrate the contribution of each component to the overall system. Weaknesses 1. As stated by the authors, performing cut-out operations and applying regularization terms to consolidate the model's localization capacity introduces additional computational overhead. 2. There are some typos in the main text. 3. In Section 2.3, while the authors introduce nuclear norm regularization for the derived graph Laplacians, the underlying mechanism is relegated to the appendix, and detailed comparisons are deferred to the experimental section. A more cohesive presentation of this material within the main text would enhance the section's clarity and impact. Other Comments Or Suggestions: See the previous parts. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: _Respected Reviewer eZ9b,_ we first thank you for your valuable and insightful feedback, and for recognizing our motivation and theoretical analysis. Below, we address your concerns in a point-by-point manner. Q: **How are the accuracy and reliability of the proposed localization process ensured?** A: Our approach ensures the accuracy and reliability of the proposed localization process through three key mechanisms: 1. **Principled Model Selection**: We conducted a detailed analysis of the localization potential across different pre-trained models, establishing general selection principles that identify models with optimal localization capabilities for MOCL tasks. 2. **Selective Label-Region Matching Strategy**: We developed a targeted matching approach that accurately aligns semantic regions with corresponding labels, minimizing false correlations and enhancing localization precision. 3. **Spectral-based Regularization**: To prevent degradation of localization capacity during MOCL progression, we implemented a specialized regularization term that preserves the model's ability to accurately identify regions of interest over time. The effectiveness of these mechanisms is substantiated by both qualitative evidence (visualizations in Figures 3 and 5 demonstrating accurate region identification) and quantitative validation (ablation studies in Table 6 showing the benefits of our Cut-out Replay strategy in capturing fine-grained spatial information). Together, these elements form a comprehensive framework that consistently delivers accurate and reliable localization performance throughout the MOCL process. Q: **Performing cut-out operations and regularization terms introduces additional computational overhead.** A: We acknowledge that our approach introduces some additional training time.
In fact, as shown in Figure 8 (left) in the Appendix, when not using regularization terms, CUTER achieves model throughput comparable to other popular MOCL methods. Additionally, we recognize that the proposed regularization terms do cause a considerable decrease in model throughput. Finding ways to accelerate this process and achieve a better balance between performance and computational efficiency will be a focus of our future work. Q: **Nuclear norm regularization details are spread across the appendix and main text, affecting clarity.** A: We appreciate the reviewer's valid point about the presentation of the nuclear norm regularization. The current separation was primarily due to page constraints. In the revised version, we will restructure Section 2.3 to present a more cohesive narrative by: (1) introducing the theoretical motivation behind nuclear norm regularization; (2) connecting it directly to the derived graph Laplacians with concise mathematical formulations; (3) providing intuitive explanations of how this regularization enhances representation learning; and (4) briefly highlighting comparative advantages over alternative approaches like smooth regularization or sparse regularization on the graph Laplacian before the experimental section. Q: **Typos.** A: We apologize for any confusion or inconvenience this may have caused. In the revised version, we will carefully proofread the text to ensure that no grammatical mistakes or typos remain.
The Diffusion Duality
Accept (poster)
Summary: This paper finds that discrete diffusion models are just a transformed version of continuous Gaussian diffusion using an argmax operation. This lets them borrow techniques from continuous diffusion, like curriculum learning for faster training and distillation for super-fast sampling. The result is training that’s twice as fast, sampling that takes way fewer steps, and even better zero-shot performance than some autoregressive models. It’s a mix of solid theory and real practical improvements. ## Update after rebuttal After reading the authors’ rebuttal, my overall evaluation has not changed, and I still tend to accept this paper with a score of 4. Claims And Evidence: The paper makes some bold claims; the points below are where I see weaknesses: (1) First, the theoretical connection between discrete and continuous diffusion is a cool idea, but I don’t think the paper fully proves that discrete diffusion is fundamentally just a transformed version of Gaussian diffusion. The mapping they derive using argmax feels more like an approximation than a deep equivalence. They don’t really show that the key properties (e.g., transition dynamics, loss landscapes) are preserved—just that the marginals line up, which isn’t enough to claim a fundamental link. (2) The 2× faster training claim also feels a bit shaky. I was expecting a breakdown of different factors—like, does reducing variance actually translate to better generalization, or just faster convergence to a similar end result? Also, the use of softmax annealing [1][2] is a known trick in discrete optimization, so I’m not convinced this is a novel contribution. (3) Then there’s the "two orders of magnitude" speedup in sampling—that’s a massive claim, but the evaluation doesn’t fully back it up. The perplexity numbers are nice, but where’s the qualitative analysis?
If you’re cutting sampling steps from 1000 to 10, how does that affect fluency and coherence in generated text? I was expecting some real-world comparisons—maybe human evaluations or even error cases where their method struggles. --- Reference --- [1] Chen, Binghui, Weihong Deng, and Junping Du. "Noisy softmax: Improving the generalization ability of dcnn via postponing the early softmax saturation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. [2] Gu, Jiuxiang, et al. "Exploring the frontiers of softmax: Provable optimization, applications in diffusion model, and beyond." arXiv preprint arXiv:2405.03251 (2024). Methods And Evaluation Criteria: I think the methods are well-motivated but not entirely convincing in execution, and the evaluation criteria have some gaps that make it hard to fully trust the conclusions. For the method part, the core idea of leveraging Gaussian diffusion for discrete diffusion models makes sense conceptually, and mapping it through argmax is an interesting perspective. But I’m not sure if it’s the best approach in practice. The claim that this transformation allows discrete models to directly benefit from continuous techniques (like curriculum learning and consistency distillation) is reasonable, but it still feels like a bit of a shortcut rather than a truly fundamental reformulation. I would have liked to see more analysis of when this approximation holds and where it might fail. Right now, it’s mostly taken at face value. As for the evaluation, I also think the comparison baselines could be stronger. While they do compare against prior discrete diffusion models (SEDD, UDLM, MDLM), the autoregressive models they use for comparison aren’t the latest state-of-the-art. For example, modern transformer-based LMs like GPT-style models could set a stronger baseline, and without that, it's hard to say if diffusion is actually competitive for practical language modeling. 
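To make the argmax mapping under discussion concrete, here is a small Monte-Carlo sketch (our own; the values of K, alpha, and sigma are arbitrary). It only illustrates the marginal alignment the review mentions: pushing Gaussian-perturbed one-hot vectors through argmax leaks probability mass from the true class uniformly to all other classes, which is the marginal shape of a uniform-state discrete diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, alpha, sigma = 8, 200_000, 0.5, 1.0
x = np.zeros(K); x[0] = 1.0                       # a one-hot data point
z = alpha * x + sigma * rng.normal(size=(n, K))   # Gaussian-diffused latents
counts = np.bincount(z.argmax(axis=1), minlength=K) / n

# The true class keeps the largest share of mass; by symmetry, the rest
# is spread (near-)uniformly over the other K-1 classes.
assert counts[0] > counts[1:].mean()
assert counts[1:].std() < 0.01
```

Whether the transition dynamics are also preserved is exactly the reviewer's question; this sketch checks only the marginals.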
Theoretical Claims: I checked the core theoretical claims in the paper, particularly the connection between discrete and continuous diffusion and the evidence lower bound (ELBO) comparisons. One issue I am a little concerned about is that the curriculum learning trick involves annealing argmax to softmax over training steps, supposedly reducing variance and leading to faster convergence. They cite prior work on softmax approximations of discrete gradients, which makes sense, but the proof of variance reduction is missing. They argue that higher τ (temperature) leads to lower variance, but there’s no mathematical analysis quantifying how variance scales with τ. This is just assumed based on intuition from Gumbel-Softmax-like tricks. It would have been more convincing with a variance-bound derivation rather than just empirical loss curves. Experimental Designs Or Analyses: I checked the experimental design and analyses, and while the results look promising, there are several issues with the evaluations, including the benchmarks and the SOTA method selections: (1) This paper evaluates on LM1B and OpenWebText (OWT) for training, and then tests zero-shot generalization on 7 datasets (PTB, Wikitext, etc.). This is reasonable for a language modeling paper, but the OpenWebText dataset is outdated and not representative of modern large-scale LMs trained on more diverse internet-scale corpora. Comparing to stronger baselines like models trained on The Pile, C4, or real GPT training datasets would have been more informative. (2) The baselines are weak. The autoregressive models they compare to (e.g., Transformer-XL, OmniNet) are not the best available. They should have compared against modern SOTA models like GPT-3.5, PaLM, or LLaMA. Beating old baselines doesn’t mean diffusion models are actually competitive. Supplementary Material: Yes, I went through the Supplementary Material, particularly the proofs, ELBO derivations, and additional experimental details.
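The annealing questioned above (argmax recovered as the τ → 0 limit of a tempered softmax) can be sketched as follows; `tempered_softmax` is an illustrative helper, not the paper's implementation:

```python
import numpy as np

def tempered_softmax(z, tau):
    """Softmax with temperature tau; tau -> 0 approaches a hard argmax."""
    e = np.exp((z - z.max()) / tau)  # shift by the max for numerical stability
    return e / e.sum()

z = np.array([1.2, 0.3, -0.5])
hard = np.eye(3)[z.argmax()]         # the one-hot argmax target

# As tau shrinks, the relaxed output converges monotonically to argmax.
taus = [10.0, 1.0, 0.1, 0.01]
dists = [np.abs(tempered_softmax(z, tau) - hard).max() for tau in taus]
assert dists == sorted(dists, reverse=True)
assert np.allclose(tempered_softmax(z, 0.01), hard, atol=1e-6)
```

In a curriculum, τ would start large (smooth, low-variance targets) and be annealed toward zero (hard, discrete ones); as the reviewer notes, how the variance scales with τ is exactly what a formal bound would need to quantify.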
Relation To Broader Scientific Literature: I think this paper makes meaningful contributions to the broader scientific literature on discrete diffusion models, efficient training strategies, and sampling acceleration techniques. It builds on and extends several key ideas from prior work while offering a novel theoretical perspective on the connection between discrete and continuous diffusion. Essential References Not Discussed: As far as I know, there are several important related works that the paper does not cite, including softmax relaxation [1][2] and sampling acceleration [3], listed below. --- Reference --- [1] Jang, Eric, Shixiang Gu, and Ben Poole. "Categorical reparameterization with gumbel-softmax." arXiv preprint arXiv:1611.01144 (2016). [2] Maddison, Chris J., Andriy Mnih, and Yee Whye Teh. "The concrete distribution: A continuous relaxation of discrete random variables." arXiv preprint arXiv:1611.00712 (2016). [3] Luo, Simian, et al. "Latent consistency models: Synthesizing high-resolution images with few-step inference." arXiv preprint arXiv:2310.04378 (2023). Other Strengths And Weaknesses: Strengths: - I think this paper proposes a novel connection between discrete and continuous diffusion models, framing discrete diffusion as a transformed version of Gaussian diffusion via an argmax operation. It offers an interesting way to think about discrete generative modeling and provides a new lens for improving training and sampling efficiency. - The experimental results show that the model achieves lower perplexity on several benchmarks, including cases where it outperforms autoregressive baselines, which I think has great practical value given the notoriously slow sampling of diffusion models. Weakness: - The paper claims that diffusion models can compete with autoregressive models, but the baselines used for comparison are outdated. More modern transformers, such as GLaM or GILL, should be included.
Besides, I hope the authors provide a runtime comparison between diffusion models and autoregressive models. Even if diffusion models achieve lower perplexity, they may still be much slower at inference. - The training dataset is mostly OpenWebText, which is outdated. Stronger large-scale benchmarks like The Pile, C4, or GPT-style training datasets should be used for a more comprehensive evaluation. Other Comments Or Suggestions: See the Weaknesses Questions For Authors: (1) For the method part, the authors claim that discrete diffusion naturally arises from a Gaussian diffusion process via an argmax transformation. However, the derivation only shows marginal distribution alignment and does not establish that the transition dynamics and Markov properties of the discrete process are fully preserved. Could you provide a formal proof or additional empirical validation that supports this claim beyond marginal alignment? (2) As for the evaluation, I think the perplexity results are promising. Since the paper focuses on sampling efficiency while preserving sample quality, why is there no qualitative evaluation of generated text at different sampling steps? Perplexity is a useful metric but does not fully capture fluency, coherence, and grammaticality. Would you consider adding human evaluations or qualitative examples in an updated version? I would be glad if the authors could resolve my concerns. Thank you. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We want to thank the reviewer for their constructive and detailed feedback. # Concern 1: Transition kernel and the loss landscape We emphasize that the $\arg \max$ operator not only maps Gaussian marginals to discrete marginals but also preserves the transition dynamics and Markov property. In `anon. link [3]`, we show the marginal evolution follows: $\frac{d}{dt} q_t = -\frac{\mathcal{T}'(\tilde{\alpha}_t)}{K \mathcal{T}(\tilde{\alpha}_t)} \left[{\mathbf{1}} {\mathbf{1}}^\top - K \mathbf{I}\right] q_t$, where $\mathcal{T}'(\tilde{\alpha}_t)$ is the time derivative of $\mathcal{T}(\tilde{\alpha}_t)$. This implies $\frac{d}{dt} q_t = Q_t q_t$, with $Q_t = -\frac{\mathcal{T}'(\tilde{\alpha}_t)}{K \mathcal{T}(\tilde{\alpha}_t)} \left[{\mathbf{1}} {\mathbf{1}}^\top - K \mathbf{I}\right] \in \mathbb{R}^{K \times K}$ representing the transition kernel for a **Markovian discrete diffusion process** (Anderson, 2012). Schiff et al., 2024 (Supp C, Eq. 50) confirm this is the transition dynamics of USDMs with diffusion parameter $\mathcal{T}(\tilde{\alpha}_t)$. **Loss Landscape**: However, transforming a Gaussian diffusion into a discrete one **does not preserve the loss landscape**. In fact, the loss landscape for the discrete diffusion process is much more desirable because it induces a tighter bound on the likelihood, as stated in Theorem 3.1. We stress that the equivalence between Gaussian diffusion and the USDMs is **deep and fundamental, without any approximations** whatsoever. # Concern 2: This method is an approximation As established above, the connection between Gaussian diffusion and USDMs is exact, without any approximations. The softmax approximation to argmax introduced in Eq. (16), the training loss, leverages this fundamental relationship to design a low-variance curriculum learning training scheme. # Concern 3: Two orders of magnitude sampling speedup After distillation, DUO achieves a Gen.
PPL of $79.24$ and entropy of $5.25$ with $T = 8$, closely matching the undistilled DUO with $T = 1024$, which has a Gen. PPL of $72.05$ and the same entropy ($5.22$). This reduction in steps comes without sacrificing entropy or degrading sample quality (see `anon. link [3]`). # Concern 4: Novelty of the softmax annealing trick The core **novelty of this work is establishing a fundamental connection between discrete and Gaussian diffusion** via the argmax operator. We show that the softmax annealing trick, originally proposed for backpropagating through argmax [1, 2], can be repurposed to design a low-variance training curriculum for USDMs. # Concern 5: 2x faster training Variance reduction leads to better generalization, as shown by DUO achieving lower val. PPL with curriculum learning compared to without it. See our response to Concern 1 from Reviewer 1i8M. # Concern 6: Curriculum learning Please refer to the discussion in our response to Concern 2 from Reviewer 1i8M. # Concern 7: Weak AR baselines The comparison with the AR baseline **is fair**—DUO, diffusion baselines [5, 7, 8], and the AR Transformer in Tabs. 1–3 all use the same architecture and dataset, with the only difference being causal attention for AR and bidirectional attention for diffusion models. Notably, in Tab. 3, where **DUO outperforms AR on 3 out of 7 benchmarks, both use the same Transformer architecture**. The Omninet and Transformer-XL results in Tab. 2 are included for reference only. # Concern 8: Experimental design We’d like to emphasize that our experimental setup is consistent with the current literature in the field of diffusion modeling [5, 7, 8]. Our experiments isolate the effect of the training method by fixing the Transformer architecture. While training with modern architectures like GPT-3.5, PaLM, or LLaMA on datasets like the Pile or C4 would be valuable, it would require retraining all baselines—an infeasible task for an academic lab.
# Concern 9: Sampling speed Sampling from a distilled **DUO model is significantly faster than an AR model**. See our response to Concern 2 from Reviewer J6SU. # Other concerns: **Missing citations**: We’ll cite [4] and add a detailed discussion of [1, 2, 4] in the next version of the paper. **Qualitative Analysis**: We provide more samples in `anon. link [3]`, where we observe that the distilled DUO model produces **significantly better quality samples** than the distilled MDLM model at lower sampling steps. --- ### References [1] Jang et al., 2016 [2] Maddison et al., 2016 [3] `Anon. link`: https://docs.google.com/document/d/e/2PACX-1vR0uKDuQHl4bBuC8KokEQhHNMvGdxbIskJm_SfXO_L6haSzWEqjPtL9wkVmg_yacNzMei2DAk21J5XX/pub [4] Anderson, Continuous-time Markov chains: An applications-oriented approach. [5] Schiff et al., 2025 “Simple Guidance Mechanisms for Discrete Diffusion Models” [6] Luo, Simian, et al. "Latent consistency models" [7] Sahoo et al., 2024 “Simple and Effective Masked Diffusion Language Models” [8] Lou et al., 2024 “Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution” --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, which resolves most of my concerns; therefore, I will keep my score. Good luck! --- Reply to Comment 1.1.1: Comment: Thank you for your prompt and thoughtful engagement. We greatly appreciate your detailed feedback and will incorporate these discussions into the next revision of the paper. In particular, your comments on the transition kernels are especially valuable, as they are central to the paper's narrative. Addressing these points will significantly strengthen both the story and the overall clarity of the work.
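The argmax mapping at the heart of Concern 1 can be checked numerically: applying $\arg\max$ to a Gaussian interpolation of a one-hot data point yields a categorical marginal that is a mixture of the data point and a uniform distribution over the remaining categories, the characteristic USDM marginal form. The sketch below is a Monte Carlo check under the assumed variance-preserving choice $\sigma = \sqrt{1 - \tilde{\alpha}^2}$, which is our illustrative assumption rather than the paper's exact schedule.

```python
import random
from collections import Counter

def argmax_marginal(alpha_tilde, K=5, n=200_000, seed=0):
    """Monte Carlo estimate of the categorical marginal obtained by
    applying argmax to z = alpha_tilde * x + sigma * eps, where x is the
    one-hot vector of category 0 and eps is standard Gaussian noise.
    sigma = sqrt(1 - alpha_tilde^2) is an assumption of this sketch."""
    rng = random.Random(seed)
    sigma = (1.0 - alpha_tilde ** 2) ** 0.5
    counts = Counter()
    for _ in range(n):
        z = [alpha_tilde * (j == 0) + sigma * rng.gauss(0.0, 1.0)
             for j in range(K)]
        counts[z.index(max(z))] += 1
    return [counts[j] / n for j in range(K)]

probs = argmax_marginal(alpha_tilde=0.5)
# Category 0 (the data point) keeps extra mass; the other categories
# share the remainder (approximately) uniformly.
```

Empirically, the non-signal categories come out with near-identical probabilities, consistent with the data-plus-uniform mixture structure of uniform-state discrete diffusion.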
Summary: This work presents a new training scheme for uniform-state discrete diffusion models based on a correspondence between Gaussian diffusion in continuous state space and uniform-state discrete diffusion. The authors state that while uniform-state discrete diffusion and Gaussian diffusion are two separate Markov chains, the discrete process can be understood as a continuous process through the argmax operator. Based on this connection, the paper proposes a curriculum learning scheme that leads to faster convergence and lower variance, as well as a dual consistency distillation method. The experimental results show a 2x speed-up in training convergence and a two-orders-of-magnitude improvement in sampling speed. Claims And Evidence: Yes, the claims are supported by experimental results. Methods And Evaluation Criteria: Yes, the evaluation criteria, including the LM1B and OWT datasets, seem reasonable. Theoretical Claims: The claim on the connection between uniform-state discrete diffusion and Gaussian diffusion has been explained. Experimental Designs Or Analyses: Yes, the experimental design seems valid. Supplementary Material: Yes, I read the supplementary material, which contains derivations, training details, and additional experiments. Relation To Broader Scientific Literature: This work improves the performance of the uniform-state discrete diffusion model with a new training scheme. Essential References Not Discussed: To the best of my knowledge, most of the relevant works were discussed in the paper. Other Strengths And Weaknesses: **Strength** - This work derives a new training algorithm, effective for uniform-state discrete diffusion models, from the connection between discrete diffusion and Gaussian diffusion. - The proposed training algorithm improves the performance of the uniform-state discrete diffusion model, reducing the gap with the masked discrete diffusion model.
**Weakness** - The performance of the uniform-state discrete diffusion model, even with the complex training/distillation, still underperforms the masked discrete diffusion model. As both operate on a discrete space, with the difference coming only from how the transition kernel is designed, the strength of the proposed method diminishes. - The reason for using USDMs is not clearly addressed in this paper (or I may have missed it). What is the advantage of using a uniform state instead of the masked discrete diffusion model? [Schiff et al., 2025] state that using a uniform state is advantageous for conditional generation, but this doesn't seem to apply to large language benchmarks such as LM1B. In this sense, this work could benefit from adding a controllable generation task where USDMs would excel. Schiff et al., Simple Guidance Mechanisms for Discrete Diffusion Models, ICLR 2025 Other Comments Or Suggestions: Please see the question below. Questions For Authors: - Why is the reported performance of MDLM on the LM1B dataset different from the number in the original paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for their constructive feedback. We address the reviewer's comments and questions below. # Concern 1: USDMs still lag MDMs USDMs lag MDMs only when measured in terms of perplexity, which isn't necessarily the best metric for comparisons with MDMs. **USDMs are preferred over MDMs for few-step generation** (Fig. 3), where a distilled version of DUO significantly outperforms a distilled version of MDLM at $T=8$. As mentioned in lines 16-27 (right), the design space of discrete diffusion models remains largely underexplored compared to Gaussian diffusion models. In this work we establish a core property of USDMs — they emerge from Gaussian diffusion. This **opens up new avenues of future research which would leverage this connection** to improve USDMs by borrowing techniques from Gaussian diffusion — a connection that doesn't exist for MDMs. # Concern 2: Motivation for using USDMs over MDMs USDMs are preferable to MDMs for tasks like **guidance** (Schiff et al., 2025) and **fast sampling** (this work). The reason USDMs allow for faster sampling than MDMs is that they can fix their mistakes. This allows USDMs to make mistakes fast, but fix them later. We also note that while perplexity is a useful sanity check, it does not account for speed. Perplexity only captures sample quality at a high number of sampling steps. In the table below, we present **new experiments** that show that USDMs can achieve strong sample quality quickly: either with low latency (the time to produce a whole sequence) or with high throughput (the rate of parallel generation). | **Model** | **Non embedding parameters** | **Latency $(\downarrow)$(BS=1)** | **Throughput $(\uparrow)$ with MAX BS (tok/sec)** | **MAX BS** | **Gen. PPL $(\downarrow)$ <entropy>** | **Gen. 
PPL $(\downarrow)$ <entropy> (nucleus sampling p=0.9)** | | --- | --- | --- | --- | --- | --- | --- | | AR | 17M | 11.70 $\pm$ 1.15 | 926.80 $\pm$ 10.78 | 80 | 92.43 <5.61> | 32.13 <5.19> | | AR | 110M | 14.70 $\pm$ 0.16 | 471.16 $\pm$ 3.54 | 32 | 35.93 <5.58> | **13.44** <5.26> | | Distilled DUO ($T=8$) | 110M | **0.21** $\pm$ 0.01 | **9938.00** $\pm$ 4.14 | 32 | **78.73** <5.24> | - | In this table, we train AR models on OWT for 1M steps with a context length of $1024$, varying the number of Transformer layers to control parameter count; exact architecture details can be found in `anon link [2]`. We do not use a KV cache for the AR baselines. We measure latency (sec) for generating batch size (BS) = 1 and find that DUO is significantly faster than even the smallest AR model. We also measure throughput (tokens/sec) using the largest BS (multiple of 16) that fits in memory. **The distilled DUO outperforms the 17M-parameter AR model in terms of speed**. All experiments were run on a single A100-SXM4-80GB. The mean and std are computed over 100 batches. # Concern 3: MDLM number different from original in Tab. 2 Please refer to our response to Reviewer maKg, Concern 4. --- References [1] Schiff et al., 2025 “Simple Guidance Mechanisms for Discrete Diffusion Models” [2] `Anon. link` : https://docs.google.com/document/d/e/2PACX-1vR0uKDuQHl4bBuC8KokEQhHNMvGdxbIskJm_SfXO_L6haSzWEqjPtL9wkVmg_yacNzMei2DAk21J5XX/pub --- Rebuttal Comment 1.1: Comment: I appreciate the authors for the detailed response. Most of my concerns are addressed, and I raise my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt and thoughtful engagement. We sincerely appreciate your detailed feedback and will incorporate these insights into the next revision of the paper. In particular, your suggestion to elaborate on the motivation for USDMs over MDMs will meaningfully enhance both the narrative and the overall clarity of the work.
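The latency and throughput numbers reported above follow a standard timing protocol: average per-batch wall-clock time over many batches, then convert to tokens per second. The harness below is a generic sketch of that protocol; `generate` is a hypothetical stand-in for a model's sampling call, not the actual DUO or AR implementation.

```python
import statistics
import time

def time_generation(generate, n_batches=100):
    """Return (mean, std) of per-call latency in seconds over n_batches
    calls, mirroring the mean +/- std reporting in the table above.
    `generate` is a hypothetical stand-in for a sampling call."""
    samples = []
    for _ in range(n_batches):
        t0 = time.perf_counter()
        generate()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

def throughput(batch_size, seq_len, mean_latency):
    """Tokens generated per second for a batch of sequences."""
    return batch_size * seq_len / mean_latency

# Dummy workload in place of a model's sampling step.
mean_s, std_s = time_generation(lambda: sum(range(50_000)))
tok_per_sec = throughput(batch_size=32, seq_len=1024, mean_latency=mean_s)
```

`time.perf_counter` is the appropriate clock here because it is monotonic and high resolution, unlike `time.time`.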
Summary: This paper presents the first theoretical connection between discrete and continuous diffusion models, showing that discrete diffusion emerges from an underlying continuous Gaussian diffusion process. Building on this insight, the authors propose a curriculum learning strategy to improve training efficiency. Furthermore, they develop distillation techniques inspired by the relationship between discrete and continuous diffusion. Finally, extensive experiments are conducted to validate their theoretical findings. Claims And Evidence: The experimental results generally support the authors' claims. Methods And Evaluation Criteria: The evaluation metrics used in the paper are generally comprehensive and appropriate for the task. However, generative perplexity (Gen PPL) has been criticized as an imperfect measure, as it can be artificially lowered by repeated words [1]. Despite this limitation, Gen PPL remains a widely used metric in this research area. Additionally, the authors mitigate this concern by including entropy as a supplementary evaluation metric. [1] Zheng et al. Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling. ICLR2025. Theoretical Claims: No. Experimental Designs Or Analyses: Yes, the authors’ experimental setup is reasonable. Supplementary Material: No Relation To Broader Scientific Literature: Continuous diffusion models have achieved significant breakthroughs in image and video generation. More recently, discrete diffusion models have gained increasing attention. However, key questions—such as the theoretical connection between discrete and continuous diffusion and whether widely used distillation techniques from continuous diffusion can be applied to discrete diffusion—have remained open. This paper makes a novel contribution by theoretically establishing the link between continuous diffusion and discrete diffusion based on the uniform forward process. 
Additionally, it introduces distillation techniques to discrete diffusion, further advancing the field. Essential References Not Discussed: What is the relationship between the proposed method and Bayesian flow networks [2, 3]? [2] Graves et al. Bayesian flow networks. arXiv 2023. [3] Xue et al. Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations. ICML2024. Other Strengths And Weaknesses: #### **Strengths** 1. The contributions of this paper are highly novel (as detailed in *Relation to Broader Scientific Literature*). 2. The paper is well-written, with clear logic and concise explanations. 3. Techniques such as curriculum learning are simple yet effective. #### **Weaknesses** 1. In Section 4.2, the authors claim that deterministic trajectories can be obtained in the continuous space using DDIM and then mapped to the discrete space. However, this is confusing because the continuous and discrete processes follow different trajectories. Based on the loss function, Duo is trained on discrete trajectories (i.e., the neural network receives discretized inputs after the *argmax* operation). Given this, how can we obtain a score function in the continuous space that allows deterministic trajectory generation via DDIM? Am I missing something here? 2. The authors do not address a key question: While they establish a theoretical connection between continuous and discrete diffusion, which enables the transfer of techniques such as distillation (despite my earlier concerns about DDIM), the loss function used (i.e., Eq. (14)) is equivalent to the original discrete diffusion model’s loss function (i.e., Eq. (5)). If the two are mathematically equivalent, why does Eq. (14) yield better PPL results than Eq. (5)? 3. I have some doubts about the comparison with the UDLM baseline. First, Table 1 and Table 3 do not include results for UDLM. 
I understand that the original UDLM paper did not report results on the OWT dataset or zero-shot PPL, but the authors retrained SEDD Absorb and Plaid for these comparisons. Why did they not retrain UDLM under the same conditions? Second, in Table 2, the reported result for retrained UDLM (36.71) is significantly worse than the result reported in the original UDLM paper (31.28). The original UDLM result is actually better than the proposed method’s result (33.68). Why is there such a large discrepancy? 4. I also have concerns about the generative perplexity experiments. First, Figure 3 does not report entropy values. Second, in Table 4, Duo's entropy is significantly lower than that of MDLM, which raises concerns about whether the improved generative perplexity is simply due to Duo generating low-diversity sentences. According to Section 6.1 in [1], the normal entropy range is around 5.6–5.7, while Duo’s entropy is noticeably lower than this range. Other Comments Or Suggestions: In Lines 768–769, is there a missing part of the sentence? It seems incomplete. Questions For Authors: Please refer to the Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We want to thank the reviewer for their constructive feedback. We address the reviewer's comments and questions below. # Concern 1: Generating trajectories using DDIM As noted in lines 250–252 (left), the deterministic Gaussian trajectories assume an optimal denoiser: given clean data $\mathbf{x}$ and a latent $\mathbf{x}\_t \sim q_t(\cdot \mid \mathbf{x})$, the optimal denoiser is defined as $\mathbf{x}^*_\theta(\mathbf{x}_t, t) = \mathbf{x}$ for all $t \in [0, 1]$. These DDIM trajectories (Eq. 17), under an optimal denoiser, preserve the Gaussian diffusion marginals (Song et al., 2022; Sec. C.1 in Zhou et al., 2025) and, when mapped to discrete space via $\arg \max$, align with the marginals of uniform-state diffusion, as shown in Sec. 3. # Concern 2: Training loss vs. eval loss We'd like to clarify that while Eq. 14 and Eq. 6 are equivalent, **Eq. 16, which corresponds to a biased estimate of the true ELBO, is used for training**. The $\arg \max$ in Eq. 14 is approximated by a tempered softmax in Eq. 16. This approximation reduces training loss variance and improves generalization. For a detailed explanation, please see our response to Concern 2 from Reviewer 1i8M. # Concern 3: Perplexity for UDLM in Tab. 2 and Tab. 3 The numbers for SEDD Absorb and Plaid are taken from Sahoo et al., 2024 and Lou et al., 2024. 
At the request of the reviewer, we conduct **new experiments** by training UDLM [3] on OWT as follows: | | Val PPL (OWT) $(\downarrow)$ | | --- | --- | | UDLM | 27.43 | | DUO (Ours) | **25.20** | Zero shot perplexities: | | Wikitext $(\downarrow)$ | AG News $(\downarrow)$ | LM1B $(\downarrow)$ | Lambada $(\downarrow)$ | PTB $(\downarrow)$ | Pubmed $(\downarrow)$ | Arxiv $(\downarrow)$ | | --- | --- | --- | --- | --- | --- | --- | --- | | UDLM | 39.42 | 80.96 | 77.59 | 53.57 | 112.82 | 50.98 | 44.08 | | DUO (Ours) | **33.57** | **67.81** | **73.86** | **49.78** | **89.35** | **44.48** | **40.39** | The conclusions remain unchanged—DUO is the state-of-the-art among USDMs. # Concern 4: Reported numbers for MDLM and UDLM in Tab. 2 Lines 258–264 (right) clarify the **discrepancy caused by differences in dataset preprocessing**. The LM1B dataset contains short sentences (~30 tokens each with the GPT-2 tokenizer). In prior work [1, 3, 4], each datapoint in a batch is a single sentence padded to 128 tokens. This results in each sentence being considered in isolation. In contrast, we follow the sentence-packing scheme from Austin et al. (2021) [5], where sentences are concatenated before batching. This results in sentences potentially being split between batches, with additional and potentially noisy context added. As a result, the packed data has a higher (worse) perplexity than padded data. At the reviewer’s request, we conduct **new experiments** where we trained our model using the preprocessing from [1, 3, 4], and report the results below. The conclusions remain unchanged—**DUO is state-of-the-art among USDMs** and approaches MDM performance. | | Val PPL (LM1B) $(\downarrow)$ | | --- | --- | | UDLM | 31.28 | | DUO (Ours) | 29.95 | | MDLM | 27.04 | # Concern 5: Low sample diversity To make the curves in Fig. 3 more readable, the corresponding entropy values are provided separately in Tab. 4. 
As clarified in lines 351–358, the entropies of MDLM, SEDD Absorb, and SEDD Uniform align with that of an autoregressive model without nucleus sampling (approximately 5.6). In contrast, **DUO’s entropy matches that of an AR model with nucleus sampling (p = 0.9)**, around 5.2. Qualitative samples (`anon. link [6]`) further show that DUO produces significantly higher-quality outputs than both distilled and undistilled MDLM at lower sampling steps $T=8$. # Comments: Clarification on lines 768-769 in the appendix We meant to say the following: "We empirically verify the equivalence of the USDM ELBO with discrete latents (Eqn. 6) and Gaussian latents (Eqn. 14). To do this, we trained UDLM [3] on LM1B using the true ELBO from (6). We then evaluated the model using Gaussian latents (Eqn. 14), and recovered the same perplexity (36.71) as when using discrete latents. For each datapoint $\mathbf{x}$, we used $1000$ Monte Carlo samples for $t$ sampled using antithetic-sampling, with a linear schedule for $\tilde{\alpha}_t = 1 − t$." --- References [1] Sahoo et al., 2024 “Simple and Effective Masked Diffusion Language Models” [2] Zhou et al., 2025 “Inductive Moment Matching” [3] Schiff et al., 2025 “Simple Guidance Mechanisms for Discrete Diffusion Models” [4] Lou et al., 2024 “Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution” [5] Austin et al., 2021 “Structured denoising diffusion models in discrete state-spaces” [6] `Anon. link`: https://docs.google.com/document/d/e/2PACX-1vR0uKDuQHl4bBuC8KokEQhHNMvGdxbIskJm_SfXO_L6haSzWEqjPtL9wkVmg_yacNzMei2DAk21J5XX/pub --- Rebuttal Comment 1.1: Comment: I appreciate the author’s response and have raised my score from 2 to 4 accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt and thoughtful engagement. We greatly appreciate your detailed feedback and will incorporate these discussions into the next revision of the paper. 
In particular, your suggestion to clarify the distinction between training loss and evaluation loss will help improve the overall clarity of the work.
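The antithetic sampling of $t$ mentioned in the clarification above amounts to drawing Monte Carlo times in mirrored pairs $(u, 1-u)$, so that low-noise and high-noise ELBO evaluations are coupled. A minimal sketch of that sampling step (the pairing scheme is standard antithetic variates; the surrounding ELBO computation is not reproduced):

```python
import random

def antithetic_times(n, seed=0):
    """Draw n Monte Carlo times in antithetic pairs (u, 1 - u). Under a
    linear schedule alpha_t = 1 - t, each pair couples a low-noise and a
    high-noise evaluation, reducing the variance of the ELBO estimate."""
    rng = random.Random(seed)
    ts = []
    for _ in range(n // 2):
        u = rng.random()
        ts.extend([u, 1.0 - u])
    return ts

ts = antithetic_times(1000)
mean_t = sum(ts) / len(ts)  # pinned near 0.5 by construction
```

Because every pair sums to one, the empirical mean of the sampled times is fixed at 0.5 regardless of the random seed, which is exactly the variance-reduction mechanism at work.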
Summary: This paper proposes a continuous parametrization of uniform-state discrete diffusion models. The key finding is a noise schedule (Eq. 11) that ensures the equivalence of the marginal distributions of the uniform-state discrete diffusion process and a Gaussian continuous diffusion process transformed by an argmax operator. The authors demonstrate two applications of this finding: to introduce a relaxed version of the training objective where a softmax operator is gradually annealed into the argmax operator, and to introduce a distillation method based upon the consistency loss of the underlying Gaussian diffusion. Their experiments show that the proposed method leads to improved perplexity in uniform-state diffusion models and faster sampling after distillation. ### After rebuttal ### Thank the authors for their response. My concern on the gradient variance is resolved, but the others remain. I stand by my initial rating. Claims And Evidence: The major claim is the equivalence of the uniform-state discrete diffusion process and a Gaussian continuous diffusion process transformed by an argmax operator, for which the authors provide a formal proof. On top of that, the authors further claim that a special curriculum that gradually anneals the softmax into the hard max can be introduced to reduce the variance in training, which is supported by the training curves with less perturbation than the baseline, as well as an improvement in the validation perplexity and the zero-shot perplexity. The authors also claim that the distillation method built on top of the aforementioned equivalence is effective, which is supported by the curves of Gen perplexity and entropy in Fig3 and Fig8. Methods And Evaluation Criteria: The proposed method makes sense for improved training of uniform-state diffusion models. The evaluation criteria are all commonly used in the field. Theoretical Claims: I have checked the proofs and they look sensible to me. 
Experimental Designs Or Analyses: The experiments and the associated analyses are quite standard, so I don't see any fundamental issues. However, I am curious why the reduction of variance is only demonstrated with the training loss, instead of the gradients. Supplementary Material: N/A Relation To Broader Scientific Literature: The uniform-state discrete diffusion model is an important family, for it naturally induces error correction with more sampling steps. However, it does not work as well as masked diffusion. This paper is an attempt to improve the training. Essential References Not Discussed: N/A Other Strengths And Weaknesses: + The writing is clear and very easy to follow. - Although the equivalence between the uniform-state discrete diffusion process and a Gaussian continuous diffusion process transformed by an argmax operator is established formally, the training loss appears to be less rigorous. The network takes in the soft version but predicts the hard version. This may render the training loss harder to interpret. Reflected in the empirical observation, the curves in Fig2 show a clear margin between the proposed method and MDLM, but the result in Table 2 reports the opposite. Other Comments Or Suggestions: The name of the proposed method DUO appears abruptly in the experiment section. Questions For Authors: Apart from my concerns mentioned above, I am also curious to know how the distilled network is parametrized. If the student model is similar to the teacher in terms of the factorization of dimensions as articulated in Xu et al. 2024, why wasn't it an issue for the proposed method? Xu et al. 2024, ENERGY-BASED DIFFUSION LANGUAGE MODELS FOR TEXT GENERATION Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for their constructive feedback. We address the reviewer's comments and questions below. # Concern 1: Interpreting training loss (Fig 2) vs Tab 2 We will clarify Fig 2 by highlighting the key takeaway: DUO's training loss exhibits significantly lower variance than both MDLM's and UDLM's. We thank the reviewer for pointing out that the biased training loss of DUO distracts from this key takeaway, and will revise this figure to emphasize the empirical variance of the loss, as well as the gradient variance (more details in Concern 2). We note that in the current Fig. 2 (paper), DUO's curve lies below MDLM's because its training loss (Eq. 16) is a biased approximation of the true ELBO (Eqs. 6 and 14). This bias causes the training loss to be lower. More formally, for the true ELBO (Eq. 6), the discrete diffusion parameter $\alpha_t$ must lie in $[0, 1]$. However, during training with the approximation (Eq. 16), it suffices to choose the Gaussian diffusion parameter $\tilde{\alpha}_t \in [0.85, 0.97]$, which maps to $\alpha_t = \mathcal{T}(\tilde{\alpha}_t) \in [\approx 0, \approx 1]$ for $\tau \to 0^+$ (see Fig. 4, appendix). Since **Gaussian latents contain more signal than noise**, while training with Eq. 16 with $\tau > 0$ the denoising model finds reconstructing the input easier, **resulting in a lower training loss**. Tab. 2 (paper) reports the validation perplexity computed using the true ELBO (Eq. 6 for DUO). For clarity, we report the validation perplexities of various methods below at different training steps and observe that DUO consistently outperforms UDLM and approaches the performance of MDLM. We thank the reviewer for pointing out the potentially conflicting interpretations of Fig 2 and Tab. 2, and believe revising Fig 2 to focus on variance will alleviate this. 
| | 10K | 100K | 250K | 500K | 750K | 1M | | --- | --- | --- | --- | --- | --- | --- | | MDLM | 65.75 | 40.01 | 36.37 | 33.41 | 32.66 | 32.03 | | UDLM | 91.95 | 74.71 | 43.11 | 39.70 | 37.63 | 36.71 | | DUO (Ours) | 69.34 | 57.11 | 37.44 | 35.20 | 34.22 | 33.68 | **Note:** UDLM is equivalent to DUO w/o Curriculum Learning # Concern 2: Gradient Variance As requested by the reviewer, we report gradient variance across model weights. We compute the variance of each weight’s updates over 10 steps on LM1B at different training steps. **With Curriculum Learning (CL), gradient variance is significantly lower**—especially in early training (RTab 1; below). Among the 100 weights with the highest variance (RTab 2; below), **CL reduces variance by an order of magnitude** early on, which gradually diminishes over time. This reduction appears beneficial, as it accelerates learning in the initial phases, reflected in improved validation loss; see above. **(RTab 1) Sum of all variances:** | | 10k | 20k | 50k | 100k | 500k | | --- | --- | --- | --- | --- | --- | | CL | **2815.36** | **2471.65** | **1890.76** | **1469.85** | **947.98** | | w/o CL | 10852.9 | 7811.04 | 6315.7 | 5454.7 | 1678.47 | **(RTab 2) Sum of highest 100 variances:** | | 10k | 20k | 50k | 100k | 500k | | --- | --- | --- | --- | --- | --- | | CL | **0.30** | **0.85** | **1.21** | **0.86** | **1.15** | | w/o CL | 11.7 | 20.09 | 34.2 | 55.1 | 1.92 | We observe a similar trend in the variances of the loss as well; see below. **(RTab 3) Sum of highest 100 loss variances:** | | 10k | 20k | 50k | 100k | 500k | | --- | --- | --- | --- | --- | --- | | CL | **7.09** | **6.29** | **5.33** | **4.97** | **4.76** | | w/o CL | 9.19 | 7.72 | 6.85 | 6.32 | 5.47 | We will replace Fig 2 (the loss curves) with these findings. # Other comments During distillation, the student and teacher networks share the same architecture and differ only in weights—the teacher uses EMA weights, while the student uses the current model parameters. 
Both networks factorize the joint distribution independently as $p_\theta(\mathbf{x}\_0 | \mathbf{z}\_t) = \prod_i p_\theta(\mathbf{x}^i_0 | \mathbf{z}\_t)$. Although Xu et al. (2024) propose a more expressive factorization with an energy term, $p_\theta(\mathbf{x}\_0 | \mathbf{z}\_t) = \prod_i p_\theta(\mathbf{x}^i_0 | \mathbf{z}\_t) e^{E_\phi} / Z_\phi$, which leads to a better fit to the data, the independence assumption performs well in practice [2, 3, 4]. Modeling the energy function $E_\phi$ and the partition function $Z_\phi$ is also non-trivial, often requiring an additional network and complicating both training and sampling. We believe incorporating this approach could enhance our model’s expressiveness and benefit both DUO and MDLM, which we leave for future work. --- References [1] Xu et al., 2024 “Energy-Based Diffusion Language Models for Text Generation” [2] Sahoo et al., 2024 “Simple and Effective Masked Diffusion Language Models” [3] Schiff et al., 2025 “Simple Guidance Mechanisms for Discrete Diffusion Models” [4] Lou et al., 2024 “Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution”
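The EMA-teacher setup described above (teacher = exponential moving average of the student's weights) can be sketched in a few lines. Weights are flat lists of floats here, and the decay value is an assumed placeholder rather than the paper's actual hyperparameter.

```python
def ema_update(teacher, student, decay=0.999):
    """One exponential-moving-average step pulling the teacher weights
    toward the student's, as in the EMA-teacher distillation setup
    described above. Operates on flat lists of floats for simplicity;
    the decay value is an illustrative assumption."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher, student)]

teacher = [0.0, 1.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, decay=0.9)
# teacher[0] has moved a fraction (1 - decay) of the gap toward the student
```

With a decay close to one, the teacher changes slowly and provides a smoothed, more stable target for the consistency loss than the raw student weights.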
OptMATH: A Scalable Bidirectional Data Synthesis Framework for Optimization Modeling
Accept (poster)
Summary: This paper proposes an automatic data synthesis framework for LLM optimization modeling. The method can control problem complexity starting from seed data, and then obtains the natural language description via a backtranslation step. Experiments demonstrate the effectiveness of training LLMs of various sizes on the generated dataset. Claims And Evidence: The paper mentions that "This increased complexity, manifested through longer problem descriptions, poses greater challenges for LLMs." The authors may want to investigate the relation between modeling accuracy and problem length. In my understanding, the most challenging part for LLMs is understanding the scenarios of the problems, not necessarily the description length. Methods And Evaluation Criteria: While using LLMs to analyze the LP files sounds interesting, this approach may be hard to generalize to large-scale instances. In practice, LP files can be large, even more than 10M, which poses a great challenge for LLMs to process such long inputs. Theoretical Claims: This paper does not contain any proofs of theoretical claims. Experimental Designs Or Analyses: 1. Some experimental results are missing. I wonder whether the results of Chain-of-Experts/Optimus on MAMO EasyLP/MAMO ComplexLP/OptMATH Bench are missing. 2. The authors may want to conduct experiments on a harder dataset, either IndustryOR or ComplexOR. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper is related to LLM automatic formulation for mathematical optimization. Essential References Not Discussed: The authors may want to cite the following work on LLM automatic modeling. [1] OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling. Other Strengths And Weaknesses: Strengths: 1. This paper proposes an interesting bidirectional data synthesis framework for optimization modeling. 2. The experimental results on LLM fine-tuning seem promising. 
Weaknesses: 1. The improvement in the Qwen models is significant. However, to demonstrate the applicability to other LLMs, I suggest the authors provide experiment results on the Llama models. Other Comments Or Suggestions: If the authors can address my concerns, I would like to increase my score. Questions For Authors: 1. How about the data generation efficiency of the method? 2. How about the number of constraints and variables in the generated problems? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed feedback. We address the specific questions: **Regarding Claims And Evidence:** - **Problem Length vs. Complexity:** We fully agree that scenario understanding is crucial for LLMs. However, more complex scenarios naturally require longer, more detailed NL descriptions (e.g., ROADEF 2012 [1]), challenging both the LLM's abstract reasoning and its long-context processing. Thus, length partly reflects scenario complexity. Our paper's results on MAMO (Fig 3, Table 1) show a correlation between scale/length and reduced accuracy. - **Scenario/Problem Type Coverage:** Our seed dataset features **over 50 expert-curated problem classes**, which we believe provides substantial and representative coverage. As detailed in Appendix A.2, each class is grounded in **referenced literature and reflects practical application scenarios**. While methods like the evolutionary approach in Li et al. [2] can generate synthetic variations, they often start from a limited set of initial classes (reportedly 8 in that case), whereas our foundation of **50+ manually curated classes is significantly more extensive**. Moreover, such synthetic generation typically recombines existing elements rather than creating fundamentally new problem types. We are confident in the breadth of our 50+ classes; further expansion could effectively build upon this rich seed set using augmentation or evolution. **Regarding Methods And Evaluation Criteria:** - **Scalability for Massive Instances:** Our current pipeline already handles considerable scales effectively (up to ~25k characters, Fig 2). However, for the ultra-large instances (>10MB) you mentioned, direct LLM processing is indeed problematic due to context limits. To address this, our framework can adapt using a Model-Data Separation Format (similar to OptiBench [3]): - **Format:** Instance = (NL Description + Structured Data File). The NL part contains the scenario and data requirements; the Data File contains the numerics. 
- **OptMATH Adaptation:** Generate (NL + Data File) pairs. Train AutoFormulator to generate code reading the Data File, validating via OV or bipartite graph check. The (MF, NL, PD) interpretation shifts: NL includes scenario + data ref; PD is the final solver-ready format. This is a promising direction for ultra-large scales. **Regarding Experimental Designs Or Analyses:** - **Missing Results & Harder Datasets:** We added the requested Optimus results on MAMO/OptMATH Bench and other benchmarks. We also added IndustryOR and OptiBench results. See the table in the response to **Reviewer mCmd.** - **Regarding Chain-of-Experts and ComplexOR:** We do not show the results for Chain-of-Experts (CoEs) [5] and ComplexOR. The input for the CoEs requires each instance to include a structured code template and detailed descriptions of its parameters. Additionally, the ComplexOR dataset format is not suitable for end-to-end modeling tests, as its data format for each problem includes a natural language description without numerics. **Regarding Essential References Not Discussed:** - We agree [4] is pertinent and will add/discuss it in revised Sec 1. While both explore reverse synthesis, OptMATH uniquely emphasizes: - **Generator-based Scalability:** Abstract MF + parameterized generators enable large-scale, diverse PD creation (Sec 3, Alg 1). - **Rigorous Semantic Validation:** Strict Optimal Value (OV) equivalence check (Sec 4.3) ensures high NL-PD correctness (99.6% manual accuracy), beyond just code executability. **Regarding Weaknesses:** - **Applicability to Llama:** We fine-tuned Llama 3.1 8B on OptMATH-Train, showing applicability. The table in the response to Reviewer mCmd shows significant improvements over baseline Llama. Checkpoint performances are uploaded in [6] under the name llama_checkpoints_perf. **Regarding Questions For Authors:** - **Data Generation Efficiency:** Uses feedback-driven tuning (Alg 1) for controllable quality. 
PD generation success ~50% (see [6], rate_after_feedback_tuning figure). Rejection sampling (OV matching validation) acceptance is 62.14% (Fig 14: T=1 efficient). 200k valid (NL, PD) samples cost ~$1914 (13.35B tokens, DeepSeek-V3, promotional period), much cheaper than manual creation. - **Number of Constraints/Variables:** OptMATH-Train covers wide range: up to ~2500 vars / ~1800 cons, including complex instances. Please refer to optmath_train_under500, optmath_train_linear, optmath_train_log, benchmarks_distribution, benchmarks_under100, and benchmarks_box_plot figures in [6] for a more detailed description. --- **References:** [1] ROADEF Challenge 2012 Subject. [2] Towards foundation models for mixed integer linear programming. [3] OptiBench: A Large Language Model Benchmark for Optimization Problem Understanding and Formulation. [4] OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling. [5] Chain-of-Experts: When LLMs meet complex operations research problems. [6] https://anonymous.4open.science/r/OptMATH-Rebuttal-8F5E
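The OV-matching rejection step quoted above (62.14% acceptance) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the `ov_match` and `rejection_sample` helpers and the tolerance are hypothetical stand-ins, and the hard-coded optimal values replace the actual Gurobi solves of the original and regenerated instances.

```python
import math

def ov_match(ov_ref: float, ov_gen: float, rel_tol: float = 1e-6) -> bool:
    """Accept a back-translated instance only if its re-solved optimal
    value matches the reference optimal value within a small tolerance."""
    return math.isclose(ov_ref, ov_gen, rel_tol=rel_tol)

def rejection_sample(pairs):
    """Filter (OV_reference, OV_roundtrip) pairs; return the kept pairs
    and the acceptance rate of the sampling step."""
    kept = [p for p in pairs if ov_match(*p)]
    return kept, len(kept) / len(pairs)

# Toy illustration: 3 of 4 round trips reproduce the reference OV.
pairs = [(42.0, 42.0), (17.5, 17.5), (10.0, 11.0), (3.0, 3.0)]
kept, rate = rejection_sample(pairs)
```

In the real pipeline each element of `pairs` would come from solving the original problem data and the code regenerated from the natural-language description.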
Summary: This paper proposes OptMATH, a method for data generation in the field of optimization modeling, which primarily combines the "Back Translation" technique from previous work with the "rejection sampling" method. Strictly speaking, it falls under the categories of data augmentation and data annotation within data generation. Additionally, it introduces a benchmark that is more comprehensive compared to others in the field, capable of varying difficulty levels and providing stricter scoring, thereby better assessing a model's optimization modeling capabilities. Overall, a substantial amount of experimentation has been conducted, covering all necessary aspects, and the experimental results are promising, even surpassing those of GPT-4. Claims And Evidence: Please refer to Questions For Authors Methods And Evaluation Criteria: The methods and metrics make sense. Theoretical Claims: This is a work of data synthesis, so there is no proof in the article, and many formulas are just explanations or definitions of properties. Experimental Designs Or Analyses: The submission contains some claims that are not fully supported by clear and convincing evidence. Specific problematic claims will be addressed in the "Other Strengths And Weaknesses" section. Supplementary Material: Please refer to Questions For Authors Relation To Broader Scientific Literature: 1. This article falls within the domains of data augmentation and data annotation in data generation, specifically under the broader category of synthetic data generation. 2. The techniques it employs, namely "back translation" and "rejection sampling," are not uncommon in the field of synthetic data generation. Essential References Not Discussed: No Other Strengths And Weaknesses: Advantages: 1. 
The paper introduces OptMATH, a method for data generation in optimization modeling integrating "Back Translation" with "rejection sampling", contributing to solving the data shortage in the field of optimization modeling. 2. Besides, the introduction of OptMATH-Bench provides a more robust assessment of a model's optimization capabilities compared to existing benchmarks. 3. The extensive experimentation conducted covers many necessary aspects of the study, yielding promising results that even surpass those of GPT-4. Disadvantages: 1. The article lacks transparency in explaining how the rejected data was refined to establish the proposed benchmark, leaving a gap in understanding the curation process. 2. The paper fails to clearly articulate where the controllable difficulty aspect of the research is manifested. While mentioning the definition of difficulty levels and data generation through a generator, the specifics of training, generation, and validation of the produced data remain undisclosed. This lack of detail raises concerns about the robustness and reliability of the generated seed dataset. 3. The utilization of data from challenging benchmarks as seed data, without a comprehensive explanation or validation process, raises questions about the fairness and reasonableness of the experimental setup. Testing on different benchmarks, even if not identical, could potentially introduce biases or skew results. 4. Insufficient elaboration on the methodology employed to create OptMATH-Bench limits the clarity on its distinctiveness from the training data generation process. Without a clear distinction in sources, distributions, and methodology, achieving superior performance on OptMATH-Bench may primarily reflect strong in-domain capabilities rather than a comprehensive model evaluation. Other Comments Or Suggestions: No Questions For Authors: 1. They did not provide a detailed explanation of how the rejected data was further curated to create the proposed benchmark. 2. 
It is not clearly stated where the controllable difficulty of this work is reflected. They simply mentioned defining difficulty levels and using a generator to produce data? My understanding is that they classify the difficulty of existing data, then train a generator to produce corresponding PD, MF, and OV based on difficulty requirements, which are then fed into the data generation pipeline. However, the specific training and generation details were not further disclosed. Additionally, I assume these data serve as the seed dataset. Was there further validation to ensure the reasonableness and correctness of the generated seed data? 3. Part of the data source comes from challenging benchmarks. Generating data from other benchmarks and then testing on different benchmarks, even if not the same one, raises questions: Is this setup reasonable? Is it fair? 4. The article does not provide further details on the method for creating OptMATH-Bench. My understanding is that if the methods for generating training data and the benchmark are largely similar, and their sources and distributions are consistent, then good performance on OptMATH-Bench may only indicate strong in-domain capabilities. Outperforming other models on this benchmark might not be entirely fair. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed feedback. We address the specific questions raised: **1. Regarding Questions 1 & 4: OptMATH-Bench Curation, Distinctiveness, and In-Domain Evaluation** We clarify OptMATH-Bench's curation and distinction from OptMATH-Train to address concerns about evaluating only in-domain capabilities: - **Dual Curation Pathways:** OptMATH-Bench was created via two distinct routes. **Pathway 1** started with instances **rejected** by our AutoFormulator (failed OV check), indicating initial difficulty. An "LLM-Committee" (inspired by [1], using diverse powerful models like GPT-4, Claude, Gemini, DeepSeek) then filtered these: **(PD, NL) pairs** were retained only if **at least one and at most two** committee members successfully formulated them (passed OV check). This isolated well-posed but non-trivial modeling challenges. Crucially, **human OR experts** subsequently **validated the correctness** of these selected pairs and further refined them based on relevance and clarity. **Pathway 2** involved **experts directly curating challenging problems from external OR literature** (journals, textbooks), ensuring methodological/source independence and including known hard problem types (e.g., NLP, SOCP - see Figure 4). - **Addressing "In-Domain" Concern:** This dual approach ensures distinction. Pathway 2 uses external sources/methods. Pathway 1 involves significant expert validation. Even if underlying PD distributions overlap, the **distribution of (PD, NL) pairs** in OptMATH-Bench is fundamentally different due to curation. The LLM-Committee filtering specifically ensures the benchmark represents the **hard tail of this paired distribution** relative to strong LLM capabilities. The superior performance of our finetuned model on newly added IndustryOR and OptiBench benchmarks (see Table of the response to **Reviewer mCmd**) further contradicts a purely in-domain evaluation and shows generalization ability. 
*Proposed Revision:* We will revise Section 4.3/6.1 to detail these distinct curation pathways, emphasizing expert roles, external sourcing, the LLM-committee filtering targeting the (PD, NL) hard tail, and problem diversity. **2. Regarding Question 2: Controllable Difficulty Mechanism and Seed Data Validation** We clarify the difficulty control and validation, addressing a misunderstanding about generator training: - **Generator Usage Clarification:** We must correct a misunderstanding: we do **not** *train* generators. We use **pre-defined, parameterized code generators** implementing standard MFs from OR literature (e.g., Bin Packing [2], Appendix E.6). Our novelty is controlling their *input parameters*. - **Feedback-Driven Parameter Tuning (Alg. 1):** Difficulty is controlled via an iterative LLM feedback loop (inspired by [3]). The LLM suggests/refines generator parameters based on evaluation feedback (complexity score S(PD), solve time, feasibility) from generated instance batches, steering output towards targets. - **Further Validation:** AutoFormulator training uses **curriculum learning**. Seed MFs/generators are expert-curated from literature (Appendix A.2) **to ensure the reasonableness and correctness**. PDs are validated for feasibility/solvability (Alg. 1) and OV equivalence (Alg. 2). *Proposed Revision:* We will revise Sections 3, 5.2, Appendix A/B to detail the LLM-driven *parameter tuning* (not generator training), curriculum learning, seed curation, and validation. **3. Regarding Question 3: Use of Benchmarks for Seed Inspiration and Experimental Fairness** We clarify the use of solving benchmarks like MIPLIB and address fairness: - **Solving Benchmarks as Inspiration Only:** These were used *solely* to identify **representative OR problem classes** (e.g., TSP, Job Shop). Their specific solver-focused PD (lacking NL) were **not** reused. 
- **Independent Generator Development:** We built **new parameterized generators from scratch** based on MFs from classic OR **literature** for each identified class (e.g., Job Shop [4], Appendix A.2). - **Distinction Justifies Fairness:** The clear separation – using solver benchmarks only for class inspiration, building new generators from literature, not reusing instances, and testing on different types of benchmarks (Optimization Modeling vs. Solving) – ensures our setup is reasonable and fair. We argue that **this strikingly contradicts hacking the modeling benchmarks**. We do not use any benchmarks directly. *Proposed Revision:* We will revise Section 3 and Appendix A.2 to explicitly state this distinction and the generator development process based on literature. --- **References:** [1] Auto-Arena: Automating LLM Evaluations with Agent Peer Battles and Committee Discussions. [2] Analysis and design of algorithms in combinatorial optimization. [3] Large language models as optimizers. [4] The shifting bottleneck procedure for job shop scheduling.
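The feedback-driven parameter tuning described in this rebuttal (Alg. 1: pre-defined parameterized generators whose *input parameters* are steered toward a target difficulty) can be illustrated with a minimal sketch. Everything here is a hypothetical simplification: `bin_packing_instance` stands in for the expert-curated generators, `complexity` for the score S(PD) (the paper also uses solve time and feasibility), and a numeric update rule replaces the LLM's parameter suggestions.

```python
import random

def bin_packing_instance(n_items: int, capacity: int, seed: int):
    """Parameterized generator: emit one bin-packing problem-data (PD)
    instance from the classic formulation, controlled by its inputs."""
    rng = random.Random(seed)
    sizes = [rng.randint(1, capacity) for _ in range(n_items)]
    return {"sizes": sizes, "capacity": capacity}

def complexity(pd) -> int:
    # Stand-in for the complexity score S(PD).
    return len(pd["sizes"])

def tune_parameters(target: int, max_iters: int = 20):
    """Feedback loop: evaluate a generated instance, then adjust the
    generator's input parameters toward the target difficulty."""
    n_items, pd = 5, None
    for _ in range(max_iters):
        pd = bin_packing_instance(n_items, capacity=100, seed=0)
        gap = target - complexity(pd)
        if gap == 0:
            break
        n_items = max(1, n_items + gap)  # feedback step on the parameter
    return n_items, pd
```

The key design point mirrored here is that the generator itself is never trained; only its inputs change between iterations.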
Summary: The paper proposes a framework named OptMATH for synthesizing high-quality datasets aimed at optimization modeling from natural language descriptions. This framework addresses the scarcity of optimization datasets by generating problem data through mathematical formulations and back-translation into natural language descriptions. The framework includes a rigorous quality control process involving forward modeling and rejection sampling to ensure mathematical consistency. Extensive experiments demonstrate that models trained on the OptMATH dataset outperform existing benchmarks, showcasing the framework's effectiveness and scalability. Claims And Evidence: Experimental results have shown the proposed method can achieve strong results compared with previous studies. Methods And Evaluation Criteria: Yes, the proposed method mainly addresses the data scarcity issue in optimization modeling. Theoretical Claims: I did not find any flaws currently. Experimental Designs Or Analyses: Yes Supplementary Material: Yes, the dataset details, training details and some additional experimental results. Relation To Broader Scientific Literature: I cannot provide an accurate assessment towards the broader scientific literature, as I am not sufficiently familiar with the target literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths 1. The proposed bidirectional data synthesis framework is innovative and provides a systematic solution to the issue of data scarcity in optimization modeling. 2. The use of rejection sampling ensures high-quality data generation, with the framework demonstrating a remarkable 99.6% accuracy in maintaining mathematical consistency. 3. The framework is versatile, covering over 10 real-world applications with various optimization problems such as LP, MILP, IP, NLP, and SOCP. 4. 
Comprehensive experiments validate the framework's effectiveness, with models trained on OptMATH achieving superior performance on multiple established modeling benchmarks, including NL4Opt and MAMO. ### Weaknesses 1. While the authors highlight limitations in previous prompting-based methods, the paper lacks clear articulation of how OptMATH specifically overcomes these issues. In the 'Our Contributions' section, consider adding concise explanations that directly contrast OptMATH's advancements with those of prior work, providing readers with a clearer understanding of its unique advantages. 2. The experiments primarily focus on NL4Opt and MAMO benchmarks. Evaluating the framework on a wider range of standard datasets would be better. 3. To better illustrate the performance gains achieved by OptMATH, consider including results for Qwen2.5-7B and 32B in Table 1. This would provide a more comprehensive comparative analysis. 4. The selection of seed data is a critical aspect of OptMATH. The paper should dedicate more space to detailing the process of seed data selection, including the criteria and methodologies employed. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate the suggestions for improving the clarity and scope of our work. We address each point below: **Regarding Weakness 1:** We will revise the **'Our Contributions' section** to more explicitly contrast OptMATH with prior prompting-based methods. Key differentiators that will be emphasized include: - **Specialized & Efficient Models:** Fine-tuning on OptMATH-Train yields specialized AutoFormulator models demonstrating superior performance. Additionally, complex, multi-step prompting pipelines (e.g., multi-turn interaction) can sometimes confuse models on simpler tasks, potentially degrading performance relative to complex ones (see Table, OptiMUS on MAMO EasyLP). Furthermore, fine-tuning yields models with low inference costs suitable for large-scale deployment, unlike prompt-based methods requiring continuous, expensive calls to powerful foundation model APIs. - **Beyond Prompting:** Unlike methods relying solely on base LLM capabilities via prompting, OptMATH focuses on fine-tuning models using a large-scale, high-quality dataset synthesized specifically for optimization modeling. This fine-tuning approach **is complementary to prompt engineering techniques**, offering a path to enhance foundational model capabilities rather than just leveraging existing ones. - **Scalable High-Quality Data Generation:** Our bidirectional framework systematically generates vast amounts of verified (NL, MF, PD) triplets, addressing the data scarcity that limits fine-tuning approaches. - **Rigorous Semantic Validation:** We employ Optimal Value (OV) based rejection sampling, ensuring semantic consistency between the NL description and the problem data, which is more rigorous than checks for mere code executability often used implicitly by prompting methods. 
- **Controllable Complexity & Diversity:** The framework allows generating problem data with targeted difficulty via feedback loops and incorporates diverse problem types. **Regarding Weaknesses 2&3:** We now include results on the IndustryOR [1] and OptiBench [2] benchmarks. Our Qwen2.5-32B model, finetuned on the OptMATH-Train dataset, demonstrates consistently superior performance, achieving results comparable to GPT-4 and DeepSeek-V3. Results for OptiMUS, Llama, and ORLM on these benchmarks are also presented. Furthermore, to illustrate the impact of finetuning, results for the baseline Llama and Qwen models (before finetuning) have been added. This updated table provides a comprehensive comparative analysis. Due to time constraints, the reproduced ORLM results were neither tuned nor run multiple times, which explains the drop in performance.

| Models | NL4OPT | MAMO EasyLP | MAMO ComplexLP | OptMATH-Bench | IndustryOR | OptiBench |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5-turbo | 78.0% | 79.3% | 33.2% | 15.0% | 21.0% | 58.1% |
| GPT-4 | 89.0% | 87.3% | 49.3% | 16.6% | 33.3% | 68.6% |
| DeepSeek-V3-1226 | 95.9% | 88.3% | 51.1% | 32.6% | 37.0% | 71.6% |
| OptiMUS based on GPT-4o (2024-05-13) | 78.8% | 77.0% | 43.6% | 20.2% | 31.0% | 45.8% |
| LLama3.1_8B (pass@1) | 0% | 0.2% | 0% | 0% | 0% | 0% |
| OptMATH_LLama3.1_8B (pass@1) | 55.5% | 73.9% | 40.8% | 24.4% | 18% | 55.5% |
| OptMATH_LLama3.1_8B (pass@8) | 97.6% | 94.2% | 71.6% | 51.6% | 37% | 66.6% |
| Qwen2.5_7B (pass@1) | 86.9% | 83.6% | 21.8% | 1.6% | 10% | 36.2% |
| OptMATH_Qwen2.5_7B (pass@1) | 94.7% | 86.5% | 51.2% | 24.4% | 20% | 57.9% |
| OptMATH_Qwen2.5_7B (pass@8) | 98.4% | 94.5% | 72.5% | 56.0% | 38.0% | 68.1% |
| Qwen2.5_32B (pass@1) | 92.7% | 82.2% | 44.6% | 9.3% | 16.0% | 47.6% |
| **OptMATH_Qwen2.5_32B (pass@1)** | **95.9%** | **89.9%** | **54.1%** | **34.7%** | **31.0%** | **66.1%** |
| OptMATH_Qwen2.5_32B (pass@8) | 97.9% | 93.9% | 75.4% | 67.4% | 47.0% | 76.8% |
| ORLM-LLaMA-3-8B (reported) | 85.7% | 82.3% | 37.4% | * | 38.0% | * |
| ORLM-LLaMA-3-8B (reproduced) | 84.5% | 74.9% | 34.1% | 2.6% | 24.0% | 51.1% |

**Regarding Weakness 4:** We agree this is important. The comprehensive methodology, criteria, and structured organization for our seed data generation—including how we utilize benchmark problem structures (e.g., from MIPLIB) to create validated, parameterized instance generators and associated metadata—are detailed in **Appendix A.2 (Seed Classes)**. Recognizing the value of highlighting this in the main text, we will revise **Section 3** to include a concise summary of this systematic process and clearly reference Appendix A.2 for the full description. --- References: [1] ORLM: Training large language models for optimization modeling. [2] OptiBench meets ReSocratic: Measure and improve LLMs for optimization modeling.
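For reference, pass@k numbers such as the pass@1/pass@8 results in the table are conventionally computed with the standard unbiased estimator from code-generation evaluation (given n generations per problem, of which c are correct); this is the common definition, not necessarily the authors' exact evaluation script.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    without replacement from n generations is among the c correct ones,
    i.e. 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples exist
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = 8 generations and c = 2 correct: pass@1 = 0.25, pass@8 = 1.0.
```

Averaging this quantity over all benchmark problems yields the reported percentage.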
Summary: The paper presents OptMATH, a scalable bidirectional data synthesis framework designed to address the challenge of data scarcity in optimization modeling. It automatically generates high-quality optimization problem data with controllable complexity, starting from curated seed data with mathematical formulations. The framework employs a backtranslation step to obtain natural language descriptions and uses forward modeling with rejection sampling to verify the correspondence between the descriptions and problem data. The accepted pairs form the OptMATH training dataset, while rejected pairs are filtered to create a challenging benchmark. Extensive experiments demonstrate that models trained on OptMATH achieve superior results on multiple modeling benchmarks, validating the framework's effectiveness and scalability. Claims And Evidence: In general, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem. Theoretical Claims: This paper did not provide any proofs for theoretical claims. Experimental Designs Or Analyses: See weaknesses below Supplementary Material: I have not thoroughly reviewed the supplementary materials yet. Relation To Broader Scientific Literature: Optimization. Essential References Not Discussed: OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling. ICLR 2025 Other Strengths And Weaknesses: While the OptMATH framework presents a significant advancement in generating high-quality optimization modeling datasets, there are several potential weaknesses and areas for improvement: 1. Complexity of Natural Language Descriptions: The paper acknowledges that the complexity of natural language descriptions can vary widely. 
While the framework aims to generate high-quality descriptions, there may still be instances where the descriptions are overly complex or ambiguous, making it difficult for models to accurately translate them into mathematical formulations. 2. Generalization to New Domains: Although the framework covers a wide range of optimization problems, its ability to generalize to entirely new domains or problem types not covered in the seed data is uncertain. The framework relies on curated seed data, which may limit its adaptability to novel optimization scenarios. 3. Computational Resources: The bidirectional synthesis process, including feedback-driven problem data generation and rejection sampling, is computationally intensive. This may limit the scalability of the framework, especially for very large datasets or extremely complex optimization problems. 4. Optimization Problem Diversity: While the framework generates a diverse set of optimization problems, there is a risk that the generated problems may not fully capture the diversity of real-world optimization challenges. The reliance on existing benchmarks and expert-curated seed data might introduce biases or limit the range of problem types generated. 5. Evaluation Metrics: The paper uses accuracy (pass@1) as the primary evaluation metric, which measures whether the optimal value obtained by the generated code matches the ground truth. While this is a relevant metric, it may not fully capture the quality of the generated mathematical formulations or the reasoning capabilities of the models. Additional metrics, such as the diversity of generated problems or the robustness of the models to variations in problem descriptions, could provide a more comprehensive evaluation. 6. Human-in-the-Loop: The framework involves a significant amount of human effort in curating seed data, designing prompt templates, and validating the generated datasets. 
This reliance on human expertise may limit the framework's ability to be fully automated and could introduce human biases. 7. Solution-Based Validation: The rejection sampling mechanism relies on comparing the optimal values of the original and generated problem instances. While this approach ensures a high degree of semantic equivalence, it may not guarantee the exact equivalence of the mathematical formulations. Further research is needed to develop more sophisticated validation techniques that can ensure the precise correspondence between natural language descriptions and mathematical formulations. 8. Model Size and Data Scaling: The paper demonstrates that larger models generally achieve better performance, but the relative gains from fine-tuning diminish as model size increases. This suggests that there may be diminishing returns in scaling up the model size, and more efficient training strategies or model architectures might be needed to achieve better performance on complex optimization tasks. Overall, while the OptMATH framework represents a significant step forward in generating high-quality optimization modeling datasets, addressing these weaknesses could further enhance its effectiveness and applicability in real-world scenarios. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review on the potential areas for improvement. We appreciate the opportunity to address these points: **Essential References Not Discussed:** See the “Essential References Not Discussed” section of the response to Reviewer PCa3 **Regarding Weakness 1:** Our framework directly addresses this via the **rejection sampling mechanism detailed in Section 4.3 and Algorithm 2**. Each generated NL description is translated back into problem data (PD') using our AutoFormulator. We then rigorously validate this translation by comparing the optimal objective value (OV') obtained from solving PD' against the optimal value (OV) from the original problem data (PD). Only pairs where OV' equals OV are accepted into OptMATH-Train. Consequently, this process **can process NL descriptions that seem to be complex or ambiguous** accurately, ensuring that all included data points feature corresponding NL and PD that are **demonstrably modelable by an LLM**. **Regarding Weakness 2:** Please refer to the "Addressing 'In-Domain' Concern" and "Distinction Justifies Fairness" sections of the response to Reviewer LZQX and the table in the response to Reviewer mCmd, where we add new benchmarks. These arguments clearly show that the **benchmarks, even OptMATH-Bench, originating from the same PD distribution, do not overlap with OptMATH-Train**. These results show the models finetuned on OptMATH-Train **generalize to entirely new domains or problem types beyond the curated seed data.** **Regarding Weakness 3:** Please refer to the "Scalability for Massive Instances" and "Data Generation Efficiency" sections of the response to Reviewer PCa3. Our pipeline is efficient and can **easily adapt the data-model separation paradigm** to process extremely complex optimization problems. **Regarding Weakness 4:** Please refer to the "Scenario/Problem Type Coverage" section of the response to Reviewer PCa3. 
**Regarding Weakness 5:** In **Appendix A.2 and A.3, we have given a detailed description of the diversity** of seed data and OptMATH-Train. For additional metrics, please refer to the table in the response to Reviewer mCmd. We evaluate the performance of finetuned models on IndustryOR and OptiBench. We also reported the results using pass@8; the higher performance at pass@8 compared to pass@1 indicates a **high upper bound on the modeling capability** of the finetuned models, signifying that models trained on OptMATH-Train attain a **high ceiling for their modeling skills**. **Regarding Weakness 6:** We agree that this is a critical aspect and will revise the paper to elaborate on our methodology. To clarify, our **process is largely automated**: we begin with core problem structures inspired by benchmarks like MIPLIB (e.g., TSP, JSS variants) and **utilize parameterized instance generators** designed around these structures. For instance, our Job Shop Scheduling generator accepts parameters like job/machine counts and operation details to automatically create problem data (PD). Crucially, these **generated PD are validated for solvability (e.g., using Gurobi feasibility checks) before being included in our seed set** and subsequently translated into natural language by LLMs. To further increase automation and reduce human biases, we can easily adapt the methods in addressing **weakness 4** and **reduce the amount of seed data to merely 8 generators**. **Regarding Weakness 7:** Our empirical validation is crucial here: as reported in Section 4.3, manual checks confirmed that our OV-based rejection sampling **achieves a 99.6% accuracy rate** in capturing the correct problem semantics and ensuring practical equivalence. We find this level of accuracy to be **highly effective and practically sufficient** for ensuring dataset quality for LLM training. 
Alternatively, checking graph isomorphism for LP problem structures, inspired by related work [1], could be used for stricter MF equivalence; however, we **opted for the OV check due to its operational simplicity and broad applicability** across problem types within our framework, ensuring a practical and scalable validation approach. **Regarding Weakness 8:** To proactively enhance scalability and mitigate this effect, our framework incorporates specific strategies aimed at maximizing data diversity within OptMATH-Train. As detailed in **Section 5.1, we employ extensive data augmentation techniques** to generate more varied and non-standard problem instances. Additionally, during the forward modeling phase [**Section 4.2**], **we utilize diverse Chain-of-Thought (CoT) prompting strategies** to capture multiple valid reasoning paths and formulation variants. We believe that enriching the training data with such diversity **helps elicit stronger performance improvements even on larger models, thereby counteracting the diminishing returns trend**. --- **References** [1] OptiBench: A Large Language Model Benchmark for Optimization Problem Understanding and Formulation. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I will keep my rating as weak acceptance for this paper.
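The stricter structural check mentioned in the response to Weakness 7 could, in spirit, look like the following sketch. `lp_fingerprint` is a hypothetical and much weaker stand-in for a true bipartite-graph isomorphism test on the constraint structure: it erases variable identities per constraint, so matching fingerprints are only a necessary condition for equivalence. It merely illustrates why structural equivalence is harder to operationalize across problem types than the OV check.

```python
def lp_fingerprint(objective, constraints):
    """Crude structural fingerprint of an LP: sorted objective
    coefficients plus the multiset of constraint signatures
    (sorted coefficients, sense, right-hand side)."""
    def sig(coeffs, sense, rhs):
        return (tuple(sorted(coeffs)), sense, rhs)
    cons = tuple(sorted(sig(c, s, r) for c, s, r in constraints))
    return (tuple(sorted(objective)), cons)

# Two variable reorderings of the same LP share a fingerprint:
a = lp_fingerprint([3, 5], [([1, 2], "<=", 10), ([4, 0], "<=", 8)])
b = lp_fingerprint([5, 3], [([0, 4], "<=", 8), ([2, 1], "<=", 10)])
```

A production version would instead build the variable-constraint bipartite graph and run a canonical-labeling or isomorphism algorithm, which is exactly the operational complexity the OV check avoids.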
Action Dubber: Timing Audible Actions via Inflectional Flow
Accept (poster)
Summary: Authors design a novel task of audible action temporal localization. A new dataset called $Audible623$ and a $TA^2Net$ architecture are proposed for this task. Experiments show their advantages. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The correctness of any proofs for theoretical claims is suitable. Experimental Designs Or Analyses: The experiment design is suitable. Supplementary Material: I have reviewed the supplementary material. It is a 5-minute video with further experiments. Relation To Broader Scientific Literature: The task the authors propose diverges from traditional action recognition and temporal action localization. Essential References Not Discussed: Related works are introduced. Other Strengths And Weaknesses: **Strengths:** 1. The proposed task of audible action temporal localization is interesting and the motivation is reasonable. 2. The explanation of the task in Figure 1 is clear and vivid. 3. Clear model structure. 4. Reviewer appreciates the experimental results in demo videos. **Weaknesses:** 1. The citation of Figure 2 precedes Figure 1 in the introduction, which is strange. Additionally, the figure name in the main text is Figure X instead of Fig. X. Reviewer recommends the authors check and apply \figurename~\ref in their manuscript. 2. Why is Figure 2 not in PDF format, and why can the fonts within it not be selected? Authors are suggested to polish Figure 2. 3. Why did the authors select Kinetics and UCF101 instead of other widely used video datasets such as SSv2 and HMDB51? 4. The formula is not properly written, especially the subscripts of the loss. In standard mathematical formulas, the subscripts of such non-variables should not be set in italics but in regular fonts. 5. The layout of the manuscript should be improved. There are many unreasonable single words that take up an entire line. Other Comments Or Suggestions: Please refer to weaknesses. Questions For Authors: Please refer to weaknesses. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Formatting Issues** We sincerely appreciate your valuable feedback. We will revise the figure citations, adjust the formatting of Figure 2, correct the subscripts in the equations, and improve the overall layout of the paper. Additionally, we will fix the typographical error in line 376 to ensure clarity and consistency throughout the manuscript. **2. Concerns on Dataset Selection** We selected Kinetics and UCF101 because they serve as standard benchmarks for action counting, which is a core use case of our task. These datasets include a wide range of repetitive, sound-associated actions such as “clapping”, “jumping jacks”, “typing”, and “punching”, which are well aligned with the objectives of audible action localization. In contrast, datasets like SSv2 and HMDB51 emphasize abstract or context-dependent interactions that are less relevant to our goal. For example, SSv2 includes many object-centric and non-audible actions such as “pretending to open something” or “moving something from left to right”, which lack consistent visual-sound correspondence. Similarly, HMDB51 features a high proportion of semantically complex or non-repetitive actions such as “smiling” or “listening to music”, which are not suitable for sound grounding or counting tasks. Therefore, our dataset selection is driven by the need to support visually grounded audible action detection, and we believe it is well justified for the specific focus of our work.
Summary: The paper proposes a new task named "audible actions" and introduces a new dataset for this task. It introduces an inflectional flow estimation and an auxiliary self-supervised training method to improve performance. Experiments are conducted on the Audible623, UCFRep, and CountixAV datasets. ## update after rebuttal I keep my initial rating of weak accept after the rebuttal. The rebuttal addressed most of my concerns. Claims And Evidence: The paper proposes "audible actions" as a novel task in TAL. However, it is a subset of the existing TAL task in which the actions have audible cues, so existing methods can still work; the claim of novelty is therefore not justified. The proposed method is trained to identify "audible actions", but it can only identify actions that it was trained on. Does it generalize to audible actions beyond the training dataset? There is no discussion of this. Figure 1 shows a motivating example of audible actions in computer video games. However, visible audible actions are subjective and ambiguous in video games, making them difficult to quantify. Further, certain actions, such as dribbling a ball, can be considered audible or not depending on the decibel level. In essence, the framing of this task is subjective and there is no clear rubric for it. In real videos, audible actions can occur simultaneously, which can result in commotion if there are multiple sources, for example, two persons juggling a ball. How can different audio sources be distinguished in this case? Methods And Evaluation Criteria: The self-supervised spatial auxiliary training (Sec. 4.2) loss is not very well motivated and appears to have been added just to improve the overall performance. Theoretical Claims: No theoretical claims are made in the paper. Experimental Designs Or Analyses: Missing quantitative comparisons with popular TAL methods such as BMN and G-TAD. 
Supplementary Material: Yes, fully Relation To Broader Scientific Literature: The key contributions of the paper are introducing the task of predicting audible actions and proposing the model TA2Net. Since this is a narrow subset of the tasks addressed by existing TAL methods, the contribution is limited to a narrow domain that does not benefit the broader TAL community. For example, it would be better if the proposed method could still apply to broader TAL tasks. Essential References Not Discussed: The paper does not cite prior methods that have used optical flow for TAL tasks, such as: Dejun Zhang, Linchao He, Zhigang Tu, Shifu Zhang, Fei Han, Boxiong Yang. Learning motion representation for real-time spatio-temporal action localization. Pattern Recognition, Volume 103, 2020. Yuanzhong Liu, Zhigang Tu, Liyu Lin, Xing Xie, and Qianqing Qin. Real-time Spatio-temporal Action Localization via Learning Motion Representation. ACCV, 2020. Other Strengths And Weaknesses: The paper does not discuss the computational complexity of the method, which matters in particular because it uses optical flow, which is expensive. Other Comments Or Suggestions: Typo - should be "quantitatively" in L376. Questions For Authors: What are the guidelines for annotators in choosing audible actions? How is data annotated for high-frequency actions like drumming? Are all frames labeled as audible? For dataset collection, how does the filtering occur? With action class names? (L152) How are categories that are not labeled but present in the video dealt with, i.e., how does the method generalize to novel audible actions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Distinction Between Our Method and TAL Methods, and Applicability to TAL** We respectfully clarify that our method targets a task that is fundamentally different from traditional TAL. While TAL focuses on localizing the full temporal extent of actions, our method is designed to identify precise keyframes where audible events occur. In this sense, TAL addresses event-level localization, whereas our approach focuses on frame-level detection tied to sound-producing moments. This distinction is important, as audible frames often represent brief, high-impact instants that are not captured by coarse action intervals. As shown in the appendix, our method also performs well on non-audible actions, demonstrating its ability to capture subtle temporal transitions. We believe this fine-grained capability may provide useful insights for improving temporal precision in future TAL work. **2. Generalization Beyond the Training Dataset** Please refer to the reply to Reviewer umcX, Q2. **3. Ambiguity of Audible Actions** While the exact decibel level may vary, our task focuses on visually observable actions that are intentionally associated with sound production, consistent with how humans infer sound-related events from visual cues. To minimize subjectivity, we annotate dominant and intentional audible actions during training, following standard practice in broader action recognition tasks where annotators focus on salient or representative instances of an action (e.g., annotating the main swing in "golf swing" rather than every subtle motion). For testing, our model responds to motion patterns indicative of sound, regardless of their intensity. For instance, in the case of basketball dribbling, both soft and forceful dribbles share similar motion dynamics, and our model is designed to detect both. We also clarify that Fig. 
1 is purely for illustration purposes; all samples used in training and evaluation come from real-world, in-the-wild videos with natural variations in appearance and motion. We will revise the manuscript to better explain our annotation criteria and the illustrative role of Fig. 1. **4. Multiple Audio Sources** In scenarios involving multiple audio sources, our method is designed to detect all sound-producing actions present in the video. However, distinguishing between individual sound sources is beyond the current scope of our work. Our primary objective is to accurately identify the timing of audible actions, regardless of their specific source. **5. Quantitative Comparisons with TAL methods** We further evaluated BMN on the Audible623 dataset using a two-stream network that incorporates both video and optical flow features. This allows for a fair comparison with TAL methods that also leverage motion information. Since the source code for Zhang et al. and Liu et al. is not publicly available, we have contacted the authors and will include further discussion in the revised manuscript based on their responses. | Methods | Recall↑ | Precision↑ | F1↑ | NME↓ | PME↓ | | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | | BMN | 0.417 | 0.486 | 0.413 | 9.327 | 0.951 | | Ours | 0.648 | 0.656 | 0.616 | 3.462 | 0.744 | **6. Concerns on Computational Complexity** Thanks to our lightweight network design, the computational cost of incorporating additional optical flow and inflectional flow features remains manageable. We evaluated the processing time of various methods on the same set of videos. While the inclusion of optical flow does introduce some additional overhead, our method maintains a favorable trade-off and still outperforms most competing approaches in both accuracy and efficiency. 
| Methods | Time (s) | Methods | Time (s) | |--------------------------------|---------|-----------------------------|---------| | RepNet | 0.075 | Hiera | 0.263 | | TransRAC | 4.301 | ActionFormer | 0.233 | | X3D | 0.066 | TriDet | 0.240 | | TimeSformer | 9.968 | Ours (w/ & w/o Flow) | 0.152 / 0.083 | **7. Dataset Selection and Annotation** Annotators were instructed to review each video individually and select visually observable audible actions, excluding videos that either lacked audible actions or featured occluded actions. For high-frequency actions such as drumming, which are particularly challenging to annotate precisely, we adopt a frame-level labeling strategy, marking only the frames where the drumstick makes contact with the drum. As noted in Sec. 3.2, we do not apply category-based filtering during annotation. Instead, the first round of filtering involves manually reviewing each video to determine whether it contains visually identifiable audible actions. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. It addressed some of my concerns. **Novelty**: The authors mention that the proposed method focuses on frame-level detection of sound-producing moments, as compared to TAL methods. However, several TAL methods also output frame-level timestamps (for example, weakly supervised approaches). How is this different from considering $\textit{visible audible action}$ as a separate category in existing TAL approaches? **Generalization**: The authors refer to the results of generalization to other tasks, such as repetitive counting, for the question on generalization; however, this is not the same as generalization to new visible audible actions that are not present in the dataset. It would be better to show either qualitative or quantitative results of the proposed method on new videos containing $\textit{unseen audible actions}$. **Quantitative Comparisons with BMN**: What are the features used for BMN? Is it comparable to the proposed $TA^2Net$? 
The performance of BMN depends heavily on the features used: stronger features such as I3D will yield better results than weaker features such as 2D TSN. **Complexity**: Is the reported time the average time taken per video? --- Reply to Comment 1.1.1: Comment: **Novelty:** Thank you for raising this point. While some weakly supervised TAL methods produce frame-level outputs, these are typically used to generate temporal proposals for segment-level localization. Their core objective remains identifying the temporal extent of **predefined action categories** (e.g., "running" or "clapping") based on semantic understanding. In contrast, our method is **category-agnostic** and focuses on detecting the precise keyframes where audible events occur, regardless of action class. This distinction brings two key novelties: first, our approach emphasizes low-level motion patterns directly linked to sound production, rather than higher-level semantic labels; second, by not relying on predefined categories, our method can generalize to a wide range of actions, including unseen ones. This category-agnostic, keyframe-level detection is particularly well suited for applications such as dubbing and audio-visual synchronization, where temporal precision is more critical than semantic classification. **Generalization:** Our generalization experiments are designed to evaluate how well the model performs on unseen datasets, rather than on new instances within the same distribution. We use UCFRep and CountixAV because they are the only available datasets that are both relevant to our task and completely disjoint from our training data. Our model was never trained on these datasets, making them strong candidates for evaluating cross-dataset generalization. The focus on repetitive counting is not because our method is limited to that task, but because these datasets contain repetitive, sound-producing actions that align with the goals of audible action detection. 
This setting allows us to assess how well our model transfers to different visual content and action instances without retraining. Additionally, as shown in our demo, our method generalizes well to animated content, which is visually distinct from all training data. This qualitative result highlights the robustness of our model in detecting sound-producing actions even under substantial domain shifts. We appreciate the suggestion and agree that further testing on newly collected audible action categories would provide additional support, which we plan to explore in future work. **Quantitative Comparisons with BMN:** To ensure fairness, we use the same frame-level visual and optical flow features, generated by our encoder, for both our method and BMN. We intentionally avoid using I3D features, as they are tailored for snippet-level representations and performed worse in preliminary experiments for fine-grained localization. By keeping the features consistent, we isolate the modeling performance and ensure a direct comparison. **Complexity:** Yes, the reported inference time represents the average per video on the test set, where each video contains approximately 250 frames. This provides a clear and consistent basis for comparing computational efficiency across methods.
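The frame-level Recall/Precision/F1 numbers discussed in this thread can be computed in several ways. Below is a minimal sketch of one common protocol, greedy one-to-one matching of predicted keyframes to ground-truth frames within a tolerance window; the function name, the greedy matching, and the ±2-frame window are illustrative assumptions, not the paper's documented evaluation protocol.

```python
def keyframe_prf(pred, gt, tol=2):
    """Precision/recall/F1 for predicted keyframes against ground-truth
    frames, greedily matching one-to-one within a +/- tol frame window.
    Illustrative sketch only; the paper's exact protocol may differ."""
    unmatched = sorted(gt)
    tp = 0
    for p in sorted(pred):
        # find the first still-unmatched ground-truth frame within tolerance
        hit = next((g for g in unmatched if abs(g - p) <= tol), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)  # each ground-truth frame matches at most once
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, with predictions at frames 10, 20, 33 and ground truth at 11 and 30, only the first prediction matches within ±2 frames, giving precision 1/3, recall 1/2, and F1 0.4.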
Summary: This paper introduces a new task called audible action temporal localization, aimed at predicting the frame-level positions of visible audible actions. The paper further proposes a dedicated dataset called Audible623, derived from Kinetics and UCF101. Finally, the paper proposes a baseline method, TA$^2$Net, which employs flow estimation based on motion's second derivative. ## update after rebuttal I keep my initial rating of weak accept. The rebuttal addressed some of my concerns. Claims And Evidence: 1. The authors introduce the task of Audible Action Temporal Localization, aimed at pinpointing the spatio-temporal coordinates of audible movements. However, there are no spatial annotations in the proposed dataset. The authors propose an auxiliary training method that leverages spatial information and produces a localization map as a side output. 2. The authors claim that the difference between Temporal Action Localization and Audible Action Temporal Localization is that one focuses on event-level localization while the other focuses on accurate keyframe identification, and they formulate the task as determining whether each frame contains an action that can generate sound. However, the timing boundaries of certain actions in a video inevitably exhibit some degree of ambiguity. The authors have not discussed this ambiguity issue in the Dataset Annotation section. 3. The proposed dataset is derived from Kinetics and UCF101, which contain at least 400 and 101 action classes respectively, yet it contains only 14 categories of audible actions. The reviewer has concerns about the generalizability of the proposed dataset and its applicability to the dubbing task. Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths 1. 
Proposing a new task, constructing a dedicated dataset, and proposing a baseline method with comprehensive comparisons requires a significant amount of effort, and the paper is written in a relatively clear and readable manner. Weaknesses 1. The definition of visible audible action remains ambiguous. And judging from the proposed dataset, audible actions are merely a small subset of the existing action categories. Other Comments Or Suggestions: 1. Typos in the caption of Figure 4 (Page4-Line181). Questions For Authors: Since the proposed new task is close to the traditional Spatial Temporal Action Localization/Temporal Action Detection tasks, why did the authors not construct the new benchmark using the public datasets for these tasks? It seems the temporal annotation would be easier, and the spatio-temporal annotations could be reused. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Ambiguity on Timing Boundaries of Actions** Temporal boundary ambiguity is indeed a well-known and widely acknowledged challenge in general action understanding tasks. Precisely defining the start and end of an action, particularly for semantically complex or continuous activities such as "sit down" or "throw the ball", is inherently difficult due to visual similarity across adjacent frames and the subjective nature of temporal segmentation. However, unlike general action semantic understanding, our task, Audible Action Temporal Localization, is significantly more constrained and objective. Rather than requiring full temporal segmentation of action intervals, we focus on identifying the specific frames in which a sound is produced by an action. These "sounding frames" are typically easier to annotate consistently, as the accompanying audio provides a clear and reliable temporal anchor. We will clarify this annotation protocol and emphasize the inherent simplicity and objectivity of our task in the revised Dataset Annotation section. **2. Concerns on Dataset Categories** Our task is fundamentally different from the original objectives of Kinetics and UCF101. While those datasets are designed for action classification across a broad range of semantic categories, our focus is category-agnostic, targeting the detection of action patterns rather than semantic understanding. As a result, the Audible623 dataset does not include category annotations but instead provides precise temporal labels for frames where audible events occur. This design is well aligned with the goals of dubbing and audio-visual synchronization, which rely on accurate temporal localization rather than action recognition. Thanks to the design of our framework, our method demonstrates strong generalization ability. As shown in Table 3, it performs well on datasets with different action categories, such as UCFRep and CountixAV. 
This confirms that our approach is not constrained by specific categories and can generalize effectively to unseen actions. For instance, our method was successfully applied to animated content in the demo without any additional training. We will clarify these distinctions and elaborate on the rationale behind our dataset design in the revised manuscript. **3. Audible Action Definition** While audible actions are a subset of general actions, they serve a distinct and crucial role in applications like sound dubbing and repetitive counting. Unlike general action recognition, which often relies on high-level and potentially ambiguous semantic categories (e.g., "playing sports," "interacting"), audible actions are grounded in concrete, low-level visual cues that correlate directly with sound production (e.g., clapping, stomping, tapping). This distinction makes it category-agnostic and more robust to domain shifts. This targeted focus enables reliable grounding of actions with sound, which is essential for many practical tasks. **4. Reuse of TAL Datasets** As mentioned in Section 3.2, existing Temporal Action Localization (TAL) datasets are not directly applicable to our task. The keyframe-level labels required for Audible Action Temporal Localization cannot be reliably derived from the start and end frames of action segments in these datasets. In many cases, the annotations are relatively coarse and do not separate repeated instances of the same action. For example, a kicking sequence with multiple discrete kicks is often labeled as a single continuous segment, whereas our task focuses on detecting each individual audible event. Additionally, TAL datasets include many inaudible actions that fall outside the scope of our task. Reusing these datasets would require significant re-annotation and filtering to align with our objective. 
For these reasons, we chose to construct a dedicated benchmark with precise frame-level annotations that better support the goals of audible action localization and audio-visual synchronization.
Summary: This paper introduces the task of audible action temporal localization and proposes a novel framework, $TA^2Net$, alongside the Audible623 dataset. The method features an inflectional flow estimation technique grounded in the second derivative of position-time images. Additionally, the authors develop a self-supervised spatial localization method. The effectiveness of the proposed method is demonstrated, and its broad applicability is validated on other tasks. Claims And Evidence: please refer to Weaknesses. Methods And Evaluation Criteria: While the proposed dataset demonstrates partial validity for audible action temporal localization, its ability to capture real-world complexity remains questionable. The current evaluation primarily focuses on short-duration, scripted scenarios (e.g., 9.2-second clips in Audible623) with controlled audio-visual correspondence. Theoretical Claims: I have reviewed the theoretical claims in Sections 4.1 (Timing Audible Actions with Inflectional Flow) and 4.2 (Self-supervised Spatial Auxiliary Training). While the presented formulations are logically consistent and free of apparent errors, their relative simplicity warrants discussion. The derivations primarily involve elementary operations without addressing more complex scenarios (e.g., non-linear motion patterns). Experimental Designs Or Analyses: I did. The experimental design and analysis are sound and valid. Supplementary Material: I reviewed the supplementary material, including Appendix A (More details of dataset), B (Settings), E (More Qualitative Comparisons), G (Failure Case), and H (Evaluation with Vision Language Models). Relation To Broader Scientific Literature: please refer to Weaknesses and Questions For Authors. Essential References Not Discussed: please refer to Weaknesses and Questions For Authors. Other Strengths And Weaknesses: ### Strengths 1. 
**Novel Task Formulation:** Proposes the first formal task of audible action temporal localization. 2. **Rigorous Validation:** Demonstrates broader applicability through: * Cross-task generalization experiments * Vision-language model (VLM) evaluations in supplements ### Weaknesses 1. **Lack of user study:** A potential issue with $TA^2Net$ is that it shows slightly weaker performance on certain metrics compared to some baselines for the audible action temporal localization task, which makes subjective human opinions more important for the present manuscript. A user study would further validate how well the paper's core assumption aligns with actual human preferences. 2. **Mismatch Between Data Characteristics and Claimed Application Scenarios:** The proposed method is primarily evaluated on short and low-framerate videos (e.g., Audible623 dataset, with an average duration of 9.2 seconds and 250 frames per video). While this setup may suffice for initial validation, it significantly limits the generalizability of the method to real-world applications. In practice, videos on platforms like YouTube, TikTok, or in professional contexts (e.g., movies, TV dramas) are typically longer and have higher framerates, requiring more robust temporal modeling and scalability. The current experiments do not adequately address these challenges. 3. **Methodological Ambiguity in Motion Feature Extraction:** The paper’s reliance on Xu et al.'s pre-trained optical flow network raises concerns about feature selection rationale. While the chosen network can extract multi-modal motion features (e.g., depth, disparity), the authors exclusively utilize optical flow without justifying why supplementary features were disregarded. This omission is particularly notable given that depth/disparity information could enhance temporal modeling in complex scenes. 
Furthermore, the decision to adopt Xu et al.'s architecture over established alternatives like FlowNet or PWC-Net lacks explicit justification. Critical factors such as computational efficiency (e.g., inference speed comparisons), benchmark performance, or inherent architectural advantages for audible action temporal localization tasks remain unaddressed. These unresolved design choices cast doubt on whether the selected framework optimally balances accuracy and practicality. To strengthen the methodology, the authors should either provide empirical evidence or theoretical arguments explaining why optical flow alone suffices for their task. Other Comments Or Suggestions: please refer to Weaknesses. Questions For Authors: The paper computes forward optical flow between consecutive frames using a pre-trained network (Xu et al., 2023). Could the authors clarify why this approach was prioritized over classical optical flow algorithms (e.g., DisFlow, Brox Flow) that are widely adopted in video processing? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. User Study** As suggested, we conducted a user study comparing five approaches: TriDet, TransRAC, X3D, Hiera, and our proposed method. We curated a set of eight videos and enlisted 30 participants for the study. Each video was dubbed using sound aligned to the audible action locations detected by each method. Participants were then asked to assess the audio-visual synchronization quality and select the top two methods they perceived as most accurate for each video. The results show that our method was most frequently selected as the best, indicating a clear preference and superior perceived synchronization performance. | Method | Top-2 Rate| |----------|----------| | Ours | 0.8458 | | TriDet | 0.2208 | | TransRAC | 0.5625 | | X3D | 0.1125 | | Hiera | 0.2583 | **2. Concerns on Short Duration & Application Scenarios** We respectfully disagree with this concern, as it appears to stem from a misunderstanding of both the dataset characteristics and our method’s design. The video duration and frame rate in the Audible623 dataset, as well as in other benchmarks used in our study, are consistent with established standards in the action counting and localization community. These datasets are widely adopted to ensure fair comparisons and reproducibility. More importantly, our method operates at the frame level, making it inherently agnostic to the overall video length. This design choice allows the method to scale effectively to longer and higher-framerate videos without degradation in localization performance. As such, our approach remains robust and applicable to real-world scenarios, including platforms like YouTube and professional media content. **3. Methodological Ambiguity in Motion Feature Extraction** We respectfully believe this concern may stem from a misunderstanding of motion feature roles. Depth and disparity capture static spatial structure but do not reflect temporal dynamics. 
For example, counting the number of times a person jumps depends on detecting repeated vertical motion—something depth alone cannot convey. In contrast, optical flow reflects dynamic changes across frames and is widely recognized as a reliable signal for temporal modeling. While depth can theoretically be used to extend 2D optical flow to 3D, such usage is rare in practice, as action scenarios that truly benefit from 3D motion modeling are uncommon and often domain-specific. Moreover, incorporating depth adds computational overhead without clear gains for typical action tasks. We chose Xu et al.’s network for its solid empirical performance and a good trade-off between accuracy and efficiency. While alternatives like FlowNet or PWC-Net are well-known, we did not observe significant advantages for our task. We will clarify this rationale in the revised manuscript. **4. Optical Flow Method Selection** Compared to traditional methods such as DisFlow, modern neural network-based approaches offer significantly improved robustness in challenging real-world scenarios involving occlusion, noise, or complex motion, and they perform consistently well across varying resolutions. Among these, GMFlow demonstrates state-of-the-art accuracy while maintaining inference times comparable to established methods like PWC-Net [1, 2]. Additionally, GMFlow offers a more streamlined and efficient framework for computing bidirectional optical flow, which is beneficial for our application. Based on these advantages and after thorough consideration, we selected GMFlow for optical flow extraction in our pipeline. [1] Xu H, Zhang J, Cai J, et al. Gmflow: Learning optical flow via global matching, CVPR, 2022. [2] Teed Z, Deng J. Raft: Recurrent all-pairs field transforms for optical flow, ECCV, 2020. **Note to Reviewer** We appreciate the reviewer’s time and effort in evaluating our work. 
While we find that some of the comments may stem from misunderstandings of our method and its design choices, we have addressed each point in detail and clarified the rationale behind our decisions. We believe these clarifications resolve the concerns raised and further highlight the strengths and contributions of our approach. We respectfully ask the reviewer to reconsider the insights and potential impact of our work in light of these responses. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. While I have read the rebuttal and all the comments of the other reviewers, my concerns are still maintained. I will keep the rating. --- Reply to Comment 1.1.1: Comment: We believe we have thoroughly and carefully addressed the reviewer’s concerns. In response, we conducted a user study directly comparing our method against multiple baselines, offering empirical validation of its effectiveness. We also clarified the misunderstanding regarding video duration and application scenarios, provided a detailed explanation for our use of optical flow—highlighting its unique role in modeling temporal dynamics—and justified why depth or disparity are not suitable for capturing sound-related motion. Furthermore, we offered a clear rationale for selecting GMFlow, supported by discussion on its balance of accuracy and efficiency, and evaluated the impact of incorporating optical flow on time complexity. Given these clarifications and additional results, we hope the reviewer can engage further with the specific points raised, as simply stating that concerns remain—without responding to the evidence or explanation provided—does not support a constructive or professional review process. We remain open to detailed feedback that will help further refine our work.
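The second-derivative ("inflectional flow") idea discussed in this thread can be illustrated on a toy 1D signal: candidate audible keyframes appear where the discrete second derivative of a per-frame motion magnitude changes sign. The sketch below is only a hedged illustration of that idea, not the paper's actual TA$^2$Net, which learns inflectional flow from position-time images; the function name and thresholds are assumptions.

```python
import numpy as np

def candidate_keyframes(motion, eps=1e-12):
    """Return frame indices near which the discrete second derivative of a
    per-frame motion-magnitude signal changes sign (inflection points).

    Toy illustration of the second-derivative idea only; TA^2Net operates
    on position-time images, not a precomputed 1D magnitude signal.
    """
    motion = np.asarray(motion, dtype=float)
    accel = np.diff(motion, n=2)        # second difference, length N - 2
    sign = np.sign(accel)
    sign[np.abs(accel) < eps] = 0.0     # treat near-zero values as zero
    # an inflection occurs where consecutive signs disagree
    flips = np.where(sign[:-1] * sign[1:] < 0)[0]
    return flips + 1                    # shift roughly back to frame indices
```

For a smooth oscillation such as one period of a sine-shaped motion magnitude, this detector flags a single inflection near the midpoint, which is the kind of motion-reversal instant the rebuttal argues correlates with sound production.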
S4S: Solving for a Fast Diffusion Model Solver
Accept (poster)
Summary: Solving the ODEs in diffusion models using traditional ODE solvers is expensive due to the iterative neural function evaluations (NFEs). If only a few NFEs are used, the evolution trajectory breaks down because of the large step sizes. Targeting this problem, the authors propose S4S, a method that learns optimized few-NFE solvers from conventional many-NFE teacher solvers. Additionally, the authors propose S4S-Alt, which also optimizes the discretization schedule. Claims And Evidence: More evidence may be needed to prove that the authors' approach matches the output of a teacher solver. Methods And Evaluation Criteria: The evaluations are generally good and the dataset choices (CIFAR-10 and ImageNet) are good. Theoretical Claims: No new theorem is proposed or proven in this work. I do not see any mistakes in the math. Experimental Designs Or Analyses: See questions Supplementary Material: None Relation To Broader Scientific Literature: It is closely related to other few-NFE approaches such as LD3 and BNS, which are thoroughly discussed in this work. Essential References Not Discussed: None Other Strengths And Weaknesses: The paper is generally enjoyable to read and clearly written, with an interesting plug-and-play approach. It clearly points out the key differences and improvements over previous works, and the detailed discussions are likely to be valuable for researchers interested in the topic. However, the focus is largely on improvements in the few-NFE regime for training-free methods, and it remains somewhat unclear how the student solver matches the output of a teacher solver (which is supposed to run for many NFEs), as claimed in the conclusion. Other Comments Or Suggestions: None Questions For Authors: (1) Since S4S is designed for few NFEs while the teacher solvers are specifically for many NFEs, it would be insightful to examine how far the student solvers are from the ideal performance (many NFEs for teacher solvers). (2) The convergence of quality vs. 
NFE (i.e., at how many NFEs the student solvers cease to improve) may be a useful reference for the quality-speed tradeoff. Code Of Conduct: Affirmed. Overall Recommendation: 3
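As context for the discussion above, the object S4S optimizes is the set of coefficients in a linear multistep solver update. A minimal toy sketch of such a step follows; the classical Adams-Bashforth-2 coefficients and the test ODE dx/dt = -x are illustrative stand-ins, not taken from the paper, where these coefficients would instead be learned by matching the teacher solver's output.

```python
import numpy as np

def multistep_step(x, f_hist, h, coeffs):
    """One explicit linear-multistep update: x_{n+1} = x_n + h * sum_j b_j * f_{n-j}.
    `coeffs` plays the role of the solver coefficients that S4S would learn."""
    return x + h * sum(b * f for b, f in zip(coeffs, f_hist))

# Toy ODE dx/dt = -x (a stand-in for the probability-flow ODE); exact solution exp(-t).
f = lambda x: -x
h, x = 0.1, 1.0
f_hist = [f(x)]                            # history of derivative evaluations, newest first

x = multistep_step(x, f_hist, h, [1.0])    # bootstrap with one Euler step
f_hist.insert(0, f(x))
for _ in range(19):                        # 19 AB2 steps -> 20 steps total, t = 2.0
    x = multistep_step(x, f_hist[:2], h, [1.5, -0.5])  # hand-derived AB2 coefficients
    f_hist.insert(0, f(x))

assert abs(x - np.exp(-2.0)) < 1e-2        # tracks the exact solution closely
```

In the few-NFE regime the paper targets, hand-derived coefficient choices like these are exactly what S4S replaces with learned values.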
Rebuttal 1: Rebuttal: Thank you so much for the time taken to review our work and your helpful feedback! Please find our responses below: > However, the focus is largely on improvements in the few-NFE regime for training-free methods, and it remains somewhat unclear how the student solver matches the output of a teacher solver (which is supposed to run for many NFEs) as claimed in the conclusion. You raise an important point about our objective of matching teacher outputs. I'd like to clarify this aspect of our approach. While our training objective involves matching teacher solver outputs, this is primarily a means to an end rather than the ultimate goal. Our core innovation is recognizing that in the few-NFE regime, traditional error control approaches break down, so we instead directly optimize for what matters most: high-quality final samples. We use teacher matching as a proxy for quality for several reasons, including: - It provides a clear optimization target that avoids the need for dataset access, unlike distillation methods. - It learns to reproduce the outcomes of many-NFE solvers without being constrained to follow their intermediate steps. - The relaxed objective (Equation 7) further enhances this by allowing flexibility in finding solutions that prioritize final quality over exact trajectory matching. Empirically, our results demonstrate that this approach succeeds not just at matching teacher outputs but often exceeding them at lower NFE counts. For instance, on CIFAR-10, S4S-Alt with 7 NFEs (FID 2.52) outperforms the teacher with 20 NFEs (FID 2.87). This indicates our method is learning a fundamentally more efficient sampling strategy, not merely approximating the teacher's steps. The effectiveness of our global error minimization approach suggests that, while traditional solvers accumulate errors across many small steps, our learned solvers take optimal large steps that directly target high-quality outputs. 
This represents a paradigm shift in diffusion model sampling that could inspire further research into learning-based ODE solvers beyond diffusion models. We'll emphasize this distinction more clearly in the final version of the paper to ensure the conceptual contribution is properly understood. > Since S4S is designed for few-NFE, while the teacher solvers are specifically for many-NFE. It would be insightful to examine how far are the student solvers away from ideal performance (many-NFE for teacher solvers). You raise an excellent point about comparing student solvers to teacher performance. We did analyze this gap in our experiments, though we may not have emphasized it enough in the main paper due to space constraints. In our full results tables (H.4), we provide FID scores for a range of NFEs up to 10, while in H.5 we mention the FID scores of the various teacher solvers. For example, on CIFAR-10, our S4S-Alt achieves 2.18 FID at 10 NFEs, while the teacher solver (UniPC with logSNR) achieves 2.03 at 20 NFEs. For FFHQ, we see a similar pattern: S4S-Alt with 10 NFEs achieves 2.91 FID, while the teacher at 20 NFEs achieves 2.62. > The convergence of quality vs. NFE (at how many NFEs do the student solvers cease to improve) may be a useful reference of quality-speed tradeoff This is an insightful suggestion. While we didn't include full convergence curves in the paper, our results in Tables 13-18 show the quality-speed tradeoff across 3-10 NFEs. We observe that: - For most datasets, improvements diminish beyond 8 NFEs, with minimal gains between 8-10 NFEs. - S4S-Alt shows more rapid convergence than traditional solvers, achieving near-optimal quality at just 5-6 NFEs on most datasets. - Different datasets show varying convergence patterns: CIFAR-10 and AFHQ-v2 converge more quickly (plateau around 6-7 NFEs), while more complex datasets like MS-COCO and LSUN-Bedroom continue improving up to 10 NFEs.
Summary: The sampling process of diffusion models heavily depends on numerical solvers. This paper provides a comprehensive overview of existing works, including (1) Vanilla solver-based fast samplers, such as single-step, multi-step, and predictor-corrector methods; (2) Data-driven solver-based fast samplers, which involve learning the discretization schedule, learning solver coefficients, or a combination of both, and highlights their differences. Then, this paper introduces Solving for the Solver (S4S), a method that optimizes numerical solver coefficients by distilling from a teacher solver with a relaxed objective function. An improved version, S4S-Alt, alternately optimizes both the solver coefficients and the time schedule, achieving state-of-the-art results for few-step sampling of diffusion models. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: Yes. I've checked the soundness and validity of all the experiments in the main content. Supplementary Material: Yes. I've gone through all the content in the Appendix. Relation To Broader Scientific Literature: The proposed method sets a new record for few-step sampling of diffusion models, with potential applications in broader fields such as video and 3D generation. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The proposed method achieves state-of-the-art results on few-step sampling of diffusion models, while maintaining a low additional training overhead. 2. The authors provide a thorough review of existing works in Appendix A, clearly distinguishing their proposed method from previous methods. This helps readers better understand the literature and the unique contributions of this paper. 3. Extensive quantitative results are presented in Appendix H.4, which support the effectiveness of the proposed method. Weaknesses: 1. 
Concerns about novelty: (i) The proposed S4S closely resembles LD3 [1] in terms of algorithm, with the primary difference being that this paper optimizes solver coefficients, whereas LD3 focuses on optimizing the time schedule. (ii) Meanwhile, the idea of learning solver coefficients has already been explored before [2][3], as acknowledged in Appendix A (I appreciate the authors' efforts in maintaining academic integrity). (iii) Also, the proposed S4S-Alt appears to be an iterative combination of two LD3-like algorithms, i.e., alternating between optimizing solver coefficients and the time schedule. Given these points, the novelty of the contributions seems somewhat limited. 2. Lack of direct experimental comparisons in the main text: Sec 4 does not provide sufficient direct comparison with existing works, making it difficult to assess the superiority of the proposed S4S and S4S-Alt. To be more specific, there is no comparison between S4S and other distillation-based methods for learning solver coefficients [2][3], despite these being the most relevant works. Besides, S4S-Alt is not fully and fairly compared against BNS [4], which also optimizes solver coefficients and the time schedule. Only a single value is taken directly from the original BNS paper in Tab 4, rather than a fair head-to-head comparison being conducted. 3. Section 3.1 claims that the pathologies/errors contained in the teacher trajectory could be distilled into the student, making it crucial to optimize global error. However, no experiments are provided to support this claim. It would be beneficial to conduct an experiment to further discuss this point. 
References
[1] Learning to Discretize Denoising Diffusion ODEs. ICLR 2025. https://arxiv.org/abs/2405.15506
[2] On accelerating diffusion-based sampling process via improved integration approximation. ICLR 2024. https://arxiv.org/abs/2304.11328
[3] Distilling ODE Solvers of Diffusion Models into Smaller Steps. CVPR 2024. https://arxiv.org/abs/2309.16421
[4] Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models. ICML 2024. https://arxiv.org/abs/2403.01329
Other Comments Or Suggestions: I don't have other comments. Questions For Authors: The proposed method incorporates LPIPS loss, which has been criticized for leaking ImageNet features that may cause inflated FID scores [1]. How does the proposed method perform without LPIPS loss? Have you evaluated its impact on these results? Reference: [1] Song Y, Dhariwal P. Improved Techniques for Training Consistency Models. The Twelfth International Conference on Learning Representations (ICLR 2024). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback; we address your questions and concerns, starting with two of the weaknesses you mentioned. > Lack of direct experimental comparisons in the main context [In the linked image](https://imgur.com/a/jkFi6tJ), we provide direct comparisons against the recent works [2,3] that focus on *coefficient* distillation. Compared to these works, on the same discretization schedule (Time EDM), S4S achieves superior performance in the low-NFE regime across several key datasets. This helps reinforce the strength of S4S, even when only optimizing over the solver coefficients. Here, * refers to FID values that were estimated from Figures if no results table was present, and the "Rel. Fig." column details which Figure the value was taken from. Regarding BNS, it is difficult to provide a wholesale, fair comparison to the original paper for two main reasons: (1) the authors do not release their code for learning BNS, and (2) the model checkpoints used for BNS are not publicly released. As a result, we cannot (1) port an exact implementation of BNS over onto the pretrained DMs that we used, or (2) port over our own implementation onto the pretrained (flow) models that BNS used. To try to compare against BNS, we note that it has three significant differences from our work: - In BNS, the order of the learned solver is equal to the number of NFEs used; that is, it assigns a coefficient to all of the previous denoising steps. - BNS jointly learns the solver coefficients and the time discretization steps. - BNS uses PSNR as the global objective function. In our submission, we tried to explore the first two components of this tradeoff in the third row of Table 6, although we should have made the connection to BNS more clear. To further extend this analysis, we **explicitly re-implemented the approach from BNS into our own setup** to directly compute this comparison. We include this in the table below for CIFAR-10 and FFHQ. 
> Pathologies/errors from the teacher trajectory distilled into the student We agree that more experiments are needed in the revision. We reference a visualization from recent work [4] that visualizes trajectories from few-step sampling at [this link](https://imgur.com/a/EgcOKEY). At the beginning of the boomerang shape, there is greater variance in the trajectories. Therefore, training a model to explicitly match these trajectories can result in distilling these pathological behaviors directly into the student solver. To further assess this claim, we conducted several experiments to understand the consequences of using this as an objective. In the attached image, we see that **training on a dataset that includes these "pathological" trajectories teaches the student solver to mimic them**. In particular, conducting a full FID evaluation of a solver trained in this manner reveals worse FID performance. > Concerns about novelty To best characterize our novelty, we compare with two types of learning diffusion solvers. First, unlike $\Pi$A or D-DDIM, which mimic the ODE trajectory, our approach directly matches the output of the teacher solver. While this view is shared at a high-level by LD3 in how discretization points should be selected, at its core, LD3 still views sampling from a diffusion model as solving an ODE and instead focuses on selecting discretization points that minimize the global error. In contrast, S4S argues that in the very few-NFE regime, an ODE-centric view is limiting. Second, S4S and S4S-Alt are similar in spirit to BNS but differ in key methodological choices. Through a careful exploration of the diffusion solver design space, S4S-Alt greatly outperforms BNS. This improvement stems from four key decisions: - Using local rather than global information—learning coefficients for only the three most recent denoising points. - Employing an alternating objective to learn coefficients and discretization points. 
- Relaxing the objective for more effective learning. - Exploring a wider array of distance functions. While these choices may seem minor, they significantly enhance performance. BNS, in contrast, attempts to generalize across solvers, optimizing a complex objective over a large coefficient space, which makes it vastly slower (requiring orders of magnitude more compute) and leads to a more difficult optimization landscape. Our ablation study confirms that jointly optimizing solver coefficients and schedules underperforms compared to S4S-Alt. > LPIPS losses Thank you for this question! Although our results achieve stronger FID scores when used with the LPIPS loss, our overall results still hold when using an alternative distance function in our objective. Below, we include a table that characterizes our performance using non-LPIPS losses for both S4S and S4S-Alt. Broadly speaking, using an alternative loss decreases FID scores, though not to such an extent that it weakens our overall approach. [4] https://arxiv.org/pdf/2409.19681 SFD --- Rebuttal Comment 1.1: Comment: I’ve read the authors’ response and appreciate their efforts. However, it seems they forgot to include the results supporting their claims, as several of the referenced tables are missing. --- Reply to Comment 1.1.1: Comment: Thank you so much for taking the time to read our rebuttal, and our sincere apologies for missing the attached tables in our first response, the links to which were accidentally removed trying to get under the 5000 character limit. We also apologize for the further delay; we spent an extra day to hopefully ensure that this response comprehensively answers your questions w/ more results, particularly if we can't continue our back-and-forth. --- ## S4S vs. Coeff. Distill. Methods **Rebuttal Tables 1-4**: https://imgur.com/a/2ngs7jR. Briefly, we find that S4S outperforms comparable methods for coefficient distillation across a large number of datasets. 
Oftentimes, at 7 NFEs, S4S achieves performance similar to what alternative methods reach using 10 NFEs. ## S4S-Alt vs. BNS **Rebuttal Tables 5-8:** https://imgur.com/a/ZN2kZG9 We reimplemented BNS on our diff. models using the available information from the BNS paper. We also evaluated BNS with LPIPS as the distance metric for a fairer comparison + more extensive ablations. We find S4S-Alt outperforms BNS, particularly on low-NFE scales. A summary of these findings: - Our CIFAR results for BNS are similar to those reported in the BNS paper; although there are discrepancies in the precise FID numbers, BNS trained their own CIFAR DM w/ different amounts of training time. The fact that this overall trend is similar, however, gives us reasonable confidence in our implementation. - Our results accentuate the importance of the alternating objective, as well as its robustness across choice of $r$ and distance metric, especially in Reb. Tables 6 and 8. - Using a fixed solver order (3) helps vs. using the maximum order allowed (as in BNS), especially with a larger number of NFEs. The exception to this is at 4 NFE, where having an extra parameter (order 4 solver) can give a small boost. - Relaxing the objective (i.e. $r>0$) can lead to very meaningful improvements in FID, whereas BNS always uses $r=0$. - Using PSNR as a distance metric is harmful at low-NFE scales. Reb. Tables 5 and 7 show that using PSNR worsens the FID of S4S-Alt, especially for CIFAR. - Using $r=0$ can be helpful for very high-order solvers like BNS with more (8+) NFEs. This can be seen in Reb. Tables 6 and 8, where BNS + LPIPS does better than S4S-Joint w/ max order at higher NFEs. This wholesale evaluation shows that while BNS is a strong algorithm for exploring the solver design space, S4S-Alt gives overall better performance through our particular design choices. ## Trajectory Matching vs. Output Matching **Figure from Zhou et al. 
(Simple and Fast Distillation 2024)**: https://imgur.com/a/EgcOKEY **Rebuttal Tables 9-10**: https://imgur.com/a/JsVmems Given a disc. step sched., we trained S4S to match a teacher solver trajectory in two different ways: - Matching the teacher trajectory at uniform intervals along the time disc. (i.e. matching the trajectory at every 5th time step) - Matching the teacher trajectory on the GITS (https://arxiv.org/pdf/2405.11326) disc. schedule, which takes smaller steps where the avg. traj. deviation in the teacher is large (more curvature in traj.) and larger steps where deviation is smaller (less curv.). All other implementation details (i.e. $r$, LPIPS) are the same. Our results are in Reb. Tables 9-10. Briefly: - Matching the final output outperforms matching the trajectory, for 4, 5, and 10 NFEs. - Both traj. matching methods are reasonably good at "high"-NFE generation, but perform very poorly on few-NFE generation. - Uniform matching doesn't take into account the curv. of the teacher traj., requiring the student solver to learn to match difficult transitions between disc. steps. - GITS matching leads to much worse performance with few NFEs because the highest curvature areas (as displayed in Fig above) often have the most variation btwn. teacher traj. (aka pathological regions). - **Matching the teacher trajectory means one is unable to use LD3 for the student solver's time disc.**, since it no longer has the same time disc. as the teacher. All "matching" methods are outperformed by simply using iPNDM solver + LD3, much less using S4S as well.
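Stepping back from the tables above, the alternating scheme at the heart of S4S-Alt (optimize coefficients with the schedule fixed, then the schedule with coefficients fixed) can be illustrated on a toy problem. The quadratic objective and closed-form block updates below are purely illustrative assumptions, not the paper's optimizer or objective.

```python
# Toy alternating minimization over two parameter blocks, standing in for the
# solver coefficients (`a`) and the discretization schedule (`t`). Illustrative only.
def objective(a, t):
    return (a - 2 * t) ** 2 + (t - 1) ** 2

a, t = 5.0, -3.0                 # arbitrary initialization
for _ in range(50):
    a = 2 * t                    # exact minimizer in `a`, holding `t` fixed
    t = (2 * a + 1) / 5          # exact minimizer in `t`, holding `a` fixed

# Alternating block updates converge to the joint minimum (a, t) = (2, 1).
assert objective(a, t) < 1e-6
assert abs(a - 2.0) < 1e-3 and abs(t - 1.0) < 1e-3
```

In S4S-Alt each block update is itself a gradient-based optimization rather than a closed form, but the alternation pattern is analogous.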
Summary: The paper proposes S4S and S4S-Alt, methods to optimize diffusion ODE solvers for fast, high-quality sampling with minimal neural function evaluations (NFEs). S4S learns solver coefficients via a distillation objective, matching the output of a high-NFE "teacher" solver while minimizing global error (not local truncation error). Additionally, S4S-Alt jointly optimizes solver coefficients and discretization steps via alternating minimization, further improving performance. The authors achieve state-of-the-art FID scores across datasets (e.g., 3.73 on CIFAR-10, 13.26 on MS-COCO with 5-8 NFEs), outperforming prior solvers like DPM-Solver++ and iPNDM. The method is lightweight (<1 A100 hour), data-free, and compatible with existing schedules. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. There are no theoretical results in the main document (and in the supplementary material, the authors only restate the theoretical guarantee for the relaxed objective presented in Eq. (7); this guarantee was provided by Tong et al. (2024)). Experimental Designs Or Analyses: Strengths: Extensive evaluation across various datasets (CIFAR-10, FFHQ, ImageNet, etc.) and multiple baselines (DPM-Solver++, iPNDM, UniPC). Ablations validate design choices (time-dependent coefficients, relaxed objective). Supplementary Material: Yes, I briefly read all parts of the supplementary material as this work is of sufficient interest to me. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The proposed approaches are novel, and demonstrate universal improvement across datasets, architectures, and schedules. Moreover, they require no data/retraining, are computationally efficient, and can be plugged in black-box on top of any discretization schedule or architecture. 
The authors have performed extensive and impressive experiments to demonstrate the effectiveness of the proposed approaches. I have not found major weaknesses in this submission. Other Comments Or Suggestions: No. Questions For Authors: - How sensitive is performance to the radius $r$? Does the $r \propto m^{-5/2}$ heuristic (mentioned in Appendix G.2) hold across varying $m$? - Can S4S/S4S-Alt achieve competitive performance with 1-2 NFEs, or does it require a minimum step count? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your time reviewing our work, and for recommending acceptance! We hope we answer your outstanding questions below. > How sensitive is performance to the radius $r$? Does the $r\propto m^{-5/2}$ heuristic hold across varying $m$? In practice, the heuristic $r \propto m^{-5/2}$ works reasonably well "out of the box" across the experimental settings we looked at. Sensitivity emerges in two directions, however, at both ends of the parameter scale. With very few parameters, choosing the radius to be too small makes the optimization problem more difficult. On the other hand, with a larger number of parameters, allowing too large a radius enables more overfitting. While $m^{-5/2}$ is a good heuristic that achieves strong baseline performance in both settings, we would expect improvements to come from carefully tuning this parameter based on the experimental setting. To help clarify this, we ran some additional experiments on CIFAR-10 and ImageNet-256 to characterize this dependence; our results are in the table below.

| Dataset | Model | Parameters $(m)$ | NFEs | $r=0.1m^{-5/2}$ | $r=0.5m^{-5/2}$ | $r=1.0m^{-5/2}$ | $r=2.0m^{-5/2}$ |
|---------|-------|-----------------|------|-----------------|-----------------|-----------------|-----------------|
| CIFAR-10 | iPNDM-S4S | 6 | 4 | 32.15 | 30.58 | 28.91 | 31.24 |
| CIFAR-10 | S4S-Alt | 16 | 4 | 17.23 | 16.05 | 16.95 | 18.39 |
| ImageNet | iPNDM-S4S | 6 | 4 | 8.12 | 7.84 | 8.06 | 8.53 |
| ImageNet | S4S-Alt | 16 | 4 | 5.28 | 5.13 | 5.37 | 5.72 |

As shown in the table, performance is generally robust within a reasonable range around our heuristic, with the optimal value typically falling between $0.5m^{-5/2}$ and $1.0m^{-5/2}$. The sensitivity increases with larger model sizes, supporting our conclusion that a well-calibrated radius becomes more important as the parameter count increases, to prevent overfitting. 
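As an aside for readers, the heuristic and the ball constraint of the relaxed objective can be sketched in a few lines; the proportionality constant and the explicit projection step below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def radius(m, c=1.0):
    """Heuristic radius for the relaxed objective: r proportional to m^(-5/2),
    where m is the number of learned solver parameters. The constant c is a
    tuning knob (the rebuttal sweeps c in {0.1, 0.5, 1.0, 2.0})."""
    return c * m ** -2.5

def project_to_ball(x_perturbed, x_T, r):
    """Project a perturbed latent back into the radius-r ball around x_T,
    i.e. enforce ||x'_T - x_T|| <= r; one plausible way to apply the constraint."""
    delta = x_perturbed - x_T
    norm = np.linalg.norm(delta)
    return x_T + delta * min(1.0, r / norm) if norm > 0 else x_T

# The radius shrinks quickly as the parameter count grows (6 -> 16 params here).
assert radius(16) < radius(6)

rng = np.random.default_rng(0)
x_T = rng.standard_normal(8)
x_p = x_T + rng.standard_normal(8)            # a large perturbation
x_proj = project_to_ball(x_p, x_T, radius(6)) # pulled back inside the ball
assert np.linalg.norm(x_proj - x_T) <= radius(6) + 1e-9
```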
> Can S4S/S4S-Alt achieve competitive performance with 1-2 NFEs, or does it require a minimum step count? Thank you for the interesting question! In practice, we found that going below 3 NFEs requires solving a very difficult problem that likely requires directly modifying the underlying score network. Concretely, 1-NFE generation entails directly generating an image from a noise latent; as such, we completely lose any "degrees of freedom" over the time discretization; similarly, we may now only control ~2-4 coefficients, which is generally insufficient for producing high-quality outputs. While 2 NFEs yields better performance, S4S and S4S-Alt on traditional score-network architectures still fall short of the 1-2 step generation performance of training-based distillation methods. Nonetheless, this challenge persists for all other methods that don't modify the underlying score network. To further characterize these results, we conducted a few more experiments. First, for CIFAR-10, we characterized S4S and S4S-Alt on 2-NFE generation for LD3 discretization.

| Dataset | Method | NFE=2 | NFE=3 | NFE=4 |
|---------|--------|-------|-------|-------|
| CIFAR-10 | iPNDM | 155.37 | 23.64 | 9.06 |
| CIFAR-10 | iPNDM-S4S | 142.45 | 20.65 | 8.25 |
| CIFAR-10 | S4S-Alt | 104.62 | 14.71 | 6.52 |

As shown in the table, while S4S-Alt achieves significant improvements over traditional solvers at 2 NFEs, the FID scores are still substantially higher than at 3-4 NFEs. This supports our observation that there is a minimum effective step count (~3 NFEs) for maintaining reasonable image quality with training-free methods that don't modify the score network architecture. On the other hand, we decided to replicate an experiment from the LD3 paper that we didn't have time to include in our results. Here, we examined results on InstaFlow, a flow network that is explicitly trained for high-quality few-step sampling. 
When the underlying model is explicitly trained to produce high-quality few-step samples, S4S correspondingly improves the image quality as well, on a scale competitive with the teacher solver (8 NFEs, Uniform Time Disc. = 14.16 FID).

| Method | NFE=2 |
|---------|--------|
| InstaFlow | 24.13 |
| InstaFlow [LD3] | 16.74 |
| InstaFlow [LD3 + S4S] | 15.22 |
| InstaFlow [LD3 + S4S-Alt] | 14.31 |
Summary: The paper proposes the S4S method for optimizing diffusion model solvers. The optimization space includes the solver coefficients, time discretization schedule, and time correction terms. The optimization objective is a relaxed version of the global error with LPIPS as the distance metric, which only requires the existence of an input xT' sufficiently close to the original xT. Experiments demonstrate that S4S can outperform previous learning-free/learning-based solvers and fixed/learned timestep discretizations over diverse datasets. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper applies a practical objective to optimizing the solver in a free space. Unlike more restricted papers like DPM-Solver-v3, there are no strict theorems. Experimental Designs Or Analyses: Yes. Supplementary Material: I reviewed appendix A, E, F, G, H. Relation To Broader Scientific Literature: The paper can optimize both the solver coefficients and timestep discretization with a novel alternating objective. The optimization cost and final performance are better than those of previous works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The authors did a nice job in appendix A comparing to previous works, including (1) optimizing time steps (2) optimizing local truncation error (3) optimizing global error. - Ablations are comprehensive, including (1) the benefits of alternating optimization (2) the benefits of relaxed objective (3) the disadvantage of large order, as in BNS (4) the benefits of LPIPS as distance metric (5) the initialization method (6) the training dataset size. Weakness: - With a larger parameter space, the method is more data-driven and less theory-grounded than traditional solvers. The coefficients require separate optimization under different NFE. - There are no intuitive visualizations of the learned coefficients and timestep discretizations. 
Other Comments Or Suggestions: I suggest the authors visualize the learned coefficients and timestep discretizations in comparison to previous solvers. Questions For Authors: - Does the sampling trajectory of the learned solver still follow the teacher's, or is only the final sample closer? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time reviewing our work and for your recommendation of acceptance. Below, we hope to address the weaknesses you identified and the questions you raised. > With a larger parameter space, the method is more data-driven and less theory-grounded than traditional solvers. The coefficients require separate optimization under different NFE. We agree that our approach is less grounded in theory relative to other diffusion model solvers. However, we think that this is more a feature of the particular problem setting -- very low-NFE sampling -- than of our specific approach. For instance, in many impressive theory-grounded diffusion model solvers, e.g. DPM-Solver++, the underlying assumptions that guarantee convergence in these solvers begin to break down in the low-NFE regime (~3-5 steps) as the step size for each solver step increases. Accordingly, we see significant degradation in performance in many of these theory-grounded solvers as the number of NFEs decreases; this decrease in performance can be seen in our tables in Appendix H. S4S's data-centric approach avoids trying to make these strong assumptions in the low-NFE regime and as a result achieves stronger performance than its theoretically-grounded alternatives. Nonetheless, from a theoretical perspective, it's still **not intuitive** why S4S even works; in fact, it seems shocking that solvers like iPNDM or DPM-Solver++ get remotely reasonable performance in the low-NFE regime in the first place. We think that trying to characterize why these approaches even achieve a modicum of success with few NFEs is a very exciting direction of future work that we hope to provide answers to. We also recognize that our data-centric approach requires S4S to learn a new solver for each number of NFEs. In practice, this is a similar limitation to alternative methods for learning diffusion model solvers (e.g. LD3 [1], BNS [2], $\Pi$A [3]). 
Creating a reusable method for crafting diffusion model solvers without needing to retrain each time is similarly a direction for future work that we aim to address. > There are no intuitive visualizations of the learned coefficients and timestep discretizations. Thank you for raising this point! Here, we provide some visualizations of the learned time-step discretizations for [LSUN bedroom](https://imgur.com/a/uGAP88Q) and for [FFHQ](https://imgur.com/a/rFKizhf). These visualizations indicate that the learned time-step discretizations are similar to the "best" heuristic time discretizations, i.e., the learned discretizations for latent diffusion models are similar to the uniform time discretization, while those for pixel-space models are similar to EDM / logSNR. > Does the sampling trajectory of the learned solver still follow the teacher's, or is only the final sample closer? Thank you for asking this question! The answer depends on the difficulty of the underlying task. Interestingly, in relatively simple domains (e.g. CIFAR-10), the trajectory of the learned solver still closely follows that of the teacher, despite being trained explicitly to match only the output of the teacher solver. In contrast, however, on more complex domains (e.g. conditional generation in ImageNet-256 or MS-COCO text-to-image), the trajectories of the student solver can have notable differences from those of the teacher solver.
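For reference, the heuristic schedules mentioned above can be written down compactly. A sketch of the uniform and EDM (Karras et al.) discretizations follows; the sigma range and step count are illustrative defaults, and the paper's exact settings may differ.

```python
import numpy as np

def uniform_schedule(t_max, t_min, n):
    """Uniform time discretization from t_max down to t_min."""
    return np.linspace(t_max, t_min, n)

def edm_schedule(sigma_max, sigma_min, n, rho=7.0):
    """EDM (Karras et al.) schedule: interpolate uniformly in sigma^(1/rho)."""
    i = np.arange(n) / (n - 1)
    return (sigma_max ** (1 / rho) + i * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho

uni = uniform_schedule(80.0, 0.002, 7)
sig = edm_schedule(80.0, 0.002, 7)       # illustrative EDM-style defaults
assert np.all(np.diff(sig) < 0)          # monotonically decreasing noise levels
# Uniform keeps equal gaps; EDM clusters steps near the small-noise end,
# so its first gap dwarfs its last one.
assert np.isclose(uni[0] - uni[1], uni[-2] - uni[-1])
assert (sig[0] - sig[1]) > 100 * (sig[-2] - sig[-1])
```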
A Generalizable Physics-Enhanced State Space Model for Long-Term Dynamics Forecasting in Complex Environments
Accept (poster)
Summary: This paper addresses the problem of dynamics forecasting with noisy and irregularly sampled data. A model is proposed in which 1) a physics-based SSM is applied to integrate partial physics knowledge and 2) a physics state regularization is used to constrain the latent states under noisy and irregularly sampled data. Empirical results show improved performance of the proposed model on interpolation and extrapolation tasks. Claims And Evidence: 1. The challenge of noisy and irregularly sampled data in long-term dynamics forecasting is critical. 2. In Section 2 (ii) the authors stated that the existing works did not consider the infeasibility of obtaining complete physics knowledge. However, there is an existing domain of hybrid modeling that aims to solve this problem [1-3]. The authors should further discuss the difference between the proposed model and hybrid modeling. 3. Section 2 also mentioned the limitation of NODE on nonlinear and time-variant systems. However, there have been works such as ODE2VAE [4] for such complex systems. The authors should check these works for better comparison. Also, the authors stated that the initialization of NODE-based models is critical. Could the authors compare the initialization in the three experimental settings to show how that is improved by the proposed model? [1] Yin, Yuan, et al. "Augmenting physical models with deep networks for complex dynamics forecasting." Journal of Statistical Mechanics: Theory and Experiment 2021.12 (2021): 124012. [2] Takeishi, Naoya, and Alexandros Kalousis. "Physics-integrated variational autoencoders for robust and interpretable generative modeling." Advances in Neural Information Processing Systems 34 (2021): 14809-14821. [3] Wehenkel, Antoine, et al. "Robust hybrid learning with expert augmentation." arXiv preprint arXiv:2202.03881 (2022). [4] Yildiz, Cagatay, Markus Heinonen, and Harri Lahdesmaki. "ODE2VAE: Deep generative second order ODEs with Bayesian neural networks." 
Advances in Neural Information Processing Systems 32 (2019). Methods And Evaluation Criteria: 1. Based on Eq 7 and Eq 8, the proposed Phy-SSM unit is similar to an RNN-structured sequential model. How does this unit process the continuous dynamics? Also, how is the function $\psi(z)$ defined? 2. Eq 9 introduced the knowledge mask mechanism. For real-world systems where the physics is usually unknown, it is not feasible to explicitly write out the mask as in the example in Section 4.2. How does the proposed method deal with such a problem? 3. The physics regularization in Eq 12 is supposed to be a major contribution as stated in Section 1. The authors should elaborate on why the L2 norm of the latent states from the prior and posterior distribution is used. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. The authors used three real-world datasets in experiments. How irregular are the three datasets? Could the authors provide more details? 2. From the ablation study it seems the physics state regularization has a marginal contribution to the model improvement. Could the authors discuss this phenomenon? Also, could the authors compare the model with regularization only? Supplementary Material: Please find my comments above. Relation To Broader Scientific Literature: Please find my comments above. Essential References Not Discussed: Please find my comments above. Other Strengths And Weaknesses: Please find my comments above. Other Comments Or Suggestions: Please find my comments above. Questions For Authors: Please find my questions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer gB8T **Q 4.1**: About clarifying the difference between our method and existing hybrid modeling approaches [1-3] that address incomplete physics knowledge. **A 4.1**: We have discussed the difference between our model and hybrid modeling approaches as follows. The models in [1–3] are all physics-based NODE methods, which suffer from a shared limitation: difficulty in modeling nonlinear and time-variant systems over long time horizons due to their heavy reliance on initial conditions. This limits their ability to capture long-term sequence dependencies. PI-VAE [2] is already included as a baseline in **Tables 1–4** of our submission, and our method outperforms it. We will include references [1] and [3] in the related work section of the revised version. >[1] Yin et al. Augmenting physical models with deep networks for complex dynamics forecasting. JSTAT 2021 > >[2] Takeishi et al. Physics-integrated variational autoencoders for robust and interpretable generative modeling. NIPS 2021 > >[3] Wehenkel et al. Robust hybrid learning with expert augmentation. TMLR 2023 **Q 4.2**: About (i) comparing with baselines like ODE2VAE; (ii) NODE limitations on initialization sensitivity. **A 4.2**: We have included ODE2VAE [4] as a baseline in our experiments. The results are provided in **Tables 1-4** [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)]. While ODE2VAE introduces uncertainty modeling to alleviate the sensitivity to initial conditions, it still lacks an effective mechanism to dynamically refine predictions based on later observations. As a result, its performance on long-term forecasting remains limited. Regarding the discussion of initialization in Section 2, we refer specifically to the sensitivity of NODE-based methods to initial conditions $x(t_0)$, not to the initialization of network parameters. 
This is due to the fact that NODEs solve initial value problems (IVPs), where the trajectory is determined by both the initial state and the learned vector field. In contrast, our Phy-SSM adopts a dynamical VAE framework that refines its predictions over time using the posterior from the preceding time step. This mechanism effectively mitigates errors caused by inaccurate initial conditions. A motivating example that visually illustrates this improvement is provided in Fig. 2, Appendix A. >[4] Yildiz et al. Ode2vae. NIPS 2019 **Q 4.3**: About how Phy-SSM handles continuous dynamics and the definition of $\psi(z)$. **A 4.3**: The Phy-SSM unit learns continuous dynamics through the parameterized matrices $A_{unk}$ and $B_{unk}$ as shown in Eq. (8), as illustrated in S5 [5]. $\psi(z)$ denotes additional extended state terms, which can include nonlinear functions or constants, such as $\sin{({z})}$, $\cos{({z})}$ and 1. This augmentation enables the representation of certain nonlinear systems in a linear state-space form. Please refer to page 4, right column, lines 192–200 for the detailed explanation. >[5] Smith et al. S5. ICLR 2023 **Q 4.4**: About defining the knowledge mask when physics is fully unknown. **A 4.4**: If the physics is unknown, we treat the entire system as unknown and set all mask entries to 1. In this case, our method reduces to a data-driven SSM that learns all dynamics from data without explicit constraints. **Q 4.5**: About elaborating on why the L2 norm of the latent states from the prior and posterior distribution is used. **A 4.5**: We use the L2 norm for its efficiency and strong empirical performance. We have conducted comparisons with alternative metrics (e.g., Chebyshev, cosine). The results are provided in **Table 5** [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)], where Euclidean distance (L2 norm) performs best in extrapolation tasks. 
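The regularisation discussed in A 4.5 amounts to a per-step Euclidean penalty between latent samples from the physics-based prior and the encoder posterior. A minimal NumPy sketch, assuming `(batch, time, latent_dim)` arrays and mean aggregation (the rebuttal does not specify the exact weighting or aggregation):

```python
import numpy as np

def physics_state_regularization(z_prior, z_post):
    """Euclidean (L2) penalty between latent samples drawn from the
    physics-based prior and the encoder posterior.
    Shapes assumed: (batch, time, latent_dim)."""
    # Euclidean distance per time step, then averaged over batch and time
    per_step = np.linalg.norm(z_prior - z_post, axis=-1)
    return per_step.mean()
```

In training, this term would be added to the ELBO alongside the reconstruction and KL losses, scaled by a hyperparameter.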
**Q 4.6**: About the irregularity details of the three real-world datasets used in the experiments. **A 4.6**: Appendix E in our submission provides the details. The drone data is high-frequency and irregularly sampled, recorded at nearly 1010 Hz (minimum: 573.05 Hz, maximum: 1915.86 Hz); COVID-19 contains 10% missing daily records; the vehicle dataset includes 5% missing agent observations. We will add clearer descriptions in the revised version. **Q 4.7**: About the seemingly marginal contribution of the physics state regularization term and the request for a regularization-only comparison. **A 4.7**: We would like to emphasize that the physics state regularization yields approximately a 10% relative improvement in extrapolation performance. It plays a crucial role in guiding the model toward learning more generalizable physical representations, particularly under noisy and irregular data conditions. It is important to note that the regularization term cannot be used independently, as it penalizes the distance between the posterior (output by the sequential encoder) and the prior physics-based latent states predicted by the Phy-SSM unit.
Summary: The paper aims to improve long-term forecasting using state space models based on deep learning to a) embed prior knowledge about physical systems and b) handle noisy irregularly sampled data. Specifically, the paper proposes to separate the state matrix into known and unknown / learnable elements. To this end, the paper proposes to use a “binary knowledge mask” that effectively freezes elements in the state matrix that are known a priori. Similar to the literature in this area, to handle irregularly sampled data, the continuous-time state matrix is optimised, i.e., the system is discretised during each forward pass using Tustin discretisation. A Variational Autoencoder is used for optimisation. In addition to the reconstruction loss and Kullback-Leibler divergence, the authors propose to introduce an additional regularisation term that encourages the encoder to adhere to the system dynamics. The proposed approach is evaluated for interpolation and extrapolation, and for three datasets that involve drone state prediction, COVID-19 epidemiology forecasting, and vehicle motion prediction. The results are compared against state-of-the-art approaches, including recurrent encoders, the contiformer, VAEs, and the S5 SSM. The results indicate that the proposed approach outperforms baseline approaches, particularly for the extrapolation tasks. For vehicle motion prediction, the results are further divided into in-domain prediction (predictions within 0-5 seconds) and out-of-domain prediction (predictions from 5-6 seconds). Results indicate that the proposed approach performs particularly well for out-of-domain prediction compared to the baseline approaches. ## Update after rebuttal: I raised my recommended score from 3 to 4. 
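The Tustin (bilinear) discretisation mentioned in the summary has a standard closed form; a generic NumPy sketch (not the paper's implementation) for a continuous-time system dx/dt = A x + B u with step size dt:

```python
import numpy as np

def tustin_discretize(A, B, dt):
    """Bilinear (Tustin) discretisation of a continuous-time LTI system:
    A_d = (I - dt/2 A)^-1 (I + dt/2 A),  B_d = (I - dt/2 A)^-1 dt B."""
    n = A.shape[0]
    I = np.eye(n)
    left = np.linalg.inv(I - 0.5 * dt * A)  # (I - dt/2 A)^-1
    A_d = left @ (I + 0.5 * dt * A)
    B_d = left @ (dt * B)
    return A_d, B_d
```

Because dt enters the formula directly, a fresh pair (A_d, B_d) can be computed for each irregular inter-sample gap during a forward pass, which is what makes this discretisation convenient for irregularly sampled data.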
Claims And Evidence: The paper makes two main claims: 1) Embedding prior knowledge about the system dynamics improves performance, and 2) the proposed regularisation term enables long-term predictions in the presence of noisy and irregularly sampled data. The concept of a binary knowledge mask is well motivated. However, domain expertise is often subject to at least some level of uncertainty, and it is not clear from the discussions in the paper if and how this uncertainty is handled within the proposed model. Regarding the regularisation term, the paper remarks on p. 5 that “this term is implemented as a Euclidean distance penalty between the sample $z(t_i)$ from the prior distribution and the sample $z^{\ast}(t_i)$ from the posterior distribution.” Considering that we don’t know the space in which the latent states are located, how do we know that the Euclidean distance is an appropriate distance? The authors argue that the experimental results and ablations validate this choice. While the experiments provide convincing results for the model itself, I couldn’t find any ablations in the paper or the appendix that focused on the specific choice of the metric that is used to implement the regularization term. I would have liked to see a comparison of the Euclidean distance against alternative distance measures/metrics. Methods And Evaluation Criteria: The experimental results clearly evidence the improvements in performance compared to the state of the art. I particularly appreciate the distinction between the interpolation and extrapolation tasks that provides insight into the model’s ability to generalise. The ablation studies in Appendix H provide additional insight into how the two proposed components (Phy-SSM unit; regularisation term) contribute to the overall performance of the system. That said, the results in Table 7 seem to suggest that the performance of the Phy-SSM unit (consisting of a 4-layer MLP, 9 SSM layers, and a 4-layer MLP, see p. 
17) is comparable to a unit that consists only of MLPs for the interpolation task, and leads to small improvements for the extrapolation task. However, it is unclear at what cost these improvements in extrapolation are achieved. To this end, I feel that the paper would benefit from a more thorough treatment of the computational cost of the proposed Phy-SSM module. To ensure a balanced discussion of the achievements and limitations of the proposed model, I would also recommend moving the ablation study from the appendix into the main body of the paper. Moreover, I am surprised that a unit involving a simple MLP outperforms a unit involving standard SSMs (as opposed to Phy-SSMs) by a considerable margin for the extrapolation task. How much of this improvement can be attributed to the regularisation term? It would be helpful to include one more row in the table that replaces the Phy-SSM unit with the MLP but does not incorporate regularisation. Theoretical Claims: See “Claims & evidence” Experimental Designs Or Analyses: See “Methods and Evaluation” Supplementary Material: N/A Relation To Broader Scientific Literature: In my opinion, the novel contribution of the paper is the extension of SSMs to extrapolation. In my opinion, the results – particularly the trajectory plots in Appendix G – clearly highlight the shortcomings of S5 for extrapolation, and showcase the significant improvements that can be achieved through regularisation and incorporation of prior knowledge. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well written and provides insightful examples and diagrams to help illustrate the proposed model. Detailed information about the experimental settings and datasets is provided in the appendix. I particularly appreciated the information in Appendix D.2 that details the prior knowledge / models of the systems used for the experimental evaluation. 
In addition, Appendix G provides convincing plots of the trajectory estimates for different baseline models. Other Comments Or Suggestions: - P. 4, left column, line 207: “approximated posterior $z(t_i)$” -> “approximate posterior of $z(t_i)$? - P. 4, right column, line 214: “influence of control inputs is often known” – do you mean “unknown”? - Section 4.3 feels out of place. I would suggest to swap 4.2 and 4.3 - P. 28, line 1490: “SMM” -> “SSM” Questions For Authors: 1. How is uncertainty in the “known dynamics” handled within the proposed model considering that a binary knowledge mask is applied to separate frozen and learnable elements of the state matrix? 2. How does the Euclidean distance for regularisation term compare to alternative metrics? 3. What is the computational cost of the Phy-SSM module (e.g., compared to the standard SSM and MLP in Appendix H)? 4. In Section 5, how was the Phy-SSM module initialised? How was S5 initialised? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to Reviewer xWcK **Q3.1**: About handling uncertainty in domain knowledge within our model. **A3.1**: We do not explicitly handle uncertainty in domain knowledge in this work. However, such uncertainty can be modeled within the unknown dynamics. A possible approach is to apply conformal prediction [1] as a post-processing step to estimate uncertainty in the deep SSM outputs. We will explore this in future work. >[1] Kamile et al. Conformal time-series forecasting. NIPS 2021 **Q3.2**: About the lack of ablations comparing L2 with other regularization metrics. **A3.2**: Following your suggestion, we have added comparisons with alternative metrics, including Chebyshev distance and cosine distance. The results are presented in **Table 5** [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)], where our method achieves the best performance in extrapolation tasks. We will include these results in the revision. Chebyshev distance focuses on worst-case deviations, while cosine distance captures directional similarity, which may not fully penalize deviations in magnitude. Since our goal is to measure the overall discrepancy between two physical state trajectories, the L2 norm is not only empirically effective but also conceptually the most appropriate metric. **Q3.3**: About why the MLP outperforms standard SSMs in the ablation study, the performance gains in extrapolation, and the placement of the ablation study in the paper. **A3.3**: We would like to clarify that we do not replace the entire Phy-SSM unit with MLPs in the ablation study. Instead, we only replace the SSM layer that approximates the unknown terms inside the Phy-SSM unit (highlighted in the blue rectangle in Fig. 1) with an MLP. We will include a clearer and more detailed explanation in the revised version. The observed improvements, particularly in extrapolation, are due to the SSM layer's ability to model long-range dependencies, which MLPs cannot effectively capture. 
This allows the Phy-SSM to learn more generalizable unknown physical dynamics, especially over extended horizons. Finally, per your suggestion, we will move the ablation study from the appendix into the main body of the paper. **Q3.4**: About typos and structural suggestions. **A3.4**: We will correct all listed typos and adjust the section order as suggested in the revised version. **Q3.5**: About the computational cost of Phy-SSM compared to standard SSMs and MLPs (Appendix H). **A3.5**: As clarified in A3.3, we do not replace the entire Phy-SSM unit with MLPs. To address your concern, we report the computational costs of the standard SSM, our Phy-SSM module, and a NODE-based baseline (GOKU). The results are provided in **Table 6** [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)]. Phy-SSM is slightly slower than the standard SSM due to the additional structure and regularization components introduced for physics integration. However, it is faster than NODE-based models, as it preserves the parallel-scan acceleration capability inherent in SSMs. **Q3.6**: About the initialization for the Phy-SSM module and the S5. **A3.6**: In the Phy-SSM module, the known dynamics $A_{knw}$ is initialized using known physical parameters. The unknown dynamics $A_{unk}$ is initialized based on the output of the deep SSM layer. For S5, we follow the default HiPPO initialization [2]. >[2] Gu et al. "Hippo: Recurrent memory with optimal polynomial projections." NIPS 2020 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed clarification and for taking the time to provide additional results. The authors' rebuttal addresses my concerns and I am happy to update my score. I feel that there is a sufficient number of applications that benefit from embedding prior knowledge from domain expertise in learnable SSMs. In my opinion, the paper presents a novel scientific contribution. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score. 
We will address your suggestions in our revised version.
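The alternative metrics compared in A 3.2 differ in what they penalise; a minimal NumPy sketch of the three candidates on latent trajectories of shape `(time, dim)` (the per-step aggregation is an assumption, not the authors' exact ablation code):

```python
import numpy as np

def trajectory_distance(z1, z2, metric="l2"):
    """Candidate metrics for penalising the gap between two latent
    state trajectories of shape (time, dim)."""
    diff = z1 - z2
    if metric == "l2":          # Euclidean distance per step, averaged
        return np.linalg.norm(diff, axis=-1).mean()
    if metric == "chebyshev":   # worst-case coordinate deviation
        return np.abs(diff).max()
    if metric == "cosine":      # 1 - cosine similarity, averaged over steps
        num = (z1 * z2).sum(axis=-1)
        den = np.linalg.norm(z1, axis=-1) * np.linalg.norm(z2, axis=-1)
        return float((1 - num / den).mean())
    raise ValueError(metric)
```

As the rebuttal notes, cosine distance ignores magnitude and Chebyshev distance only sees the single worst coordinate, whereas the L2 form accumulates the overall discrepancy between the two trajectories.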
Summary: This paper proposes a general-purpose framework that integrates partial physics knowledge into state space models. The topic is attractive and the key innovation is clear. Claims And Evidence: 1. Phy-SSM effectively integrates partial physics into SSMs for improved generalization. The dynamics decomposition (Eq. 8) and knowledge masking (Eq. 9) are designed to enforce physical constraints. The idea is sound but the experimental support is weak. The ablation results on the drone state alone are not enough to show that it can integrate physical knowledge. How about other complex systems, especially those with non-linear dynamics and multi-variable coupling? 2. The physics state regularization term enhances long-term prediction accuracy. It seems to make sense. 3. Theoretical guarantees ensure uniqueness of the dynamics decomposition. Proposition 1 assumes that the known and unknown terms occupy disjoint positions in the matrix (Equation 16). However, in practical systems, partial overlap may exist (e.g., certain terms containing both known and unknown components simultaneously). In such cases, uniqueness cannot be guaranteed, but the paper does not discuss this scenario. Methods And Evaluation Criteria: Strengths: 1. The Phy-SSM unit and knowledge masking mechanism provide a systematic way to embed physics into SSMs. 2. Diverse applications (drone, COVID-19, vehicle) and metrics (MAE, MSE, ADE, physics-based errors) strengthen validity. Weaknesses: 1. The process of defining knowledge masks (e.g., Eq. 15) is described for the pendulum example but lacks generalizable guidelines for other systems. 2. The physics state regularization term (Eq. 12) is implemented as a simple L2 penalty; more principled approaches (e.g., Lagrangian multipliers for hard constraints) are unexplored. Theoretical Claims: Proposition 1 asserts uniqueness of the dynamics decomposition but relies on assumptions (e.g., disjoint support of known/unknown terms) without discussing their practical validity. 
Experimental Designs Or Analyses: 1. Omission of recent physics-guided transformers (e.g., PDE-Refiner) or hybrid models limits comparative rigor. Why is SSM your choice? 2. Small-scale datasets (e.g., COVID-19 data from only 8 countries) raise concerns about generalizability. 3. The influence of control inputs (e.g., vehicle steering/throttle) is oversimplified; real-world actuator dynamics (e.g., delays) are unaddressed. Supplementary Material: Yes, all of it has been reviewed. Relation To Broader Scientific Literature: The work builds on physics-informed ML (e.g., PINNs, Hamiltonian NODEs) and SSMs (e.g., S4, S5), advancing hybrid modeling for irregular dynamics. Essential References Not Discussed: No Other Strengths And Weaknesses: 1. The uniqueness proof is informal, and broader theoretical implications (e.g., stability) are unexamined. 2. Key competitors like Neural ODEs with uncertainty-aware priors or PDE-Refiner are absent. 3. The knowledge masking heuristic may not scale to systems with overlapping known/unknown dynamics. 4. Experiments lack large-scale benchmarks (e.g., climate modeling or robotics datasets) to test generality. Other Comments Or Suggestions: See review above Questions For Authors: See review above Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Response to Reviewer ooWi **Q2.1**: About validating physical integration beyond the ablation drone experiment, especially for nonlinear and multi-variable systems. **A2.1**: We evaluate our method on three real-world nonlinear, multi-variable systems: COVID-19, drone, and vehicle dynamics, as presented in Tables 1–4 of our submission. Compared to data-driven models like S5, our physics-integrated approach consistently outperforms baselines, validating its effectiveness under diverse dynamical systems. **Q2.2**: About the disjoint assumption in Proposition 1 and the generalizability of the knowledge mask definition to overlapping dynamics. **A2.2**: The uniqueness result in Proposition 1 still holds when a term includes both known and unknown parts (e.g., a known multiplier applied to an unknown function). This is because we can treat the entire term as "unknown". For example, in the COVID-19 model (Eq. 28), the term $-\frac{I}{N} \cdot (*)$ is handled by learning the unknown part with a deep SSM and reintroducing the known factor $-\frac{I}{N}$ in post-processing. The same principle applies to the knowledge mask design. For disjoint dynamics, we assign 1 to unknown terms and 0 to known ones. For overlapping terms, the mask marks the full expression as unknown, with known components injected after training. We will include this mask-design guideline in our revision. **Q2.3**: About using L2 regularization and other principled alternatives like Lagrangian multipliers. **A2.3**: We have conducted additional experiments using other metrics, as shown in **Table 5** [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)]. We can see that the L2 norm performs best. While Lagrangian multipliers are theoretically sound, they are difficult to apply in DNN models due to non-convexity and a large search space. We leave this for future work. 
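The mask-design guideline in A 2.2 can be written compactly: with a binary mask M, learned entries overwrite physics entries wherever M = 1. A small NumPy sketch (the elementwise combination is an assumption based on the description of Eq. 9, not the paper's exact code):

```python
import numpy as np

def masked_state_matrix(A_knw, A_unk, M):
    """Combine known and learned dynamics with a binary knowledge mask M:
    entries with M == 1 come from the learned matrix A_unk, entries with
    M == 0 come from the physics-derived matrix A_knw. For terms mixing
    known and unknown factors, the whole entry is marked unknown (M = 1)
    and the known factor is reapplied afterwards, as described in A 2.2."""
    return M * A_unk + (1 - M) * A_knw
```

Setting M to all ones recovers the fully data-driven case described in A 4.4, where the model reduces to a plain deep SSM without physics constraints.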
**Q2.4**: About recent physics-guided transformers (e.g., PDE-Refiner) or Neural ODEs with uncertainty baselines, and the rationale for choosing SSMs. **A2.4**: We have included Contiformer [1], a state-of-the-art physics-based transformer, as a baseline. PDE-Refiner [2] is not suitable for irregularly sampled dynamics due to its reliance on a fixed-step one-step MSE loss. Per your suggestion, we also added ODE2VAE [3], a Neural ODE with uncertainty-aware priors. Our method still achieves the best performance (see **Tables 1–4** at this [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)]). We choose SSMs as the core module of our approach because (i) they capture long-term dependencies via HiPPO memory, and (ii) their continuous-time formulation naturally handles irregular data. >[1] Chen et al. Contiformer. NeurIPS 2023 > >[2] Lippe et al. Pde-refiner. NeurIPS 2023 > >[3] Yildiz et al. Ode2vae. NeurIPS 2019 **Q2.5**: About generalization of COVID-19 data and lack of large-scale benchmarks. **A2.5**: In the COVID-19 experiment, we train the model on data from six countries and evaluate on two unseen countries (Ireland and Spain), demonstrating generalization across regions. Beyond that, we have already included the large-scale autonomous driving (AD) dataset nuScenes [4], which contains 1000 driving scenes from Boston and Singapore, two cities known for dense traffic and challenging driving conditions. This dataset is widely used in AD research [5–7]. The experiments show the generality of our method in learning vehicle dynamics from real-world data. >[4] Caesar et al. Nuscenes. CVPR 2020 > >[5] Girgis et al. Autobot. ICLR 2022 > >[6] Ren et al. Safety-aware motion prediction with unseen vehicles for autonomous driving. ICCV 2021 > >[7] Zhang et al. G2LTraj. IJCAI 2024 **Q2.6**: About the simplification of vehicle dynamics and the lack of actuator delay modeling. 
**A2.6**: Following prior works [8,9], we incorporate partially known vehicle dynamics into our Phy-SSM model. Experimental results show that our method performs the best. Nevertheless, we agree with the reviewer that accounting for actuator delay is a valuable direction, and we plan to extend our method to address this scenario in future work. >[8] Rajamani, Vehicle dynamics and control. 2011 > >[9] Mao et al. Phy-Taylor. TNNLS 2023 > **Q2.7**: About the formality of the uniqueness proof and the lack of broader theoretical discussions (e.g., stability). **A2.7**: We use a contradiction-based argument for proving uniqueness, which is a standard formal method. We will refine the proof in the revised version for clarity and rigor. As for stability analysis, it is beyond our current scope; our focus is on learning the dynamics model rather than control design. We recognize its importance and leave it for future work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ efforts in providing additional comparisons. However, my core concern remains unaddressed. The authors claim that the proposed model is a general-purpose solution for dynamics forecasting in complex environments. Yet, the selected examples, COVID-19, drone and vehicle dynamics, do not convincingly support this claim. These systems lack the clear, physically grounded governing equations typically associated with complex dynamical systems, such as the Navier-Stokes equations. This raises questions about the generality of the approach. I believe the motivation is somewhat overstated. That said, I appreciate the authors’ willingness to engage in this discussion. --- Reply to Comment 1.1.1: Comment: Dear Reviewer ooWi, Thanks for your response. 
We would like to clarify that the core contribution of our paper is not to use DNNs to solve a well-defined but analytically intractable equation (e.g., Navier–Stokes), but rather to propose a method that integrates partially known physics into state-space models to improve generalization in long-term forecasting under complex, real-world conditions (see page 2, lines 98–109). Systems such as COVID-19, drones, and vehicle dynamics are representative of real-world dynamical systems, where only partial physical knowledge is typically available. To further address your concern, we have conducted additional experiments on isotropic turbulence [1], which is governed by the Navier–Stokes equations. We compare our method with GOKU, the best-performing baseline for extrapolation. As shown in the following Table 1, our method achieves better performance than the best baseline. For experimental setup, we follow the setting in [2], using normalized L2 norm and $H^{-1}$ norm for evaluation. A spatial-temporal encoder (ST-FNO) [2] is used for all methods to extract latent representations, and we assume unknown ODE dynamics in latent space, followed by a decoder to reconstruct the original field. Each model takes 10 vorticity fields as input to predict the next 10 time steps. All methods are trained with equivalent parameter sizes for a fair comparison. **Table 1**: Performance comparison of our method and the best baseline in terms of interpolation and extrapolation using **isotropic turbulence** dataset. The results are averaged over three random seeds. The lower is the better. 
| Method | $H^{-1}$ (Interp.)(×10^-2) ↓ | L2 (Interp.)(×10^-2) ↓ | $H^{-1}$ (Extra.)(×10^-2) ↓ | L2 (Extra.)(×10^-2) ↓ | |:------ |:--------------------------------:|:--------------------------:|:------------------------------:|:--------------------------:| | GOKU | 3.942 ± 0.227 | 6.136 ± 0.023 | 4.139 ± 0.186 | 6.325 ± 0.032 | | **Ours** | **3.533 ± 0.016** | **5.828 ± 0.006** | **3.764 ± 0.009** | **5.950 ± 0.008** | >[1] McWilliams, J. C. (1984). The emergence of isolated coherent vortices in turbulent flow. Journal of Fluid Mechanics, 146, 21-43. > >[2] Cao et al. Spectral-Refiner. ICLR 2025.
Summary: This paper proposes Phy-SSM, a general-purpose framework that integrates partial physics knowledge into state space models (SSMs) for long-term dynamics forecasting in complex environments. The motivation is that SSMs can effectively capture long-range dependencies in sequential data and model continuous dynamical systems, while the incorporation of physics knowledge improves generalization ability. Claims And Evidence: . Methods And Evaluation Criteria: . Theoretical Claims: . Experimental Designs Or Analyses: . Supplementary Material: . Relation To Broader Scientific Literature: . Essential References Not Discussed: . Other Strengths And Weaknesses: I'm not an expert in this area; I want to know how complex it is and need a detailed presentation. Other Comments Or Suggestions: . Questions For Authors: . Ethical Review Concerns: . Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer VtD1 **Q1.1**: About understanding the method’s complexity and presentation clarity. **A1.1**: To help you better understand our work, we outline the problem, motivation, and key contributions below. This work addresses the problem of long-term dynamical forecasting in complex environments where data are noisy and irregularly sampled. Our motivation is that SSMs can effectively capture long-range dependencies in sequential data and model continuous dynamical systems, while the incorporation of physics knowledge improves generalization ability. Our key contributions can be summarized as follows: 1) We propose Phy-SSM, a novel approach that integrates partially known physics into state-space models to improve generalization for long-term forecasting in complex environments. 2) To enhance long-term prediction accuracy, we introduce a physics state regularization term that constrains latent states to align with system dynamics. 3) We provide a theoretical analysis demonstrating the uniqueness of solutions in our framework. 4) Extensive experiments on three real-world applications, including vehicle motion prediction, drone state prediction, and COVID-19 epidemiology forecasting, demonstrate the superior performance of Phy-SSM over the baselines in both long-term interpolation and extrapolation tasks. Additionally, based on feedback from the other reviewers, we have: 1) Included ODE2VAE as a new baseline to enable a more comprehensive comparison with uncertainty-aware NODE-based models, as shown in **Tables 1–4** [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)]. 2) Added detailed ablation studies evaluating our choice of the L2 norm for regularization against alternative distance metrics (e.g., cosine distance, Chebyshev distance), as shown in **Table 5** [[link](https://anonymous.4open.science/r/ICMLExp-6D54/Rebuttal_exp.pdf)].
TeLoGraF: Temporal Logic Planning via Graph-encoded Flow Matching
Accept (poster)
Summary: The paper studies the problem of training an RL/planning agent that takes as input (i) a Signal Temporal Logic (STL) specification and (ii) a start state and then generates a plan/trajectory in the environment that starts at the given start state and satisfies the given STL specification. The proposed learning algorithm involves (i) sampling STL specifications from four main categories that appear commonly in practice, (ii) sampling expert trajectories that satisfy each specification (computed using well-known but slow optimization procedures), and (iii) using the sampled dataset of specification-trajectory pairs to fit a generative model that generates trajectories given a specification and a start state. The generative model architecture involves two main components: (i) a graph neural network architecture to encode the given STL specification and (ii) a temporal U-Net flow model that can be used to generate a trajectory given the encoding of the specification. Experiments in complex environments indicate that the proposed approach can outperform baselines w.r.t. satisfaction rate under compute constraints. Claims And Evidence: The high-level claims are supported by decent evidence in the paper. For instance, the authors show that the chosen encoder architecture outperforms alternatives via a sufficiently thorough ablation study. The only claim that seems slightly problematic is that the authors are the first to train a generative model for planning conditioned on STL specifications. Although it appears that this claim is true, I am not sure if this is a significant claim given there is already work on training RL agents that output actions conditioned on LTL specifications (see below). Methods And Evaluation Criteria: The benchmark environments seem appropriate and the authors present experiments on a wide range of environments (including standard MuJoCo environments). The evaluation criterion also makes sense. 
One thing that is not very clear is how the satisfaction of an output trajectory is calculated. For the evaluation to make sense, the generated trajectories not only need to satisfy the given specification but also need to be realizable (follow the dynamics in the environment). It is not apparent from reading the paper that the authors are considering realizability in the evaluations. Theoretical Claims: There are no new theoretical claims in the paper. Experimental Designs Or Analyses: Apart from the minor question I posed earlier, the experimental design appears sound and valid. Supplementary Material: No. Relation To Broader Scientific Literature: - The presented approach can be considered as a meta reinforcement learning algorithm. STL offers a natural syntax to encode specifications in the context of meta RL. - Although there has been a lot of recent work on reinforcement learning from temporal logic specifications, there is relatively less research on zero-shot execution given a new specification. This paper attempts to tackle this important problem. - Obtaining good embeddings of code (in this case, STL specifications) is a relevant problem. For instance, it is known that LLMs do not understand the semantics of code very well (even though they can generate good code). It appears that the approach in the paper enables the agent to understand the semantics of STL specifications. Essential References Not Discussed: I think reference [1] below is very similar to this paper. The approach presented in [1] also involves using a GNN to encode the temporal specification (LTL). Although this paper differs slightly since the focus is on using a generative model for planning, it is still very similar to [1], which trains an actor network instead of a generative model.
These differences are fairly minor and [1] is tackling the same high-level problem. Hence, I think this paper should be discussed and the authors should clarify how their contributions add value beyond existing findings in [1]. [1] Vaezipoor, Pashootan, et al. "Ltl2action: Generalizing ltl instructions for multi-task rl." International Conference on Machine Learning. PMLR, 2021. Other Strengths And Weaknesses: - In spite of the similarity with prior work, the idea of using a flow model in this context appears novel and natural. The results look encouraging since the required number of flow steps for decent performance is small (in the study in the paper), which improves the time taken for planning. - Clarity can be improved slightly and some things can be made more precise (see questions below). Other Comments Or Suggestions: N/A Questions For Authors: - Why is there a distinction between $\phi_{and}$ and $\phi_{or}$ on page 4? From the definition, it looks like a $\phi_{or}$ formula is also a $\phi_{and}$ formula and vice-versa. Aren't they just equivalent to $\phi_{bool} := \phi_{reach}\ |\ \phi_{bool,1}\lor\phi_{bool,2}\ |\ \phi_{bool,1}\land\phi_{bool,2}$? Then $\phi_{multi}$ can also be written as just $\phi_{bool}\land\phi_{avoid}$. - On page 4, the description of $\phi_{partial}$ is confusing. It is mentioned that $O_p$ has to be reached after $O_q$ is reached. But $(\lnot O_p)\ \mathcal{U}\ O_q$ is satisfied even if $O_p$ is never reached. - Is it possible to define an executable policy from the trajectory generated by the model? If so, why not measure the satisfaction of the specification based on the trajectory resulting from executing this policy (instead of directly analyzing the generated trajectory which may not exactly follow system dynamics)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer 8aYy for the thoughtful feedback and appreciation of our ablation studies and graph-based encoding, which aim to enhance symbolic understanding in decision-making. Below, we address the comments on novelty and trajectory realizability. **Significance of STL planning**: (same as for Reviewer 8mCx) We argue that it is non-trivial to extend the existing LTL2Action [1] to STL, particularly for the key technique “progression” [2] used to update the task spec based on assignments. Consider an example (Sec 3.3 in LTL2Action): “First reach R, then reach G”. In LTL, this can be written as $F(R \land F(G))$. Once R is reached, following LTL2Action, the LTL is updated to $F(G)$, and the updated reward will encourage reaching G. STL expresses this as $F_{[ta,tb]} (R \land F_{[tc,td]} G)$ where [ta, tb] is the time range to reach R, and [tc, td] is the relative range in which G should be reached after reaching R. Now, the progression is non-trivial - when reaching R, we need to: (1) ensure the event “reach R” happens in [ta, tb]; (2) store this event time, because the success of “reach G” will depend on the time R was reached. So we need to bookkeep all the “reach R” events as well as their times, and whenever we reach G, we need to iterate over all the past “reach R” events to check if the “reach G” event happens after any of them within the [tc,td] range. The complexity is $O(T^2)$, where T is the trace length. The complexity is $O(T^L)$ for L nested temporal layers in STL. Thus, it is neither trivial nor efficient to extend LTL2Action to STL. Because STL doesn’t have automata-like forms as LTL does, it is hard to efficiently augment the state to be Markovian, as mentioned in our paper (page 2, lines 81-83, right column). Thus we use imitation learning to learn diverse STL, and we believe this distinction highlights the novelty and our contribution.
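The bookkeeping the rebuttal describes can be illustrated with a minimal sketch (the function name and the boolean event traces `trace_R`/`trace_G` are illustrative assumptions, not the paper's implementation; `ta`/`tb`/`tc`/`td` follow the rebuttal's notation):

```python
def nested_eventually(trace_R, trace_G, ta, tb, tc, td):
    """Check F_[ta,tb](R and F_[tc,td] G) on boolean event traces.

    Every 'reach R' time in [ta, tb] must be stored, and each one is
    paired against later 'reach G' times in its relative window
    [tc, td] -- the O(T^2) bookkeeping described in the rebuttal;
    each extra nesting level multiplies the cost by another factor of T.
    """
    T = min(len(trace_R), len(trace_G))
    r_times = [t for t in range(ta, min(tb + 1, T)) if trace_R[t]]
    for tr in r_times:                                   # O(T) stored R events
        for tg in range(tr + tc, min(tr + td + 1, T)):   # O(T) checks per event
            if trace_G[tg]:
                return True
    return False
```

Contrast this with LTL progression, where reaching R simply rewrites the formula to $F(G)$ with no per-event timestamps to retain.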
That said, we value the reviewer’s observation, and we will cite LTL2Action [1] and the progression paper [2] with explanations in the final version. **Realizability**: (same as for Reviewer GKRz) We appreciate the reviewer’s question regarding dynamics consistency. In all cases, our method generates waypoint trajectories, regardless of whether actions are also predicted during training. These waypoints serve as high-level plans, from which actions can be obtained directly via inverse dynamics, or which can be tracked by PD controllers. While we do not enforce dynamics during inference, our focus is on generating STL-compliant high-level paths. Ensuring executability can be done through future extensions such as trajectory refinement or policy warm-starting, as explored in prior works [3], and we will clarify this in the final version. **Q1: $\phi_{and}$ and $\phi_{or}$**: This is a good observation, and the distinction was intentionally designed: $\phi_{and}$ and $\phi_{or}$ are used to enforce “structured”, canonical forms: $\phi_{and}$ requires all its child nodes to be “disjunctions” (OR-type) while $\phi_{or}$ requires its children to be conjunctions (AND-type). The reviewer's proposed formula would allow both cases. For instance, we allow STL formulas like $A \lor (B \land C)$ but do not allow redundant nesting such as $A \land (B \land C)$, which can be flattened into $A \land B \land C$. This structural design eases the NN training by reducing the variability of the data in syntactic forms. **Q2: $\phi_{partial}$**: Sorry for the confusion. In fact, in our implementation, we additionally add “Eventually Reach $O_p$” to ensure the agent will first reach $O_q$ and then move to $O_p$ (as shown in our appendix, Figures 19, 20). In some other cases (Figure 37 in our appendix for the PointMaze env), we don’t explicitly specify to reach K0 and K1, but from the layout the agent has to reach the keys first. We will make these clear in our final paper.
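For reference, the standard quantitative robustness semantics behind these conjunction/disjunction forms can be sketched as small helpers (a generic illustration of STL robustness, not the paper's code; function names are assumptions):

```python
import math

def rob_reach(x, center, radius):
    # Positive iff state x lies inside the circular goal region.
    return radius - math.dist(x, center)

def rob_and(values):
    # Conjunction: the worst child decides (min).
    return min(values)

def rob_or(values):
    # Disjunction: the best child decides (max).
    return max(values)

def rob_eventually(per_step_rob, t1, t2):
    # F_[t1,t2]: the formula must hold at some step in the window.
    return max(per_step_rob[t1:t2 + 1])

# Example: "eventually reach the disk of radius 0.5 around (2, 0)"
traj = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
per_step = [rob_reach(x, (2.0, 0.0), 0.5) for x in traj]
score = rob_eventually(per_step, 0, 2)  # positive: the spec is satisfied
```

A trajectory satisfies the specification exactly when its robustness is positive, which is how a "best trajectory" can be selected among samples.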
**Q3: measuring via an executable policy**: It is possible to define an executable policy for simple cases like Linear and Dubins from the predicted waypoint trajectories, but hard for PointMaze (higher-order ODE for dynamics), AntMaze (higher-order ODE, contact force) or Franka Panda (inverse kinematics to convert workspace trajectories to the configuration space to derive the policy). We can measure satisfaction by executing this calculated policy in some cases, but as mentioned in the **Realizability** section, we focus on planning STL-compliant high-level paths. Ensuring executability can be done through future extensions as explored in [3], and we will clarify it in the final version. **References:** 1. Vaezipoor, Pashootan, et al. "Ltl2action: Generalizing ltl instructions for multi-task rl." ICML 2021. 2. Bacchus, Fahiem, and Froduald Kabanza. "Using temporal logics to express search control knowledge for planning." Artificial intelligence 116.1-2 (2000): 123-191. 3. Ajay, Anurag, et al. "Is conditional generative modeling all you need for decision-making?." ICLR 2023 --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. Overall I am still leaning towards acceptance. The lack of an executable policy remains the biggest weakness, but the authors have suggested concrete extensions to their work to obtain executable policies.
Summary: This paper provides a flow-matching based approach to generate plans for a diverse range of Signal Temporal Logic (STL) specifications. Consequently, the proposed method can be made significantly faster than the previously considered diffusion-based approaches by skipping Ordinary Differential Equation (ODE) steps in the Euler sampling process. A Graph Neural Network (GNN) encodes the STL specification, which is then fed into a conditional flow model that predicts a satisfying trajectory given the current state. Experiments on simple differentiable environments as well as more complicated Maze-like non-differentiable environments show the merits of the approach in speed and trajectory performance. ## update after rebuttal I appreciate the additional experiments (MILP planner) provided by the authors and the clarifications. It would be beneficial for the usability of the approach if the realizability of the generated trajectories was further discussed or evaluated. Regardless, the paper provides an interesting direction for STL-guided planning and I am still in favor of accepting the paper. Claims And Evidence: The approach considered is (to the best of my knowledge) the first generative model for planning over STL specifications. The specifications considered are diverse and the experiments are shown on a variety of benchmarks ranging from simple linear dynamics to more complex non-differentiable settings like AntMaze. Methods And Evaluation Criteria: I am mostly in agreement with the experiment setting and appreciate the thoroughness in finding a diverse range of STL specifications and goals to support the claims of generality. It appears that the robustness score ($\rho$) of the STL specification ($\phi$) is not used at evaluation time and the GNN encoding of $\phi$ alone is used in the conditional flow model. Using the robustness score as part of the guidance (akin to LTLDoG) could in fact make TeLoGraD a stronger technique.
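The speed advantage noted in the summary comes from cheap Euler integration of the learned flow; a minimal sketch, where `velocity_field` is a stand-in for the trained conditional flow model (not the paper's actual interface):

```python
def euler_sample(velocity_field, x0, n_steps):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with n_steps Euler steps.

    Flow matching tolerates coarse integration, so shrinking n_steps
    trades a little accuracy for a large speedup at inference time --
    the mechanism behind the 'fast' variant described in the review.
    """
    x = list(x0)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        v = velocity_field(x, t)
        x = [xi + vi * dt for xi, vi in zip(x, v)]
    return x
```

Diffusion samplers typically need many more denoising steps for comparable quality, which is why skipping ODE steps favors the flow-matching formulation here.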
For the non-differentiable Maze environments, the time to satisfy the specification or the length of the trajectories considered should be helpful to distinguish between the plan qualities of the various methods. Theoretical Claims: No novel theoretical advances were discussed. Experimental Designs Or Analyses: The STL satisfaction rate calculation is a little hard to understand. The highest STL score trajectory out of 1024 sampled outputs is denoted as the final trajectory. The satisfaction rate is quantified as the ratio of final trajectories that satisfy the STL specification out of 256 specs from the validation set. Additionally, it is unclear how the trajectories are determined to be realistic without using the environment dynamics directly. For the differentiable environments (Linear, Dubins, Franka Panda), it would be helpful to clarify how the trajectories are deemed to be achievable by a given controller. Unlike the proposed method (TeloGraD/F), the **Grad** baseline differentiates through the environment’s transition function and yields trajectories that respect the system dynamics. Error bars representing standard deviation or spread are not present in any of the results considered. Supplementary Material: I read the supplementary material to understand the methods for trajectory generation and the example STL trajectories. Relation To Broader Scientific Literature: The study and modern dataset for diverse STL specifications along with a release for the GNN-based approach to encode the objectives is relevant and will be appreciated by the community. Diffusion-based methods using STL as guidance have been considered previously but not in the general planning setting (except LTLDoG to the best of my knowledge). Essential References Not Discussed: Mixed-integer linear programming (MILP)-based methods (Kurtz & Lin 2022) are mentioned but not evaluated even for the environments with simple dynamics like Linear and Dubins.
To this reviewer, an additional baseline for the non-differentiable Maze environments could be to assume simple linear dynamics and generate waypoints using an MILP planner, with the STL spec. objectives and obstacles as Avoid predicates. Since an A* planner is used between the proposed waypoints, the actual trajectories should similarly be realizable (if a path is found). Some relevant work on robotic control for Signal Temporal Logic is missing: [1] Co-learning Planning and Control Policies Constrained by Differentiable Logic Specifications, Xiong et al, ICRA 2024 [2] Reinforcement Learning of Flexible Policies for Symbolic Instructions with Adjustable Mapping Specifications, Hatanaka et al, RA-L [3] Synthesis of Temporally-Robust Policies for Signal Temporal Logic Tasks using Reinforcement Learning, Wang et al, ICRA 2024 Other Strengths And Weaknesses: The ability to generate satisfying plans quickly is especially interesting and begs the question whether slower-inference diffusion models, which could be more powerful and diverse, are necessary for most environments. Other Comments Or Suggestions: - p6L310: referencing the graphs mentioned (Fig. 5) would help clarify if they are different from Fig. 4. - p12L639: The equation referenced is wrong (should be Eq. 7) Questions For Authors: 1. Is the robustness score used at all during evaluation (and not only during generating the demonstration data)? On a related note, how is the gradient-based planner used for demonstration data implemented (for the differentiable environments)? Does it differentiate through the dynamics? How many time steps are the given trajectories for Linear and Dubins? 2. It appears that the **Grad** baseline significantly outperforms **TeloGraD/F** in the Linear environment. Is this because, unlike **Grad**, the proposed method does not use environment dynamics, which can help significantly when the environment is simple (like in Linear)? 3.
How are the generated trajectories verified to be consistent with environment dynamics? 4. Is the same methodology to calculate satisfaction rates carried out for all baselines? (the same 256 specs from the validation set followed by sampling 1024 trajectories to pick 256 final trajectories). If this understanding is incorrect, an explanation would be appreciated. 5. Can there be comparisons provided with MILP based methods for the differentiable environments? If this is not an appropriate baseline, why? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate Reviewer GKRz's thorough review and positive evaluations. We are glad to see that our contributions (first STL generative model; diverse setups; fast inference) have been recognized. We conducted new experiments as requested. Below are our responses to the concerns and questions. **(New experiment) MILP baseline**: We use the official codebase from [1] to run the MILP baseline on the Linear and PointMaze benchmarks. Dubins and AntMaze are omitted due to their nonlinear dynamics and similarity to Linear and PointMaze, respectively. We omit Franka Panda due to its nonlinear constraints from the configuration space. In Linear, we approximate the circular goal/obstacle regions using polygons. We run MILP under various timeouts (in seconds):

|Method | STL satisfaction | Runtime (s)|
|-|-|-|
|MILP (5) | 0.062 | 5.147|
|MILP (10) | 0.203 | 18.682|
|MILP (30) | 0.328 | 27.178|
|MILP (45) | 0.375 | 37.131|
|MILP (60) | 0.375 | 47.508|
|MILP (90) | 0.383 | 68.913|
|**TeLoGraF(Fast)** | **0.477** |**0.174**|

The result shows that TeLoGraF(Fast) outperforms MILP (90) in STL satisfaction, with a 400X speed-up (MILP runtime is reasonable compared to TABLE I in paper [1]). MILP is slow as it generates thousands of binary variables in our long-horizon tasks (horizon T=64). We further ran MILP on “PointMaze” (horizon T=512) and it only achieves 0.02 STL satisfaction with an average runtime of 329.785 seconds, highlighting the efficiency and scalability of our method for long-horizon STL planning. **(New experiment) Evaluation with varied random seeds**: We redo the experiment in Figure 4 with three random seeds to generate error bars - for learning-based methods, we train and evaluate the NN from different seeds; for non-learning methods, we directly evaluate them with different seeds. Due to time constraints, current results are just for the first three benchmarks, and we will complete this figure by the end of the rebuttal phase.
The updated Figure 4 is at https://postimg.cc/XX4LtW62 - the statistical trends remain consistent with those reported in our submission. **Q1: implementation details**: For each STL spec in the validation set, the robustness score is only used in evaluation to select the best quality trajectory (out of 1024 sampled trajectories). We have 256 STL specs in the validation set. For the gradient-based planner, we use “backprop-through-time” (BPTT) [2] to optimize trajectories; thus, it differentiates through the dynamics. The timesteps are 64 for Linear, Dubins and Franka Panda, and 512 for PointMaze and AntMaze. **Q2: Grad outperforms TeLoGraD/F**: This is an interesting finding, and we agree with the reviewer’s insight. Another factor is that “Linear” has simple, fully actuated dynamics, making it well-suited for gradient-based optimization. In contrast, “Dubins” involves second-order dynamics (acceleration -> speed -> position) with limited control, making them more “dragged” and underactuated and hindering gradient-based planners from efficiently finding solutions. **Q3: Realistic trajectories**: We appreciate the reviewer’s question regarding dynamics consistency. In all cases, our method generates waypoint trajectories, regardless of whether actions are also predicted during training. These waypoints serve as high-level plans, from which actions can be obtained directly via inverse dynamics, or which can be tracked by PD controllers. While we do not enforce dynamics during inference, our focus is on generating STL-compliant high-level paths. Ensuring executability can be done through future extensions such as trajectory refinement or policy warm-starting, as explored in prior works [3], and we will clarify this in the final version. **Q4: STL metrics**: The STL satisfaction rate is evaluated over 256 STL specs on the validation set (unless otherwise specified, like in Figure 5).
For each STL spec, we use our method or other baselines to generate 1024 trajectories and pick the one with the highest STL score as the final trajectory. So these 256 trajectories (one for each STL spec) will be marked as 1 (satisfaction) or 0 (STL violation). And the STL satisfaction rate is computed across these binary outcomes. **Q5: MILP baseline**: Thanks for the suggestion; we have conducted the experiment showing “TeLoGraF (Fast)”’s advantage over MILP for long-horizon STL tasks. **Missing STL references**: Thanks. We will cite those accordingly. **Other suggestions**: Thanks for pointing out the potential improvement in P6, L310, and the typo in P12, L639. We will modify the paper in the final version. **References:** 1. Kurtz, Vincent, and Hai Lin. "Mixed-integer programming for signal temporal logic with fewer binary variables." IEEE Control Systems Letters 6 (2022): 2635-2640. 2. Leung, Karen, and Marco Pavone. "Semi-supervised trajectory-feedback controller synthesis for signal temporal logic specifications." ACC 2022. 3. Ajay, Anurag, et al. "Is conditional generative modeling all you need for decision-making?." ICLR 2023.
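The best-of-N evaluation protocol described in Q4 of this rebuttal can be sketched in a few lines (all arguments are illustrative stand-ins for the actual samplers and robustness evaluator):

```python
def satisfaction_rate(specs, sample_trajectories, robustness, n_samples=1024):
    """For each spec, sample candidate trajectories, keep the highest
    STL robustness among them, and count the spec as satisfied iff that
    robustness is positive. Returns the fraction of satisfied specs."""
    hits = 0
    for spec in specs:
        candidates = sample_trajectories(spec, n_samples)
        best = max(robustness(spec, tau) for tau in candidates)
        hits += 1 if best > 0 else 0
    return hits / len(specs)
```

Under this protocol, every method is scored the same way: 256 validation specs, 1024 samples each, one binary outcome per spec.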
Summary: This paper proposes to use learning-based methods to generate planning trajectories for STL specifications. The authors introduce some STL templates and then present a GNN to encode the STLs into feature representations. These features are then fed into flow matching as the conditioning factor for end-to-end learning of valid trajectories. Claims And Evidence: Partially. The authors claim that most previous works do not consider diverse STL or a neural-network encoder, while there is existing work that can achieve this for a similar temporal logic -- linear temporal logic (LTL). Although their semantics are not entirely the same, many template and encoder designs can be shared and re-utilized with minor modifications. Vaezipoor P, Li AC, Icarte RA, Mcilraith SA. Ltl2action: Generalizing ltl instructions for multi-task rl. In International Conference on Machine Learning 2021 Jul 1 (pp. 10497-10508). PMLR. Methods And Evaluation Criteria: The proposed methodology is reasonable. The proposed dataset can be used to test the proposed algorithm. However, it is unclear how difficult or laborious it would be to apply to a new environment in real applications. Theoretical Claims: No theoretical proofs. Experimental Designs Or Analyses: Experiments are reasonable. Supplementary Material: I reviewed the additional results in the supplementary material. Relation To Broader Scientific Literature: Will be interesting to the venue and researchers focusing on STL and neural-symbolic learning. Essential References Not Discussed: Please see the Claims And Evidence part for the previous work LTL2Action. Other Strengths And Weaknesses: The contribution of STL templates and the GNN encoder is incremental compared to LTL2Action, as extending these from LTL to STL does not face significant challenges. Therefore, the technical contribution is limited. The proposed method requires paired demonstrations for training the conditional generative model.
If I am understanding correctly, this would require first collecting a set of demonstrations for a new application environment before the method can be trained. In the real physical world this might still be very laborious. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful reviews. Below are our responses to the concerns. **Extend LTL work to STL**: We argue that it is non-trivial to extend the existing LTL2Action [1] to STL, particularly for the key technique “progression” [2] used to update the task spec based on assignments. Consider an example (Sec 3.3 in LTL2Action): “First reach R, then reach G”. In LTL, this can be written as $F(R \land F(G))$. Once it reaches R, following LTL2Action, the LTL is updated to $F(G)$, and the updated reward will encourage reaching G. STL expresses this as $F_{[ta,tb]} (R \land F_{[tc,td]} G)$ where [ta, tb] is the time range to reach R, and [tc, td] is the relative range in which G should be reached after reaching R. Now, the progression is non-trivial - when reaching R, we need to: (1) ensure the event “reach R” happens in [ta, tb]; (2) store this event time, because the success of “reach G” will depend on the time R was reached. So we need to bookkeep all the “reach R” events as well as their times, and whenever we reach G, we need to iterate over all the past “reach R” events to check if the “reach G” event happens after any of them within the [tc,td] range. The complexity is $O(T^2)$, where T is the trace length. The complexity is $O(T^L)$ for L nested temporal layers in STL. Thus, it is neither trivial nor efficient to extend LTL2Action to STL. Because STL doesn’t have automata-like forms as LTL does, it is hard to efficiently augment the state to be Markovian, as mentioned in our paper (page 2, lines 81-83, right column). Thus we use imitation learning to learn diverse STL, and we believe this distinction highlights the novelty and our contribution. That said, we value the reviewer’s observation, and we will cite LTL2Action [1] and the progression paper [2] with explanations in the final version. **Data collection labor in the real world**: We understand the reviewer’s concern regarding paired expert data.
However, we note that learning from demonstrations for robots is a widely adopted paradigm in both simulation and real-world settings. In cases where no efficient solver exists, it is common to collect demonstrations from scripted policies, human operators, or off-the-shelf planners. STL belongs to this category, as mentioned in our paper (page 1, L25-L36, right column). We view our work as a first step toward enabling STL-conditioned policy learning from demonstrations—an area that is currently underexplored and, to our knowledge, lacks dedicated prior work. Although we consider paired data in this work, in the future, demonstrations do not need to be paired with STL. Recent works have shown STL can be inferred from offline demonstrations [3,4,5,6] or translated from natural language [7,8,9]. With these techniques and the increasing availability of open and modular data collection pipelines (e.g., UMI [10]), we argue that the reliance on demonstrations should not be viewed as a major limitation. **Contribution:** We respectfully disagree that our contribution is incremental. As acknowledged by the other reviewers, our work introduces a novel conditional flow model for general STL planning with solid experiments. Reviewer GKRz appreciated the diverse STL coverage and fast generation of satisfying plans, and Reviewer 8aYy highlighted that our model effectively captures STL semantics with efficient inference. During the rebuttal, we also conducted extra experiments, including evaluations across varied random seeds and a comparison with an MILP baseline. These perspectives support the relevance of our contributions and their potential impact on the STL planning community. We hope this addresses the concerns, and we welcome further discussions. **References:** 1. Vaezipoor, Pashootan, et al. "Ltl2action: Generalizing ltl instructions for multi-task rl." International Conference on Machine Learning. PMLR, 2021 2. Bacchus, Fahiem, and Froduald Kabanza.
"Using temporal logics to express search control knowledge for planning." Artificial intelligence 116.1-2 (2000): 123-191 3. Liu, Wenliang, et al. "Interpretable generative adversarial imitation learning." arXiv 2024 4. Leung, Karen, and Marco Pavone. "Semi-supervised trajectory-feedback controller synthesis for signal temporal logic specifications." ACC 2022 5. Meng, Yue, and Chuchu Fan. "Diverse controllable diffusion policy with signal temporal logic." RA-L 2024 6. Vazquez-Chanlatte, Marcell, et al. "Learning task specifications from demonstrations." NeurIPS 2018 7. Shah, Ankit, et al. "Bayesian inference of temporal task specifications from demonstrations." NeurIPS 2018 8. Cosler, Matthias, et al. "nl2spec: Interactively translating unstructured natural language to temporal logics with large language models." CAV 2023 9. He, Jie, et al. "Deepstl: from english requirements to signal temporal logic." International Conference on Software Engineering. 2022 10. Chi, Cheng, et al. "Universal manipulation interface: In-the-wild robot teaching without in-the-wild robots." RSS 2024
WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs
Accept (poster)
Summary: This paper proposes a benchmark called WikiBigEdit for evaluating knowledge editing. The benchmark is constructed from Wikidata edits and includes 500k question-answer pairs. They evaluate a number of existing methods for knowledge editing and find that current techniques have limitations, whereas continual fine-tuning and retrieval-augmented methods perform better. They suggest one of their contributions is a fully automated extraction pipeline which continuously extracts suitable factual edits from Wikidata. The resulting WikiBigEdit spans eight time-intervals over five months (February - July 2024). Section 3 discusses the design of WikiBigEdit. Their construction is divided into 7 steps:
1. Periodic Snapshot Acquisition ($S_{changed}, S_{unchanged}$): downloading subject-relation-object triplets from Wikidata for the most recent snapshots. Triplets are divided into changed and unchanged sets.
2. Initial Filtering: this step excludes triplets based on simple rules such as the length of the subject or object.
3. Generation of Locality Probes ($S_{locality}$): finds pairs of triplets in the changed and unchanged sets where the subject-relation is similar but the object is distinct.
4. Inclusion of multi-hop reasoning tuples ($S_{mhop}$): extracts pairs of factual triplets that are linked to each other by a shared subject/object. These pairs can be used to create multi-hop questions.
5. Generation of QA edit pairs: GPT-3.5 is used for the changed and locality sets, and GPT-4o-mini for the set of mhop facts, to create question-answer pairs. For mhop, the questions omit the middle shared entity.
6. Generation of personalized and rephrased question-answer edit pairs: to further evaluate reasoning generalization, GPT-4o-mini is used to create question-answer variants by either rephrasing or mimicking a persona that rewrites the QA in various styles.
7.
Final filtering stage: filter QAs to make sure the question contains the subject/relation but not the answer/object, and the answers include the object. The paper provides various analyses of the benchmark. For example, Figure 2.a shows that the generated QAs focus on recent events, although questions about events as old as the 1900s also exist. Figure 2.b shows that the performance of five LLMs drops after their cut-off date on the proposed benchmark. Section 4 evaluates the performance of various knowledge editing methods. Figure 3 summarizes the performance of these methods on subsets of the evaluation. It shows that while RAG outperforms all methods, continual fine-tuning methods based on LoRA perform better than other knowledge editing methods. The result about RAG is expected, as retrieving the correct answer from Wikipedia/Wikidata should be easy for RAG methods, while multi-hop reasoning does not see significant improvement. Figure 4 also shows that the performance of knowledge editing methods decays as the number of edits increases, which is a limitation of such methods over long horizons. ## Update after rebuttal I thank the authors for their response. I encourage the authors to incorporate their response into a revision, particularly the results of the multi-hop evaluations and the connection to the DiRe condition, as well as the relation to other methods and benchmarks. I maintain a positive rating. Claims And Evidence: The paper introduces a new benchmark to evaluate knowledge editing capabilities. The paper shows the importance of the problem, discusses the construction in detail, and evaluates existing methods and provides interesting insights. Methods And Evaluation Criteria: The selection of methods covers a wide spectrum of methods. The authors may consider including evaluations of more recent RAG methods that potentially utilize an LLM for reasoning on facts, which may improve on the multi-hop reasoning evaluations proposed in the paper. - Gutiérrez, Bernal Jiménez, et al.
"Hipporag: Neurobiologically inspired long-term memory for large language models." The Thirty-eighth Annual Conference on Neural Information Processing Systems. 2024. Theoretical Claims: No theoretical claims made. Experimental Designs Or Analyses: The evaluations and model comparisons are well designed. The authors may consider testing their evaluation against the DiRe condition. See the paper below: - Trivedi, Harsh, et al. "Is multihop QA in DiRe condition? Measuring and reducing disconnected reasoning." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) Supplementary Material: I skimmed through Appendix A for examples of prompts and persona sets for better understanding. Relation To Broader Scientific Literature: The question of how to keep the knowledge of LLMs up to date is an important research direction that requires stronger benchmarks. This work is a helpful evaluation for better understanding the limitations of LLMs and designing improved methods to keep their knowledge up to date. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - The paper is well-written, the benchmark is well-designed and provides insightful findings. Weaknesses - Please see above for potential comparisons and related works. Specifically, performing a variant of the DiRe test may be helpful in understanding whether the multi-hop reasoning questions could be answered with a shortcut. It may be hard to apply the test directly as the QAs are written by GPT models. One potential way is to manually remove the mention of the subject from the question and evaluate if the question would still be answerable without it. Some questions about general knowledge may have this property.
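The ablation the reviewer proposes could be automated along these lines; `qa_model`, the `[MASKED]` placeholder, and the exact-match check are hypothetical assumptions for illustration, not part of the benchmark:

```python
def shortcut_probe(question, answer, subject, qa_model):
    """DiRe-style check: blank out the subject mention and see whether
    the question is still answerable. If the masked question is still
    answered correctly, the QA pair admits a shortcut and does not
    truly test knowledge about the subject."""
    ablated = question.replace(subject, "[MASKED]")
    prediction = qa_model(ablated)
    return prediction.strip().lower() == answer.strip().lower()
```

QA pairs flagged by such a probe (answerable even with the subject masked) would be candidates for removal or rewriting before they are used to measure multi-hop editing success.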
Other Comments Or Suggestions: The authors may consider discussing the relation to the recent concurrent work TiC-LM, and particularly TiC-Wiki, which proposes a related automatically generated continual benchmark based on Wikipedia and Wikidata spanning 10 years. - Li, Jeffrey, et al. "Tic-lm: A multi-year benchmark for continual pretraining of language models." Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and thoughtful feedback. We’re pleased they found the paper well-written, the benchmark well-designed, and the findings insightful. We appreciate the recognition of *WikiBigEdit* as a valuable contribution for evaluating real-world limitations of knowledge editing, as well as the clarity of our benchmark construction. We’re also grateful for the positive assessment of our method coverage and the relevance of our experimental insights, particularly regarding the challenges current editing techniques face over long update horizons. We address the reviewer’s suggestions and questions below. **Advanced RAG Approaches** We thank the reviewer for the helpful suggestion of including more advanced RAG approaches and for highlighting HippoRAG [1]. While HippoRAG introduces a compelling long-term memory mechanism via structured memory graphs, it assumes access to full-text documents—unlike our setting, which focuses on factual QA pairs derived from structured triplets. We agree that advanced retrieval-augmented approaches, including reasoning-augmented RAG or RAG agents, could enhance multi-hop reasoning. However, our goal is to evaluate the limits of lifelong knowledge editing in a controlled setting. We, therefore, use a simple, transparent RAG baseline to compare against editing and continual finetuning. Notably, even this baseline outperforms editing methods across most axes, though at higher inference cost. RAG’s difficulty with multi-hop reasoning is orthogonal to the challenge of integrating new knowledge over time — the focus of our work. More advanced RAG variants may improve multi-hop QA but likely require even more test-time compute. We will add this discussion, along with a reference to HippoRAG, to Section 4.3 of the paper. **Shortcuts in Mhop Evaluation** We appreciate the reviewer’s suggestion to consider the DiRe condition [2]. 
While we do not explicitly evaluate it in its original form, we conduct a related analysis in the Supplementary Material (Figure 9, top row). Specifically, we compare model performance on individual first-hop and second-hop questions to its performance on the combined multi-hop question. This serves to test whether the model relies on shortcut reasoning, i.e., answering the multi-hop question without resolving its components, which is central to the DiRe condition. Our results show that multi-hop accuracy depends heavily on correctly answering both hops. When a model fails on either, it rarely succeeds on the full question. However, a small fraction of questions (avg. 1.75% across models) are answered correctly despite failure on the first hop. These cases often involve surface-level cues, e.g.: * “What is the language of work or name associated with the spouse of Johann Caspar Richter?” → German * “In which country is the organization that Yamato Auto Works is a member of located?” → Japan These align with shortcut-driven behavior as discussed in the DiRe framework. We will clarify this connection in Section 3.1 of the revision. **Reference of Concurrent Work TiC-Wiki** We thank the reviewer for pointing us to the concurrent work TiC-Wiki [3], which shares similarities in spirit. We will include a discussion of it in the related works section. While TiC-Wiki is a valuable benchmark for continual pretraining, it covers Wikipedia/Wikidata revisions from 2008–2022 — a period largely included in the pretraining data of modern LLMs. This makes it less suitable for evaluating updates beyond the knowledge cutoff, which is the central focus of our work. In contrast, WikiBigEdit is constructed from post-cutoff updates, allowing direct evaluation of newly emerging knowledge. We view the two efforts as complementary and appreciate the chance to clarify this distinction. [1] Gutiérrez, B. J., et al. 
HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models, 2024. [2] Trivedi, Harsh, et al. Is multihop QA in DiRe condition? Measuring and Reducing Disconnected Reasoning. 2020. [3] Li, Jeffrey, et al. TiC-LM: A Multi-Year Benchmark for Continual Pretraining of Language Models, 2024. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I encourage the authors to incorporate their response into a revision, particularly the results of the multi-hop evaluations and the connection to the DiRe condition, as well as the relation to other methods and benchmarks. I maintain a positive rating.
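The hop-decomposition analysis in the rebuttal above, which compares per-hop correctness to full multi-hop correctness and reports an average 1.75% of shortcut cases, reduces to simple boolean aggregates over per-question flags. A minimal sketch with toy data:

```python
def shortcut_rate(first_hop_ok, multihop_ok):
    """Fraction of questions answered correctly on the full multi-hop
    query despite failure on the first hop (shortcut-style behavior)."""
    shortcuts = sum(1 for f, m in zip(first_hop_ok, multihop_ok) if m and not f)
    return shortcuts / len(multihop_ok)

# Toy correctness flags for five multi-hop questions.
first_hop = [True, True, False, False, True]
multi_hop = [True, False, True, False, False]
print(shortcut_rate(first_hop, multi_hop))  # 0.2: one shortcut case (question 3)
```

The same aggregate, computed over the benchmark's per-question results, yields the 1.75% figure quoted in the rebuttal.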
Summary: Enabling LLMs to retain up-to-date knowledge is of significant practical importance. To avoid costly full-parameter retraining, recent research has proposed various lifelong knowledge editing methods to inject new knowledge into models at minimal expense. However, these knowledge editing approaches have two notable limitations: 1. The scale of edited knowledge is relatively small; 2. The complexity of the knowledge is limited. These constraints make existing knowledge editing methods difficult to apply directly in real-world scenarios. To better simulate real-world conditions and investigate the limitations of knowledge editing techniques, the authors designed an automated data extraction pipeline to acquire the latest world knowledge from Wikipedia. In this pipeline, they employed LLMs to rephrase the data, generating five distinct datasets collectively named WikiBigEdit. These five datasets (four of which serve as test sets) evaluate model knowledge editing effectiveness from four perspectives: 1. Ability to answer rephrased questions; 2. Ability to answer questions written in specific personal styles; 3. Ability to answer multi-hop questions; 4. Retention of unmodified knowledge. The authors used WikiBigEdit to test the performance of mainstream knowledge editing methods against general approaches in knowledge updating tasks. Experimental results demonstrate that models utilizing RAG methods achieved the highest accuracy on the test sets, while continued fine-tuning and model merging yielded second-best results, whereas knowledge editing methods performed unsatisfactorily. The findings indicate that existing knowledge editing techniques have significant limitations when applied to large-scale knowledge editing. Claims And Evidence: The authors designed comprehensive experiments to substantiate their arguments. 
Methods And Evaluation Criteria: Regarding data, the authors employed a convincing approach to construct a large-scale dataset for testing whether models can retain the latest knowledge from Wikipedia. The test sets they developed comprehensively evaluate knowledge editing effectiveness from four distinct perspectives. In terms of model selection, the authors tested several mainstream knowledge editing methods (including ROME, R-ROME, MEMIT, and WISE) as well as general-purpose RAG methods, continued fine-tuning approaches, and model merging techniques. Theoretical Claims: No theoretical contribution was presented by the authors. Experimental Designs Or Analyses: In the experimental section, the authors evaluated the effectiveness of various knowledge-updating methods across different LLMs (such as LLaMA and others). For each method, the authors provided detailed analyses of their performance on different datasets and presented several significant conclusions. Supplementary Material: No supplementary materials are provided. Relation To Broader Scientific Literature: This paper summarizes and emphasizes findings from existing literature, namely: as the number of knowledge entries to be edited increases, the ability of models trained with current knowledge editing techniques to memorize knowledge gradually deteriorates. Furthermore, through comparative experiments, this study reveals that knowledge editing techniques cannot outperform general methods (such as RAG, fine-tuning, etc.) and do not offer significant cost advantages. Essential References Not Discussed: No missing key references were identified. Other Strengths And Weaknesses: Strengths: - The authors constructed a large-scale and comprehensive knowledge editing dataset that effectively measures model performance on knowledge updating tasks. Other Comments Or Suggestions: The main focus of this paper is to analyze the limitations of existing knowledge editing methods. 
However, in the experimental section, the authors merely replicated similar experiments from previous literature (only changing the dataset) and obtained similar results. It would be desirable for the authors to conduct a more in-depth analysis of the specific reasons why existing knowledge editing techniques underperform compared to general methods. Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive review. We are pleased that the reviewer found the motivation and contributions of our work to be of practical importance — particularly our effort to investigate the limitations of existing knowledge editing techniques under realistic conditions. We appreciate the recognition of our automated pipeline for extracting real-world knowledge updates, as well as the comprehensive evaluation framework of *WikiBigEdit*, which tests factual retention, generalization, and reasoning across multiple dimensions. We are also grateful for the acknowledgment of our systematic comparison of knowledge editing methods with general update approaches such as RAG and continual finetuning. We address the reviewer’s suggestion for deeper analysis and provide further clarifications below. **In-Depth Analysis of Knowledge Editing Failure** While our study builds on the foundations of previous work, our experiments go beyond a simple dataset substitution. Specifically, we design a setting that reflects real-world, large-scale lifelong knowledge editing, a departure from the typically synthetic and small-scale settings used in prior studies (see also related works). Our benchmark introduces several important new dimensions: (1) the sequential nature of edits over time, (2) the use of real-world factual updates derived from Wikidata, and (3) a comprehensive evaluation protocol that includes not just edit accuracy but also generalization, locality, and multi-hop reasoning. Moreover, our analysis compares knowledge editing methods not only to one another but also to broader model update strategies, such as retrieval-augmented generation (RAG) and continual finetuning with adapter merging. This allows us to contextualize the limitations of editing methods in a practical deployment landscape - a comparison absent in prior work. 
We agree that a deeper investigation into the specific mechanisms of failure in knowledge editing is an important direction. While our primary focus is on empirically establishing limitations at scale, we highlight recent work that examines these mechanisms in detail (see Section 2). For example, Hsueh et al. [1], Gupta et al. [2], and Yang et al. [3] demonstrate that even small-scale editing can induce instability or forgetting in LLMs. Gupta et al. [4] further attribute performance collapse during sequential edits to the lack of regularization, which leads to overfitting on individual updates. These insights complement our findings and underscore the challenges in designing robust editing techniques that scale. We will clarify this distinction and discussion in Section 2 of the revision. [1] Hsueh et al., Editing the Mind of Giants, 2024. [2] Gupta et al., Model Editing at Scale Leads to Gradual and Catastrophic Forgetting, 2024. [3] Yang et al., The Butterfly Effect of Model Editing, 2024. [4] Gupta et al., Lifelong Sequential Knowledge Editing without Model Degradation, 2025. --- Rebuttal Comment 1.1: Comment: Thank you for your response, I will keep my rating unchanged.
Summary: This paper proposes a large-scale knowledge editing benchmark *WikiBigEdit* based on Wikidata, which contains over 500k question-answer pairs. At the same time, this work constructs a pipeline that can automatically update the data to track changes in real-world data, while mitigating the problem that pre-trained large language models may have already memorized information from before their training cutoff. After construction, the paper tests the effectiveness of several mainstream methods on *WikiBigEdit*, revealing various problems of previous methods in terms of locality, multi-hop reasoning, and personalized questioning (generalizability).

## Update after rebuttal

I'll keep my assessment unchanged. The authors have answered my questions clearly. Claims And Evidence: Yes. The text provides a detailed description of the construction process of the knowledge editing benchmark *WikiBigEdit*, and provides a detailed evaluation of representative methods in the field on different open-source models, followed by analysis. The experimental results are very comprehensive. Methods And Evaluation Criteria: The *WikiBigEdit* benchmark proposed in this paper is meaningful for knowledge editing tasks. On one hand, this is the first dataset of such a large scale constructed from real data. On the other hand, the data update pipeline designed by the authors keeps the benchmark up to date and avoids the overfitting or data leakage problems of newly released LLMs. Theoretical Claims: This paper presents a new benchmark and does not involve theoretical proof. All the formulas in the text merely describe the update process of model parameters and serve only as narrative, not as proof. Experimental Designs Or Analyses: Yes, I have checked all the experiments, mainly including different models (7B level) under various knowledge editing methods on the *WikiBigEdit* benchmark proposed in this paper. 
Regarding the experiments, I have several questions I would like to ask the authors: 1. As described in the paper, in the Multi-Hop Reasoning test, the authors only considered connecting two triplets end-to-end (which is actually a two-step question). But is this too short for normal multi-hop question answering? General multi-hop question answering is usually based on a subgraph of a knowledge base or a longer reasoning chain. 2. Can some other results based on closed-source models (considering GPT-3.5 and 4o-mini used in the experiments, the cost is actually not high) or larger open-source models (like the Qwen series) be supplemented in the experiments? Supplementary Material: Yes. I've checked the prompts for constructing data with LLMs, as well as the detailed results and introductions of various experiments, which are included in the appendix. At the same time, the authors provided the data and corresponding code for data construction (in an anonymous form) in the text. Relation To Broader Scientific Literature: This paper proposes a large-scale dataset *WikiBigEdit* for the knowledge editing task. This dataset addresses the insufficient amount of data in previous knowledge editing tasks, as well as the issue of overfitting in the pre-training process after the emergence of large-scale language models. More importantly, this benchmark proposes a pipeline for continuously updating the dataset from real-world data, which very effectively addresses the problem of models overfitting to all knowledge before a specific time point. Essential References Not Discussed: None that I am aware of. This is the first knowledge editing dataset to reach this scale that can be continuously updated. At the same time, as a benchmark, this work compares several mainstream knowledge editing methods. Other Strengths And Weaknesses: As described in "Relation to Broader Scientific Literature" and "Experimental Designs or Analyses". 
Other Comments Or Suggestions: None Questions For Authors: 1. The previous common sense knowledge graph (such as ConceptNet) is a relatively dirty knowledge base, where the same head entity and relationship can correspond to multiple tail entities. In the narrative of this paper, such relationships are directly removed. I would like to know what proportion is removed? Can there be some analysis experiments to analyze the quality of the extracted triples? (Because in my previous understanding, the dirty relationships or meaningless triples in such common sense graphs account for the majority.) 2. The construction of problems in the text is based on GPT3.5 or GPT-4o-mini. Have you tried other models for problem construction? Would the different distributions of different models lead to an impact on data quality? Anyway, this is a good piece of work, and answering these questions will make my understanding more complete. Code Of Conduct: Affirmed. Overall Recommendation: 4
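The end-to-end chaining of two triplets raised in the reviewer's first experimental question can be made concrete with a small helper; a minimal sketch using an example pair of triplets from the authors' rebuttal (the question template is illustrative, since the benchmark phrases questions with an LLM rather than a fixed template):

```python
def compose_two_hop(t1, t2):
    """Chain (s1, r1, o1) and (o1, r2, o2) into a two-hop question/answer pair."""
    s1, r1, o1 = t1
    o1b, r2, o2 = t2
    assert o1 == o1b, "triplets must chain end-to-end"
    return f"What is the {r2} of the {r1} of {s1}?", o2

q, a = compose_two_hop(("2004 VS75", "discoverer", "Marc Buie"),
                       ("Marc Buie", "gender", "male"))
print(q)  # What is the gender of the discoverer of 2004 VS75?
print(a)  # male
```

Longer chains would follow the same pattern, folding additional triplets whose subject matches the previous object.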
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and detailed evaluation. We appreciate the recognition of our key contributions: the construction of *WikiBigEdit* as the first large-scale, automatically extensible benchmark for real-world knowledge editing; the automated pipeline enabling continual updates; and the comprehensive evaluation across multiple LLMs and editing techniques. We are also grateful for the acknowledgment of our efforts to mitigate data leakage and pretraining overlap, as well as the clarity of our supplementary materials and code. We address the reviewer’s insightful questions below. **Length of Multi-hop Reasoning Chains** We appreciate the reviewer’s observation regarding the scope of multi-hop reasoning. While multi-hop can involve longer chains or subgraphs in general QA settings, in the context of knowledge editing, it is commonly defined as reasoning over two connected facts, forming a two-hop chain [1–3]. This is consistent with prior work, such as HotpotQA [4] and EX-FEVER [5], where two-hop questions are standard for evaluating a model’s ability to integrate and reason over multiple facts. Our benchmark adopts this convention to align with established protocols. That said, we agree that exploring longer reasoning chains is a valuable direction for future work. **Triplet Quality and Filtering** We thank the reviewer for their question on triple quality and filtering. To ensure a high-quality benchmark, our automated pipeline applies multiple filtering and QA steps (see Section 3.2). Below are average statistics from the first four timesteps:

* Initial Input: 100% of new or changed factual triplets from consecutive Wikidata snapshots.
* Initial Filtering: Remove cyclic, non-Roman, or overly long entities (~89% remain).
* Filter Unwanted Relations: Keep only WikibaseItem relations (entity-to-entity), excluding media, URLs, etc., and a small list of low-value relations (e.g., “use”, “is a list of”), retaining ~39%.
* Non-Deterministic Triplets: Remove (subject, relation) pairs with multiple objects (~24% remain).
* Final Filtering: Further filtered to remove spurious entries (final ~24% retained).

We also conducted manual inspections, sampling triplets from various batches to confirm that relations and entity pairings are meaningful. These steps help ensure that only high-quality, interpretable factual triples are included in WikiBigEdit. We will extend Section 3.2 to report these statistics in the final version. **Impact of LLM Choice on QA Generation** We thank the reviewer for raising the important question of how the choice of LLM affects QA pair generation. In preliminary experiments, we evaluated the sensitivity of our generation pipeline to different GPT models. For the Update and Locality sets, model choice had a negligible impact. These questions are directly derived from subject–relation–object triplets, and even GPT-3.5-turbo consistently produced accurate and well-formed outputs. For example, given the triplet (SS Heliopolis, manufacturer, Fairfield Shipbuilding and Engineering Company), GPT-3.5-turbo generated “Which company was the main manufacturer of SS Heliopolis?”, GPT-4o-mini produced “Who was the manufacturer of the SS Heliopolis?”, and GPT-4o generated “Which company was the manufacturer of the SS Heliopolis?” — all of which are stylistically comparable and semantically correct. The Rephrase set showed similarly consistent quality across models, as paraphrasing proved to be a relatively simple task for most LLMs. In contrast, the Multi-hop and Persona sets were more sensitive to model choice, as these tasks require compositional reasoning or stylistic transformation. For instance, in the multi-hop case combining (2004 VS75, discoverer, Marc Buie) and (Marc Buie, gender, male), GPT-3.5-turbo generated the incomplete question “Who is the discoverer or inventor of (525257) 2004 VS75?”, failing to incorporate both facts. 
Meanwhile, GPT-4o-mini and GPT-4o correctly generated queries such as “What is the gender of the discoverer of 2004 VS75?” Based on these findings, we selected GPT-4o-mini for the Multi-hop and Persona sets, as it offers a strong balance between generation quality and efficiency. We observed no significant difference between GPT-4o and GPT-4o-mini in these settings. We hope this clarifies the robustness of our generation pipeline and our rationale for model selection. We will include an ablation on generation models in the Supplementary Material. [1] Cohen et al., Evaluating the Ripple Effects of Knowledge Editing in Language Models, 2023. [2] Zhong et al., MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions, 2023. [3] Zhong et al., MQuAKE-Remastered: Multi-Hop Knowledge Editing Can Only Be Advanced with Reliable Evaluations, 2025. [4] Yang et al., HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering, 2018. [5] Ma et al., EX-FEVER: A Dataset for Multihop Explainable Fact Verification, 2024.
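The filtering cascade described in this rebuttal (entity sanity checks, relation whitelisting, and removal of non-deterministic (subject, relation) pairs) can be sketched as a small pipeline. The predicates and thresholds below are illustrative stand-ins, not the authors' exact rules:

```python
import re
from collections import defaultdict

MAX_ENTITY_LEN = 60                              # illustrative length cutoff
LOW_VALUE_RELATIONS = {"use", "is a list of"}    # examples named in the rebuttal

def basic_ok(triplet):
    s, _, o = triplet
    if s == o:                                   # cyclic triplet
        return False
    if any(len(x) > MAX_ENTITY_LEN for x in (s, o)):
        return False
    # Keep only Roman-script (here approximated as printable ASCII) entity names.
    return all(re.fullmatch(r"[\x20-\x7E]+", x) for x in (s, o))

def filter_triplets(triplets, entity_relations):
    """entity_relations: whitelist of entity-to-entity relations
    (the WikibaseItem-style filter)."""
    kept = [t for t in triplets
            if basic_ok(t)
            and t[1] in entity_relations
            and t[1] not in LOW_VALUE_RELATIONS]
    # Drop non-deterministic (subject, relation) pairs with several objects.
    objects = defaultdict(set)
    for s, r, o in kept:
        objects[(s, r)].add(o)
    return [(s, r, o) for s, r, o in kept if len(objects[(s, r)]) == 1]

triplets = [
    ("SS Heliopolis", "manufacturer", "Fairfield Shipbuilding and Engineering Company"),
    ("A", "manufacturer", "B"),
    ("A", "manufacturer", "C"),     # non-deterministic with the line above
    ("F", "use", "G"),              # low-value relation
    ("H", "spouse", "H"),           # cyclic
]
print(filter_triplets(triplets, {"manufacturer", "spouse", "use"}))
```

Only the SS Heliopolis triplet survives this toy run; the others are removed by the non-determinism, low-value-relation, and cyclicity checks respectively.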
On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds
Reject
Summary: This paper studies the effect of explicit regularization in adversarial training using sharp high-dimensional asymptotics. Its main qualitative result is that regularizing using the dual of the norm with respect to which the perturbation budget is defined yields significant performance gains. ## Update after rebuttal After the rebuttal, my opinion remains positive. Claims And Evidence: All claims are well-supported. Methods And Evaluation Criteria: The methods are appropriate. Theoretical Claims: The main theorems are extensions of the results of Loureiro et al. 2021, and their proofs consist in no small part of quoting appropriate results from that earlier work. Based on my knowledge of Loureiro et al. 2021, I thus believe them to be correct. Experimental Designs Or Analyses: The experiments are adequate. I do wish the authors mentioned the choice of covariance structure used in Figure 3 in the main text rather than completely deferring that detail to Appendix C. Supplementary Material: I have not reviewed the code included in the Supplementary Material. Relation To Broader Scientific Literature: This paper addresses two topics of great interest to the ICML audience at large: sharp asymptotic characterizations of high-dimensional regression and adversarial training. It is well-situated within the literature, and (as I discuss below) I think the authors do a good job of referencing relevant prior work. Essential References Not Discussed: The authors provide a satisfactory review of prior work. Other Strengths And Weaknesses: - All experiments in the submitted manuscript use Gaussian data. As the authors do not have a proof of universality for the problem at hand, I think an experiment showing that the main conceptual result (i.e., regularizing with the dual norm improves performance) transfers to a real dataset would enhance the paper. However, I leave this to the authors' discretion, as I think the paper could be accepted in its submitted form. 
- I wish the authors were able to extract more insight directly from Theorem 3.15. I fully appreciate that these self-consistent equations are not easily analyzable, but Remark 3.17 doesn't really say anything specific to this result. It merely restates the standard idea that the order parameters are sufficient statistics, and mentions the proof strategy (which is re-stated again in lines 297-301). Are there any special cases (beyond isotropic data) for which you could make something more of this result? - I think more could be done to integrate the Rademacher complexity analysis in Section 4 with the sharp asymptotics derived in the preceding sections. As it stands, the reader is left feeling that the main conceptual message comes from this analysis, which then could be supported through numerics alone rather than through numerical solution of the self-consistent equations giving the sharp asymptotic. I acknowledge that a complete analysis of the transition to optimality of the dual norm with increasing perturbation scale is out of reach, but (in the same vein as my previous comment) I wish the authors showed how more can be done with access to the sharp asymptotic. At the very least, the authors should show how loose the Rademacher complexity bounds are. Other Comments Or Suggestions: - In Line 165, "euclidean" -> "Euclidean" - For the reader's convenience, it would be nice to link to the appendix containing the relevant proofs after each theorem. - I would suggest that the authors consider swapping their use of color and dashing in Figure 3; I think using different colors for different norms and different dashing for different $\epsilon$ would be more legible. Alternatively, they could use different colors for different $\epsilon$ and differentiate norms by the saturation of the colors. The dotted and dashed lines are too similar to be optimally legible. - In Line 862, there is a jumbled reference to Loureiro et al. 2021: "(?)Lemma 11]loureiro2021learning". 
Questions For Authors: Given the smooth transition between $r^{\star} = 2$ and $r^{\star} = 1$ with increasing $\epsilon$ that you observe in Figure 4, can you gain any insight by expanding around $\epsilon = 0$ and $\epsilon \to \infty$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful and insightful review. Regarding all the typos and clarity suggestions, we will fix them in the camera-ready version. > [...] I think an experiment showing that the main conceptual result (i.e., regularizing with the dual norm improves performance) transfers to a real dataset would enhance the paper. [...] We refer to the answer given to Reviewer BhWC, where we explain how a similar behavior is observed for a classification task on 0 vs. 1 MNIST. > I wish the authors were able to extract more insight directly from Theorem 3.15. I fully appreciate that these self-consistent equations are not easily analyzable, but Remark 3.17 doesn't really say anything specific to this result. It merely restates the standard idea that the order parameters are sufficient statistics, and mentions the proof strategy (which is re-stated again in lines 297-301). Are there any special cases (beyond isotropic data) for which you could make something more of this result? The system of equations in the theorem could provide insights for specific questions. For instance, studying them perturbatively might help reveal how generalization error scales with model parameters. As an example, [Vilucchio et al. 2024] derived such scalings for a robust regression setting. However, the analytical progress in that paper relies critically on their choice of loss functions ($\ell_2$, $\ell_1$, and Huber loss) which have closed-form proximal operators, and on the regression setting they consider. In our binary classification context, the proximal operator lacks an analytical form for all the commonly used classification losses, making similar closed-form derivations significantly more challenging to obtain. > [...] 
I acknowledge that a complete analysis of the transition to optimality of the dual norm with increasing perturbation scale is out of reach, but (in the same vein as my previous comment) I wish the authors showed how more can be done with access to the sharp asymptotic. At the very least, the authors should show how loose the Rademacher complexity bounds are. Thank you for the suggestion. Indeed, it would be useful to show that the Rademacher complexity bounds are numerically loose. We will add in Appendix B the high-dimensional version of the Rademacher complexity bounds. For instance, see the [attached anonymized plot (bound_comparison.pdf)](https://anonymous.4open.science/r/imcl2025-rebuttals-940D) that shows that the $\ell_2$ bound $\frac{\mathcal{W}_2}{\sqrt{\alpha}} \left(1 + \sqrt{\frac{1}{\lambda_1(\Sigma)}}\right)$, where $\lambda_1(\Sigma)$ is the smallest eigenvalue of the perturbation matrix $\Sigma$, is numerically loose (and even increases with $\alpha$). It corresponds to the blue lines shown in the Figure. > Given the smooth transition between $r^\star = 2$ and $r^\star = 1$ with increasing $\epsilon$ that you observe in Figure 4, can you gain any insight by expanding around $\epsilon = 0$ and $\epsilon \to \infty$? The primary challenge with formal expansions in this binary classification context is that the system of equations remains unsolvable and depends on integrals at any order in the expansion. As a result, the expansion would depend on a specific scaling ansatz in $\varepsilon$, where the coefficients at each order of the expansion are solutions of a system of equations similar to the original one. Thus, it appears not to be possible to obtain such an expansion without resorting to numerical solutions. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response; my main concerns are addressed. 
I appreciate the difficulty of finding closed-form solutions, and the challenges of a perturbative expansion in $\epsilon$. I will maintain my score, as I am in favor of acceptance.
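The $\ell_2$ bound quoted in the rebuttal above, $\mathcal{W}_2/\sqrt{\alpha}\,(1 + \sqrt{1/\lambda_1(\Sigma)})$, is straightforward to evaluate numerically. A minimal sketch with an illustrative covariance; note that in the rebuttal's experiments $\mathcal{W}_2$ itself depends on $\alpha$, which is how the bound can increase with $\alpha$ even though the $1/\sqrt{\alpha}$ prefactor shrinks:

```python
import numpy as np

def l2_bound(w2_norm, alpha, sigma):
    """Evaluate W2 / sqrt(alpha) * (1 + sqrt(1 / lambda_min(Sigma)))."""
    lam_min = np.linalg.eigvalsh(sigma)[0]   # eigvalsh returns eigenvalues in ascending order
    return w2_norm / np.sqrt(alpha) * (1.0 + np.sqrt(1.0 / lam_min))

sigma = np.diag([0.25, 1.0, 4.0])            # illustrative perturbation covariance
for alpha in (0.5, 1.0, 2.0):
    print(alpha, l2_bound(1.0, alpha, sigma))
```

With $\lambda_1(\Sigma) = 0.25$ the bracket equals $3$, so at fixed $\mathcal{W}_2 = 1$ the bound is simply $3/\sqrt{\alpha}$; the looseness reported in the rebuttal comes from comparing this to the sharp asymptotic value.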
Summary: This work studies how to select the appropriate regularization norm in high-dimensional adversarial training for binary classification. The authors provide an exact asymptotic description of the robust, regularized empirical risk minimizer for various adversarial attacks and regularization norms. They also conduct a uniform convergence analysis, deriving bounds on the Rademacher Complexity for these problems. Using their theoretical findings, they quantitatively characterize the relationship between perturbation size and the optimal choice of regularization norm. Claims And Evidence: The results in this paper are rigorously proved. Methods And Evaluation Criteria: The evaluation methods used are generally appropriate for assessing the proposed method. However, it would also be interesting to test the robustness of the results by examining scenarios where the linear classifier assumption is violated. This would provide valuable insights into how the method performs under model misspecification. Theoretical Claims: I have reviewed the proofs, and they appear to be correct. Experimental Designs Or Analyses: The experimental designs appear to be reasonable. However, the algorithm used to solve the minimization problem (7) on page 3 is not clearly explained. Providing details about the algorithm used in the simulation studies would be helpful for understanding the approach and replicating the results. Supplementary Material: I have reviewed the entire supplementary material, although I did not check every technical detail. Relation To Broader Scientific Literature: This work primarily focuses on linear classifiers for binary classification. While the results appear to be of theoretical interest, it is unclear how they apply to practical situations involving high-dimensional, complex data, where nonlinear classifiers are typically required. 
Explaining the implications of these findings in such contexts would provide valuable insights into their practical utility. Essential References Not Discussed: It appears that the essential references are cited. Other Strengths And Weaknesses: No additional comments. Other Comments Or Suggestions: No other comments. Questions For Authors: This paper focuses on binary classification models. I wonder if the results can be extended to linear regression models and generalized linear models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your critical and thorough review of our work. > The evaluation methods used are generally appropriate for assessing the proposed method. However, it would also be interesting to test the robustness of the results by examining scenarios where the linear classifier assumption is violated. This would provide valuable insights into how the method performs under model misspecification. We agree that this is a nice addition for our paper and are adding such an experiment. We consider the case of 0 vs 1 MNIST classification. We take 0 and 1 MNIST images, normalize their pixel values in $[0,1]$ and flatten them to obtain a $d=784$ dimensional $\bf{x}_i$. The labels are converted to $-1$ and $+1$. We then train as per eqs. (6,7) with this dataset, using subsets of a suitable size to fix various $\alpha = n / d$, and then test robust and clean error as per eqs. (3,4) with $1000$ new samples. We attach the results of two experiments with $\varepsilon = 1.0, 2.0$ in an [anonymized folder](https://anonymous.4open.science/r/imcl2025-rebuttals-940D). We see that by reducing the number of training samples (lowering $\alpha$), the robust error differs based on the choice of the regularization. The optimal choice of regularization is $r=1$, which is consistent with our idealized theoretical model. In the camera-ready version, we will include a discussion of these results. > [...] the algorithm used to solve the minimization problem (7) on page 3 is not clearly explained. [...] Thank you for pointing this out. Equation (7) corresponds to a convex problem, as explained in Appendix A (see eq. (37)). Thus, it can be solved using a general-purpose convex solver. We used the L-BFGS-B algorithm with random normal initialization and a stopping tolerance of `1e-5` for the gradient. We will include these experimental details in the revised version. > This paper focuses on binary classification models. 
I wonder if the results can be extended to linear regression models and generalized linear models? The key step that allows the analytical description of the problem is going from eq. (36) to eq. (37). This step relies on the fact that we consider binary classification, as explained in Appendix A, and cannot easily be generalized to more complicated classification tasks.
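To make the solver detail above concrete, here is a hypothetical sketch of a robust regularized ERM for a linear classifier solved with L-BFGS-B, as described in the rebuttal. It assumes a logistic loss, an $\ell_p$ perturbation (whose worst case shrinks each margin by $\varepsilon\,\|\mathbf{w}\|_q$ with $1/p + 1/q = 1$), and an $\ell_r$ penalty; the `robust_erm` helper and its parameters are our own illustration, and the exact loss/penalty pair of eq. (7) may differ.

```python
import numpy as np
from scipy.optimize import minimize

def robust_erm(X, y, eps, lam, p=2.0, r=2.0, tol=1e-5, seed=0):
    """Sketch: robust regularized ERM for a linear classifier.

    Worst-case l_p perturbations reduce each margin by eps * ||w||_q,
    where q is the dual exponent of p. We pair a logistic loss with an
    l_r penalty; the objective is convex, so a general-purpose solver
    such as L-BFGS-B suffices.
    """
    n, d = X.shape
    q = p / (p - 1.0) if p > 1.0 else np.inf  # dual exponent of p

    def objective(w):
        margins = y * (X @ w) - eps * np.linalg.norm(w, ord=q)
        # numerically stable logistic loss: log(1 + exp(-margin))
        return np.logaddexp(0.0, -margins).mean() + lam * np.sum(np.abs(w) ** r)

    rng = np.random.default_rng(seed)
    w0 = rng.standard_normal(d)  # random normal initialization, as in the rebuttal
    res = minimize(objective, w0, method="L-BFGS-B", options={"gtol": tol})
    return res.x
```

On labels generated by a planted linear rule, the returned `w` should fit the training data well for small `eps` and `lam`.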
Summary: The authors investigate the impact of regularization geometry on adversarial training in a binary classification problem. The primary objective of the study is to control the robust generalization error under input perturbations constrained by a specified norm. To achieve this, they optimize the robust empirical (regularized) risk under a specified regularization. The work is presented in three key steps: 1. The authors derive an exact asymptotic characterization of the performance of Regularized Robust Empirical Risk Minimization (RERM) for various combinations of perturbation and regularization norms, including both $\ell_p$-norms and $\|\cdot\|_{\Sigma}$-norms. 2. They establish novel uniform convergence bounds using Rademacher complexity results for different norm geometries. They prescribe the dual norm of the perturbation norm to be used as regularizer for the linear classification problem. 3. They conduct synthetic experiments to show the validity of their theoretical results. ## update after rebuttal I would like to thank the authors for their response. They have addressed my questions, and I have decided to increase my score. Claims And Evidence: Although I have not examined the mathematical details carefully, the claims and arguments presented in the main text appear to be valid. Methods And Evaluation Criteria: The work is mostly theoretical, although synthetic experiments have been presented which make sense for the problem setting. Theoretical Claims: I have not examined the mathematical details carefully. Experimental Designs Or Analyses: The work is mostly theoretical, although synthetic experiments have been presented which make sense for the problem setting. Supplementary Material: I have not checked the details carefully. Relation To Broader Scientific Literature: The contributions are well-situated within the context of existing literature. The work by Tanner et al., 2024 appears to be the closest to the proposed study. 
Specifically, the results in Section 3 bear resemblance to the techniques and findings presented by Tanner et al., 2024, though applied to $\ell_2$-regularization and perturbations in the $\|\cdot\|_{\Sigma}$-norm. Could the authors provide a comparison and contrast of their results with those of Tanner et al., 2024, particularly in terms of the technical settings and innovations in their technical analysis? Additionally, what were the key technical challenges in extending the work of Tanner et al., 2024 to the proposed setting? Addressing these questions would clarify the novelty and technical contributions of this work. Essential References Not Discussed: The relevant references are discussed adequately. Other Strengths And Weaknesses: Although the binary classification problem setting appears simple, the exact asymptotic characterization of the performance of Regularized Robust Empirical Risk Minimization (RERM) and the novel uniform convergence bounds derived using new Rademacher complexity results are valuable contributions to the machine learning community. To further strengthen the paper, it would be beneficial to discuss the practical applicability of the assumptions. For example: 1. Can Assumption 3.11 be extended to accommodate zero-mean sub-Gaussian covariates? 2. How restrictive is Assumption 3.13 in practice, and how often is it likely to hold in real-world scenarios? Other Comments Or Suggestions: Please see the previous sections. Questions For Authors: Please see the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your thorough review of our work. > Could the authors provide a comparison and contrast of their results with those of Tanner et al., 2024, particularly in terms of the technical settings and innovations in their technical analysis? Additionally, what were the key technical challenges in extending the work of Tanner et al., 2024 to the proposed setting? In a nutshell, Tanner et al. 2024 study Mahalanobis-norm perturbations and regularization with an $\ell_2$ penalty, while Section 3 in our paper presents results for (a) Mahalanobis-norm perturbations and general Mahalanobis-norm regularizations and (b) $\ell_p$ norms for both perturbation and regularization. On the technical side, the proofs that appear in [Tanner et al. 2024] are based on a mapping to a Generalised Approximate Message Passing algorithm. Our results are proven via Gordon's Theorem (Theorem A.1) and allow for a broader range of loss and regularization functions. > Can Assumption 3.11 be extended to accommodate zero-mean sub-Gaussian covariates? This is an excellent question about practical extensions. While our theoretical guarantees rely on Gaussian assumptions for applying Gordon's Theorem, recent work on universality [Montanari & Saeed (2022), and Dandi et al. (2023)] suggests these results often extend to more general distributions, including sub-Gaussian covariates. The key technical challenge for a formal extension would involve using more general concentration inequalities for sub-Gaussian random variables [Vershynin, "High-Dimensional Probability", 2018]. However, maintaining the freedom of choosing an arbitrary non-increasing convex loss function would require additional technical machinery beyond the scope of this initial work. We believe this is a promising direction for future research. > How restrictive is Assumption 3.13 in practice, and how often is it likely to hold in real-world scenarios? 
Assumption 3.13 (simultaneous diagonalizability) is primarily a technical assumption that allows for a well specified setting. In practice, this assumption is less restrictive than it might appear. For example, this property holds when the matrices $\Sigma_w, \Sigma_\delta$ share principal components, which occurs naturally in many signal processing applications where the same underlying factors drive both input correlations and noise structure. The assumption tries to mimic PCA applied to data, which is common practice in many ML pipelines. In such cases, the perturbation and regularization matrices would indeed share eigenvectors with the transformed data covariance. Nonetheless we agree that future work could relax this assumption to broaden applicability, and our Rademacher complexity analysis in Section 4 already takes a step in this direction by providing distribution-agnostic guarantees.
Feature-Mapping Topology Optimization with Neural Heaviside Signed Distance Functions
Accept (poster)
Summary: This paper presents a novel deep constrained topology optimization algorithm. Using the SIMP formalism, the authors propose learning an encoding of a space of fabricable shapes and solve the optimization problem on this space using gradient-based methods. ## update after rebuttal In light of these comparisons and clarifications (and on the assumption that they all are included in the updated manuscript), I am raising my score. I hope this paper gets presented at ICML! Claims And Evidence: All claims made in the paper are thoroughly supported. It should be noted that these claims are measured: for example, the authors limit themselves to exploring several possible new frameworks for topology optimization, without necessarily claiming its superiority with respect to other works. Methods And Evaluation Criteria: The authors evaluate their method on a broad range of examples, which are described in the supplemental. They even include code in their supplemental material that reproduces most results. In general, I was surprised that the authors did not evaluate and ablate their algorithmic choices more thoroughly. For example, this paper relies on encoding the shape code X into a (smaller) latent space Z, and then optimizes the code Z according to the physical problem using projected gradient descent, effectively ensuring that the optimal Z is the image through the encoding of a valid X. A question that may occur to a reader is: why is this encoding necessary? Can’t one simply do projected gradient descent on X, projecting (or “refactoring”, as said in the paper) X into the set of possible X after every 5 iterations? Given the encoding adds a lot of the complexity in the algorithm, I would have expected to have a clear answer to this question in the text or experiments, but I missed it. Similarly, the paper uses a Heaviside-“SDF” representation for the primitives. 
This choice feels somewhat ad-hoc, justified by saying “[it] is convenient for the optimization process and for training the neural network”. I wish the authors had either elaborated on this or compared to other strategies like learning the non-Heaviside SDF directly (in fact, isn’t the Heaviside loss in Eq. 9 the same as an SDF loss with anisotropic sampling of B?). Theoretical Claims: See above Experimental Designs Or Analyses: See above Supplementary Material: The appendix is complete, and the derivation of the stiffness matrix is welcome. As a small note, this derivation is often referred to as “linear elasticity” in the literature; perhaps the authors wish to add that keyword somewhere. I am particularly thankful to the authors for providing their source code, including annotated notebooks showing the results in the paper. I wish every paper did this. Relation To Broader Scientific Literature: The paper does not (nor does it claim or intend to) include a full survey of work in topology optimization, a large field of research with decades of work. Instead, it focuses on deep feature-mapping methods for topology optimization and, as far as I can tell, does a good job of covering these. Nonetheless, I am somewhat disappointed that the manuscript does not attempt to place this algorithm in context with other topology optimization strategies. Given that this is a problem considered in different fields of research and with many interested practitioners, I would have expected this paper to clearly answer the question “how is this algorithm better or worse than existing topology optimization methods?”, i.e., “if I were a manufacturing company, when, if ever, do I wish to use this method as opposed to others?” This can be answered in text or in experimentation. 
Essential References Not Discussed: Like I said above, the authors are not expected to perform a full survey of this topic, but I would have appreciated including works from the engineering community (e.g., “Multiscale structural topology optimization with an approximate constitutive model for local material microstructure”) or the graphics one (e.g., “Two-Scale Topology Optimization with Microstructures”). Other Strengths And Weaknesses: The main strength of the paper is its novelty: using learned latent spaces of shapes as a way of guiding topology optimization to fabricable shapes is a good idea, and this is a relatively good execution of it. As said above, I believe the main weakness in this work is the lack of comparison to prior art (especially for a task as crowded as this), as well as the lack of experiments evaluating the algorithmic choices. These are the only reasons I am not more excited about accepting this work, although I will carefully read and review the authors’ response and other reviewer comments. Other Comments Or Suggestions: “Cantilever” is misspelled in C.1, and the citation “Maz’e” should be “Mazé”. Questions For Authors: - Can the authors please clarify the dependencies of each variable on rho in Sec 3.2.? - What is \xi_e and how does it differ from \xi in Eqs. (7-8) - Why are there 6 entries in \xi in Fig. 3? Isn’t \xi a (x,y,z) coordinate? - The shape code X is never introduced, defined or explained beyond Figure 4 (which appears a full page after X is used in the text). Perhaps front loading the definition of X would be useful to a reader? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough and insightful review! We are grateful for your detailed comments, which have not only highlighted the strengths of our methodology but also pointed out key areas for improvement, ultimately helping us enhance the clarity and impact of our research. ***Refactoring mechanism.*** During topology optimization, reconstructed geometry can significantly deviate from the Heaviside function boundaries because the latent representation $ Z $ extends beyond the learned shape distributions. To address this, we initially reprojected latent vectors into the correct regions using an additional objective function, but this hindered convergence. Our refactoring mechanism instead steers latent vectors back to their learned distribution and potentially allows for geometric corrections during optimization, such as enforcing minimum radii or correcting overlapping arcs. Of course, we did not try every possible variant that might address the shortcomings of our method. In the future, we plan to explore alternative approaches to improve the algorithm. ***Heaviside-“SDF” representation.*** We chose to use the Heaviside-SDF representation instead of a direct SDF during model training for the following reasons: 1. The Heaviside transformation produces a sharper, more distinct boundary—exactly what we need for our application. 2. This transformation enhances FMTO's robustness to neural network noise. Using a direct SDF requires applying the Heaviside function later, which significantly amplifies boundary noise occurring in the model. ***Motivation for the method.*** Our method is motivated by the common industry practice of creating parts by extruding sketches—typically polygons with rounded corners—for milling and casting. We developed a framework that allows polygons to evolve from an initial circle during optimization, producing more natural and manufacturable designs. 
Additionally, we plan to extend the framework to include other common geometric features, such as polygons with arc segments. ***Essential References.*** Thank you for the note. These directions are indeed closely related to our work, and we would be happy to include them in our review. ***Comparison with other methods.*** We can provide more comparisons with other methods, including one recently published work $\texttt{TreeTOp}$ [1], which directly relates to FMTO. 1. K. Padhy, R., Thombre, P., Suresh, K. et al. Treetop: topology optimization using constructive solid geometry trees. 2025. | Method | Method type | $\text{vonMises}_{max}$ | Compliance | Volume Fraction | | :--- | :---: | :---: | :---: | :---: | | SIMP | Free-form | $\textbf{0.483}$ | $\textbf{0.00125}$ | 0.44 | | NTopo | Free-form | 2.52 | 0.00163 | 0.438 | | TreeTOp | FMTO | 6.08 | 0.00373 | 0.455 | | Ellipses | FMTO | 0.607 | 0.00174 | 0.449 | | NeuralHeavisideSDF | FMTO | $\textbf{0.522}$ | $\textbf{0.00163}$ | 0.437 | Table: Methods comparison for Example 3: MBB beam half | Method | Method type | $\text{vonMises}_{max}$ | Compliance | Volume Fraction | | :--- | :---: | :---: | :---: | :---: | | SIMP | Free-form | $\textbf{0.194}$ | $\textbf{0.000104}$ | 0.34 | | NTopo | Free-form | 0.349 | 0.000107 | 0.339 | | TreeTOp | FMTO | 43.8 | 0.000155 | 0.357 | | Ellipses | FMTO | 0.674 | 0.000155 | 0.345 | | NeuralHeavisideSDF | FMTO | $\textbf{0.575}$ | $\textbf{0.000152}$ | 0.337 | Table: Methods comparison for Example 4: Beam Distributed Load $\textbf{Note:}$ Since the NTopo method is quite limited in altering the parameters of the initial conditions, we had to adjust the parameters of the other methods to match those of the NTopo method. Therefore, the metric results differ from those in the original manuscript. Our approach achieves the best Compliance metric values among all FMTO methods while using less material. 
Additionally, in some experiments, our method is comparable to the NTopo method (see Example 3). ***“Cantilever” is misspelled, incorrect citation, “linear elasticity” keyword.*** Thank you for pointing this out. We will correct the spelling error, add the “linear elasticity” keyword, and fix the citation. ***What is $\xi_e$ and how does it differ from $\xi$ in Eqs. (7-8).*** $\xi$ represents any point within the design domain and is used to train the model. In contrast, $\xi_e$ denotes the center point of element $e$. ***Why are there 6 entries in $\xi$ in Fig. 3? Isn’t $\xi$ a (x,y,z) coordinate?*** We apologize for the confusion caused by our previous depiction of a batch size of 3. We have updated the scheme to display a batch size of 1. Our experiments focus on 2D problems, so only two coordinates (x, y) are used. ***Shape code $X$.*** We apologize for the missing definitions. Inconsistencies in the method description have been corrected, and an updated version—including details on the shape code $\chi$—will appear in the main text, as well as a revised Figure 4. Additionally, a table of notations will be added to the appendix. --- Rebuttal Comment 1.1: Comment: Thank you very much! In light of these comparisons and clarifications (and on the assumption that they all are included in the updated manuscript), I am raising my score. I hope this paper gets presented at ICML! --- Reply to Comment 1.1.1: Comment: Thank you very much for your thorough evaluation and for reconsidering your score. We deeply appreciate the valuable insights you have provided. We assure you that the suggested changes will be incorporated into the final version of both the paper and the accompanying code. Thank you again for your support and constructive feedback!
Summary: This paper designs a new learning-based approach for feature-mapping-based topology optimization. The major advantage of the proposed method against previous works is that the generated voids are guaranteed to be directly manufacturable, thus circumventing cumbersome post-processing procedures. Technically, the integration of Neural Heaviside SDFs with structured latent spaces addresses the limitation of traditional feature-mapping methods by enabling diverse geometric features, thus improving manufacturability. In general, this characteristic well aligns with actual industrial needs. Claims And Evidence: Partially. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: Well related to prior studies on topology optimization and geometry processing. Essential References Not Discussed: No. Other Strengths And Weaknesses: As presented in Eq. (8), the Sigmoid function tuned by \beta is used as a soft approximation of the Heaviside function. I think throughout the paper it would be more straightforward to directly use the description of "Sigmoid", instead of "Heaviside". Empirically, the authors experimented with examples of ellipses, triangles, and quadrilaterals. I am wondering if more types of geometric primitives with higher degrees of complexity can be included for evaluation. Eq. (12) introduces the Kreisselmeier-Steinhauser (KS) function for smooth max approximation, but sensitivity to the smoothing parameter γ_KS is not analyzed. How does γ_KS affect convergence? The ellipse-based baseline is limited. Why not compare against B-spline or Bézier-based feature-mapping methods? Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for taking the time to provide such a detailed and insightful review. Your comment has been extremely valuable in enabling us to carefully analyze various aspects of our method's implementation, including the integration of the Kreisselmeier-Steinhauser (KS) function. ***Using Sigmoid instead of Heaviside.*** The use of the term "sigmoid" is, of course, acceptable. However, the Heaviside function is a more traditional term in the context of topology optimization. We, like many other authors in this field, prefer to retain the possibility of using various smooth approximations of the Heaviside function without changing the name of the method. ***Higher Degrees of Complexity.*** Adding new primitives forces an increase in latent space dimensions to keep accuracy, which can harm convergence. However, since our latent space captures overall contours rather than exact parameters, we believe that there's a threshold beyond which more dimensions are unnecessary. We will investigate this further. ***Kreisselmeier-Steinhauser (KS) function.*** The current implementation includes several additional details that are not covered in the main text. It is important for us to maintain sensitivity with respect to the objective function; however, to ensure the method does not fail, we are forced to clamp the $\rho$ parameter because the KS function can produce values greater than 1. 
Therefore, in addition to equation (11), we use the following formula to scale $\rho$: $$ \rho_e = \frac{1 - \rho_{\min}}{KS_{\max} - KS_{\min}} \left(KS_{\max} - \frac{\ln \sum_{m=1}^{M} \exp(\gamma_{KS} \widetilde{H}_{m,e})}{\gamma_{KS}}\right) + \rho_{\min} $$ where $KS_{\min}$ is the minimum value of the KS function, $$ KS_{\min} = \frac{\ln P}{\gamma_{KS}}, $$ and $KS_{\max}$ is the maximum value of the KS function, $$ KS_{\max} = \frac{\ln \left(P\exp(\gamma_{KS})\right)}{\gamma_{KS}}, $$ where $P$ is the expected maximum number of geometric primitives intersecting at one point (by default, this is 2). Therefore, the combined $\rho$ falls outside the range $[\rho_{\min}, 1]$ only at intersections where more than $P$ geometric primitives overlap. In these cases, $\rho$ is clamped to the range and becomes insensitive to the objective function. In our case, the primary shape for topology is a non-overlapping primitive within which a void must form. For large values of $P$ but small values of $\gamma_{KS}$, the value of $\rho$ inside the primitive becomes considerably higher than $\rho_{\min}$. Therefore, we choose $\gamma_{KS}$ to be as large as possible, while ensuring that computations do not become impractical due to excessively large values of $\exp(\gamma_{KS})$. Changing $\gamma_{KS}$ above 10 has little impact on convergence or final topology, whereas lower values worsen convergence because $\rho_e$ remains too high for void formation. 
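As a numerical sanity check of this scaling, the following small sketch (our own illustration, not the authors' implementation; the array shapes are assumed) applies the scaled KS combination to $M$ primitive Heaviside fields: with no primitive present it yields $\rho_e = 1$, and where $P$ primitives fully overlap it yields $\rho_e = \rho_{\min}$.

```python
import numpy as np

def ks_density(H, gamma_ks=10.0, rho_min=1e-3, P=2):
    """Scaled Kreisselmeier-Steinhauser combination of M primitive
    Heaviside fields H (shape: M primitives x E elements).  The KS value
    is rescaled from [KS_min, KS_max] to densities in [rho_min, 1] and
    clamped where more than P primitives overlap."""
    ks_min = np.log(P) / gamma_ks
    ks_max = np.log(P * np.exp(gamma_ks)) / gamma_ks  # equals ks_min + 1
    ks = np.log(np.sum(np.exp(gamma_ks * H), axis=0)) / gamma_ks
    rho = (1.0 - rho_min) / (ks_max - ks_min) * (ks_max - ks) + rho_min
    return np.clip(rho, rho_min, 1.0)
```

For example, with $M = P = 2$ primitives and $\gamma_{KS} = 10$, all-zero Heaviside values give a solid element ($\rho_e = 1$) and all-one values give a void ($\rho_e = \rho_{\min}$).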
$\gamma_{ks}$ | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- vf | 0.373 | 0.358 | 0.347 | 0.355 | 0.347 | 0.344 | 0.357 | 0.354 C | 0.00238 | 0.00212 | 0.00209 | 0.00207 | 0.00210 | 0.00213 | 0.00203 | **0.00198** $min(\rho)$ | 0.00067 | 0.00105 | 0.00504 | 0.00089 | 0.00116 | 0.00163 | 0.00623 | 0.00103 $max(\rho)$ | 0.87418 | 0.93682 | 0.95769 | 0.96813 | 0.97439 | 0.97857 | 0.98155 | 0.98379 Table: experiments with different $\gamma_{ks}$ values for Example 3: Bracket ***B-spline and Bézier-based Feature-Mapping Methods.*** Unfortunately, most published works in topology optimization come without accompanying code and do not always provide enough details for reproduction. However, we have the opportunity to compare our method with a recently published approach [1], where the main geometric feature is a polygon constructed from half-spaces. These comparisons will be added to the manuscript. [1] K. Padhy, R., Thombre, P., Suresh, K. et al. Treetop: topology optimization using constructive solid geometry trees. | Method | Method type | $\text{vonMises}_{max}$ | Compliance | Volume Fraction | | :--- | :---: | :---: | :---: | :---: | | SIMP | Free-form | $\textbf{0.483}$ | $\textbf{0.00125}$ | 0.44 | | NTopo | Free-form | 2.52 | 0.00163 | 0.438 | | TreeTOp | FMTO | 6.08 | 0.00373 | 0.455 | | Ellipses | FMTO | 0.607 | 0.00174 | 0.449 | | NeuralHeavisideSDF | FMTO | $\textbf{0.522}$ | $\textbf{0.00163}$ | 0.437 | Table: Methods comparison for Example 3: MBB beam half $\textbf{Note: }$ Since the NTopo method is quite limited in altering the parameters of the initial conditions, we had to adjust the parameters of the other methods to match those of the NTopo method. Therefore, the metric results differ from those in the original manuscript. Our approach achieves the best Compliance metric values among all FMTO methods while using less material. 
Additionally, in some experiments, our method is comparable to the NTopo method (see Table with Example 3). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' responses and explanations. Some of my concerns and questions are explicitly addressed. However, due to my inadequate expertise in this field, I am afraid that I cannot give a higher score to this paper.
Summary: The authors propose a novel neural approximation framework based on a variational autoencoder (VAE) model to approximate the Heaviside function of the Signed Distance Function (SDF), enabling a unified representation of diverse geometric features in a single latent space. This approach integrates machine learning with traditional topology optimization (TO) to overcome the limitations of existing TO methods. The key contributions include an improved optimization framework that incorporates volume constraints into the objective function, the use of the Kreisselmeier-Steinhauser (KS) function for smooth maximum approximation, and efficient gradient computation and sensitivity analysis using adjoint differentiation. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: In my opinion, this work is more suitable for industrial or materials journals or conferences. Essential References Not Discussed: Yes. Please refer to the weaknesses part. Other Strengths And Weaknesses: **Strengths**: - The authors conducted several experiments using a set of reasonable metrics to evaluate the accuracy of predictions (MSE) and measure the noise of the gradient on grid points (Smoothness Metrics). - The authors implemented an approach similar to an ellipse-based method that achieves superior compliance values. While the SIMP method creates small, locally conditioned edges through a free-form approach, the proposed method avoids this and ensures manufacturability. **Weaknesses** - VAE encode-decode structure: The use of VAEs to learn geometric feature distributions has been extensively explored [1, 2], even in 3D scenes [3]. Learning the shape distribution is a commonly used approach that does not introduce efficient improvements or novel insights. 
- Neural Heaviside SDF: SDF, Heaviside, and sigmoid representations are the most basic techniques. Although they express geometric boundaries and allow for training discrete spatial variables with continuous representations, similar functions such as the sigmoid and Heaviside have also been widely studied and improved, and are not in themselves sufficient innovations. - Noise in training and geometric approximation: Does the latent space representation derived from the VAE struggle to capture smooth transitions between certain geometric features (e.g., ellipses to polygons)? And considering the SDF precision, does it perform well in resolving sharp edges or high-curvature regions? How do the authors avoid or reduce inaccuracies in boundary fitting? [1] Chadebec, C., & Allassonnière, S. (2022). A geometric perspective on variational autoencoders. Advances in Neural Information Processing Systems, 35, 19618-19630. [2] Vadgama, S., Tomczak, J. M., & Bekkers, E. J. (2022, November). Kendall shape-VAE: Learning shapes in a generative framework. In NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations. [3] Kosiorek, A. R., Strathmann, H., Zoran, D., Moreno, P., Schneider, R., Mokrá, S., & Rezende, D. J. (2021, July). Nerf-vae: A geometry aware 3d scene generative model. In International conference on machine learning (pp. 5742-5752). PMLR. Other Comments Or Suggestions: Please see the above weaknesses part. Questions For Authors: Please see the above weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your comprehensive and constructive feedback on our work. We value your insights on the VAE encode-decode structure, the Neural Heaviside SDF representation, and the potential challenges related to geometric approximation and boundary precision. **Novelty of the Proposed Method.** We acknowledge that VAEs have been widely used for learning shape distributions. However, our work differentiates itself by integrating a learned neural approximation of the Heaviside SDF function within Feature Mapping Topology Optimization (FMTO) tasks. Conventional FMTO approaches rely on explicitly defined SDFs, restricting the geometric diversity that can be effectively represented. Our approach, in contrast, enables a unified latent space where multiple geometric primitives can coexist and transform smoothly, facilitating shape evolution in FMTO beyond conventional methods. Moreover, although sigmoid and Heaviside functions are commonly used for defining geometric boundaries, our method employs a learned neural surrogate. This allows for an adaptive, data-driven boundary representation, enhancing both optimization flexibility and manufacturability. **Smooth Transitions Between Geometric Features.** Your concern regarding smooth transitions, particularly between ellipses and polygons, is well-taken. While our dataset does not explicitly include gradual shape interpolations between such forms, our model demonstrates an ability to approximate smooth deformations even beyond the trained shape classes. As illustrated in Figure 6 of the main text, the learned latent space permits meaningful shape variations, even if exact interpolations between distinct classes are not explicitly encoded. Nevertheless, we recognize that the latent space could be further refined to better support smooth morphing between different geometric classes. 
Future work may explore improved training strategies, such as incorporating intermediate transition shapes or using additional regularization to better structure the latent space for smooth interpolations.

**SDF Precision and Boundary Fitting.** We understand the importance of accurately capturing high-curvature regions and sharp edges in SDF representations. To reduce inaccuracies in boundary fitting, we have enhanced our dataset generation strategy as follows: Each shape type in our dataset now includes 5,000 instances, sampled to ensure comprehensive coverage of geometric variations. Additionally, we have modified the point generation method. Out of 10,000 points per shape, one third are now generated directly along the shape boundaries, improving precision and ensuring that the trained SDF captures intricate features more effectively. The remaining points are evenly distributed between those drawn from a Gaussian distribution centered around the shape and those concentrated near vertices and edges. These improvements have significantly enhanced boundary fidelity, minimizing discrepancies between original and reconstructed shapes. The updated dataset and training approach result in more accurate SDF modeling, especially in areas with high curvature and sharp transitions.

**Discussion on Related Work.** We appreciate the reviewer’s suggestion to better position our contributions within the existing literature. While previous works [1,2,3] have showcased the use of VAEs for shape learning, our study uniquely incorporates this framework into FMTO with a learned Heaviside SDF representation. We will expand our discussion in the manuscript to clearly contrast our method with prior approaches and emphasize our contributions more explicitly.

Once again, we sincerely thank the reviewer for their valuable feedback. We believe that our revisions effectively address the concerns raised and further strengthen our work.
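The boundary-weighted sampling described above (one third of the points on the boundary, the rest split between a Gaussian around the shape and tight clusters near vertices and edges) can be sketched for a single square primitive; the counts (9,000 for a clean three-way split rather than the paper's 10,000), the square shape, and the spread parameters are illustrative stand-ins.

```python
import random

def sample_square_points(n=9000, half=1.0, sigma=0.5, vertex_sigma=0.05, seed=0):
    """Sample SDF training points for a square of half-width `half`,
    mirroring the split described in the rebuttal: one third on the
    boundary, one third Gaussian around the shape, one third near vertices."""
    rng = random.Random(seed)
    pts = []
    third = n // 3
    # One third directly on the boundary: pick an edge, slide along it.
    for _ in range(third):
        t = rng.uniform(-half, half)
        edge = rng.randrange(4)
        pts.append([(t, -half), (t, half), (-half, t), (half, t)][edge])
    # One third from a Gaussian centred on the shape.
    for _ in range(third):
        pts.append((rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)))
    # One third concentrated near the vertices (tight Gaussian at a corner).
    corners = [(-half, -half), (-half, half), (half, -half), (half, half)]
    for _ in range(third):
        cx, cy = rng.choice(corners)
        pts.append((rng.gauss(cx, vertex_sigma), rng.gauss(cy, vertex_sigma)))
    return pts

pts = sample_square_points()
# For the square, boundary points satisfy max(|x|, |y|) == half exactly.
on_boundary = sum(1 for x, y in pts if abs(max(abs(x), abs(y)) - 1.0) < 1e-12)
```

Concentrating samples on boundaries and near vertices is what lets the trained SDF resolve sharp edges and high-curvature regions more faithfully than uniform sampling.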
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. After carefully reading the authors' response and the other reviewers' comments, I maintain my judgement that the contribution of this work is limited and incremental, particularly because similar methods or ideas have been extensively explored. Considering the high standard of ICML, I hold my rating.
Summary: In this work, the authors work on topology optimization; specifically, they propose a deep learning method to simulate Feature-Mapping Topology Optimization (FMTO) (and not SIMP). They propose two decoders, one for the reconstruction and another to approximate the Heaviside function. They show results on variations of autoencoder architecture types and training strategies.

## Update after rebuttal
After the rebuttal, I am increasing my score. From the experiments, the proposed work brings significant improvements compared to existing work such as the free-form methods NTopo and TopoDiff, and other FMTO methods. Using standard shapes eases manufacturing, and in the rebuttal for other reviewers, the authors discuss how they can improve boundary fitting further.

Claims And Evidence: See Experiments section
Methods And Evaluation Criteria: See Experiments section
Theoretical Claims: N/A
Experimental Designs Or Analyses:
- In this work, the authors propose using a Heaviside Decoder for the topology-optimization task. They have two decoders, different options for each decoder (DeepSDF or Symmetric) and different training strategies (train reconstruction before Heaviside or vice versa). The experiments compare their method with the SIMP solver and FMTO (Ellipse), which are non-deep-learning methods. While the proposed method doesn’t reach the levels of SIMP, it performs better than FMTO.
- The authors did an extensive ablation study on different decoder types and strategies (Table 1) and compared against SIMP and FMTO in Table 2. However, the authors do not compare with any other deep-learning-based method. Specifically, in L31 and L36 they discuss diffusion-based topology-optimization methods by Maze & Ahmed’22 and Mohseni & Khodaygan’24. The authors do not provide any discussion of why existing methods cannot be compared to theirs. Hence, such baselines should be included.
- The model is trained as a reconstruction task, and the authors provide the training objective functions. However, how are the constraints like $V_{max}$, $s_{min}$, $s_{max}$ enforced during inference? They seem to be used only during the training objective function to penalize the outputs, and it is unclear how they are incorporated during inference/test.
- In Table 1, the authors should bold the best value under each metric. Currently, they omit bolding when their proposed method doesn’t achieve the best performance.
- In Table 1 and Table 2, the authors should perform a t-test to determine if the performance improvement is statistically significant or not.
- I appreciate the code release in the supplementary.

Supplementary Material: Yes, I read the supplementary material.
Relation To Broader Scientific Literature: The work is important because most CAD-integrated design requires human intervention in the post-processing stage. This work aims to minimize human intervention.
Essential References Not Discussed: As mentioned above, the authors need to compare with existing deep-learning-based methods. The authors could also consider comparing with the following:
- Zhang, Zeyu, et al. "Topology optimization via implicit neural representations." Computer Methods in Applied Mechanics and Engineering 411 (2023): 116052.
- Zelickman, Yakov, and James K. Guest. "Introducing a general polygonal primitive for feature mapping-based topology optimization."
- Chi, Heng, et al. "Universal machine learning for topology optimization." Computer Methods in Applied Mechanics and Engineering 375 (2021): 112739.
- Sosnovik, Ivan, and Ivan Oseledets. "Neural networks for topology optimization." Russian Journal of Numerical Analysis and Mathematical Modelling 34.4 (2019): 215-223.

Other Strengths And Weaknesses: The presentation of the paper needs a lot of work.
The authors tend to use notations without defining them, which makes the paper very difficult to read; the reviewer has to guess what the notation means. I list a few below, but there were several such issues:
- Equation 7 uses $\Omega$ but never defines it. It is likely the boundary of the feature.
- What is H in the left side of Figure 3? It is not defined in the text. Only $\tilde{H}$ and $\tilde{H}_{true}$ are defined in the text, but not H.
- No clear separation between the training and inference procedure.
- $X$ is not explicitly defined in the text. It is first used in L208, and the reader has to go to Fig. 4 to understand what it represents.

Other Comments Or Suggestions: See above.
Questions For Authors: I would re-consider my score after the authors' responses to my comments, and after discussion with other reviewers.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
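The per-configuration significance check this review asks for can be run as a Welch t-test over per-run metric values. A minimal stdlib sketch follows; the two score lists are synthetic stand-ins, not results from the paper, and in practice `scipy.stats.ttest_ind(a, b, equal_var=False)` gives the matching p-value directly.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom for
    two samples of per-run metric values (e.g. MSE over independent runs).
    Unlike Student's t-test, equal variances are not assumed."""
    na, nb = len(a), len(b)
    va, vb = variance(a) / na, variance(b) / nb
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df

# Synthetic per-run losses for two hypothetical model variants (20 runs each),
# built deterministically so the example is reproducible.
variant_a = [0.00130 + 0.00002 * ((i * 7) % 5 - 2) for i in range(20)]
variant_b = [0.00028 + 0.00001 * ((i * 3) % 5 - 2) for i in range(20)]
t, df = welch_t(variant_a, variant_b)
# A large |t| means the gap is unlikely to be run-to-run noise; the p-value
# follows from the t distribution with `df` degrees of freedom.
```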
Rebuttal 1: Rebuttal: Thank you for your thorough and insightful review!! We deeply appreciate the time and effort you invested in evaluating our work.

***Comparison with other methods***

Our method follows the FMTO approach by creating topology with geometric primitives. Unlike traditional free-form optimization that focuses only on compliance, FMTO constrains voids to specific shapes. Thus, comparing with other FMTO methods is most appropriate, even though many lack available code and full reproducibility details. Nevertheless, we can still provide more comparisons with four methods, including two new frameworks: the first competitor is a free-form method based on deep learning, $\texttt{NTopo}$ [1], and the second is a recently published work, $\texttt{TreeTOp}$ [2], which directly relates to FMTO. These comparisons will be added to the manuscript.

| Method | Method type | $\text{vonMises}_{max}$ | Compliance | Volume Fraction |
| :--- | :---: | :---: | :---: | :---: |
| SIMP | Free-form | $\textbf{0.483}$ | $\textbf{0.00125}$ | 0.44 |
| NTopo | Free-form | 2.52 | 0.00163 | 0.438 |
| TreeTOp | FMTO | 6.08 | 0.00373 | 0.455 |
| Ellipses | FMTO | 0.607 | 0.00174 | 0.449 |
| NeuralHeavisideSDF | FMTO | $\textbf{0.522}$ | $\textbf{0.00163}$ | 0.437 |

Table: Methods comparison for Example 3: MBB beam half

| Method | Method type | $\text{vonMises}_{max}$ | Compliance | Volume Fraction |
| :--- | :---: | :---: | :---: | :---: |
| SIMP | Free-form | $\textbf{0.194}$ | $\textbf{0.000104}$ | 0.34 |
| NTopo | Free-form | 0.349 | 0.000107 | 0.339 |
| TreeTOp | FMTO | 43.8 | 0.000155 | 0.357 |
| Ellipses | FMTO | 0.674 | 0.000155 | 0.345 |
| NeuralHeavisideSDF | FMTO | $\textbf{0.575}$ | $\textbf{0.000152}$ | 0.337 |

Table: Methods comparison for Example 4: Beam Distributed Load

$\textbf{Note:}$ Since the NTopo method is quite limited in altering the parameters of the initial conditions, we had to adjust the parameters of the other methods to match those of the NTopo method.
Therefore, the metric results differ from those in the original manuscript. Our approach achieves the best Compliance metric values among all FMTO methods while using less material. Additionally, in some experiments, our method is comparable to the NTopo method (see Example 3).

1. J. Zehnder, Y. Li, S. Coros, and B. Thomaszewski. NTopo: mesh-free topology optimization using implicit neural representations. 2021.
2. Padhy, R. K., Thombre, P., Suresh, K., et al. TreeTOp: topology optimization using constructive solid geometry trees. 2025.

***Training and inference***

During training, we optimize our model to best approximate the Heaviside SDF for various geometric primitives, independent of topology optimization. At inference, we use the frozen Heaviside decoder for SDF evaluation in FMTO and for geometry reconstruction during post-processing. Constraints $V_{max}$, $s_{max}$, and $s_{min}$ are enforced at inference: $V_{max}$ controls the design domain volume via the Lagrangian, while $s_{max}$/$s_{min}$ bound the shape variables via a sigmoid. We have clarified our method by distinctly separating training from inference. These changes improve clarity and reproducibility.

***Table 1, Bolding***

Best metric values in Table 1 are bolded. Due to layout limitations, the table was split; we apologize for the confusion and will restore the original format.

***Statistical Significance***

We conducted a t-test on 20 independent runs, comparing each model to the best-performing model. The results are presented in the table below and will be provided in the revised appendix, confirming our choice of VAE architecture and training strategy based on $MSE_{sdf}$.

| | AE | MMD-VAE | VAE | AE | MMD-VAE | VAE | AE | MMD-VAE | VAE | AE | MMD-VAE | VAE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Strategy | st1 | st1 | st1 | st1 | st1 | st1 | st2 | st2 | st2 | st2 | st2 | st2 |
| Decoder | Symm. | Symm. | Symm. | DeepSDF | DeepSDF | DeepSDF | Symm. | Symm. | Symm. | DeepSDF | DeepSDF | DeepSDF |
| mean | 0.0014 | 0.00134 | 0.00128 | 0.000346 | 0.000368 | **0.000277** | 0.00155 | 0.0015 | 0.00146 | 0.000796 | 0.00059 | 0.000475 |
| std | 0.000268 | 0.000233 | 0.000201 | 1.81e-05 | 1.95e-05 | **1.07e-05** | 0.000257 | 0.00023 | 0.000327 | 0.000101 | 5.42e-05 | 3.51e-05 |
| p-value | 1.4e-11 | 3.9e-12 | 9.4e-13 | 4.9e-13 | 8.3e-15 | -- | 1.1e-12 | 3.3e-13 | 1.2e-10 | 4.1e-13 | 2.9e-14 | 8.8e-15 |

Table: Metric: $MSE_{sdf}$ (p-values computed relative to st1\_VAE\_DeepSDF)

In the case of Table 2, which presents the comparison results with other methods, we did not perform a t-test because these methods are deterministic with respect to the input parameters.

***Notations***

We apologize for the missing definitions. Inconsistencies in the method description have been corrected, and an updated version, including details on the shape code $\chi$, will appear in the main text, as well as a revised Figure 4. Additionally, a table of notations will be added to the appendix.

--- Rebuttal Comment 1.1: Comment: I appreciate the experiments on free-form comparison. I understand that the authors propose FMTO, but comparison to free-form methods is important to understand why FMTO is a better alternative. While NTopo is a free-form-based experiment, I believe it is not a published work. As I mentioned in the review, the authors do discuss diffusion-based topology optimization by Maze & Ahmed AAAI (Diffusion Models Beat GANs on Topology Optimization) but do not compare against it. Could the authors provide a discussion on why it cannot be compared to? Their work does release the code.

--- Reply to Comment 1.1.1: Comment: We sincerely thank the Reviewer for their thoughtful feedback regarding the comparison with diffusion-based topology optimization, as presented in Mazé & Ahmed’s work.
We appreciate your suggestion to discuss why a direct comparison with their approach, specifically the TopoDiff implementation, was not initially included, and we are pleased to provide further clarification. To explore this, we utilized the official TopoDiff implementation and adapted three test cases (MBB Beam Half, Cantilever Beam, and Beam Distributed Load) into a square domain, as TopoDiff’s implementation is designed to support square domains. We maintained the boundary conditions and load configurations as closely as possible. Below, we present the results for the three examples:

| Method | Method type | $\text{vonMises}_{max}$ | Compliance | vf |
| :--- | :---: | :---: | :---: | :---: |
| SIMP | Free-form | $\textbf{132}$ | $\textbf{22.1}$ | 0.41 |
| TopoDiff | Free-form | 133 $\pm$ 5.45 | 24.4 $\pm$ 0.602 | 0.415 $\pm$ 0.00417 |
| NeuralHeavisideSDF | FMTO | 132 | 22.9 | 0.409 |

Table: Methods comparison for Example 5: Square beam

| Method | Method type | $\text{vonMises}_{max}$ | Compliance | vf |
| :--- | :---: | :---: | :---: | :---: |
| SIMP | Free-form | $\textbf{15.1}$ | $\textbf{4.09}$ | 0.41 |
| TopoDiff | Free-form | (min = 65.2, max = 9.74e+04) | (min = 7.18, max = 1.5e+04) | 0.422 $\pm$ 0.006 |
| NeuralHeavisideSDF | FMTO | 18.3 | 4.94 | 0.411 |

Table: Methods comparison for Example 6: Square beam Distributed Load

| Method | Method type | $\text{vonMises}_{max}$ | Compliance | vf |
| :--- | :---: | :---: | :---: | :---: |
| SIMP | Free-form | $\textbf{28.8}$ | $\textbf{15.1}$ | 0.41 |
| TopoDiff | Free-form | 64.5 $\pm$ 22.7 | 17 $\pm$ 0.368 | 0.424 $\pm$ 0.0048 |
| NeuralHeavisideSDF | FMTO | 30.2 | 15.6 | 0.408 |

Table: Methods comparison for Example 7: Square cantilever beam

We observed that TopoDiff’s implementation is optimized for square domains and processes loads applied to a single node.
In contrast, for cases such as Example 6 ("Square Beam Distributed Load"), where the load is distributed across multiple nodes, there is significant variability in the values of $\text{vonMises}_{max}$ and Compliance produced by TopoDiff’s framework. Our FMTO-based approach (NeuralHeavisideSDF) demonstrates competitive performance, particularly in maintaining lower Compliance values with lower material usage (vf) compared to TopoDiff in these adapted test cases. However, a direct comparison is challenging due to the differences in domain flexibility and load-handling capabilities. TopoDiff, as a free-form method, excels in optimizing within its square-domain, single-node-load paradigm, while our FMTO method prioritizes manufacturability by constraining solutions to predefined geometric primitives. This constraint inherently reduces topological flexibility compared to free-form methods like TopoDiff or SIMP, but it aligns with our goal of ensuring practical, manufacturable solutions.

We will include the discussion about TopoDiff and the above results in the revised manuscript. We have also corrected the citation error that occurred when citing the discussed work and will ensure proper referencing in the revised manuscript.

Thank you again for your constructive feedback, which has helped improve our paper. We hope that this response adequately addresses your concerns.
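The inference-time bound handling described earlier in this thread (shape variables kept within $[s_{min}, s_{max}]$ via a sigmoid) can be sketched as follows; the bound values are illustrative, not taken from the paper.

```python
import math

def bounded_shape_var(theta, s_min=0.1, s_max=2.0):
    """Map an unconstrained optimization variable `theta` to a shape
    parameter guaranteed to lie in [s_min, s_max] via a sigmoid, so bound
    constraints never need explicit handling during gradient-based updates."""
    sig = 1.0 / (1.0 + math.exp(-theta))
    return s_min + (s_max - s_min) * sig

lo = bounded_shape_var(-50.0)   # approaches s_min
hi = bounded_shape_var(50.0)    # approaches s_max
mid = bounded_shape_var(0.0)    # midpoint of the admissible range
```

The volume constraint $V_{max}$, by contrast, is not a box bound and is handled through the Lagrangian as the rebuttal states.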
Kernel Quantile Embeddings and Associated Probability Metrics
Accept (poster)
Summary: This paper introduces kernel quantile embeddings (KQEs) as a novel way to represent probability distributions in reproducing kernel Hilbert spaces (RKHS), extending beyond traditional kernel mean embeddings (KMEs). The authors leverage KQEs to define a new family of probability metrics that require weaker kernel conditions than MMD, connect to the sliced Wasserstein distance, and enable efficient near-linear estimation, demonstrating their effectiveness in hypothesis testing.

Claims And Evidence: My main concerns are as follows:
1. There is no intuitive explanation behind the KQE. While the authors claim their approach is motivated by concepts from the statistics and econometrics literature, it remains unclear what specific problem KQEs address that cannot be handled by existing quantile estimation methods or previous kernel mean embeddings.
2. The procedure for conducting the hypothesis test and determining the threshold is not described.
3. In Algorithm 3.1, the density function $f_v$ is assumed to be given. How should this function be set in practice? Does it require additional data for simulation?

Methods And Evaluation Criteria: N/A
Theoretical Claims: This paper establishes several desirable properties of KQEs and highlights the connection between KQDs and Wasserstein distances.
Experimental Designs Or Analyses: There is a lack of comparison with other embedding methods proposed in Chatalic et al. (2022), Lerasle et al. (2019), and Chwialkowski et al. (2015).

References:
- Chatalic, A., Schreuder, N., Rosasco, L., et al. Nyström kernel mean embeddings. International Conference on Machine Learning, PMLR, 2022: 3006-3024.
- Lerasle, M., Szabó, Z., Mathieu, T., et al. MONK: outlier-robust mean embedding estimation by median-of-means. International Conference on Machine Learning, PMLR, 2019: 3782-3793.
- Chwialkowski, K. P., Ramdas, A., Sejdinovic, D., et al. Fast two-sample testing with analytic representations of probability measures. Advances in Neural Information Processing Systems, 2015, 28.

Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and for recognizing the desirable properties established for KQEs and the connections drawn between KQDs and Wasserstein distances. We address the points raised below and hope that, in light of these responses, the positive feedback from other reviewers, and new experimental evidence provided in response to other reviewers (particularly kcy5), the reviewer may consider raising their score.

## I. “it remains unclear what specific problem KQEs address that cannot be handled by existing quantile estimation methods or previous kernel mean embeddings”

This question is two-part: (1) value compared to “previous kernel mean embeddings” and (2) value compared to “existing quantile estimation methods.” We address these separately.

1. For benefits compared to kernel mean embeddings, and specifically when they are not enough, we refer to point (IV) in response to dCj7. In a nutshell, KQDs require weaker conditions for a kernel to be quantile-characteristic than are needed for it to be mean-characteristic, and the latter can be difficult to establish beyond Euclidean domains. Additionally, we show in a number of experiments that even when a mean-characteristic kernel is a good choice, the KQD can still outperform MMD approximations of similar cost, highlighting its practical advantage.
2. As to benefits compared to existing quantile methods: Sliced Wasserstein (which compares all quantiles in [0, 1]) is (1) specific to $\mathbb{R}^d$, and (2) does not take advantage of the flexibility and rich non-linear representations inherent to kernel methods, in contrast to KQD. This was pointed out in Connections 1 and 2: SW is KQD with the linear kernel $k(x, x') = \langle x, x' \rangle$. The consequences of this are evident in the experiment outlined in point (III) in response to dCj7: even in $\mathbb{R}^d$, KQD with the Gaussian kernel outperforms Sliced Wasserstein, due to the Gaussian kernel’s greater expressiveness compared to the linear kernel.
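The connection invoked above (SW as KQD with the linear kernel) boils down to comparing empirical quantiles of one-dimensional projections. A minimal sketch of such a projected-quantile discrepancy follows; the Gaussian data are illustrative, and directions are drawn in the ambient space (the linear-kernel special case) rather than in an RKHS, so this is the SW-style baseline, not the paper's estimator.

```python
import numpy as np

def sliced_quantile_discrepancy(X, Y, n_dirs=64, seed=0):
    """Root-mean-square gap between empirical quantile functions of X and Y
    projected onto random unit directions. With directions in R^d this is a
    sliced-Wasserstein-style quantity; KQD generalizes the directions to
    unit vectors in an RKHS. Assumes len(X) == len(Y), so sorted samples
    align quantile level by quantile level."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    total = 0.0
    for u in dirs:
        qx = np.sort(X @ u)  # empirical quantiles of the projection of X
        qy = np.sort(Y @ u)
        total += np.mean((qx - qy) ** 2)
    return float(np.sqrt(total / n_dirs))

rng = np.random.default_rng(1)
P = rng.normal(0.0, 1.0, size=(500, 5))
Q_same = rng.normal(0.0, 1.0, size=(500, 5))     # same distribution as P
Q_shift = rng.normal(1.0, 1.0, size=(500, 5))    # mean shifted by 1 per axis
d_same = sliced_quantile_discrepancy(P, Q_same)
d_shift = sliced_quantile_discrepancy(P, Q_shift)
```

A shifted alternative yields a clearly larger discrepancy than a matched pair, which is exactly the signal the two-sample tests in the paper exploit.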
## II. Procedure for Hypothesis Testing and Threshold Selection

We use a permutation-based approach with 300 permutations to estimate the null distribution of the test statistic and determine the threshold, ensuring proper Type I error control ([1]). This procedure will be added as an algorithm in the camera-ready version.

[1] Erich Leo Lehmann, Joseph P Romano, and George Casella. Testing statistical hypotheses, volume 3. Springer, 1986.

## III. Setting the Density Function $f_\nu$ in Algorithm 3.1

The density function $f_\nu$ is the density of the measure over the quantile levels. We use the uniform $\nu$ over $[0, 1]$ in our experiments, which corresponds to equal weight for all quantiles, $f_\nu \equiv 1/n$. We will add a note to Algorithm 1 for clarity.

## IV. Numerical Comparison with MMD Based on Other KME Approximations

As the reviewer notes, there are several efficient KME methods currently available, and no single approach has emerged as definitively superior. The multi-diagonal approximation chosen in this study serves as a common, strong, and interpretable benchmark, supported by both general and test-specific theoretical guarantees ([2]). While a comprehensive comparison of all existing methods falls outside the scope of this paper, we did conduct additional experiments as requested. In particular, we compared the KQD (at matching cost) with:
1. the Mean Embedding (ME) approximation of MMD from [5] (which was identified as the best-performing method in their numerical study), and
2. the Nyström-MMD method from [3].

The results are presented in https://pdfhost.io/v/qLDRyBQawp_nystrom_vs_ME. ME performs at the level of MMD-multi, while Nyström has extremely high Type II error, likely due to sensitivity to hyperparameters. This result was consistent, as we also observed poor performance of Nyström-MMD in the following simple minimum-distance parameter estimation task, illustrated in https://pdfhost.io/v/KhHzg7UPsY_par_est.
The results are over 1000 runs and show the high variance of Nyström-MMD. Although we could not complete the BCD-Fast [4] comparison in time, it is noted that the method is primarily advantageous for its robustness to outliers, a feature not explicitly addressed in our study. We are happy to complete this experiment for the final paper.

[2] Schrab, A. (2025). Optimal Kernel Hypothesis Testing (PhD thesis, UCL).
[3] Chatalic, A., Schreuder, N., Rosasco, L., et al. Nyström kernel mean embeddings. ICML.
[4] Lerasle, M., Szabó, Z., Mathieu, T., et al. MONK: outlier-robust mean embedding estimation by median-of-means. ICML.
[5] Chwialkowski, K. P., Ramdas, A., Sejdinovic, D., et al. Fast two-sample testing with analytic representations of probability measures. NeurIPS.

We appreciate the reviewer’s feedback. In response, we have provided additional experimental results, clarified the role of $f_\nu$, and explained the motivation behind KQEs and our testing procedure. We would appreciate it if the reviewer considered raising the score.

--- Rebuttal Comment 1.1: Comment: Thanks for your response. My questions and concerns have been well addressed. I will update my rating.

--- Reply to Comment 1.1.1: Comment: We are happy to hear this. Thank you for carefully considering our paper and rebuttal, and for taking the time to update your score.
Summary: This paper presents an alternative to kernel mean embeddings of distributions through the use of quantiles rather than the mean. More precisely, the embedding of a distribution on a set X is given by the collection of all alpha-quantiles of the pushforwards of the distribution in all directions, each direction given by a unit vector in an RKHS (defined by a kernel k on X). The authors show a number of properties of the resulting embedding that mirror those of kernel mean embeddings, such as the conditions (much weaker than in KME) under which a kernel is characteristic (i.e. the embedding is injective), and the complexity and approximation error of empirical estimators. They proceed to define a distance between probability distributions using the embeddings, in a similar fashion to MMD for KME. The resulting distances (the proof that they are indeed distances is given) are integrals (or a maximum) over alpha and the unit-norm directions of the RKHS, which is reminiscent of the way Sliced Wasserstein distances are defined. The authors indeed show a connection to this class of distances in a particular case, as well as to Sinkhorn divergences. Finally, they put forward a particular instance of the distance where the unit-norm directions are sampled from a Gaussian measure on the RKHS. The authors then present a set of experiments using the proposed KQD for two-sample hypothesis testing, where its performance and complexity in various cases is tested against MMD estimators. Sensitivity to the input space dimension and the capacity to distinguish between close (in moments) distributions with a polynomial kernel are tested, and experiments on image datasets are shown.

Claims And Evidence: The theoretical claims of the paper are sound, and the proposed measures of discrepancy are interesting in how they generalize KME-derived distances such as MMD and kernelize (and generalize) Sliced Wasserstein distances. The milder condition for a kernel to be characteristic is interesting as well.
As for the experimental claims, the sensitivity to input dimension and the capability to distinguish between Laplace and Gaussian distributions with the same first and second order moments under a cubic kernel are well supported by the experiments. The results on the image datasets are slightly less convincing, since it looks like MMD is the superior metric to use in these cases (the centered version of the proposed metrics is used and compares similarly to MMD, but the authors show that this version actually interpolates between MMD and Sliced Wasserstein distances, except that the choice of kernel is more general). This observation is mitigated by the fact that among the linear or quasi-linear estimators, the KQE ones perform better.

## Update after rebuttal
As mentioned in my comment below, I maintain my positive evaluation of the paper and my recommendation of acceptance.

Methods And Evaluation Criteria: The method to benchmark the proposed metrics is sound: two-sample hypothesis tests are one of the key applications of MMD and KME. The authors actually check that the statistical tests are meaningful by verifying that the tests’ significance level is respected. However, it would have been interesting to showcase different applications that are usually considered with MMD (generative modeling, for instance).

Theoretical Claims: I skimmed through a few of the proofs (all in appendices), namely those of Theorem 2 and Connections 1 and 2, which seemed correct to me.

Experimental Designs Or Analyses: It is not very clear why the Gaussian Kernel Quantile discrepancy is put forward instead of a simpler version where $\gamma$ is uniform over the sphere (as in SW distances). Is it for computational reasons and the fact that, in the end, only a standard univariate normal is sampled?
Also, it would have been interesting to consider more variants of the distances: for instance, the authors show that they are able to kernelize SW distances, which are widely used as an efficient distance between probability distributions; I am curious to see how this would have compared (possibly generalizing through the kernel) to the Gaussian version put forward here. The theoretical connection is really nice though. Also, even though under quantile embeddings the conditions for a kernel to be characteristic are very mild (X is sensible as a topological space and the kernel is separating), in practice this doesn’t seem very important, since in all the experiments but the second a Gaussian kernel is used (which is characteristic for both kernel mean and quantile embeddings).

Supplementary Material: See the discussion above for proofs. I also took a look at the introductory material on MMD and Wasserstein distances, as well as the Type 1 control experiment in App. D.

Relation To Broader Scientific Literature: I believe the paper is generally well positioned in the literature related to MMD and distances between probability measures, and the proposed distances essentially generalize several of those in meaningful ways. However, I feel like the connection to attempts at generalizing kernel mean embeddings using medians or variance, though mentioned, could have been detailed a bit further. How different are the constructions in those cases (I am not familiar with that literature)? Would it have made sense to compare the resulting estimators in the experimental part?

Essential References Not Discussed: I can’t think of a key reference that would be missing.

Other Strengths And Weaknesses: I have underlined a number of strengths and weaknesses of the work in my comments above.

Other Comments Or Suggestions: It seems that Fig. 3 is never referenced in the text. Besides, a description seems necessary, as I am not sure I understood it well.
For instance, u1 looks like the identity map in the left panel, but the original and projected measures seem different (the projections are smoother). What am I missing? There is a missing reference in App. C.5.

Questions For Authors:
1) What are situations where using a polynomial kernel (typical of non-mean-characteristic kernels) is advantageous with respect to a kernel such as the Gaussian or exponential ones? Or, more generally, situations where using non-mean-characteristic kernels is actually useful?
2) How do the more complex (Gaussian measure and kernel) variants of the proposed metrics compare to the more classical SW distance (uniform measure and linear kernel)? What is the added value in practice of the generalization? Is there an explanation as to why MMD seems superior in the image dataset experiments?

Code Of Conduct: Affirmed.
Overall Recommendation: 4
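The two-sample tests this review discusses are calibrated, per the authors' rebuttal to the first review, by a permutation procedure with 300 permutations. A minimal sketch follows; the mean-difference statistic is a placeholder standing in for KQD or MMD, and the Gaussian data are synthetic illustrations.

```python
import random

def mean_diff(x, y):
    """Placeholder test statistic; in the paper's tests KQD or MMD plays this role."""
    return abs(sum(x) / len(x) - sum(y) / len(y))

def permutation_pvalue(x, y, stat=mean_diff, n_perm=300, seed=0):
    """Estimate the null distribution of `stat` under H0: P = Q by
    repeatedly re-splitting the pooled sample at random, then return the
    permutation p-value; rejecting when it falls below alpha controls the
    Type I error at level alpha."""
    rng = random.Random(seed)
    observed = stat(x, y)
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction keeps the test valid

data_rng = random.Random(42)
x = [data_rng.gauss(0.0, 1.0) for _ in range(50)]
y_null = [data_rng.gauss(0.0, 1.0) for _ in range(50)]   # same distribution
y_shift = [data_rng.gauss(1.0, 1.0) for _ in range(50)]  # mean shifted by 1
p_null = permutation_pvalue(x, y_null)    # typically large: H0 holds
p_shift = permutation_pvalue(x, y_shift)  # small: the shift is detected
```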
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and for recognizing the soundness and novelty of our contributions, including generalizing KME-derived distances and kernelizing Sliced Wasserstein distances. We address the key comments below.

## I. Additional Applications

KQD is a general-purpose probability metric, applicable to many problems like conditional independence testing, causal inference, reinforcement learning, and generative modeling. Among the many possible applications, our focus has been on two-sample hypothesis testing to benchmark the proposed distances in a controlled and interpretable setting, which is commonly used in the literature. In addition, we are working on applying KQD to the task of parameter estimation in simulation-based inference. A toy version of the experimental setup is presented in point IV in response to u7Px.

## II. Why isn’t $\gamma$ uniform?

As pointed out in footnote 2 on page 5, the uniform/Lebesgue measure or standard Gaussian measure cannot be defined on infinite-dimensional spaces ([1]). Anticipating an infinite-dimensional RKHS, we propose a (projected) Gaussian measure, which is a standard approach in the inverse problem literature (see [1]). We emphasize that when $X=\mathbb{R}^d$, the kernel is linear, and $\gamma$ is uniform over the unit sphere, KQD matches Sliced Wasserstein (Connection 1). In a more general version, when the kernel is non-linear but induces a finite-dimensional RKHS, the KQD can be connected to the Generalised Sliced Wasserstein distances ([2]). We will include this discussion.

[1] Stuart, A. M. (2010). Inverse problems: a Bayesian perspective. Acta Numerica.
[2] Kolouri, S., et al. (2019). Generalized Sliced Wasserstein distances. Advances in Neural Information Processing Systems, 32.

## III. Comparison with SW Distances

We have now extended the power decay experiment to include SW distances, with directions sampled uniformly or from $(P_n + Q_n)/2$ projected onto the sphere.
Results in https://pdfhost.io/v/FsnNBjRnJa_kqd_vs_sw show KQD significantly outperforms SW---as expected, since the Gaussian kernel is more expressive than the linear kernel (Connections 1, 2: SW is KQD with a linear kernel). ## IV. “When is using non-mean-characteristic kernels useful?” KQDs require weaker conditions to be quantile-characteristic, as shown in the Laplace vs. Gaussian experiment. While mean-characteristic kernels are well understood for bounded translation-invariant kernels on Euclidean domains ([3]), beyond this, it is hard to establish whether a kernel is mean-characteristic. For example, many graph kernels are not characteristic ([4]). Deep kernels ([5]), or any transformed kernel $k(T(u), T(u'))$, where $T$ is a non-injective map, offer examples of non-characteristic kernels. [3] Sriperumbudur, B. K., et al. (2011). Universality, Characteristic Kernels and RKHS Embedding of Measures. JMLR. [4] Kriege, N. M., et al. (2020). A survey on graph kernels. Applied Network Science, 5. [5] Wilson, A. G., et al. (2016). Deep kernel learning. AISTATS. ## V. Figure 2 Clarification: We apologize for the omission. Figure 2 should be referenced in Section 3.2 after defining $\tau^2$ to illustrate how the integrand varies for different directions. We will amend this and provide a clearer explanation. ## VI. Reference Missing in Appendix C.5: Thank you for pointing this out. We will add reference [6]. [6] Kukush, A. (2020). Gaussian measures in Hilbert space. Wiley. ## VII. MMD Performance in Image Dataset Experiments: We will clarify that the goal of the experiment was to demonstrate that KQD outperforms MMD approximations with matching cost, and that when MMD outperforms KQD, the centered KQD offers MMD-level performance. Additionally, in response to kcy5 (I.(2)), KQD with uniform $\mu$ outperforms MMD on the CIFAR problem, and we are exploring this further. ## VIII. Connection to Median and Variance-Based Extensions of KMEs: We are happy to elaborate.
Longer response with review of the methods: https://pdfhost.io/v/t2rYgcDSLS_kernel_quantile_embeddings_2_25 Kernel Covariance Embeddings (KCE) are the 2nd-order moment of $k(X, \cdot)$, while the KME is the 1st-order moment. The KCE exists iff the KME for $k^2$ exists, and $k$ is covariance-characteristic iff $k^2$ is mean-characteristic. The divergence between KCEs can be estimated at $O(n^3)$ cost due to the need for an eigenvalue decomposition, while KQE comes with a practical Monte-Carlo estimator. The median embedding is the geometric median of $k(x, \cdot)$ in the RKHS, minimizing the $L_1$ distance. While it exists for separable Hilbert spaces, it requires iterative algorithms for estimation and has $O(n^2)$ complexity. The median-characteristic property has not been explored, and the relation to 1D-projected quantiles remains unclear. Further research into geometric median embeddings is needed. We believe these revisions address the reviewer’s concerns and improve the clarity of the paper. Thank you for your constructive feedback.
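Connection 1 invoked in this rebuttal (Sliced Wasserstein is KQD with a linear kernel and uniform $\gamma$ on the sphere) can be made concrete with a minimal Monte-Carlo sketch of the sliced Wasserstein estimator. This is our own toy sketch, not the authors' implementation; sample sizes, the shift of 2.0, and the direction count are illustrative choices.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_dirs=200, p=2, seed=0):
    """Monte-Carlo sliced p-Wasserstein between equal-size empirical
    samples: project onto random unit directions, then compare the
    sorted projections (i.e. the empirical 1D quantile functions)."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on the sphere
    qx = np.sort(X @ dirs.T, axis=0)  # per-direction empirical quantiles
    qy = np.sort(Y @ dirs.T, axis=0)
    return np.mean(np.abs(qx - qy) ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
P = rng.standard_normal((500, 3))
Q_same = rng.standard_normal((500, 3))       # null: same distribution
Q_shift = rng.standard_normal((500, 3)) + 2.0  # mean-shifted alternative
d_null = sliced_wasserstein(P, Q_same)
d_alt = sliced_wasserstein(P, Q_shift)
```

With a linear kernel the RKHS "directions" are simply unit vectors in $\mathbb{R}^d$, which is why the sketch reduces to sorting 1D projections; a non-linear kernel would replace the projection `X @ dirs.T` with projections of feature maps.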
Summary: This paper introduces the concept of Kernel Quantile Embeddings (KQEs) in reproducing kernel Hilbert spaces (RKHS) and investigates how these embeddings can be leveraged to define a new family of probability metrics: Kernel Quantile Discrepancies (KQDs). The authors argue that KQEs are a natural analogue of quantiles in function spaces and can capture distributional information beyond the first moment. The paper proves that these embeddings can be estimated at the same rate $O(n^{-1/2})$ as classical kernel mean embeddings. It then proposes specific instances of these discrepancies (e-KQD and sup-KQD), connecting them to well-known probability metrics like sliced Wasserstein and Sinkhorn divergences. The authors also provide an estimator with near-linear $O(n \log^2 n)$ complexity for a version of the e-KQD, and demonstrate empirically—in two-sample hypothesis testing tasks—that KQDs can be competitive with MMD, sometimes surpassing standard MMD or its fast approximations. Claims And Evidence: Claim (1): KQEs offer a strictly weaker requirement on the kernel to preserve injectivity (quantile-characteristic) than classical mean-characteristic kernels. Evidence: The authors give proofs (Theorems 1–2) showing that every mean-characteristic kernel is quantile-characteristic, but not vice versa. Claim (2): KQDs defined from KQEs form valid probability metrics under weaker conditions. Evidence: Theorem 4 establishes that e-KQD and sup-KQD satisfy the properties of a metric, assuming the kernel has “full support” in the specified sense. Claim (3): An estimator for e-KQD can be computed in $O(n \log^2 n)$ time and achieves $O(n^{-1/2})$ convergence in the two-sample setting. Evidence: The authors outline a Monte Carlo sampling procedure (using Gaussian measures in the RKHS) and show a theoretical sample complexity bound (Theorem 5). Overall, these claims are supported by rigorous proofs (for theoretical statements) and empirical experiments (for computational cost and testing power).
Methods And Evaluation Criteria: The method centers on constructing quantiles in RKHS by projecting the kernel feature maps onto directions, then computing one-dimensional quantiles. Evaluation criteria revolve around: The rate at which KQD-based two-sample tests converge (statistical consistency), The computational complexity of the KQD estimators, The test power in detecting distributional differences for multiple synthetic and real benchmarks. These criteria align well with the standard practice in two-sample testing and kernel-based distribution comparison. Theoretical Claims: The “quantile-characteristic” property (Theorems 1 and 2) generalizes the common “mean-characteristic” property in kernel embeddings. The curvature arguments and integral transform approach in the proofs are consistent with known results on characteristic functionals in topological vector spaces. The authors’ expansions connecting KQDs to Sliced/Max-Sliced Wasserstein and Sinkhorn-like divergences (Connections 1–3) appear algebraically consistent. I did not find obvious flaws in the proofs from a correctness standpoint; the steps follow standard theorems about characteristic functionals in Hilbert spaces. Experimental Designs Or Analyses: The authors run two-sample tests on both synthetic (Gaussian vs. Gaussian with different variances, Laplace vs. Gaussian, etc.) and high-dimensional image data (Galaxy MNIST and CIFAR10 vs CIFAR10.1). They use standard metrics: rejection rate under different sample sizes, and also check Type I error. The analyses compare KQD-based tests against multiple MMD-based tests (including linear, multi, and full). Overall, the experimental design is sensible for measuring the performance of a novel test statistic in a fairly standard manner. Supplementary Material: The authors mention that the proofs for the main theorems are in the Appendix (Sections C.1–C.5), as well as additional experiments on Type I error. 
The main text references these properly, and from what was described, the supplementary clarifies the technical lemmas for sampling from Gaussian measures on RKHS, etc. Relation To Broader Scientific Literature: The paper extends the kernel embedding framework (Smola et al., 2007; Gretton et al., 2012; Muandet et al., 2016). It connects directly to quantile-based distance measures in classical statistics (Kosorok, 1999; Ranger et al., 2020) and to sliced Wasserstein approaches (Bonneel et al., 2015). By bridging the concept of “quantiles” in infinite-dimensional Hilbert spaces, the paper also resonates with prior works that exploit directional statistics (Cramér–Wold theorem). The references seem adequate and highlight the synergy between kernel methods, quantile-based methods, and integral probability metrics. Essential References Not Discussed: The coverage is mostly thorough. Other Strengths And Weaknesses: Strengths: Substantial theoretical contribution that expands kernel embeddings beyond means. Empirical demonstration that the proposed distances are competitive with MMD. The proposed near-linear Monte Carlo estimator is especially appealing for large-scale data. Weaknesses: The weighting measure ν and reference measure ξ are not deeply explored in terms of best practical choices. The authors choose them fairly generically (often uniform or half from Pn, half from Qn). Additional experiments on different weighting might help. The paper mentions potential broader applications (like conditional independence) but does not provide concrete evidence or experiments in such directions. Other Comments Or Suggestions: Minor writing comment: Some early paragraphs heavily reference “Cramér–Wold in RKHS,” so it might help to add a more elementary explanation or an example in the main text. 
Also, if feasible, an experiment on centered e-KQD for a mixture distribution scenario might illustrate the effect of the MMD “shift.” Questions For Authors: Weighting measure: Have you tried or do you have insights on using different weighting distributions ν besides the uniform one, e.g., a heavier weighting around certain quantile regions? Could that accelerate the test in practice or help with tail detection? Conditional embeddings: You mention the possibility of extending to “conditional quantile embeddings.” Do you see a straightforward approach, or are there structural challenges that differ from conditional mean embeddings? Potential domain constraints: Could there be issues if X is not connected or if the kernel is unbounded? You mention bounded k, but might large or unbounded kernels degrade the performance or the theory? Code Of Conduct: Affirmed. Overall Recommendation: 4
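The construction summarized in this review (project the kernel feature maps onto unit-norm directions, then compare one-dimensional quantiles) can be sketched in a toy form. This is a hedged simplification under our own assumptions, not the paper's exact estimator: directions are drawn as random combinations of feature maps at anchor points from the pooled sample, and all constants (8 anchors, 50 directions, 32 quantile levels) are arbitrary.

```python
import numpy as np

def gauss_gram(A, B, sigma=1.0):
    """Gaussian kernel Gram matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def toy_kqd(X, Y, n_dirs=50, n_alpha=32, sigma=1.0, seed=0):
    """Toy quantile discrepancy in an RKHS: draw directions
    u = sum_i c_i k(z_i, .), normalized to unit RKHS norm, project both
    samples via <phi(x), u> = sum_i c_i k(x, z_i), then average squared
    differences of the empirical 1D quantiles over levels alpha."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])  # anchors from the pooled sample (Pn + Qn)/2
    alphas = (np.arange(n_alpha) + 0.5) / n_alpha
    total = 0.0
    for _ in range(n_dirs):
        idx = rng.choice(len(Z), size=8, replace=False)
        c = rng.standard_normal(8)
        Kzz = gauss_gram(Z[idx], Z[idx], sigma)
        c = c / np.sqrt(c @ Kzz @ c)  # unit norm in the RKHS
        qx = np.quantile(gauss_gram(X, Z[idx], sigma) @ c, alphas)
        qy = np.quantile(gauss_gram(Y, Z[idx], sigma) @ c, alphas)
        total += np.mean((qx - qy) ** 2)
    return np.sqrt(total / n_dirs)

rng = np.random.default_rng(3)
X = rng.standard_normal((400, 2))
X2 = rng.standard_normal((400, 2))       # null: same distribution
Y = rng.standard_normal((400, 2)) + 2.5  # shifted alternative
d_null = toy_kqd(X, X2)
d_alt = toy_kqd(X, Y)
```

Note that because the Gaussian kernel has unit feature-map norm, projections onto unit-norm directions stay in $[-1, 1]$, so the discrepancy is bounded regardless of the direction draw.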
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on our work. We appreciate the recognition of our theoretical contributions, the rigor of our proofs, and the relevance of our evaluation criteria. We are also glad the reviewer found our near-linear Monte Carlo estimator promising for large-scale data. Below, we address specific comments and provide the additional experiments suggested. ## I. Further exploring the choices for weighting measure $\nu$, measure on the sphere $\xi$, reference measure $\mu$. Thank you for this insightful suggestion. 1. $\nu$: We conducted experiments on the Galaxy MNIST and CIFAR datasets, varying $\nu$ from up-weighting extreme quantiles to down-weighting them. Results are in https://pdfhost.io/v/jeaYtK5X2Y_varying_nu, where a triangle "/\" indicates up-weighting and a reverse triangle "\/" indicates down-weighting. For Galaxy MNIST, down-weighting extremes improved test power, whereas for CIFAR, up-weighting extremes worked better. Uniform weighting of the quantiles remained a good choice. This suggests that tuning $\nu$ beyond the uniform is problem-dependent and can enhance performance. The difference likely arises from the nature of the problems: CIFAR datasets, where samples are expected to be similar, benefit from emphasising extremes, while Galaxy MNIST, which has fundamentally different galaxy images, performs better when “robustified,” i.e., focusing on differences away from the tails. Exploring this further presents an exciting avenue for future work. 2. $\mu$: The reference measure in the covariance operator serves to “cover the input space” and is typically set to a “default” measure—for $\mathbb R^d$, typically the standard Gaussian. We considered $(P_n + Q_n)/2$ to stick to the most general setting, when no “default” is available—only $P_n$ and $Q_n$.
We now compare this choice against (i) a standard Gaussian scaled by IQR/1.349, where IQR is the interquartile range of $(P_n+Q_n)/2$ and 1.349 is the interquartile range of $\mathcal N(0, 1)$; (ii) a uniform measure on $[-1,1]^d$, scaled by IQR. Results in https://pdfhost.io/v/wtc2mgbFGU_varying_mu show performance superior to MMD for the standard/uniform $\mu$. This is a valuable finding, and we will perform further analysis. 3. $\xi$: Varying $\xi$ on the sphere in infinite-dimensional spaces is extremely challenging due to the complexity of both theoretical definition and practical sampling (no uniform or standard Gaussian measure can be defined on an infinite-dimensional space). To the best of our knowledge, no practical alternative has been proposed. ## II. Conditional Quantile Embeddings and Independence Testing as Future Work: The population-level embedding of $P(Y|X=x)$ will be defined in the same way as a quantile embedding: for every $x$, we have a standard quantile embedding. Estimating conditional quantiles requires learning a mapping from $x, u, \alpha$ to $\rho^\alpha_{u \# P}$, a complex task beyond the scope of this paper; however, given the likely smoothness of this mapping in $x, u, \alpha$, it is reasonable to expect that it should be possible to develop a practical method. Independence testing can be framed as a two-sample testing problem (e.g., [1, 2]). However, a thorough investigation, comparable to that in [1, 2], is necessary to develop this approach rigorously. We leave this exploration for future research. [1] Gretton et al., 2008. A kernel statistical test of independence. NeurIPS. [2] Doran et al., 2014. A permutation-based kernel conditional independence test. UAI. ## III. Clarifying “Cramér–Wold in RKHS” in Early Paragraphs We will add a clarification. ## IV. Experiment on Centered e-KQD for Mixture Distributions We thank the reviewer for their suggestion.
However, we would like to emphasise that moments on the input space map highly non-linearly to the RKHS. As a result, shifts in moments on the input space translate non-trivially to the outcome of the experiment, breaking the interpretability of results. We will instead focus on constructing an example where moments in the kernel feature space (instead of the input space) vary. We will aim to identify these before camera-ready. ## V. “Could there be issues if X is not connected or if k is unbounded?”: Boundedness: we highlight that boundedness is not required to define KQD as a probability metric. This contrasts with MMD, which requires the kernel to be bounded in order to be defined over all probability measures—due to the need to take an integral to obtain the mean (see “Why bounded kernels?” in www.jmlr.org/papers/volume24/21-0599/21-0599.pdf). Connectivity: KQD remains a valid probability metric for completely regular Hausdorff spaces, including disconnected spaces like totally ordered sets. We thank the reviewer again for their insightful feedback. We have addressed your concerns by providing additional experimental results and elaborating on future work in conditional independence/CMEs, and would therefore appreciate an increase in score. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanation and thanks for the additional results. I confirm the authors addressed my concerns and questions, and I would like to raise my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their continued engagement with our work, valuable suggestions, and taking the time to increase their score.
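The effect of re-weighting quantile levels, discussed in point I.1 of this rebuttal, can be illustrated in one dimension. This is a toy sketch: the tent-shaped weights below are our own stand-ins for the up-/down-weighting schemes, not the exact ones used in the rebuttal experiments, and the heavy-tailed-vs-Gaussian pair is an arbitrary example where the tails carry the difference.

```python
import numpy as np

def weighted_quantile_gap(x, y, w):
    """1D quantile discrepancy with weights w over equally spaced
    quantile levels alpha (w plays the role of the measure nu)."""
    w = np.asarray(w, dtype=float)
    alphas = (np.arange(len(w)) + 0.5) / len(w)
    qx, qy = np.quantile(x, alphas), np.quantile(y, alphas)
    return np.sqrt(np.sum(w / w.sum() * (qx - qy) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
y = rng.standard_t(df=2, size=4000)      # similar center, heavy tails
alphas = (np.arange(64) + 0.5) / 64
w_tails = np.abs(alphas - 0.5)           # up-weight extreme quantiles
w_center = 0.5 - np.abs(alphas - 0.5)    # down-weight extreme quantiles
gap_tails = weighted_quantile_gap(x, y, w_tails)
gap_center = weighted_quantile_gap(x, y, w_center)
```

Here the tail-weighted gap dominates the center-weighted one, mirroring the rebuttal's observation that the best choice of $\nu$ depends on where the two distributions actually differ.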
Summary: The paper proposes an alternative to kernel mean embeddings where they embed quantile functions instead of the mean, each quantile function being computed in the direction of a unit-norm function in the RKHS. They show that such an embedding is injective under milder conditions on the kernel than the known characteristic property for the mean embedding counterpart. The authors propose a new family of discrepancies based on these quantile embeddings, and exhibit conditions under which these are metrics on the space of Radon probability measures. As the computation of these discrepancies involves an expectation over the unit ball of the RKHS, the authors develop tools based on Gaussian measures to compute it efficiently. Numerical experiments highlight the benefits of the approach. ## update after rebuttal The rebuttal confirms the problem in Theorem 5. The authors then failed to answer subsequent questions about the implications of the correct bound and the choice of the hyperparameters. Claims And Evidence: Yes, most of the claims are supported by clear and convincing evidence, except Theorem 5, whose proof cannot be located in the appendix. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: The claims seem sound, even though I only skimmed through the proofs. I just have a problem with Theorem 5. It is the only result not proved in the appendix. I would have expected the bound to scale as $\mathcal{O}(n^{-1/2} + l^{-1/2})$ and I am surprised by the multiplicativity of the bound: $\mathcal{O}(n^{-1/2} \cdot l^{-1/2})$ means that we can keep only one direction $u$ in the RKHS and still get consistency as long as $n \to \infty$. Or am I wrong here? Experimental Designs Or Analyses: I found no issue with the experimental designs. Supplementary Material: I skimmed through it but not in detail. Relation To Broader Scientific Literature: The idea of using quantile embeddings instead of mean embeddings is of great originality.
Essential References Not Discussed: None Other Strengths And Weaknesses: +: The paper is clear and well written. The topic is of interest to the ICML community. -: The limitations of the KQD are not discussed enough. It remains to be shown that picking $l$ and $m$ logarithmic in $n$ is a good choice in terms of approximation error. Yes, you lose the quadratic dependency, but you add a layer of difficulty in that you need to compute the quantities along several directions. There should be a trade-off to find. Other Comments Or Suggestions: More of a suggestion: maybe there is something to investigate about how different kernels may allow easier approximation of the e-KQD (i.e., smaller $l$) when the associated RKHS is small (could the sum of the eigenvalues of the integral operator be a suitable measure?). Questions For Authors: What was the motivation behind staying in the small data regime $(n \leq 2000)$? Do you know how the estimator behaves beyond that? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and positive evaluation of our work. We appreciate the recognition of the originality of using quantile embeddings, the clarity of our presentation, and the relevance of our contribution to the ICML community. We have carefully addressed all your comments below. ## I. Proof of Theorem 5, and a discussion of trade-offs: Thank you for pointing this out. We apologize for the omission, which was an oversight at the time of submission. The proof, which we will include in the revised version, uses the triangle inequality to decompose the error into two terms: (1) the error due to approximating the expectation, which is $\mathcal{O}(l^{-1/2})$ by Hoeffding’s inequality (valid since the kernel is bounded), and (2) the error from quantile approximation, which is $\mathcal{O}(n^{-1/2})$ by Theorem 3. Therefore, the correct bound is indeed $\mathcal{O}(n^{-1/2}) + \mathcal{O}(l^{-1/2})$, and we will update this in the revision. ## II. Choice of Kernels for e-KQD Approximation: We appreciate this insightful suggestion—this direction definitely has potential! Of course, it requires a thorough theoretical study beyond the scope of this paper, since it involves extending RKHS theory aimed at KMEs to the non-linear case of KQEs. We plan to look into this in detail in a follow-up study. ## III. Sample size in experiments: We note that the Laplace vs. Gaussian experiment goes up to 10,000 datapoints, and the sample sizes in each experiment align with prior work in similar settings. However, our proposed near-linear Monte Carlo estimator means the method is suitable for large datasets, both in higher-dimensional (note $d=12288$ for Galaxy MNIST) and larger-sample settings (see scaling in Figure 4); we will include a note to that effect in the revised version.
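Schematically, the triangle-inequality decomposition described in point I can be written as follows (notation simplified for illustration: $D$ denotes the population e-KQD, $\widehat{D}_{n}(u)$ the plug-in quantile discrepancy along direction $u$ with $n$ samples, and $\widehat{D}_{l,n}$ its average over $l$ sampled directions); note the two rates enter additively, not multiplicatively:

```latex
\bigl| \widehat{D}_{l,n} - D \bigr|
\;\le\;
\underbrace{\bigl| \widehat{D}_{l,n} - \mathbb{E}_{u}\bigl[\widehat{D}_{n}(u)\bigr] \bigr|}_{\text{Monte-Carlo over directions: } \mathcal{O}(l^{-1/2}) \text{ by Hoeffding}}
\;+\;
\underbrace{\bigl| \mathbb{E}_{u}\bigl[\widehat{D}_{n}(u)\bigr] - D \bigr|}_{\text{quantile estimation: } \mathcal{O}(n^{-1/2}) \text{ by Theorem 3}}
```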
Learning from Suboptimal Data in Continuous Control via Auto-Regressive Soft Q-Network
Accept (poster)
Summary: This work introduces an algorithm for continuous control with discretized actions. Building upon coarse-to-fine Q-learning, this algorithm further models advantages and policies in an autoregressive fashion, breaking the limiting assumption of independence between action dimensions. Update rules are derived from a soft variant of Q-learning, and are combined with a behavior cloning component. This method is evaluated in offline-to-online settings on D4RL and RLBench, displaying strong performance compared to purely offline baselines, or methods modeling action dimensions independently (Seo et al., 2024). Claims And Evidence: The main claim put forward by this work is in the empirical effectiveness of the proposed method. This claim is indeed supported by convincing evidence. Methods And Evaluation Criteria: The evaluation criteria are consistent with the scope of this paper. However, the offline-to-online setup is rather restrictive. Can ARSQ be applied, e.g., in fully online or fully offline settings? Theoretical Claims: Yes, the theoretical results are overall correct, although imprecise. - I am skeptical about Equations 3 and 10: why can we assume that the exponentiated advantages are already normalized? Should the $=$ be replaced by a $\propto$? The rest of the derivations would still hold, as far as I can tell. - Moreover, I find the theorem to be unnecessarily complicated. If I understand correctly, Eq. 12 is equivalent to assuming that exponentiated advantages represent a valid probability distribution. In this case, it's important to state this more clearly, and remark that this assumption is almost always violated. Experimental Designs Or Analyses: The analyses are overall sound, with minor issues with clarity as described in my questions. Supplementary Material: I reviewed the supplementary materials in their entirety.
Relation To Broader Scientific Literature: The proposed method addresses the important tradeoff between expressivity and tractability in continuous control with discretized actions. The autoregressive model proposed appears to solve the issues that arise when treating action dimensions independently, which is the standard approach in the literature, as far as I know. Essential References Not Discussed: I am not aware of missing important references. My knowledge of related literature is however limited. Other Strengths And Weaknesses: Strengths: - The presentation is overall good, except for the aforementioned imprecisions in Section 4 - Empirical improvements appear to be significant. Weaknesses: - The theory is not precise, as described above (Eq. 3 and 10, Theorem 4.3). - The experimental evaluation does not mention important information (see question below). Other Comments Or Suggestions: None. The paper is well written, and I could not find any language issues or typos. Questions For Authors: 1. Can you comment on my questions on theoretical claims? Clarifications in this regard would help me confirm my score. 2. Why was the offline-to-online setup chosen? What prevents an evaluation in fully online or offline settings? 3. How many online steps are performed in Figure 4? Why are bar plots reported instead of training curves (as done in Figure 6)? 4. What tasks are evaluated in Figure 7? Is it an aggregate score over tasks? ## Update after rebuttal Considering all of my comments were addressed, I have updated my score. Code Of Conduct: Affirmed. Overall Recommendation: 4
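A minimal toy illustration of why conditioning across action dimensions matters (this is our own two-dimensional example with a hand-made reward table, not the paper's architecture): with mixed-quality data, per-dimension values learned independently average over the other dimension's behavior, while an autoregressive selection can condition on the bin already chosen.

```python
import numpy as np

# Joint reward over a 2D discretized action (3 bins per dimension).
# The best joint action (0, 0) only pays off if both dimensions agree.
R = np.array([[ 1.0, -1.0, -1.0],
              [ 0.1,  0.1,  0.1],
              [-1.0, -1.0,  0.2]])

# Independent decomposition: each dimension's value averages over the
# other dimension under a (uniform) behavior policy -> biased choice.
a1_ind = int(np.argmax(R.mean(axis=1)))
a2_ind = int(np.argmax(R.mean(axis=0)))

# Autoregressive decomposition: dim 1 backs up the max over dim 2,
# dim 2 then conditions on the bin already chosen for dim 1.
a1_ar = int(np.argmax(R.max(axis=1)))
a2_ar = int(np.argmax(R[a1_ar]))
```

The independent selection lands on the "safe" row whose average payoff is highest, illustrating the bias toward suboptimal averaged actions that autoregressive conditioning avoids.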
Rebuttal 1: Rebuttal: Thank you for your insightful review and valuable suggestions. We appreciate your careful reading and constructive feedback. Supplementary Material for our response is at [THIS LINK](https://anonymous.4open.science/r/icml25-Submission9509_2/fig_9ufZ.pdf). Below we address your specific questions and concerns one by one and provide clarifications and additional results as requested. ## 1. Clarifications on Theoretical Claims (Eq. 3 and Eq. 10) We are very grateful for your careful reading and for pointing out the ambiguity in the theoretical claims. Indeed, the exponential of the advantages is inherently normalized by definition. In Soft Q-Learning [1], the soft value function is defined using the soft Q function (Eq. 4) $$V^*_{\text{soft}}(s_t) = \alpha \log \int_{A} \exp ( \frac{1}{\alpha} Q^*_{\text{soft}}(s_t, a') ) da'$$ Note that the soft value is a *softmax* of Q, and is *different* from the common concept of value function in RL. By construction, this directly implies: $$\int_{A} \exp ( \frac{1}{\alpha} ( Q^*_{\text{soft}}(s_t, a_t) - V^*_{\text{soft}}(s_t) )) da_t = 1$$ which means that the exponentiated soft advantages are already normalized. This equality, crucial to our theoretical claims (Eq. 3 and Eq. 10 in our manuscript), has been rigorously proven in Theorem 1 of the original Soft Q-Learning paper [1]. To address the reviewer’s valuable comment, we will explicitly clarify this point in the revised paper to strengthen the theoretical justification. [1] Haarnoja et al. (2017). Reinforcement learning with deep energy-based policies. ICML. ## 2. Clarification on Theorem 4.3 Thank you for highlighting the complexity of Theorem 4.3. Theorem 4.3, which states precisely that the exponentiated dimensional soft advantage forms a valid probability distribution, allows us to express the overall soft advantage function as the sum of the dimensional soft advantages.
In this formulation, the dimensional soft advantage serves as a critical link between auto-regressive policy representation and Q-value prediction. We agree with the reviewer that the conditions of Theorem 4.3 are non-trivial and may not be satisfied naturally. To address this, we enforce a hard constraint by normalizing the output of the dimensional advantage prediction network. Specifically, we apply a log-sum-exp subtraction, as defined in Eq. (16). This normalization ensures that Theorem 4.3 holds, thereby validating the correctness of all subsequent derivations. Additionally, we identified a typo in Eq. (16), which is missing the temperature parameter $\alpha$. The corrected equation is: $$ A^d(\mathbf{s}_t, \mathbf{a}^{-d}, a^d) = u^d(\mathbf{s}_t, \mathbf{a}^{-d}, a^d) - \alpha \log \sum_{a^{d'}} \exp ( \frac{1}{\alpha} u^d(\mathbf{s}_t, \mathbf{a}^{-d}, a^{d'}) )$$ We will rearrange the theoretical claims and correct this typo accordingly in the revised manuscript. ## 3. Evaluation on Fully Offline and Fully Online Settings We agree with the reviewer that evaluating ARSQ in both fully offline and fully online settings is essential for completeness. **In the fully offline setting**, we report experiments in Tab. 1 of the Supplementary Material, comparing ARSQ against representative offline RL and imitation learning methods. ARSQ demonstrates superior overall performance across tasks, highlighting its effectiveness on suboptimal offline data. **In the fully online setting**, we compare ARSQ against online CQN and PPO [2] in Fig. 1 of the Supplementary Material. Online ARSQ achieves better sample efficiency than CQN and PPO, underscoring its potential as a general-purpose reinforcement learning algorithm.
However, we observe that although online ARSQ eventually matches the converged performance of the standard ARSQ, it requires approximately $4\times$ more environment interactions to reach comparable performance, highlighting the importance of using offline datasets to enhance sample efficiency. [2] Schulman et al. (2017). Proximal policy optimization algorithms. arXiv:1707.06347. ## 4. Training Curves for D4RL Main Results (Fig. 4) Thank you for pointing out the absence of training curves. We have added detailed training curves for each task in Fig. 2 in Supplementary Material, clearly showing the number of online environment steps (approximately 25k–50k steps until convergence). We will also include these training curves in the revised manuscript to enhance clarity. ## 5. Tasks Evaluated in Figure 7 Thank you for pointing out the lack of clarity regarding the tasks evaluated. Figure 7 reports results on two D4RL tasks (*hopper-medium-expert* and *hopper-medium-replay*), and one RLBench task (*Open Oven*). We will clarify this in the revised manuscript. We sincerely appreciate your constructive feedback, which has significantly improved the quality and clarity of our paper. We hope these clarifications and additional results address your concerns, and we are happy to further discuss any remaining questions.
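The normalization identity in point 1 of this rebuttal (exponentiated soft advantages summing to one when the soft value is the $\alpha$-scaled log-sum-exp of the soft Q-values) can be checked numerically in the discrete-action case; the 7 actions and $\alpha = 0.5$ below are arbitrary toy choices.

```python
import numpy as np

alpha = 0.5
rng = np.random.default_rng(0)
Q = rng.standard_normal(7)                     # soft Q over 7 discrete actions
V = alpha * np.log(np.sum(np.exp(Q / alpha)))  # soft value: alpha * logsumexp
probs = np.exp((Q - V) / alpha)                # exponentiated soft advantages
# probs is a valid probability distribution by construction:
# exp((Q - V)/alpha) = exp(Q/alpha) / sum_a exp(Q_a/alpha)
```

In the continuous-action case the sum becomes the integral in Eq. 4, which is the identity proven in Theorem 1 of Haarnoja et al. (2017) as cited above.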
Summary: The paper proposes ARSQ, a value-based reinforcement learning method to improve learning from suboptimal data in continuous control tasks. Previous methods estimate Q-values independently for each action dimension, neglecting their interdependencies, which leads to biased action selection with mixed-quality data. ARSQ addresses this by modeling Q-values in an auto-regressive manner. A coarse-to-fine hierarchical discretization is proposed to improve efficiency in high-dimensional action spaces. Experiments on D4RL and RLBench benchmarks show that ARSQ achieves state-of-the-art performance. Claims And Evidence: The paper offers a good foundation for many of its claims, particularly through evaluations on standard benchmarks such as D4RL and RLBench. Still, tackling some remaining issues would further enhance the submission: 1. While ARSQ demonstrates improved performance when trained on suboptimal datasets, the nature of the suboptimality is not deeply analyzed. 2. The discretization method is discussed, but there is no direct comparison with alternative discretization strategies to prove its superiority. Methods And Evaluation Criteria: The proposed ARSQ and evaluation criteria are largely appropriate for the problem of RL from suboptimal data in continuous control. Theoretical Claims: The theoretical claims in the paper are generally well-motivated. Experimental Designs Or Analyses: The experimental design in the paper is generally well-structured, leveraging D4RL and RLBench as benchmark datasets, comparing against strong baselines, and including ablation studies to isolate key design choices. However, there are several areas where the soundness and validity of the experiments could be improved: there is no comparison to offline RL methods (such as IQL, CQL, etc.), or to RL/IL methods that handle suboptimal data (SafeDICE, EDAC). The reviewer believes a further comparison could enhance the soundness. Supplementary Material: I have gone through all supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper build upon several existing themes in reinforcement learning (RL), particularly in value-based RL for continuous control, handling suboptimal data, and hierarchical action representations. Essential References Not Discussed: [1] Jang, Y., Kim, G. H., Lee, J., Sohn, S., Kim, B., Lee, H., & Lee, M. (2023). SafeDICE: offline safe imitation learning with non-preferred demonstrations. Advances in Neural Information Processing Systems, 36, 74921-74951. [2] An, G., Moon, S., Kim, J. H., & Song, H. O. (2021). Uncertainty-based offline reinforcement learning with diversified Q-ensemble. Advances in Neural Information Processing Systems, 34, 7436-7447. [3] Kostrikov, I., Nair, A., & Levine, S. (2021). Offline reinforcement learning with implicit Q-learning. arXiv preprint arXiv:2110.06169. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A. Questions For Authors: 1. Theorem 4.3 assumes a normalization condition on the dimensional soft advantage function. Is there empirical validation or an approximation analysis of its impact when the condition is violated? 2. The hierarchical discretization approach breaks down continuous action selection, but does it introduce bias in Q-learning updates? Would the method still converge to the optimal Q-function under function approximation errors? 3. Can you provide an error analysis of ARSQ’s predictions when trained on suboptimal data? How well does ARSQ generalize to unseen tasks or out-of-distribution environments? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. Supplementary Material for our response is at [THIS LINK](https://anonymous.4open.science/r/icml25-Submission9509_2/fig_L88h.pdf). Below, we address each point raised: ## 1. Analysis of the Nature of Suboptimality We agree that explicitly analyzing the nature of suboptimality is important, and provide two additional analyses to better illustrate suboptimality of datasets: - Trajectory Reward Analysis (Fig. 1 in Supplementary Material): We visualize histograms of trajectory rewards in D4RL datasets, intuitively showing varying data quality. - Case Study (Fig. 2 in Supplementary Material): We introduce a simplified environment and compare ARSQ against action dimension independent Q decomposition methods. The learned Q landscape demonstrates ARSQ's ability to learn accurate Q-values despite suboptimal data. ## 2. Comparison to Alternative Discretization Strategies We agree that ablations of alternative discretization strategies are important. In fact, we have provided some analyses along these lines in Section 5.3, where we evaluate several alternative discretization variants, including variants without hierarchical coarse-to-fine discretization (w/o CF Cond., w/o CF), variants that generate actions independently for each dimension (w/o Dim Cond., Plain), and a variant that swaps the conditioning order (Swap). The results show that ARSQ consistently outperforms all other variants, highlighting the effectiveness of our discretization strategy. ## 3. Comparison to Offline RL and Offline IL Methods We appreciate this valuable suggestion. In response, we have included additional comparisons between ARSQ and several popular offline RL and IL methods, in Table 1 of Supplementary Material. Regarding mentioned baselines: - **SafeDICE** primarily targets scenarios with labeled non-preferred trajectories. 
Thus, we adopt its preliminary version, **DWBC**, which is more applicable to our setting. - **EDAC** is orthogonal and can be integrated into ARSQ in principle. However, due to time constraints, we leave this integration for future work. - **IQL** and other offline RL/IL baselines are included in our revised paper. These comparisons further support the effectiveness of ARSQ in learning from suboptimal data. ## 4. Analysis of Theorem 4.3 The normalization condition in Theorem 4.3 (Eq. 12) is indeed essential for the conclusion that the soft advantage can be decomposed into the summation of dimensional soft advantages (Eq. 13). Without this assumption, dimensional decomposition does not hold. To ensure this normalization constraint in practice, we apply a hard constraint during training via log-sum-exp subtraction (Eq. 16), ensuring consistency and theoretical validity. We also identified a typo in Eq. 16, where the temperature parameter $\alpha$ was inadvertently omitted. The corrected equation is: $$ A^d(\mathbf{s}_t, \mathbf{a}^{-d}, a^d) = u^d(\mathbf{s}_t, \mathbf{a}^{-d}, a^d) - \alpha \log \sum_{a^{d'}} \exp\left( \frac{1}{\alpha} u^d(\mathbf{s}_t, \mathbf{a}^{-d}, a^{d'}) \right)$$ We will correct this typo in the revised manuscript. ## 5. Convergence to Optimal Q-function under Approximation Errors Our algorithm updates value and advantage functions via soft Bellman iteration (Eq. 14). Haarnoja et al. [1][2] have demonstrated that, in the absence of approximation errors, the optimal soft Bellman operator possesses a unique fixed point and is a contraction mapping in the infinity norm with contraction factor $\gamma < 1$. Thus, we expect our value iteration to converge close to the optimal solution if approximation errors remain bounded. [1] Haarnoja et al. (2017). Reinforcement learning with deep energy-based policies. ICML. [2] Haarnoja et al. (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. ICML. ## 6. 
Hierarchical Discretization Bias and Error Analysis with Suboptimal Data We conducted an error analysis in a simplified 2D environment, in Fig. 2 & 3 and Tab. 2 in Supplementary Material. | Discretization | Q Prediction Error | |----------------|--------------------| | Independent | 17.57 ± 0.67 | | ARSQ w/o CF | 0.50 ± 0.21 | | ARSQ | 0.16 ± 0.02 | The results demonstrate that both dimensional conditioning and hierarchical coarse-to-fine (CF) discretization significantly reduce Q-value prediction bias, further highlighting the necessity of our action discretization strategy. ## 7. Generalization to Unseen Tasks or Out-of-Distribution Environments We agree that analyzing ARSQ's generalization to unseen tasks or out-of-distribution environments is highly relevant. However, such analysis is beyond our current scope. We acknowledge this as an important future research direction. We thank the reviewer again for their valuable feedback, which significantly improves the clarity and rigor of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I will maintain my rating, with a positive inclination toward acceptance.
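As a side note on point 4 of the rebuttal above: the effect of the log-sum-exp subtraction can be checked numerically. A minimal sketch (hypothetical utility values; not the authors' code) verifying that the corrected normalization makes the dimensional soft advantages satisfy $\sum_{a} \exp(A^d(a)/\alpha) = 1$:

```python
import numpy as np

def normalize_soft_advantage(u, alpha):
    # A = u - alpha * log sum_a exp(u_a / alpha), as in the corrected Eq. 16.
    # After this subtraction, sum_a exp(A_a / alpha) = 1 by construction.
    return u - alpha * np.log(np.sum(np.exp(u / alpha)))

# Hypothetical per-action utilities for one action dimension.
u = np.array([1.0, -0.5, 2.0])
alpha = 0.5
A = normalize_soft_advantage(u, alpha)
print(np.sum(np.exp(A / alpha)))  # ≈ 1.0
```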
Summary: This paper studies the problem of reinforcement learning for continuous control with action discretization, specifically focusing on offline RL (D4RL benchmark) and RL from demonstrations (RLBench) as problem settings. A key limitation of prior work that leverages action discretization is the explosion in dimensionality that occurs if the continuous action space is jointly binned in a grid-like manner across all dimensions. Another option that prior work has explored is the use of separate action binning across each dimension, which avoids the aforementioned explosion but in turn makes an assumption about action dimension independence. This work proposes ARSQ, a method for coarse-to-fine hierarchical action discretization in an auto-regressive framework that overcomes both of these two issues by auto-regressively conditioning action dimension/coarse-to-fine sampling on the previously sampled ones (nicely illustrated in Figure 2). Experiments are conducted on D4RL and RLBench, and the proposed method outperforms CQN, an existing method for coarse-to-fine discretization, as well as recent methods for imitation learning (BC, ACT) and online RL (DrQ-v2). ## Post-rebuttal assessment I appreciate the detailed response to my comments. The additional comparisons and clarifications address my concerns. I am raising my score from Weak Accept -> Accept under the assumption that these changes (along with those requested by my fellow reviewers) will be included in the camera-ready revision. Claims And Evidence: The main claims of the paper (that auto-regressive conditioning is beneficial, and that ARSQ outperforms existing methods on competitive benchmarks) are supported by empirical evidence. I believe that the proposed method is well motivated based on observed limitations in prior work, and the illustrative example in Figure 1 is helpful for understanding the problem. 
Ablations support the claim that the specific formulation of auto-regressive conditioning used in the proposed method is beneficial and favorable over a number of alternatives (e.g. conditioning only on dimensions or coarse-to-fine levels). Methods And Evaluation Criteria: The experimental setup is appropriate for evaluation of the method in question. D4RL and RLBench are competitive, commonly used environments in related literature, and the two benchmarks span both low-dimensional state representations as well as RGB images as inputs, and cover both pure offline RL and RLfD. Results are mostly convincing. I presently have three concerns regarding the evaluation: * There are few baseline results included for D4RL. I find this somewhat surprising given that it is a well established benchmark with lots of benchmark results readily available in prior literature. I would recommend that the authors include a few more baseline comparisons, e.g. IQL [1] and CQL [2]. I understand that this submission focuses on action discretization and that the aforementioned methods do not, but it would be helpful to have a set of established results on this benchmark for comparison. I believe that numbers can be extracted directly from the respective papers. * Number of seeds is not specified, except for L350 referencing D4RL. It would be greatly appreciated if the authors could clearly detail the number of seeds used for each experiment/figure. * Given that DrQ-v2+ is not a well established method but rather proposed as a baseline in prior work CQN, it would be helpful to include a description of this method in the submission (to make it self-contained) as well as a (potentially brief) comment on how the baseline was obtained/implemented (presumably on top of the public implementation of DrQ-v2). 
[1] Kostrikov et al., Offline Reinforcement Learning with Implicit Q-Learning, https://arxiv.org/abs/2110.06169 (2021) [2] Kumar et al., Conservative Q-Learning for Offline Reinforcement Learning, https://arxiv.org/abs/2006.04779 (2020) Theoretical Claims: I have not checked the theoretical claims in detail, but did not encounter any glaring issues while reading through the paper. Experimental Designs Or Analyses: See my previous comments in the "methods and evaluation criteria" field. Supplementary Material: I skimmed through the directories and some of the code. I appreciate inclusion of source-code in the submission. Relation To Broader Scientific Literature: The work is generally well positioned wrt prior literature on the topic of action discretization. Essential References Not Discussed: I am not aware of any works that are currently missing from the list of references, but it is possible that I missed some. Other Strengths And Weaknesses: In summary **Strengths:** The paper is generally well written and easy to follow. The illustrations are helpful for understanding the problem setting and proposed method. The experimental setup appears solid (aside from the few concerns already mentioned), and empirical performance gains are substantial. I have no concerns regarding the originality of the work. I believe that the contributions will be of interest to the ICML community. **Weaknesses:** I have a few concerns regarding the experimental evaluation, namely lack of (1) baseline results for D4RL, (2) clarity regarding number of seeds, and (3) details regarding the DrQ-v2+ baseline. I believe that these issues, while important, can easily be addressed during the rebuttal period. Other Comments Or Suggestions: It is clear from Eq. 17-18 how the value target is computed, but the couple of lines preceding that are a bit ambiguous. 
Maybe the authors can consider rephrasing it to make it explicit that they train two value networks and use two target networks that (presumably) are exponential moving averages of the two online value networks + that the value target then is computed as the minimum of the two targets. This is standard practice in the field but may not be obvious to uninitiated readers. Also minor, but some of the figures are rather small and/or have small fonts which makes them difficult to read. I suggest that the authors revisit the formatting of the paper to mitigate that. Questions For Authors: I would like the authors to address my previous comments regarding the experimental evaluation / weaknesses. Details about baselines such as DrQ-v2 and DrQ-v2+ could potentially be added to the appendices. Code Of Conduct: Affirmed. Overall Recommendation: 4
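As an aside for readers unfamiliar with the approach, the coarse-to-fine discretization described in the summary above can be illustrated with a minimal 1D sketch (hypothetical greedy selection with an analytic Q-function; the actual method additionally conditions auto-regressively across action dimensions and levels):

```python
import numpy as np

def coarse_to_fine_argmax(q_fn, low=-1.0, high=1.0, bins=3, levels=3):
    """Greedy coarse-to-fine action selection in 1D (illustrative only).

    At each level the current interval is split into `bins` sub-intervals;
    the sub-interval whose center has the highest Q-value is kept and
    refined at the next level.
    """
    for _ in range(levels):
        edges = np.linspace(low, high, bins + 1)
        centers = (edges[:-1] + edges[1:]) / 2.0
        best = int(np.argmax([q_fn(c) for c in centers]))
        low, high = edges[best], edges[best + 1]
    return (low + high) / 2.0

# Example: a quadratic Q peaked at a* = 0.37; three levels of three bins
# resolve the maximizer to within the finest bin width.
a = coarse_to_fine_argmax(lambda x: -(x - 0.37) ** 2)
```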
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed and constructive feedback. We appreciate the positive remarks about the motivation, clarity, originality, and empirical results of our work. Below, we respond to the reviewer’s concerns point-by-point: ## 1. Additional Baseline Results on D4RL We agree with the reviewer that additional baseline comparisons on the well-established D4RL benchmark would strengthen our evaluation. Following the reviewer’s suggestion, we have included results from several offline RL methods (CQL[1], IQL[2], TD3+BC[3], Onestep RL[4], RvS-R[5]) and offline imitation methods (Filtered BC[5], Decision Transformer(DT)[6], DWBC[7]). We have included the results in Table 1 of [Supplementary Material](https://anonymous.4open.science/r/icml25-Submission9509_2/fig_5ENF.pdf). All data are sourced from the respective papers, and we re-evaluate DWBC with extending datasets (marked by "*"). These results are summarized below for convenience: | Dataset | CQL | IQL | TD3+BC | Onestep RL | RvS-R | Filt. 
BC | DT | DWBC | **ARSQ (Ours)** | |----------------|--------|-------|--------|------------|-------|----------|-------|---------|-----------------| | halfcheetah-m | 44.0 | 47.4 | 48.3 | *48.4* | 41.6 | 42.5 | 42.6 | \*41.4 | 43.7 ± 0.6 | | hopper-m | 58.5 | 66.3 | 59.3 | 59.6 | 60.2 | 56.9 | 67.6 | \*56.0 | *99.2 ± 0.5* | | walker2d-m | 72.5 | 78.3 | *83.7* | 81.8 | 71.7 | 75.0 | 74.0 | \*72.3 | 81.2 ± 0.9 | | halfcheetah-mr | *45.5* | 44.2 | 44.6 | 38.1 | 38.0 | 40.6 | 36.6 | 38.9 | 41.1 ± 0.1 | | hopper-mr | 95.0 | 94.7 | 60.9 | *97.5* | 73.5 | 75.9 | 82.7 | 73.0 | 90.7 ± 4.4 | | walker2d-mr | 77.2 | 73.9 | *81.8* | 49.5 | 60.6 | 62.5 | 66.6 | 59.8 | 74.0 ± 2.6 | | halfcheetah-me | 91.6 | 86.7 | 90.7 | *93.4* | 92.2 | 92.9 | 86.8 | \*93.1 | 92.4 ± 1.2 | | hopper-me | 105.4 | 91.5 | 98.0 | 103.3 | 101.7 | *110.9* | 107.6 | \*110.4 | *110.9 ± 1.0* | | walker2d-me | 108.8 | 109.6 | 110.1 | *113.0* | 106.0 | 109.0 | 108.1 | \*108.3 | 107.9 ± 0.3 | | Total | 698.5 | 692.4 | 677.4 | 684.6 | 645.5 | 666.2 | 672.6 | 653.2 | **741.1** | The above results demonstrate that ARSQ achieves competitive performance with existing offline RL/IL algorithms, further demonstrating its potential as a versatile RL algorithm. [1] Kumar et al. (2020). Conservative q-learning for offline reinforcement learning. NeurIPS 33. [2] Kostrikov et al. (2021). Offline reinforcement learning with implicit q-learning. arXiv:2110.06169. [3] Fujimoto et al. (2021). A minimalist approach to offline reinforcement learning. NeurIPS 34. [4] Brandfonbrener et al. (2021). Offline rl without off-policy evaluation. NeurIPS 34. [5] Emmons et al. (2021). Rvs: What is essential for offline rl via supervised learning? arXiv:2112.10751. [6] Chen et al. (2021). Decision transformer: Reinforcement learning via sequence modeling. NeurIPS 34. [7] Xu et al. (2022). Discriminator-weighted offline imitation learning from suboptimal demonstrations. ICML. ## 2. 
Clarification on Number of Seeds We apologize for the previous lack of clarity regarding the number of random seeds used. To clarify, all experiments were conducted using three different random seeds. We will ensure this information is clearly stated in the revised version of the paper. ## 3. Details on the DrQ-v2+ Baseline We thank the reviewer for pointing out this omission. DrQ-v2+ is an enhanced variant of DrQ-v2, proposed and open-sourced by Seo et al. [8]. It incorporates several key improvements, including a distributional critic, an exploration strategy using small Gaussian noise, and optimized hyperparameters. These enhancements make DrQ-v2+ a more competitive baseline compared to the original DrQ-v2. We will include a detailed description of the DrQ-v2+ baseline in the revised paper. [8] Seo et al. Continuous Control with Coarse-to-fine Reinforcement Learning. CoRL. ## 4. Clarification on Target Value Computation Specifically, we train two separate value networks and maintain two corresponding target networks, which are updated using exponential moving averages of the online value networks. The value targets are computed as the minimum of the two target network outputs. We will clarify this in the revised paper. ## 5. Improved Figure Readability We thank the reviewer for highlighting the readability issues in some figures. We will improve this in the revised paper. We greatly appreciate the reviewer's insightful feedback, which has helped us significantly improve the manuscript. We would be happy to incorporate any additional suggestions from the reviewer.
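The target computation described in point 4 above can be sketched as follows (a toy illustration with linear value functions standing in for the two networks; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two online value networks: linear V(s) = w @ s.
w1, w2 = rng.normal(size=3), rng.normal(size=3)
# The two target networks start as copies of the online ones.
w1_tgt, w2_tgt = w1.copy(), w2.copy()

def ema(target, online, tau=0.005):
    # Target parameters track the online ones by exponential moving average.
    return (1.0 - tau) * target + tau * online

def value_target(s, r, gamma=0.99):
    # Clipped double estimate: minimum of the two target-network values.
    return r + gamma * min(w1_tgt @ s, w2_tgt @ s)

s = np.ones(3)
y = value_target(s, r=1.0)
w1_tgt, w2_tgt = ema(w1_tgt, w1), ema(w2_tgt, w2)
```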
Pairwise Maximum Likelihood For Multi-Class Logistic Regression Model With Multiple Rare Classes
Accept (poster)
Summary: This paper addresses the problem of multi-class logistic regression in scenarios with class imbalance: specifically, one class is overwhelmingly dominant and the remaining classes are rare. The authors develop a theoretical framework demonstrating that, under appropriate assumptions and asymptotic conditions, the maximum likelihood estimator (MLE) is asymptotically normal with a mean-zero error, and has an asymptotically block-diagonal covariance structure, implying that the parameters for the rare classes are asymptotically independent. Building on this insight, the paper proposes a pairwise maximum likelihood estimator (PMLE) that decomposes the multi-class problem into separate binary logistic regression problems, along with a subsample-based variant (SPMLE) that reduces computational costs by down-sampling the major class. ## Update After Rebuttal I raised my score as a result of a discussion with the authors (see below). Claims And Evidence: ### Interesting theoretical results with limited motivation. The paper presents intriguing theoretical results that decompose a multi-class classification problem with imbalanced classes into a set of pairwise binary problems. However, the overall motivation and practical impact remain questionable. The introduction discusses customer purchase behavior and related contexts, which might lead readers to expect approaches involving gaze tracking or attention mechanisms integrated into an object detection pipeline. Instead, the paper ultimately focuses on logistic regression applied to sub-image classification. For instance, one might naturally consider a two-stage approach—first detecting car plates and then classifying them—to mitigate the imbalance issue, an alternative not discussed by the authors. ### Strong assumptions with limited practical relevance. The theoretical guarantees rely on specific conditions such as $\alpha_N \to -\infty$ and $\alpha_N + \log N \to \infty$. 
These conditions ensure that while the probability of a rare event converges to zero, the absolute number of rare events still diverges, allowing for asymptotic analysis. In practice, however, these assumptions may not hold. For example, in a dataset of car images, the probability that a car plate is visible might remain roughly constant regardless of $N$ (assuming each image reliably shows a car plate). Moreover, the convergence rate is effectively downgraded from $1/\sqrt{N}$ to $1/\sqrt{Ne^{\alpha_N}}$, meaning that when the rare events are very infrequent (i.e., $e^{\alpha_N}$ is very small), the effective sample size for estimation is substantially reduced, resulting in much slower convergence. This raises concerns about the practical feasibility of the method even if the theoretical assumptions are met. ### Limited experimental validation. While the paper supports its theoretical claims with simulation studies and a single real-world TikTok Screenshots dataset, the experimental evidence is limited. The evaluation compares only the authors’ own MLE-based variants (GMLE, PMLE, and SPMLE) without benchmarking against alternative methods—such as a detection-classification pipeline or other imbalance-handling techniques (e.g., cost-sensitive learning, upsampling, or downsampling)—that are commonly used in practice. This narrow scope of experiments makes it challenging to assess the broader impact and practical relevance of the proposed approach. Overall, although the paper’s theoretical contributions are solid, the motivation, practical relevance, and experimental validation leave several claims insufficiently supported by convincing evidence. Methods And Evaluation Criteria: I incorporated methodological concerns into the above “Claims and Evidence” section. Theoretical Claims: I have reviewed the high-level theoretical claims but I did not perform a line-by-line verification of the detailed proofs in the main text and Appendix. 
Consequently, while the arguments appear plausible, there remains a possibility that oversights exist. Experimental Designs Or Analyses: I incorporated experimental concerns into the above “Claims and Evidence” section. Supplementary Material: The authors do not provide supplementary material. Relation To Broader Scientific Literature: Their theoretical findings, notably the asymptotically block-diagonal covariance structure and the derivation of a convergence rate, provide rigorous insights into the behavior of maximum likelihood estimators under extreme imbalance. While the focus is on logistic regression, these insights have the potential to inform future research on more complex models, advancing our understanding of algorithm design in extreme imbalance scenarios. Essential References Not Discussed: The paper would benefit from a broader discussion of the imbalanced classification literature. For example, recent advances in deep learning for imbalanced data, such as focal loss (Lin et al., ICCV 2017) and class-balanced loss based on the effective number of samples (Cui et al., CVPR 2019), have shown promising results in mitigating imbalance issues, especially in object detection tasks. Discussing these methods would help situate the paper’s contributions within the broader context of imbalanced classification and highlight potential avenues for extending the theoretical results to more complex models. Other Strengths And Weaknesses: ### Other strengths to note: Building on their theoretical insights, the authors introduce the pairwise maximum likelihood estimator (PMLE) and its subsample-based variant, SPMLE. SPMLE keeps the same convergence rate as PMLE while efficiently reducing the computational burden without sacrificing statistical efficiency. ### Minor weaknesses: - The term “NR algorithm” is undefined. I assume it refers to the Newton-Raphson algorithm, but a clear definition or brief explanation would improve clarity. 
- To ensure reproducibility, it would be beneficial if the authors made the code available. Other Comments Or Suggestions: The paper was an enjoyable read. Please note that my comments represent my initial impressions and may include misunderstandings. I welcome further discussion on these points and am open to revising my score once my questions and concerns are adequately addressed. Questions For Authors: I have incorporated questions into the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive suggestions. The concerns have been addressed as follows. 1. **Motivation and Practical Impact.** - **Theoretical Motivation.** The focus here is the theoretical investigation of logistic regression with rare classes. Specifically, we find that the asymptotic covariance of the resulting MLE is block-diagonal, which further inspires the novel PMLE method. All the discussions in the introduction about customer purchase behavior are used to demonstrate the practical importance of our problem. However, with your kind reminder, we have realized that we might have overemphasized this point. We have now rewritten the section for better elaboration. - **Comparison with Two-Stage Approach.** We consider an idealized situation in which all car plates have been correctly detected. All rare classes have been perfectly separated from the major one. We then only need to focus on rare classes. The detailed classification results on the TikTok Screenshots (TTS) dataset are given in Table A.1. Even under this idealized situation, the resulting classification accuracy (RARE) is much worse than our one-stage strategy (GMLE, PMLE and SPMLE). This is not surprising since the useful information contained in the major class was not effectively used in the second stage. In contrast, all the information from both major and rare classes is comprehensively used by the one-stage strategies. ### Table A.1: Prediction results for the TTS dataset. | | GMLE | PMLE | SPMLE | RARE | | --- | --- | --- | --- | --- | | ACC | 0.836 | 0.835 | 0.824 | 0.745 | | AUC | 0.997 | 0.999 | 0.999 | 0.969 | 2. **Theoretical Properties.** - **Assumption.** The theoretical assumptions $\alpha_N \to -\infty$ and $\alpha_N+\log N \to \infty$ are needed to allow rigorous asymptotic analysis. This leads to the block-diagonal structure of the asymptotic covariance for the MLE, and further leads to the novel PMLE method. 
Without this specific condition, we can never obtain those theoretical findings and then be inspired to develop the PMLE method. Moreover, those conditions seem to be very standard in the literature. See, for example, Equation (2) in Wang (2020, ICML) and Section 2 in Wang et al. (2021, NeurIPS). - **Convergence Rate.** The convergence rate reflects that a sufficient number of samples must be provided for both the major class and every rare class. Otherwise, no consistent parameter estimates can be obtained. The much slower convergence rate of $1/\sqrt{Ne^{\alpha_N}}$ is as expected. In fact, this is a phenomenon that has been well documented in the literature. See, for example, Wang (2020, ICML), Song and Zou (2024, TIT), and Wang et al. (2021, NeurIPS). We have now made this point clear in the revision. 3. **Empirical Comparison with Baseline Methods.** We have included the following methods for comparison on the TTS dataset: the focal loss (FL) of Lin et al. (2017), the class-balanced loss (CBL) of Cui et al. (2019), the cost-sensitive loss (CSL) and random downsampling (RDS) of Fernández et al. (2018). All methods are optimized according to the suggestions of the original papers. The results are summarized below in Table A.2. We are happy to report that our method outperforms all competitors. ### Table A.2: Prediction results for the TTS dataset. | | GMLE | PMLE | SPMLE | FL | CBL | CSL | RDS | | --- | --- | --- | --- | --- | --- | --- | --- | | ACC | 0.836 | 0.835 | 0.824 | 0.794 | 0.789 | 0.747 | 0.763 | | AUC | 0.997 | 0.999 | 0.999 | 0.998 | 0.998 | 0.991 | 0.996 | 4. **Minor Issues.** For the other issues, we have also addressed them carefully. - “NR algorithm” has been referred to as the Newton-Raphson algorithm in the revision. - The code has been made publicly available at [https://anonymous.4open.science/r/Anony-63CC](https://anonymous.4open.science/r/Anony-63CC). **Additional References** 1. Cui, Y., Jia, M., Lin, T. 
Y., Song, Y., & Belongie, S., 2019, Class-balanced loss based on effective number of samples. CVPR. 2. Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P., 2017, Focal loss for dense object detection. ICCV. 3. Song, Y., & Zou, H., 2024, Minimax optimal rates with heavily imbalanced binary data. IEEE TIT. 4. Fernández, A., García, S., Galar, M., Prati, R. C., Krawczyk, B., & Herrera, F, 2018, Learning from imbalanced data sets. Springer. --- Rebuttal Comment 1.1: Comment: I greatly appreciate the authors’ response in thoroughly clarifying my concerns. I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: We truly appreciate the valuable time and effort you have dedicated to reviewing our submission. Thank you so much for your support.
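For readers skimming this thread, the pairwise decomposition at the heart of PMLE can be sketched in a few lines (synthetic data; plain gradient ascent in place of the paper's Newton-Raphson solver; the SPMLE subsampling of the major class is omitted), purely as an illustration:

```python
import numpy as np

def fit_binary_logistic(X, y, lr=0.5, iters=2000):
    """Binary logistic regression via gradient ascent (illustration only)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)
    return w

def pairwise_mle(X, labels, K):
    """PMLE idea: one binary fit per rare class k against the major class 0."""
    major = labels == 0
    params = {}
    for k in range(1, K + 1):
        mask = major | (labels == k)
        params[k] = fit_binary_logistic(X[mask], (labels[mask] == k).astype(float))
    return params

rng = np.random.default_rng(1)
N = 2000
X = np.column_stack([np.ones(N), rng.normal(size=N)])        # intercept + one feature
labels = rng.choice([0, 1, 2], size=N, p=[0.9, 0.05, 0.05])  # one major, two rare classes
params = pairwise_mle(X, labels, K=2)
```

Each rare class is estimated from its own binary subproblem, which is what allows the problems to be solved separately (and, per Theorem 2.1, without asymptotic efficiency loss).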
Summary: This paper focuses on multi-class logistic regression with one major class and multiple rare classes, a problem arising from real applications. As suggested by Theorem 2.1, the standard maximum likelihood estimators, as well as the re-parametrized version, are asymptotically independent for different rare classes, which in turn motivates the development of a new algorithm, PMLE, that solves multiple pairwise log-likelihood functions. Since the pairwise log-likelihood functions still involve the large number of samples in the major class, minimizing the pairwise log-likelihood functions is still computationally challenging. To further accelerate the computation, the authors propose to solve subsample-based pairwise log-likelihood functions, termed SPMLE. Theoretically, the authors prove that the newly-proposed algorithms, PMLE and SPMLE, require lower computational cost compared to the standard maximum likelihood estimators without compromising asymptotic efficiency. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. The proposed methods are corroborated by simulation studies and real-example analysis. Supplementary Material: No Supplementary Material submitted. Relation To Broader Scientific Literature: A result similar to Theorem 2.1 in the submitted manuscript is also established by Wang (2020); however, their work focuses on an imbalanced two-class problem, whereas the manuscript addresses a multi-class setting with one major class and multiple rare classes. Notably, a new computationally efficient algorithm with theoretical guarantees is proposed in the manuscript, advancing beyond the scope of prior work. [1] Wang, H. Logistic regression for massive data with rare events. In International Conference on Machine Learning, pp. 9829–9836. PMLR, 2020. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** 1. 
Inspired by the asymptotic covariance matrix of the global MLE, the manuscript proposes two methods, PMLE and SPMLE, which have a significant advantage in computation and are specifically designed for the problem of multi-class logistic regression with one major class and multiple rare classes. 2. The established theory for PMLE and SPMLE is quite impressive. It reveals that the newly-proposed algorithms require lower computational cost without compromising asymptotic efficiency. 3. Comprehensive numerical studies, including simulations and real-data analyses, demonstrate the effectiveness of the proposed methods. Weaknesses: The appendix should be better organized. For instance, Lines 629-639 currently appear cluttered and could be improved by using displayed formulas for better clarity and readability. Other Comments Or Suggestions: Typographical remarks: 1. In Line 93, it should be $1\le i\le N$. Questions For Authors: In this work, the number of classes $K$ is treated as fixed. I think it may be interesting to investigate the theoretical behavior of the proposed method in the case where $K$ diverges as the number of samples increases. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for all the constructive suggestions. All the concerns have been well received and carefully addressed as follows. 1. **Diverging Number of Classes.** With a diverging $K$, the expected percentage of rare classes should be even smaller. To ensure a diverging sample size for each rare class, the technical assumption $\alpha_N \to -\infty$ should be replaced by $\alpha_N + \log K \to -\infty$. The resulting theoretical behavior of our proposed methods remains essentially the same. More specifically, the PMLE remains $\sqrt{Ne^{\alpha_N}}$-consistent. The PMLEs associated with different rare classes remain mutually independent asymptotically. More importantly, the resulting PMLE should be asymptotically as efficient as GMLE. To numerically demonstrate this point, we replicate the simulation example but with $K = [N^{0.25}]$. See below Table A for the detailed simulation results. We find that (1) both GMLE and PMLE remain statistically consistent, and (2) their asymptotic efficiency seems to be the same, with extremely similar RMSE values. We have now made this point clear in the revision. ### Table A: Simulation results with diverging $K$. | N | $10^4$ | $2.5\times10^4$ | $5\times10^4$ | $7.5\times10^4$ | $10^5$ | |-----------|--------|----------------|--------------|-----------|--------| | K | 10 | 12 | 14 | 16 | 17 | | GMLE | 0.114 | 0.091 | 0.077 | 0.068 | 0.064 | | PMLE | 0.114 | 0.091 | 0.077 | 0.068 | 0.064 | | SPMLE | 0.116 | 0.093 | 0.078 | 0.069 | 0.064 | 2. **Minor Issues.** For the other issues that you have mentioned, we have also corrected them carefully. (1) In the revision, we have reorganized Lines 629-639 in the Appendix. (2) In Line 93, it should be $1 \le i \le N$. We have now made it clear in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! I will keep my already positive score. 
--- Reply to Comment 1.1.1: Comment: Thank you for the valuable time and effort you have spent reviewing our submission. We are truly grateful for your support of our work.
Summary: This paper studies the parameter estimation problem for the multi-class logistic model with one major class and $K$ rare classes. The main observation is that, under certain decay-rate assumptions on the coefficients of the rare classes, the joint MLE estimator is asymptotically equivalent to the pairwise MLE estimator between each rare class and the major class, and the pairwise MLE estimator significantly reduces the computational complexity. The authors also extend the theoretical guarantees of the pairwise estimators to the sub-sampled setting. Claims And Evidence: The theoretical statements and contributions made in this paper are easy to understand. However, there are several points that could further improve readability: 1. I am a bit confused about the estimator under the model described in Equation (1). From my understanding, all subsequent operations (e.g., pairwise estimates, sub-sampling) are based on the model defined in Equation (2). Why not start directly with the model in Equation (2)? Additionally, the reductions made from the model in Equation (1) to the model in Equation (2) (paragraph 2 in section 2.2) should be explained more clearly. 2. Please consider formally stating the technical assumptions mentioned in the last paragraph of Section 2.1. Methods And Evaluation Criteria: The proposed method (pairwise MLE) is the main contribution of this paper and is easy to apply. The applied evaluation criterion (estimation error) is standard. Theoretical Claims: I have not checked all the details of the theoretical proofs, but the independence structure revealed by Theorem 2.1 makes sense to me given the assumptions made in this paper. And Theorems 2.2 and 2.3 also seem reasonable given such independence structure. Experimental Designs Or Analyses: No Supplementary Material: Yes, I have reviewed the proofs of the theoretical results, though I have not checked all of them in detail. 
Relation To Broader Scientific Literature: This paper studies a special structure in multi-class logistic regression, which is new in this area. And the proposed new estimator provides an efficient algorithm under such structure, as demonstrated in their experiments. Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: The studied setting is new, and the proposed method is new and efficient. Weakness: 1. In my opinion, the assumptions made regarding rare classes are too strong, which weakens the theoretical contributions of this paper. Specifically, the theoretical guarantees require the rare classes to be sufficiently balanced and impose an explicit rate bound on $\alpha_N$. I tend to believe this is a relatively loose sufficient condition, as no lower bound results or counterexamples are provided to justify its necessity. 2. The assumption that there exists only one major class (corresponding to $k=0$) seems restrictive. Can the current results be extended to a more general setting where there are $m$ major classes? Other Comments Or Suggestions: No Questions For Authors: My questions are: 1. Are there any necessity results on the decay rate condition $\alpha_N + \log N \to +\infty$? 2. Can the assumption of a single major class be extended to accommodate a general case with $m$ major classes? Code Of Conduct: Affirmed. Overall Recommendation: 3
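To make the reviewer's Question 1 concrete, here is a heuristic one-line calculation (added for illustration; it ignores the normalizing denominator, which tends to one as $\alpha_N \to -\infty$) of what the decay-rate condition controls:

```latex
\mathbb{E}(N_k) \;=\; N\, P(Y_i = k)
\;\approx\; N e^{\alpha_N}\, \mathbb{E}\bigl[e^{Z_i^{\top}\theta_k^*}\bigr]
\;=\; e^{\alpha_N + \log N}\, \mathbb{E}\bigl[e^{Z_i^{\top}\theta_k^*}\bigr].
```

Hence $\alpha_N + \log N \to +\infty$ makes the expected rare-class sample size diverge, while $\alpha_N \to -\infty$ keeps each class rare; whether this sufficient condition is also necessary is exactly what the reviewer asks.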
Rebuttal 1: Rebuttal: We thank the reviewer for all the constructive suggestions. All the concerns have been well received and carefully addressed as follows. 1. **Relationship between Two Models.** Note that the model (2) is a special case of the model (1), which is a more general model. As you have correctly noted, we can indeed start with the model (2) directly. However, we then lose the opportunity to explain why the model (2) is specified in its current form. Specifically, by starting from the general model (1), it seems clear that we have to set $\alpha_{N k}\rightarrow -\infty$. Otherwise, the rare class phenomenon cannot be theoretically reflected. Moreover, we cannot have balanced rare classes unless $\alpha_{N k_1}-\alpha_{N k_2}=O(1)$ for any $k_1\neq k_2$. Both the conditions (i.e., $\alpha_{N k}\rightarrow -\infty$ and $\alpha_{N k_1}-\alpha_{N k_2}=O(1)$) enable us to re-parameterize $\alpha_{Nk} = \alpha_{N}+\beta_{0k}$. That leads to the model (2). We now have explained this issue in the revision more clearly. Thank you so much for this kind reminder. 2. **Imbalanced Class Assumption.** Requiring the rare classes to be sufficiently balanced is a relatively loose sufficient condition and not very necessary. In fact, if rare classes are unbalanced, PMLE can still be applied without any difficulty. However, there is one critical condition. That is the sample size of the target rare class must be sufficiently large. Otherwise, no consistent classifier can be learned. This seems to be a very basic and reasonable condition, which has been widely used in the past literature (Wang, 2020, ICML; Li et al., 2024, JMLR; Wang et al., 2021, NeurIPS). However, this very basic and reasonable condition can be easily violated, if rare classes are highly unbalanced. In such cases, some classes may be even rarer than others. We refer to those even rarer classes as tiny classes, whose sample sizes are often too tiny to support any meaningful statistical learning. 
This is the only reason we focus on the balanced case in our theoretical analysis. We now have made it clear in the revision. 3. **Multiple Major Classes.** Our method can be readily extended to a more general setting with multiple major classes. To fix the idea, consider, for example, the case with two major classes. Denote the two major classes by $k = 0$ and $k = 1$, respectively. Then, the logistic regression model becomes:
$$P(Y_i = 0 \mid X_i)=\frac{1}{1+\exp(Z_i^{\top} \theta^*_1)+\sum_{k = 2}^K \exp(Z_i^{\top} \theta^*_k+\alpha_N)},$$
$$P(Y_i = 1 \mid X_i)=\frac{\exp(Z_i^{\top} \theta^*_1)}{1+\exp(Z_i^{\top} \theta^*_1)+\sum_{k = 2}^K \exp(Z_i^{\top} \theta^*_k+\alpha_N)}, \tag{A.1}$$
$$P(Y_i = k \mid X_i)=\frac{\exp(Z_i^{\top} \theta^*_k+\alpha_N)}{1+\exp(Z_i^{\top} \theta^*_1)+\sum_{k = 2}^K \exp(Z_i^{\top} \theta^*_k+\alpha_N)} \quad \text{for } 2 \leq k \leq K. \tag{A.2}$$
The key difference between (A.1) and (A.2) is that the diverging intercept $\alpha_N$ is involved in (A.2) for rare classes but not in (A.1) for the major class. To estimate the model parameters, the pairwise log-likelihood in Section 2.4 remains valid. The only difference is that the convergence rate of $\hat\beta_1$ becomes $\sqrt{N}$. However, the convergence rate of the parameters associated with the rare classes (i.e., $\hat\beta_k$ for $2 \leq k \leq K$) remains $\sqrt{N e^{\alpha_N}}$. We now have made it clear in the revision. 4. **Minor issues.** For the other issues that you have mentioned, we have also corrected them carefully. (1) We have now formally stated the technical assumptions in the last paragraph of Section 2.1 as: "Assume (C1) $\alpha_{N k}\rightarrow -\infty$ and (C2) $\alpha_{N k}+\log N\rightarrow \infty$ as $N\rightarrow \infty$ for every $1\leq k\leq K$." (2) We have carefully discussed the decay rate condition $\alpha_N+\log N\rightarrow \infty$. Without this condition, we might have $\alpha_N + \log N \to -\infty$. 
Then, the expected sample size for the $k$th rare class becomes $E(N_k) \approx NP(Y_i = k) \to 0$ as $N \to \infty$ for every $1\leq k \leq K$. As a consequence, the sample sizes for the rare classes may be too small to yield any consistent estimator. **Additional Reference** Wang, H., Zhang, A., & Wang, C., 2021, Nonuniform negative sampling and log odds correction with rare events data. NeurIPS.
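To make the pairwise construction discussed in this thread concrete, here is a minimal, self-contained sketch (illustrative only, not the authors' code; it assumes NumPy and scikit-learn): under the rare-class model, each rare class can be fit against the major class with a separate binary logistic regression, and the pairwise estimates recover the true slopes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, K, p = 20_000, 3, 2
alpha_N = -0.5 * np.log(N)  # satisfies alpha_N -> -inf and alpha_N + log N -> +inf
theta = np.array([[1.0, -1.0], [0.5, 0.5], [-1.0, 0.5]])  # true slopes, one row per rare class

# Simulate from the rare-class multinomial logistic model (class 0 is the major class).
X = rng.standard_normal((N, p))
logits = np.hstack([np.zeros((N, 1)), X @ theta.T + alpha_N])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = (probs.cumsum(axis=1) < rng.random(N)[:, None]).sum(axis=1)  # inverse-CDF sampling

# Pairwise MLE idea: for each rare class k, fit class k versus the major class only,
# ignoring all other rare classes (the intercept absorbs alpha_N).
pairwise = {}
for k in range(1, K + 1):
    mask = (y == 0) | (y == k)
    clf = LogisticRegression(C=1e6, max_iter=1000).fit(X[mask], (y[mask] == k).astype(int))
    pairwise[k] = clf.coef_.ravel()
    print(k, np.round(pairwise[k], 2))
```

Each binary subproblem is correctly specified because the conditional probability of class $k$ given $Y_i \in \{0, k\}$ is exactly a binary logistic model with slope $\theta_k^*$.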
Compressing tree ensembles through Level-wise Optimization and Pruning
Accept (poster)
Summary: This paper proposes a new algorithm for compressing a learned tree ensemble while keeping its generalization performance. In each depth of a given tree ensemble, the proposed method prunes its redundant subtrees and adjusts the remaining leaf values. Through the experiments on binary classification datasets, the authors demonstrated that the proposed method attained higher compression rates than the existing baseline methods without significantly degrading accuracy. In addition, the experimental results showed that the proposed method could improve the computational costs of both test inference and robustness verification. ## update after rebuttal I appreciate the authors for their insightful response. Because the additional experimental results provided by the authors address my concern regarding the lack of comparisons with the existing methods, I have decided to improve my score. If the paper is accepted, I hope the authors will include these results in the final version. Claims And Evidence: My main concern is the lack of comparisons to the existing methods, such as [Nan+, NeurIPS2016] and [Liu+, AISTATS2023], that aim to reduce the complexity of tree ensembles by pruning each tree. I believe the novelty and effectiveness of the proposed method can not be supported without comparisons with these existing baselines. - [Nan+, NeurIPS2016] Feng Nan, Joseph Wang, Venkatesh Saligrama. Pruning Random Forests for Prediction on a Budget. NeurIPS, 2016. - [Liu+, AISTATS2023] Brian Liu, Rahul Mazumder. ForestPrune: Compact Depth-Pruned Tree Ensembles. AISTATS, 2023. Methods And Evaluation Criteria: - I think the proposed representation $c_n v_k + b_n$ is an interesting idea that can simultaneously express pruning a redundant subtree or refinements of leaf values. - I also think the experimental results shown in Table 2 demonstrated well the effectiveness of the proposed method in terms of not only compression performance but also verification efficiency. 
However, as mentioned above, I am concerned about the lack of comparisons to the related baselines. Theoretical Claims: This paper does not include theoretical claims. Experimental Designs Or Analyses: - My main concern is the lack of comparisons to the related baselines ([Nan+, NeurIPS2016] and [Liu+, AISTATS2023]), as mentioned above. - Another concern is that the number of trees and maximum tree depth were only examined up to 100 and 8, respectively. Since the computational complexity of the proposed method depends on these parameters, I think the scalability and sensitivity of the proposed method with respect to these parameters should be investigated. Supplementary Material: This submission does not include the supplementary material. But I checked Appendix. Relation To Broader Scientific Literature: The key contributions of the paper are related to the field of practical techniques for learning tree ensembles. In particular, pruning a learned tree ensemble is one of the promising approaches from the perspectives of generalization performance [Ren+, CVPR2015], computational efficiency [Nan+, NeurIPS2016] [Liu+, AISTATS2023], and interpretability [Hara+, AISTATS2018] [Liu+, KDD2024]. - [Hara+, AISTATS2018] Satoshi Hara, Kohei Hayashi. Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. AISTATS, 2018. - [Liu+, KDD2024] Brian Liu, Rahul Mazumder. Fire: An Optimization Approach for Fast Interpretable Rule Extraction. KDD, 2024. Essential References Not Discussed: As mentioned above, the key contributions of this paper seem to be related to the work by [Nan+, NeurIPS2016] and [Liu+, AISTATS2023]. I also think this paper is related to the existing studies on extracting rules from tree ensembles from the perspective of interpretability, e.g., [Hara+, AISTATS2018] and [Liu+, KDD2024]. Other Strengths And Weaknesses: I believe the presentation of this paper could be improved. 
For example, this paper seems to use parentheses too frequently, which makes some sentences complex and difficult to follow. Other Comments Or Suggestions: Nothing in particular. Questions For Authors: Nothing in particular. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks for your insightful comments. Below, we address your main concerns: (a) baselines for comparison and (b) scaling behavior. Existing baselines: Thank you for pointing us to this related work. Among the listed papers, we find Nan et al. (2016) less relevant as it solves a different problem: obtaining the value of each feature has a cost and the goal is to minimize the expected feature acquisition cost at prediction time. This is a very different goal. The other papers are more relevant and we will discuss them in the revised version (it seems we missed them because they are not connected through citations with the body of literature we studied or the baselines we compare to, so thank you for pointing us to them). The novelty of our work is not jeopardized, as all these methods clearly differ from our approach: - ForestPrune (Liu et al. 2023) simplifies forests by cutting whole trees at a specific level, rather than pruning individual nodes like LOP does. This way, it loses a crucial aspect of trees, namely, that some subtrees can be deeper than others: a tree can partition the input space in a finer-grained manner in some areas, and in a coarser manner elsewhere. A second difference is that LOP refines leaf values, while ForestPrune does not. Hence, LOP explores ensembles that are not in ForestPrune's search space. Empirically, we have compared LOP and ForestPrune on XGBoost classifiers on all datasets mentioned in our paper. We have taken the same XGB settings, that is, number of trees in [10, 25, 50, 100], tree depth in [4, 6, 8] and learning rate in [0.1, 0.25, 0.5, 1] and averaged the results. As the table below shows, LOP's compression factors are up to 10 times better than ForestPrune's, for comparable accuracy:

| Dataset | Compr.: LOP | Compr.: ForestPrune | Diff. BAcc.: LOP | Diff. BAcc.: ForestPrune |
|-----------------|------:|------:|-----:|-----:|
| Spambase | 8.5 | 3.6 | 0.9 | 0.7 |
| Phoneme | 7.5 | 3.7 | 1.3 | 0.9 |
| Electricity | 5.2 | 1.4 | 0.5 | 0.2 |
| Adult | 31.8 | 37.2 | -0.2 | 0.1 |
| Credit | 196.6 | 133.2 | 0.5 | 0.4 |
| CompasTwoYears | 356.8 | 240.3 | -0.1 | -0.5 |
| DryBean[6vRest] | 28.8 | 15.6 | 0.3 | 0.4 |
| Mnist[2v4] | 8.0 | 3.1 | 0.6 | 0.4 |
| Volkert[2v7] | 8.3 | 4.9 | 0.6 | 0.4 |
| Jannis | 18.1 | 15.3 | 0.6 | 0.5 |
| Vehicle | 8.0 | 2.6 | 1.4 | 1.8 |
| MiniBooNE | 6.2 | 2.6 | 0.6 | 0.4 |
| California | 9.2 | 2.5 | 0.6 | 0.4 |
| Ijcnn1 | 6.5 | 2.3 | 0.4 | 0.1 |
| Average | 50.0 | 33.4 | 0.6 | 0.4 |

In terms of runtime, compressing an ensemble takes LOP on average 236.5 seconds, whereas ForestPrune takes 98.5 seconds. We will include the full results in a revised version of the paper. - Hara et al. (2018) learn an additive model that contains a small set of "rules"; a rule is one path from root to leaf. This corresponds to an ensemble where each tree is constrained to have 1 leaf with non-zero value. LOP has no such constraint. Their experiments only consider models with at most 10 rules. - FIRE (Liu et al. 2024) re-learns leaf weights, regularizing for sparsity and "fusion" of non-zero leaves. It is very similar to the Global Refinement method (Ren et al. 2015), which we do cite and compare to in our paper; the main difference between GR and FIRE is in the fusion regularization and in the optimizer. LOP uses a different parametrization than FIRE: it uses fewer variables, yielding simpler optimization problems to be solved. Scaling: We only considered up to 100 trees because it is known that including more trees rarely improves predictive performance. Similarly, XGBoost may overfit with deeper trees. We felt that testing the method on larger ensembles would artificially boost compression factors. That being said, we ran two experiments. 1. 
We trained XGBoost with 100, 250 and 500 trees and max depth 8 on four datasets: Adult, California, Spambase, and Phoneme. LOP scales well. For LOP, the average run time on 100 trees is 195s, 250 trees is 520s and 500 trees is 425s. We hypothesize that adding more trees allows LOP to prune full trees more aggressively, leading to more efficient pruning on lower levels. For GR, average run time is 225s for 100 trees, 436s for 250 trees and 1988s for 500 trees. For LRL1 it is 2704s, 3667s, 7581s. For IC it is 9s, 45s and 128s. All times are in seconds. The average compression ratios are 86.4 for LOP, 4.6 for GR, 3.8 for LRL1 and 4.9 for IC. 2. For scaling depth, the response to reviewers N4NE/fAT9 shows results for Random Forests with up to 500 trees and depth 15. ForestPrune fails to compress the biggest models, in which case it returns the empty model. We hypothesise that this is due to the greedy bottom-up approach of ForestPrune: it starts with the empty model and adds (partial) trees incrementally. We will investigate this further. We will add these results to the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your insightful rebuttal. The additional experimental results were quite interesting. I have decided to improve my score. --- Reply to Comment 1.1.1: Comment: We really appreciate that you have taken the time to read our rebuttal. In particular, we are happy that you found our additional results interesting and that it has improved your opinion of the work. We are wondering if anything else could be further addressed to make our paper even more convincing.
Summary: The manuscript proposes a novel method (LOP) for compressing tree-based ensembles by pruning leaves and/or entire trees. LOP is based on sparse optimization and has applications to bagging or boosting ensembles. Experimental results show that the approach cuts down model size with minimal impact on performance. This makes models more energy-efficient and easier to formally verify. ## update after rebuttal The new experiments in reply to other reviewers' comments only reaffirm my view that LOP is a fast and effective method for compressing tree-based models. I think this manuscript makes a meaningful contribution to the literature on this topic, and therefore stand by my original assessment. Claims And Evidence: The primary claim of the manuscript is that LOP reduces model size without harming performance (too much). This is verified in a wide range of experiments on benchmark datasets. Three relevant baselines are considered (GR, IC, LRL1). LOP fares well in all trials. Methods And Evaluation Criteria: The selection of datasets and benchmarks is sensible. The experiments appear well-designed. The results are clear and compelling. Theoretical Claims: The manuscript does not make any theoretical claims. Experimental Designs Or Analyses: The experimental design is sensible and rigorous. I especially like the Pareto frontier plots. Supplementary Material: Yes, the extra experimental results were helpful. Relation To Broader Scientific Literature: The work connects with recent efforts to distill ensemble models into more compressed representations. Essential References Not Discussed: I'm unaware of any essential references that were omitted. Other Strengths And Weaknesses: The writing is clear and direct. The proposal is not exactly revolutionary, but the experimental results clearly suggest that LOP is effective at reducing model size with minimal information loss. This is an important contribution. Other Comments Or Suggestions: - p. 7: "LRL1 is by clearly the slowest approach" -> "LRL1 is clearly" or "LRL1 is by far..." - In Tables 1 and 3, it would be helpful to bold the winning results as elsewhere. Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Many thanks for your positive comments! We will take your suggestions into account.
Summary: The paper "Compressing tree ensembles through Level-wise Optimization and Pruning" proposes a combination of ensemble pruning, decision tree pruning and leaf-refinement to reduce the memory footprint of forests for reduced resource usage during deployment. To do so, the authors re-formulate the inference of a forest into a tensor-based formulation that also captures the inferencing process on each level of each tree in the forest. Based on this formulation, a pruning algorithm is presented that starts at the root node of each tree and iteratively refines labels while removing subtrees. For level $l=0$ this means removing entire trees, while at the last level we remove leaf nodes. In between, partial trees are removed. The experimental evaluation on 14 datasets shows that the pruned forests are comparable in performance (within 0.5%), while being much, much smaller. Claims And Evidence: The paper is generally well-written and the arguments of the authors can be easily followed. While I do have some minor questions with respect to the experimental evaluation, I think the authors present convincing evidence for each claim. Methods And Evaluation Criteria: Although somewhat limited (14 datasets, 5 methods), the experimental evaluation fits the proposed method. Theoretical Claims: No theoretical claims are made. Experimental Designs Or Analyses: The experimental design is overall sound. Direct competitors are compared over 14 benchmarking datasets. The runtime and memory footprint are included. While I generally prefer larger sets of experiments for tree ensembles (i.e. more datasets and comparisons via critical difference diagrams), I think the overall analysis still holds. Supplementary Material: I skimmed the supplementary material. 
Relation To Broader Scientific Literature: The paper presents a non-trivial, but somewhat natural extension of the recent "Joint leaf-refinement and ensemble pruning through l1 regularization" paper by Buschjäger and Morik from 2023. While the original paper combines leaf-refinement with ensemble pruning, the method presented in this paper goes one step further and allows for pruning at every stage/level of every tree in the forest. Personally, I see that there is a growing (re-)interest in tree/ensemble pruning in the literature, and hence this paper fits perfectly. Due to its strong performance, it could become one of the go-to papers in this area of research. Essential References Not Discussed: I am not aware of any missing essential references. Other Strengths And Weaknesses: The paper is generally well-written (minus a few questions I state below) and was easy to follow for me. I think the formulation of the forest in tensor notation is generally helpful given the current trend in tensor-based computations. The appendix A offers an example of when leaf-refinement is useful, which is nice for future reference. The experimental evaluation could be enhanced by including more datasets and other methods, however, given that this is a conference submission I don't see any issue here (i.e. more is always better, but not always necessary). Other Comments Or Suggestions: In case the paper is not accepted, I suggest improving the following parts: - Personally, the example in section 3.0 and 3.1 was not really helpful to me. I would have preferred more explanations on the method itself - Neither $\alpha$ nor $\Delta$ is explained in detail. While the meanings of both parameters are somewhat clear (see my question below), they are crucial for Alg 1. 
I would suggest adding more explanations about that - I think Q3 in the experimental evaluation is slightly misleading: While it is true, that LOP has only two hyperparameters $\Delta$ and $R$, the overall approach has more hyperparameters such as learning rate, optimization method, loss etc. While I understand that these are "additional" hyperparameters, a practitioner has to choose these as well. I would add more discussions about this. Questions For Authors: 1) In the experimental evaluation, you state that you use a 5-fold cross-validation, with 3 folds for training, 1 for validation and 1 for testing. For fitting $\alpha$ you also mention that you are using a validation set. I am a bit confused by this. Typically, I would expect a three-way split: train data (for the forest), prune data (for running LOP) and then test data for testing. Now I would repeat this splitting X times with X different random seed. Please explain how you did this here? 2) Algorithm 1 shows that we have to perform an optimization problem $R\cdot d$ times (line 6). You mention that these are comparably simple / easy and the runtimes you show reflect this, but could you please comment on the exact sizes of these optimization problems. On $l=0$, I have to optimize over 2M parameters and then on the next level $2\cdot 2\cdot M$ (given all trees are balanced and have the same height) and so on? 3) How did you choose $\alpha$? 4) Do you have plans to make your implementation publicly available? 5) The "Joint leaf-refinement and ensemble pruning through l1 regularization" uses Random Forest while you use XGBoost. Why? And, did you try RF as well? Code Of Conduct: Affirmed. Overall Recommendation: 4
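As a companion to the LRL1 baseline referenced in this review (Buschjäger and Morik's joint leaf-refinement and L1 ensemble pruning), here is an illustrative sketch of the core mechanism, assuming scikit-learn; the actual LRL1 implementation differs. Each tree's prediction becomes one feature of a linear combiner, an L1 penalty drives some tree weights to exactly zero, and zero-weight trees can be dropped from the ensemble.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=100, max_depth=6, random_state=0).fit(X, y)

# Stack each tree's probability output as one feature of a linear combiner.
tree_preds = np.column_stack([t.predict_proba(X)[:, 1] for t in forest.estimators_])

# L1 regularization zeroes out redundant trees; the surviving weights rescale the rest.
combiner = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(tree_preds, y)
kept = np.flatnonzero(combiner.coef_.ravel())
print(f"kept {kept.size} of {len(forest.estimators_)} trees, "
      f"train accuracy {combiner.score(tree_preds, y):.3f}")
```

In a real pipeline the pruning/combining data should be held out from training, as the review's Question 1 points out.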
Rebuttal 1: Rebuttal: Thank you for your positive comments, and especially that this paper “could become the go-to paper in this area”, and “more [experiments] is always better, but not always necessary”. Thanks also for the suggestions for improvement, which we will take into account. To answer your questions: Q1, Q3: The 5 fold cross-validation and tuning of alpha work as follows. We partition into 5 subsets. In each fold, 1 subset serves as test set. Of the 4 remaining subsets, 3 are used for training and pruning the ensemble (that is, X in Eq. 2 is based on these 3 subsets). The fourth 'validation' set is used to tune alpha; a model with performance drop $>\Delta$ is discarded. A log-range of alphas (range 0.001 to 10000) is tried, and among the models not discarded, the smallest model is returned. This is the model for which we ultimately report size, accuracy on test set, etc. Q2: The number of parameters to optimize at level 0 is M (not 2M because the b’s are excluded on this level, see lines 142-143, right column). At level 1, the number of parameters is not 2\*2\*M but 2\*2\*M’ with M’ the number of trees not pruned at level 0 (M’<=M). On each level we try to prune, and each pruned subtree becomes a leaf that contributes only 1 parameter to all lower levels, rather than 2, 4, 8… as we go deeper. Generally, at level l, if there are x active nodes (of which y leaves-on-a-higher-level and z internal nodes), there are y+2\*z < 2x parameters. By the time we proceed to the lowest level d, this number can be 2\*2^d\*M in the worst case, if nothing ever gets pruned, but the point is of course that we do prune a lot, starting at the upper levels. Q3: See Q1. Q4: Yes, the implementation is publicly available. Q5: We used XGBoost because we consider it the state of the art. In response to the review, we have run additional experiments with Random Forests. We also include results for ForestPrune (Liu et al. 2023) suggested by reviewer CuzX. 
Specifically, we use sklearn RFs with a max depth in [10, 15] and number of trees in [100, 250, 500], on 3 datasets. The table below shows those results. LOP averages a compression ratio of 228.2, whereas GR achieves a ratio of 120.2 and both IC and LRL1 achieve a ratio of 1.3. The average drop in balanced accuracy for LOP is 0.01, similar to the baselines. ForestPrune fails to compress the biggest models; it returned the empty model. Over the successful runs, ForestPrune has a compression ratio of 5 and loses 0.09 in balanced accuracy. In terms of run time, it takes LOP on average 164s to compress these ensembles, compared to 6892s for GR, 5072s for LRL1, and 2665s for ForestPrune. While IC is faster (72s), it has much worse compression ratios than LOP. Hence, LOP compresses more than the competitors without losing accuracy. Moreover, it is around 42x faster than GR which is its nearest competitor in terms of compression ratio. We will add these results to the paper.

| Dataset | depth | nb_trees | Compr.: GR | IC | LRL1 | LOP | ForestPrune | Runtime (s): GR | IC | LRL1 | LOP | ForestPrune |
|------------|------:|---------:|-----:|----:|-----:|------:|-------:|--------:|------:|--------:|------:|-------:|
| California | 10 | 100 | 120.7 | 1.1 | 2.6 | 166.6 | 2.5 | 3752.7 | 14.9 | 3209.2 | 213.0 | 562.6 |
| California | 10 | 250 | 210.8 | 1.1 | 1.0 | 396.5 | 2.7 | 13884.5 | 82.7 | 7718.6 | 235.1 | 2844.8 |
| California | 10 | 500 | 254.8 | 1.1 | 1.0 | 788.7 | 2.5 | 33363.5 | 352.2 | 18589.8 | 305.2 | 6030.0 |
| California | 15 | 100 | 178.2 | 1.1 | 1.0 | 89.0 | 6.8 | 3358.0 | 19.6 | 5350.0 | 370.4 | 788.7 |
| California | 15 | 250 | 370.9 | 1.2 | 1.0 | 418.8 | 6.7 | 13987.4 | 105.5 | 12709.3 | 404.8 | 4385.3 |
| California | 15 | 500 | 564.1 | 1.2 | 1.0 | 533.5 | 6.8 | 32928.4 | 447.2 | 24220.1 | 446.9 | 9698.1 |
| Spambase | 10 | 100 | 32.5 | 1.1 | 1.2 | 71.8 | 3.8 | 392.7 | 3.4 | 316.2 | 39.8 | 161.1 |
| Spambase | 10 | 250 | 52.7 | 1.1 | 1.0 | 214.0 | 3.5 | 1514.8 | 12.8 | 778.2 | 44.1 | 951.4 |
| Spambase | 10 | 500 | 81.5 | 1.2 | 1.0 | 405.0 | (fail) | 4978.8 | 44.7 | 1932.3 | 44.5 | 3478.4 |
| Spambase | 15 | 100 | 64.8 | 1.2 | 1.0 | 128.6 | 5.8 | 597.5 | 3.5 | 357.2 | 58.9 | 246.9 |
| Spambase | 15 | 250 | 71.6 | 1.1 | 1.0 | 148.2 | 5.5 | 1833.6 | 14.7 | 1233.3 | 59.4 | 1478.6 |
| Spambase | 15 | 500 | 86.9 | 1.2 | 1.0 | 390.1 | (fail) | 5134.0 | 54.2 | 2590.2 | 58.0 | 5122.1 |
| Phoneme | 10 | 100 | 9.2 | 1.6 | 2.2 | 21.0 | 5.3 | 208.0 | 3.4 | 466.2 | 70.0 | 174.5 |
| Phoneme | 10 | 250 | 13.9 | 1.6 | 1.7 | 54.6 | (fail) | 981.1 | 13.7 | 1734.5 | 72.7 | 1026.4 |
| Phoneme | 10 | 500 | 21.6 | 1.6 | 1.3 | 162.3 | (fail) | 3771.3 | 45.5 | 3193.5 | 93.1 | 3650.6 |
| Phoneme | 15 | 100 | 6.2 | 1.6 | 1.9 | 17.1 | 8.2 | 173.8 | 3.9 | 569.0 | 131.0 | 273.3 |
| Phoneme | 15 | 250 | 7.4 | 1.6 | 1.9 | 25.3 | (fail) | 626.9 | 15.0 | 2130.4 | 154.1 | 1571.6 |
| Phoneme | 15 | 500 | 15.3 | 1.6 | 1.1 | 77.0 | (fail) | 2577.8 | 51.8 | 4194.9 | 150.2 | 5523.0 |
| Average | | | 120.2 | 1.3 | 1.3 | 228.2 | 5.0 | 6892.5 | 71.6 | 5071.8 | 164.0 | 2664.9 |

--- Rebuttal Comment 1.1: Comment: Thank you for taking the time to answer all of our questions, and especially thank you for running additional experiments. Just to confirm, you prune on the training set, i.e. there is no dedicated pruning set? In any case, I still recommend accepting the paper. In case the paper is accepted, I recommend updating the camera-ready version to include the additional explanation on the 3-way split during cross-validation as well as some additional information on the number of parameters (maybe a paragraph in the appendix is enough). Finally, I would mention your results on using RF and also possibly place them in the appendix. --- Reply to Comment 1.1.1: Comment: Thank you for responding. We will absolutely update a potential camera-ready version to have all of this additional information & results. We do indeed prune on the training set. 
This can be justified by the fact that the optimisation problem is different during training (i.e., optimising for balanced accuracy per tree) and pruning (i.e., a trade-off between balanced accuracy of the ensemble as a whole and the ensemble size, which uses a regulariser that was ignored during training). Therefore, a separate pruning set is not required.
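To make the leaf-refinement component discussed throughout this thread concrete for readers, here is a minimal sketch in the spirit of Global Refinement (Ren et al., 2015), not the paper's LOP implementation, assuming scikit-learn: encode each sample by the leaf it reaches in every tree, then jointly re-fit all leaf values with one regularized linear model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
gbt = GradientBoostingClassifier(n_estimators=25, max_depth=4, random_state=0).fit(X, y)

# apply() returns, per sample and per tree, the index of the leaf the sample lands in.
leaf_ids = gbt.apply(X).reshape(X.shape[0], -1)
Z = OneHotEncoder().fit_transform(leaf_ids)  # sparse leaf-membership indicators

# Each coefficient of the linear model is a (jointly) refined leaf value.
refined = LogisticRegression(C=0.1, max_iter=1000).fit(Z, y)
print("refined train accuracy:", round(refined.score(Z, y), 3))
```

An L1 penalty on this refit (as in LRL1 or FIRE) would additionally zero out leaves; LOP instead prunes level by level while refining the surviving leaf values.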
Summary: The paper introduces LOP (Level-wise Optimization and Pruning), a method for compressing decision tree ensembles by pruning subtrees level by level while updating leaf values to maintain predictive accuracy. Unlike prior methods that prune entire trees or merge only leaf nodes, LOP can remove subtrees at any level, achieving compression rates 10 to 100 times higher than competitors. By optimizing leaf values globally, it minimizes accuracy loss, typically staying within 0.5% of the original model. Empirical results show that LOP significantly reduces memory footprint, speeds up robustness verification, and enhances efficiency on resource-constrained devices, making tree ensembles more practical for deployment. ## update after rebuttal Thank you to the authors for providing detailed feedback and clarifications. While the responses are appreciated, they do not sufficiently address my primary concerns or significantly alter my overall evaluation of the manuscript. Therefore, I will maintain my original score, and I strongly recommend a major revision to strengthen the paper. Claims And Evidence: The paper’s central claims are well supported by empirical evaluations on multiple datasets and rigorous comparisons with baseline compression methods. Specifically, the assertion that LOP can achieve compression rates 10 to 100 times greater than existing techniques while maintaining nearly the same predictive performance is backed by quantitative results across 14 benchmark datasets, demonstrating high compression ratios with minimal accuracy loss. Additionally, the paper provides strong evidence that robustness verification is significantly faster on LOP-compressed models compared to both the original XGBoost models and those compressed by competing methods. The authors further substantiate LOP’s efficiency gains with explicit memory footprint calculations via proxies, highlighting its advantages in resource-constrained environments. 
However, one limitation is that the experiments are confined to binary classification tasks, leaving open the question of whether LOP’s compression benefits extend to multi-class problems or other data modalities such as image data. Methods And Evaluation Criteria: The proposed LOP method and the evaluation criteria are well-aligned with the problem of compressing tree ensembles while maintaining predictive performance. The authors assess LOP using 14 benchmark datasets from OpenML, covering diverse binary classification domains, which is appropriate given that tree ensembles like XGBoost and Random Forests are widely used for structured data tasks. The evaluation metrics—compression ratio, accuracy retention, robustness verification speed, and memory footprint—are relevant for assessing the trade-offs between model size and performance. Moreover, comparisons against three competitive baselines (Global Refinement, Individual Contribution, and Leaf refinement combined with L1 ensemble pruning) ensure a fair assessment of LOP’s effectiveness. The use of Pareto front visualizations to illustrate the trade-off between compression and accuracy further strengthens the evaluation. However, a potential limitation is that all datasets are binary classification tasks, leaving unanswered how LOP would perform in multi-class classification or regression settings. Additionally, while LOP is tested on XGBoost ensembles, further validation on Random Forests or other tree-based architectures could enhance generalizability. Theoretical Claims: The paper does not present detailed formal proofs of correctness. Experimental Designs Or Analyses: The experimental design in the paper appears well-structured, with a strong benchmarking approach against relevant baselines. However, there are a few potential concerns: - The evaluation is entirely on binary classification tasks, leaving out multi-class classification and regression, which could behave differently under compression. 
It is unclear whether LOP’s effectiveness generalizes to tasks with continuous target variables or high-cardinality categorical targets. - The chosen baselines (GR, IC, and LRL1) are reasonable, but other pruning techniques, such as cost-complexity pruning in decision trees or neural network-inspired distillation methods, are not considered. - The authors conduct a sensitivity analysis on $\Delta$ (allowed accuracy loss) and $R$ (number of pruning rounds), which is good practice. However, their choice of hyperparameters for the base XGBoost models (e.g., number of trees, learning rate, tree depth) is based on a predefined grid, which may not always yield the best initial models before compression. A stronger baseline might slightly alter results. - Since LOP makes significant changes to the model structure, it would be useful to see results on unseen test distributions to test robustness. Supplementary Material: I did not conduct a thorough review of all aspects of the supplementary material. Relation To Broader Scientific Literature: LOP builds on prior research in tree ensemble pruning, model compression, and robustness verification by introducing a level-wise pruning approach that optimizes leaf values while reducing model size. Unlike traditional ensemble pruning methods (e.g., Margineantu & Dietterich, 1997; Tsoumakas et al., 2009) that remove entire trees, or global refinement techniques (Ren et al., 2015) that adjust leaf predictions post hoc, LOP prunes subtrees at any level while simultaneously updating leaf values, achieving significantly higher compression with minimal accuracy loss. Its focus on reducing memory footprint aligns with recent efforts to make tree ensembles more efficient for edge computing (Fan et al., 2013; Daghero et al., 2021). 
Additionally, by producing smaller models, LOP accelerates robustness verification (Kantchelian et al., 2016; Devos et al., 2021), addressing a key challenge in ensuring the safety and fairness of machine learning systems. Essential References Not Discussed: While the paper covers traditional ensemble pruning techniques, it overlooks more recent advancements, such as knowledge distillation methods introduced by Hinton, Vinyals, and Dean (2015) in Distilling the Knowledge in a Neural Network. This technique, which transfers knowledge from a large model to a smaller one while preserving performance, has been adapted for tree ensembles, including gradient boosting distillation. Other Strengths And Weaknesses: While the paper discusses memory footprint and decision path length, it does not directly measure inference latency. Including actual prediction time benchmarks on different hardware (e.g., CPU, embedded systems) would provide stronger evidence of real-world efficiency gains. Other Comments Or Suggestions: A deeper analysis of how accuracy degrades as more pruning is applied (beyond the Pareto front visualization) would provide useful insights for practitioners. Questions For Authors: **Question 1:** Have you tested LOP on multi-class problems or regression tasks? If not, do you anticipate any fundamental challenges in applying LOP to these settings? - If LOP does not generalize well beyond binary classification, its applicability would be more limited than suggested. Demonstrating strong performance in multi-class or regression tasks would increase confidence in LOP’s generalizability. **Question 2:** Have you measured actual inference time improvements on different hardware configurations (e.g., CPU, embedded devices)? - If inference time is not significantly reduced despite the smaller model size, the practical efficiency gains may be overstated. Reporting real-world latency measurements would strengthen the claim that LOP improves efficiency. 
**Question 3:** Have you evaluated LOP’s compressed models on out-of-distribution (OOD) data or domain shifts? Does aggressive pruning increase sensitivity to dataset shifts? - If LOP sacrifices robustness for compression, this would be a key limitation. A robustness analysis would clarify whether LOP is suitable for real-world deployment in dynamic environments. **Question 4:** Did you consider comparing LOP against gradient boosting distillation or other distillation-based tree compression methods? If not, what are the key differences that make LOP preferable? - If LOP outperforms both pruning-based and distillation-based compression techniques, it would further validate its significance. If not, distillation may be a viable alternative that should be acknowledged. **Question 5:** How does the runtime of LOP compare to the time required to train an original XGBoost model? Is LOP feasible for large-scale datasets? - If LOP takes significantly longer than training from scratch, its use case may be limited to scenarios where model compression is a strict requirement. **Question 6:** Have you tested LOP on Random Forests, BART, or other tree ensemble models? - If LOP is specifically optimized for boosting-based ensembles (e.g., XGBoost) but does not work as effectively on bagging-based methods like Random Forests or Bayesian ensemble methods like BART, its applicability may be more limited than suggested. Demonstrating strong performance across diverse tree-based models would increase confidence in LOP’s versatility. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Many thanks for your comments. We acknowledge that this work has many links with XAI (including knowledge distillation), and also with robustness, verification, inference efficiency, and other topics. However, the focus of this work is on the specific task of ensemble compression; we position this work in that area and that is also what the comparative evaluation focuses on. Reaching out to all these other areas would lengthen the paper, would raise more questions that warrant investigating, and would detract from the main message. Rejecting the paper just for not doing this would mean it is held to a much higher standard than related work in this area (none of that work does all these comparisons). Q1: LOP can be applied to regression by using an appropriate regression loss in Eq. 2. Applying it to multi-class problems would require a wrapper around it, such as one-versus-all classification. The same holds for most of the related work. For regression, we trained multiple XGBoost ensembles with [10, 25, 50, 100] trees and [4, 6, 8] max depth with learning rate 0.1 on 2 datasets (Wine Quality, Houses) using 5 fold CV. These datasets are a subset of those used in the paper by Liu et al. (2023) as suggested by reviewer CuzX. LOP achieves an average compression ratio of 14.3, compared to 7.7 for GR, 1.4 for LRL1 and 4.4 for ForestPrune (Liu et al., 2023). In terms of predictive performance, LOP increases the Root Mean Square Error by only 2% compared to the uncompressed model, which equals the allowed loss in performance (i.e., hyperparameter $\Delta$). Hence, the conclusions are similar to the binary classification case. We will add these results to the paper. Q2: We have not measured actual times; prediction requires sequential execution of if-statements and it seems obvious that the computation time is linear in the number of such statements. 
Q3: We have not investigated this; this is a different research question, completely out of scope for this paper. None of the related work investigates this, and we do not see how it could even fit in an 8-page paper. Q4: As said, we focus on ensemble compression, and while exploring links with distillation-based approaches may be interesting, we consider it out of scope in this paper. Furthermore, we do not know of distillation approaches that start from a forest and return a forest; if the reviewer has specific approaches in mind, a concrete reference would be appreciated. Q5: LOP typically takes longer than XGBoost itself. Ensembles are trained (and potentially compressed) once but used many times: the point of this work is to make them more efficiently usable / analyzable. The combination of training and compression time is usually in the order of seconds or minutes on a laptop: it is not an issue in practice. Q6: In response to the reviews, we ran experiments on Random Forests. Results are comparable to those with XGBoost. Specifically, we use Scikit-Learn Random Forest classifiers with a maximum tree depth in [10, 15] and number of trees in [100, 250, 500], on 3 representative datasets: Spambase, Phoneme and California. A full table of results can be found in our answer to reviewer N4NE (this table also contains results for an additional method called ForestPrune (Liu et al., 2023) suggested by reviewer CuzX). LOP averages a compression ratio of 228.2, whereas GR achieves a ratio of 120.2, and both IC and LRL1 achieve a ratio of 1.3. The average drop in balanced accuracy for LOP is 0.01, which is similar to most baselines. In terms of run time, it takes LOP on average 164 seconds to compress these ensembles, compared to 6892 seconds for GR, and 5071.8 seconds for LRL1. IC takes on average 71.6 seconds, but it has much worse compression ratios than LOP. Hence, LOP compresses more than the competitors without losing accuracy. 
Moreover, it is around 42x faster than GR, which is its nearest competitor in terms of compression ratio. We will add these results to the paper.
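As an aside for readers, the "prune, then globally re-optimize leaf values" idea that both LOP and GR rely on can be illustrated on a toy regression ensemble. The sketch below is our own illustration, not the authors' code: each "tree" is a depth-1 stump, a leaf-membership indicator matrix is built, and after dropping a stump all remaining leaf values are refit jointly by least squares, which is what limits the accuracy loss from pruning.

```python
import numpy as np

# Each "tree" is a depth-1 stump: (feature index, threshold) with two leaves.
def leaf_index(x, stump):
    feat, thr = stump
    return 0 if x[feat] <= thr else 1

def design_matrix(X, stumps):
    # Z[i, 2t + j] = 1 iff sample i reaches leaf j of stump t.
    Z = np.zeros((len(X), 2 * len(stumps)))
    for i, x in enumerate(X):
        for t, s in enumerate(stumps):
            Z[i, 2 * t + leaf_index(x, s)] = 1.0
    return Z

def refit_leaves(Z, y):
    # Jointly re-optimize all leaf values of the ensemble by least squares.
    v, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return v

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] + 0.5 * X[:, 1]

full = [(0, 0.0), (1, 0.0), (0, 0.5)]   # original toy ensemble
pruned = full[:2]                       # prune the last stump/subtree

def rmse(stumps):
    Z = design_matrix(X, stumps)
    v = refit_leaves(Z, y)
    return float(np.sqrt(np.mean((Z @ v - y) ** 2)))

print(rmse(full), rmse(pruned))  # pruning costs some accuracy, refitting limits it
```

The pruned design matrix spans a subspace of the full one, so the refit error can only grow after pruning; the point of the global refit is that it grows far less than keeping the original leaf values would allow.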
Efficient Optimization with Orthogonality Constraint: a Randomized Riemannian Submanifold Method
Accept (poster)
Summary: In the work, the authors propose a randomized Riemannian submanifold approach for optimization on Stiefel manifolds. The authors prove that the proposed method converges under certain conditions. Empirical results are included to support the effectiveness of the proposed method. Claims And Evidence: The claims are sound and solid. Methods And Evaluation Criteria: The proposed method makes sense. The authors properly evaluate their method on several problems. The empirical results are convincing. Theoretical Claims: Convergence analysis is also provided to support the soundness of the proposed method. I do not check the proofs in the appendices. Experimental Designs Or Analyses: The experimental designs make sense. Supplementary Material: No. Relation To Broader Scientific Literature: Optimization Essential References Not Discussed: N/A Other Strengths And Weaknesses: This work is interesting and original. Other Comments Or Suggestions: N/A Questions For Authors: It seems that the proposed method requires an efficient computation of a Riemannian gradient on a submanifold. The authors should comment on whether the proposed idea requires a compatible Riemannian metric to effectively reduce the computational cost. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive evaluation of our work. We are glad that the reviewer found our work to be interesting and original. **1. Whether the proposed idea requires a compatible Riemannian metric to effectively reduce the computational cost.** We thank the reviewer for the question. In our work, we choose the Euclidean metric as the Riemannian metric for the Stiefel manifold. Under such a metric, we have derived the expression for the Riemannian gradient on the submanifold, which can be efficiently computed. We agree that using an alternative metric, such as the canonical metric, may be interesting; we leave this for future exploration.
Summary: This paper improves the efficiency of the retraction operation in the geometric optimization algorithms over the Stiefel manifold by constraining the optimization into a randomly sampled smaller manifold. Specifically, for an element X in the Stiefel manifold, it learns an orthonormal matrix U that acts on X, where only a random subspace in U is learnable. The computation complexity is reduced if the dimension of the random subspace is sufficiently small. The convergence of the proposed algorithm is analyzed, and experiments on four settings show some improvement in the convergence speed. Claims And Evidence: The claims are supported by evidence. Methods And Evaluation Criteria: The method and evaluation make sense. Theoretical Claims: I did not check the proofs of the theorems. Experimental Designs Or Analyses: The experiment design makes sense, except for the MNIST/CIFAR experiment. The final test accuracy matters more than the "faster convergence in terms of runtime". The "convergence" in the context of neural networks can be easily influenced by learning rate schedules, and the test accuracy at the beginning of the run does not tell much about the final performance. Supplementary Material: I did not check the supplementary material. Relation To Broader Scientific Literature: The paper relates to the geometric optimization algorithms on matrix manifolds. Essential References Not Discussed: None. Other Strengths And Weaknesses: The paper is well-written, and the presentation is clear. The idea is straightforward, and the results look competitive compared to baselines. The weakness is the lack of large-scale practical applications. All the experiments are either toy problems or applications on tiny-scale neural networks. Other Comments Or Suggestions: 1. Is the "non-standard linear algebra operations" a well-known terminology? SVD and QR decompositions feel quite standard. 2. In Sec 6.4, a six-layer network is used. 
The first four layers are column orthonormal, and the output layer is unconstrained. How about the missing layer? Questions For Authors: 1. What is the number of inner iterations used in the experiments? 2. What is the reasonable range for choosing $r$, especially, how low could the ratio $r/p$ be? 3. Since the random matrix $P_k$ changes every iteration, the momentum for matrix Y does not make sense, correct? I think the inability to use momentum in RSDM is a drawback compared to RGD. Furthermore, it will be more interesting to add RGD with momentum to the baselines in the experiments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful feedback and overall positive assessment of our work. We would like to take this chance to address the comments in detail. **1. On the convergence of the MNIST/CIFAR experiments.** **R1**: In our MNIST/CIFAR experiments, our aim was to highlight the efficiency of the proposed method in terms of convergence speed, which aligns with our goal of improving optimization efficiency under orthogonality constraints. Faster convergence is particularly relevant in scenarios where reaching a certain level of accuracy is sufficient or where computational resources are limited. To ensure a fair comparison, we used a fixed learning rate schedule and tuned both RSDM and RGD for their best performance. This setup is consistent with our theoretical developments, and the experiments serve to demonstrate the practical potential of RSDM in neural network training. We agree that exploring different learning rate schedules is valuable and leave this for future work. To further evaluate the performance of RSDM in neural network training, we have now added an experiment on training a *Vision Transformer*, where we observe that our method consistently achieves higher test accuracy throughout the entire training process (rather than only in early iterations). Please see **R2** for more details. **2. Lack of large-scale experiments.** **R2**: Thank you for the suggestion. We have added an additional experiment on training an *(orthogonal) Vision Transformer*. Following [1], we impose an orthogonality constraint on the query, key and value projection matrices. We train a 6-layer, 4-head transformer with embedding dimension 1024 and 64 patches, on CIFAR10. This scales the number of parameters from 5.4M in Section 6.4 to *28.5M*. We set $r = 300$ for RSDM and tune the stepsize for both RSDM and RGD. The experiment results are included at https://anonymous.4open.science/r/ICML2025-additional-experiments-08E5. 
We observe that RSDM converges faster than RGD in test accuracy with a non-negligible gap, throughout training process. This demonstrates the potential of RSDM to improve large-scale model training. As our main focus is on the theoretical development of the framework, testing more diverse applications is beyond the scope of this paper. We plan to explore this direction in the future. **3. On the terminology of "non-standard linear algebra operations".** **R3**: Thank you for the question. In this paper, "non-standard linear algebra operations" refer to operations such as matrix decomposition, matrix inverse and matrix exponential. These operations generally have significantly higher computational complexity compared to standard operations like matrix multiplication or elementwise operations. We will clarify the definition in our revised version. **4. On the orthonormal layers in Sec 6.4.** **R4**: This is a typo. All the first five layers are orthonormal. **5. The number of inner iterations used in the experiments.** **R5**: In the experiments, we implement RSDM according to Algorithm 1, which does not have inner iterations. **6. Reasonable range for $r$.** **R6**: The suitable range of $r$ depends on the problems. As shown in Figure 3, RSDM demonstrates improved convergence over RGD across a wide range of $r$ values. Determining the optimal $r$ is indeed an important and valuable direction for future work. **7. On the RSDM with momentum.** **R7**: Thank you for the suggestion. The idea of combining RSDM with momentum is indeed worth exploring. There are several feasible solutions. One strategy is that we fix $P_k$ for several iterations where we apply momentum. This is equivalent to taking multiple gradient descent (with momentum) for minimizing $\widetilde F_k(Y)$, initialized from $I_r$. To test the viability of the proposed strategy, we evaluate RSDM with momentum and compare against RGD with momentum on the PCA problem. 
The result is included on https://anonymous.4open.science/r/ICML2025-additional-experiments-08E5. We see that the RSDM with momentum improves the convergence of RSDM, which supports the potential of incorporating momentum into our framework. We hope our responses have addressed your comments. If there are any further questions or suggestions, we would be happy to address them.
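To make the submanifold idea concrete for readers, here is a minimal numpy sketch of one randomized-row update on the Stiefel manifold for a PCA-style objective $F(X) = -\mathrm{tr}(X^\top A X)$. This is our own illustration; the step size, sampling scheme, and the Cayley retraction used here are assumptions for the sketch, not the authors' exact algorithm.

```python
import numpy as np

def rsdm_step(X, A, r, eta, rng):
    """Rotate r randomly chosen rows of X by a small orthogonal matrix.

    Left-multiplying the selected rows by an orthogonal Y keeps X on the
    Stiefel manifold (X^T X = I) without any full-size retraction.
    """
    n, p = X.shape
    idx = rng.choice(n, size=r, replace=False)
    G = -2.0 * A @ X                  # Euclidean gradient of -tr(X^T A X)
    M = G[idx] @ X[idx].T             # gradient of F(Y X[idx]) w.r.t. Y at Y = I
    W = 0.5 * (M - M.T)               # skew part: a tangent direction on O(r)
    I = np.eye(r)
    # Cayley retraction: Y = (I + eta/2 W)^{-1} (I - eta/2 W) is orthogonal.
    Y = np.linalg.solve(I + 0.5 * eta * W, I - 0.5 * eta * W)
    X = X.copy()
    X[idx] = Y @ X[idx]
    return X

rng = np.random.default_rng(0)
n, p, r = 20, 5, 8
C = rng.standard_normal((n, n))
A = C @ C.T / n                                     # symmetric PSD objective
X = np.linalg.qr(rng.standard_normal((n, p)))[0]    # random Stiefel point

F = lambda X: -np.trace(X.T @ A @ X)
f0 = F(X)
for _ in range(300):
    X = rsdm_step(X, A, r, eta=0.05, rng=rng)
print(f0, F(X))  # objective decreases while X^T X stays (numerically) identity
```

Each step only needs $r \times r$ linear algebra for the retraction, which illustrates why restricting updates to a random submanifold can be much cheaper than a full-size retraction when $r \ll n$.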
Summary: The paper presents an efficient Riemannian optimizer for Stiefel manifolds, introducing two parameterization strategies that reduce the computational complexity of optimization steps while ensuring rigorous convergence analysis. Claims And Evidence: The paper introduces two parameterization strategies—orthogonal and permutation sampling—applied in both deterministic and stochastic settings. It establishes that the proposed method (RSDM) achieves the same convergence rate as Riemannian Gradient Descent (RGD) when $p \geq Cn$ and demonstrates significantly higher efficiency than RGD when $p \leq n$. Additionally, the paper provides a detailed analysis of the trade-offs between convergence and efficiency across different parameterization methods. These claims are substantiated by rigorous theoretical results and empirical experiments. Methods And Evaluation Criteria: The evaluation is well-aligned with the paper's contributions and application. Notably, the theoretical results are thoroughly developed and rigorously justified. While the experiments are conducted on a limited set of datasets and architectures, this is a minor limitation given the paper’s primary focus on theoretical advancements. Theoretical Claims: I have carefully reviewed all the theoretical proofs and found no major issues. Experimental Designs Or Analyses: I have checked the experimental designs and found no significant issues. Supplementary Material: I have reviewed the theoretical proofs and the experimental details and found no issues. Relation To Broader Scientific Literature: While the paper introduces a Riemannian optimizer for the Stiefel manifold, its underlying ideas have the potential to extend to general Riemannian manifolds or, at the very least, a broader class of quotient manifolds. Essential References Not Discussed: The paper overlooks several relevant orthogonal and Riemannian optimization methods, including: 1. 
**Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group** 2. **Siamese Networks: The Tale of Two Manifolds** Incorporating a discussion of these works could provide a more comprehensive comparison and contextualize the proposed approach within the broader landscape of orthogonal optimization techniques. Other Strengths And Weaknesses: Several important methods for optimization with orthogonal constraints are not discussed or compared with the proposed approach. Additionally, the experiments are conducted on a very limited set of datasets and architectures. To strengthen the evaluation, it is recommended to test the proposed method on more recent architectures, such as Vision Transformers, and more challenging datasets, such as the Fine-Grained Visual Categorization (FGVC) datasets [3]. Furthermore, the practical benefits of the proposed parameterization are not clearly demonstrated in real-world applications that require orthogonal constraints, limiting its applicability beyond theoretical analysis. **References:** [3] Zhang, Y., Tang, H., & Jia, K. (2018). Fine-Grained Visual Categorization using Meta-Learning Optimization with Sample Selection of Auxiliary Data. *arXiv preprint arXiv:1807.10916.* Other Comments Or Suggestions: The paper would be significantly strengthened by incorporating more diverse experiments on challenging tasks. However, given its primary focus on theoretical contributions, this limitation appears to be a minor concern. Questions For Authors: I have no questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful comments and feedback. We would like to address your comments as follows. **1. Discussions on relevant references.** **R1**: We thank the reviewer for highlighting references [1,2]. Reference [1] re-parameterizes variables in Euclidean space via the Lie exponential map. However, this approach requires differentiating through the exponential map, which can be computationally expensive. Moreover, the re-parameterization may alter the loss landscape, making convergence analysis more difficult. In contrast, our method comes with convergence guarantees, which [1] does not provide. Reference [2] formulates the problem of training a Siamese network as an optimization problem on the Stiefel manifold and employs Riemannian (stochastic) gradient descent with retraction for parameter updates. Our proposed algorithm is task-agnostic, so it can be applied to improve the optimization in [2] as well. We will include the above discussions in the revised manuscript. [1] Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group [2] Siamese Networks: The Tale of Two Manifolds. **2. More practical and diverse datasets and tasks.** **R2**: We thank the reviewer for the comment. As noted by the reviewer, our primary focus is on theoretical developments, and we have followed prior works in using standard benchmarks. As suggested by the reviewer, we have now conducted an additional experiment on an *(orthogonal) vision transformer*. We followed [1] by imposing an orthogonality constraint on the query, key and value projection matrices. We trained a 6-layer, 4-head transformer (embedding dimension 1024, 64 patches) on CIFAR10. We set $r = 300$ for RSDM and tune the stepsize for both RSDM and RGD. The experiment results are included at https://anonymous.4open.science/r/ICML2025-additional-experiments-08E5. 
We observe that our proposed RSDM converges faster than RGD in both test accuracy and training loss. This validates the potential of RSDM in more practical settings. We plan to extend the evaluation to additional real-world tasks and datasets in future work. [1] Fei et al. O-vit: Orthogonal vision transformer. arXiv:2201.12133. We hope we have addressed your concerns. We would be happy to respond to any remaining questions you may have. Once again, we appreciate the reviewer's time and valuable comments. --- Rebuttal Comment 1.1: Comment: Most of my concerns were addressed in this rebuttal, as well as in the replies to other reviewers. Hence, I decided to adjust my rating accordingly. --- Reply to Comment 1.1.1: Comment: Dear Reviewer ns8p, We are deeply encouraged that our responses have addressed your concerns, and we sincerely appreciate your decision to raise the score to 4. Thank you again for your valuable time and effort in reviewing our paper. Best regards, Authors
Summary: This paper develops a new approach to performing optimization with orthogonality constraints, with an emphasis on keeping computational complexity low. The authors are able to provide convergence bounds in expectation for nonconvex losses. The authors then run a number of experiments on well-known baselines to confirm the utility of their method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not go through the main proofs in detail, but the convergence results in 5.1.1 and 5.1.2 make sense, and are well-explained by the authors. Experimental Designs Or Analyses: N/A. Supplementary Material: No. Relation To Broader Scientific Literature: This would be interesting to anyone who requires optimizing with orthogonality constraints in their research. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper is nicely written, in a way that I (a non-expert in this area) could follow along and understand what was being done. I worry that this work might be too incremental, after having looked through some of the references provided in the introduction. But as I state above, I am not particularly familiar with this literature so I could be wrong about the novelty of this work. Other Comments Or Suggestions: On the x-axes of Figures 3, 4, and 5, please add a label. Is this wall-clock time? Number of optimization steps? This information is helpful to know, as most deep learning researchers have a sense of how long it takes to train e.g., MNIST, and it is important to understand the slowdown (if there is one) your algorithm induces. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful that the reviewer found our work interesting and well-written. Below are our responses to your comments. **1. On the novelty of this work compared to related works.** **R1**: Thank you for the comment. We would like to clarify the novelty of our work. We have carefully compared with related works both theoretically and numerically in the main paper and in Appendix I, J. Our method is well-motivated from Riemannian geometry, which gives a principled foundation for the design and convergence analysis of the algorithms within the framework of Riemannian optimization. This geometric perspective also allows our method to generalize naturally to more complex manifolds, including quotient manifolds, as shown in Section 5.4 and Appendix C. We believe this provides both theoretical and practical value. We will expand the discussion on the novelty and contributions of our work in the revised manuscript to make this clearer. **2. Add label to x-axes of Figure 3, 4, 5.** **R2**: Thank you for the question. We have already included the x-axes label for Figure 3,4,5, which is (wall-clock) Time. We hope we have addressed all your comments and concerns. If you have further questions, we would be happy to address them.
Flow-based Domain Randomization for Learning and Sequencing Robotic Skills
Accept (poster)
Summary: This work studies domain randomization (DR) in reinforcement learning, with a focus on the design of the task distribution. With the help of normalizing flows, the authors propose an entropy-regularized policy optimization method for DR. The experiments are conducted in simulated environments and one real-world robotic domain. --- I've updated the review based on the rebuttal. Claims And Evidence: It is worth noting that the line numbers disappear in the compiled pdf, which makes it difficult to point to specific content. After going through the manuscript, I find some claims that could be further polished; examples are attached below: (1) **An ideal sampling distribution enables the policy to focus training on areas of the distribution that can feasibly be solved in order to maximize the overall success rate of the trained policy while not wasting time on unsolvable regions of the domain.** on Page1. The claim does not hold all the time; for example, in curriculum learning, we tend to spend more time on unsolvable regions of the domain later in the learning process. A reference should be attached to the statement, or the writing should restrict its scope. (2) **We show that GoFlow outperforms fixed and other learning-based solutions to domain randomization on a suite of simulated environments.** on Page2. No specific metrics or performance indicators are given for the “outperform” claim. (3) **Too broad a sampling distribution and the training focuses on unsolvable environments and falls into local minima.** on Page3. What theoretical or empirical support is there for this claim? My understanding is that a broad sampling distribution should cover as many scenarios as possible to achieve the purpose of DR. Similar to the above cases, there are other claims that could be polished in a future version. Methods And Evaluation Criteria: **I. 
Method:** Learning the task distribution for decision-making is a promising research direction, and robust reinforcement learning is also a crucial consideration for bridging the Sim2Real gap. The method of this work combines normalizing flows with DORAEMON [1]. However, there are some related works [2-4] that deserve discussion or comparison in the experiments. For example, [2] considers distribution shifts from a parameterized task distribution to increase RL robustness. [3] also uses normalizing flows to parameterize MDP distributions, avoiding handcrafted task-distribution design for robustness improvement. Though [2-3] target meta RL, their optimization objectives also apply to DR. [4] considers active domain randomization and also relates to robustness. **II. Evaluation:** Throughout the manuscript, I find that the coverage ratio serves as the primary metric in the learning curves and table results. However, its concept is not well introduced at the beginning of the evaluation; my understanding is that coverage is defined relative to the reward threshold. Hence, some more general metrics could be included to strengthen this work. (1) In terms of **generalization**, I recommend this work follow [1] and include learning curves for the success rate. (2) In terms of **robustness**, I suggest reporting the conditional value-at-risk (CVaR) of success rates and performance on OOD tasks. Besides, I appreciate the authors’ effort in real-world scenarios, which is a plus of this work. **Reference** [1] Tiboni, Gabriele, et al. "Domain randomization via entropy maximization." arXiv preprint arXiv:2311.01885 (2023). [2] Ajay, Anurag, et al. "Distributionally adaptive meta reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 25856-25869. [3] Wang, Cheems, et al. "Robust fast adaptation from adversarially explicit task distribution generation." arXiv preprint arXiv:2407.19523 (2024). 
[4] Mehta, Bhairav, et al. "Active domain randomization." Conference on Robot Learning. PMLR, 2020. Theoretical Claims: Not applicable. There are no theoretical claims in this work. Experimental Designs Or Analyses: Overall, this work considers several scenarios in its experimental design; however, more indicators and comparisons would strengthen it. Supplementary Material: I read all parts. Relation To Broader Scientific Literature: Not applicable. Essential References Not Discussed: [1] Tiboni, Gabriele, et al. "Domain randomization via entropy maximization." arXiv preprint arXiv:2311.01885 (2023). [2] Ajay, Anurag, et al. "Distributionally adaptive meta reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 25856-25869. [3] Wang, Cheems, et al. "Robust fast adaptation from adversarially explicit task distribution generation." arXiv preprint arXiv:2407.19523 (2024). [4] Mehta, Bhairav, et al. "Active domain randomization." Conference on Robot Learning. PMLR, 2020. Other Strengths And Weaknesses: See the above. Other Comments Or Suggestions: It would be great if the above suggestions could be incorporated in a revision. Meanwhile, the method section should exclude some background literature and focus more on the contributed points. After revision, this will be a strong paper. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review and practical recommendations regarding our claims and experimental metrics, which have significantly strengthened our manuscript. Below we address each of your comments and questions. > … The claim does not hold all the time, for example, in curriculum learning, we tend to pay more time on solving unsolvable regions of the domain in the later learning process. The reference should be attached in the statement or writing should restrict the scope. We agree that this claim was imprecise, and there are other methods that train adversarially on difficult problems. However, this would not be a good strategy in our class of problems due to the infeasibility of some parts of the parameter space. That statement has been updated to say "An alternative strategy is to learn an environment distribution during training with the aim of finding the broadest possible training distribution that can feasibly be solved in order to maximize the chances of transferring to an unknown target environment." Additionally, we added a section to the related work discussing adversarial training. > There miss specific metrics or performance indicators about the “outperform” term. We have modified the statement to make a more detailed claim about coverage. > Too broad a sampling distribution and the training focuses on unsolvable environments and falls into local minima. On page3. What is the theoretical support or evidence support for this claim? My understanding about the broad sampling distribution is to cover scenarios as many as possible to achieve DR purpose. The support for this claim comes from our experimental results and from the results of many other papers on learning for domain randomization. Figure 3 shows that full domain randomization (the broadest possible distribution) is insufficient for reaching the target success threshold.
For example, in the “ant” domain where the goal is for the ant to run forward, full domain randomization learns a uniformly applied strategy of bracing the weight of the body so that the negative reward is not received from floor contact. The result is that none of the ants (even the lighter ones) can be considered to have a successful policy because none are running forward. > However, there are some related works [2-4] that deserve discussion or comparison in experiments. We agree that these are relevant papers, and we have updated our related work to include them. > Throughout the manuscript, I find the coverage ratio works as the primary metric in terms of learning curves and table results. However, its concept is not well introduced at the beginning of evaluation. My understanding about the coverage is related to the reward threshold. Hence, some general metrics can be involved in strengthen this work. Apologies for any confusion; we moved our discussion of the coverage metric to earlier in the evaluation section (section 5.2) and added detail on how it is calculated in our experiments. Regarding the other suggested metrics - Our coverage metric is the same as the success rate metric from Tiboni et al. More specifically, it is the proportion of environments from the entire parameter range that the policy is expected to succeed under. We prefer the name “coverage” because this quantity does not reflect the actual success rate on a robot. - We agree that this is a good metric to report. In our updated manuscript, we have added a new table with CVaR scores. CVaR was computed as the mean of the final rewards falling below the 10\% percentile (VaR). --- Rebuttal Comment 1.1: Comment: I thank the authors' response. Some of my questions are addressed. In the updated manuscript, it is necessary to discuss the mentioned references [1-4] in detail, or even include some in an experimental comparison, since some of them learn the task distribution or use normalizing flows.
It also requires more effort to polish the manuscript, in both its statements and its background material, to improve readability. In particular, remember to explain the evaluation and CVaR metrics at the beginning of the experimental section. Also, the review score has been updated accordingly.
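The CVaR computation described in the rebuttal above (the mean of final rewards falling below the 10% value-at-risk percentile) can be sketched in plain Python as follows; this is a minimal illustration, and the reward values are invented rather than taken from the paper.

```python
def cvar(rewards, alpha=0.10):
    """Conditional value-at-risk of a batch of final rewards.

    Following the rebuttal's description: VaR is the alpha-percentile of
    final rewards, and CVaR is the mean of the rewards falling below it.
    """
    ordered = sorted(rewards)
    k = max(1, int(len(ordered) * alpha))  # size of the worst alpha-fraction
    tail = ordered[:k]
    return sum(tail) / len(tail)

# Hypothetical final rewards from ten evaluation rollouts.
rewards = [0.9, 0.8, 0.95, 0.1, 0.85, 0.7, 0.9, 0.2, 0.88, 0.92]
print(cvar(rewards, alpha=0.10))  # mean of the single worst rollout -> 0.1
```

With ten rollouts and alpha=0.10, the tail contains exactly one sample, so CVaR degenerates to the worst observed reward; larger batches give a smoother tail average.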
Summary: Domain randomization is a known and useful technique for transferring models trained in simulation to the richness of the real world. Several methods are known from past work. The usual workflow is to learn or estimate the distribution of parameters that is solvable by the policy, simultaneously with optimizing the policy. It requires smart choices of loss function and training algorithm to achieve this goal. In this paper, the authors use a normalizing flow to fit the parameter distribution and use an entropy-regularized loss function for training. The conjecture is that neural parameter distributions are more expressive and will be conducive to generalization. Results are demonstrated on some benchmarks to show more robust generalization compared to previous baselines. Further, real-world demonstrations are shown on a manipulation task, and extensions are made to multi-step planning. Claims And Evidence: The authors claim that using a neural normalizing flow for learning the distribution of domain parameters is an improvement on the state of the art for domain randomization. This is supported by adequate experimental evidence. Also, this same method is illustrated to be useful for learning the precondition distribution in a belief space planner. Methods And Evaluation Criteria: The methods are sound and the evaluation criteria are acceptable. The proposed method achieves a bigger coverage range compared to the baselines. The previous solution of fitting beta distributions has certain advantages: it is amenable to using a trust region constraint rather than a trust region regularizer, which is ad hoc. Theoretical Claims: N/A Experimental Designs Or Analyses: There is a toy example which is useful for the reader to understand the essence of the paper. Then, there are results illustrated on MuJoCo benchmarks. The coverage metric is an indicator of how much of the parameter space results in a successful policy.
Further experiments on a Bayesian multi-step planner illustrate a novel use case for the algorithm. Supplementary Material: Yes, I reviewed the supplementary material Relation To Broader Scientific Literature: The topic of domain randomization has been under discussion for some time and gained visibility through OpenAI's work on solving the sim2real problem for Rubik's cube manipulation. Even though the methods are simple, their effectiveness is usually quite good. This paper is a useful extension. Apart from incremental advances on existing benchmarks and sim2real application, it introduces sampling distribution modeling in the context of belief state preconditions. Further, there is a lot of work on the use of domain randomization for out-of-distribution generalization in the context of images. Dynamics randomization is relatively less explored. Essential References Not Discussed: The citations can be more comprehensive with more mention of domain adaptation, transfer learning and sim2real transfer. Some examples are mentioned below: [A1] Yu, Wenhao, C. Karen Liu, and Greg Turk. "Policy transfer with strategy optimization." arXiv preprint arXiv:1810.05751 (2018). [A2] Sagawa, Shogo, and Hideitsu Hino. "Gradual domain adaptation via normalizing flows." arXiv preprint arXiv:2206.11492 (2022). Also, the authors are requested to expand further on these lines: "Some previous works have combined domain randomization with information gathering via system identification (Ramos et al., 2019; Sagawa & Hino, 2024)" Other Strengths And Weaknesses: Strengths: The flow of the paper, with the motivation, method and experimental results, is good. I find the illustrations on robot assembly tasks and belief-space planning a new addition. Overall, there is a knowledge gain for the reader. The strength of the proposed method lies in its effectiveness despite its simplicity.
More theoretical study on why this particular loss function and choice of normalizing flow is good will be interesting for future research. Weaknesses: The purpose of domain randomization is to achieve domain generalization and sim2real transfer. It is difficult to assume that domain randomization alone will achieve this. There could be some more discussion about the other approaches. On the flip side, the methods by themselves are not very novel, though effective. The entropy regularization objective, the normalizing flow, and neural probabilistic modeling are all well known. They are used in an interesting and effective way. The writing is unclear in certain places. See below. Other Comments Or Suggestions: “Reinforcement learning (RL) has proven to be a useful tool in robotics for learning control or action policies for tasks and systems which are highly variable and/or analytically intractable” - unclear statement Fig 5 caption: “The thresholded sampling distribution is further thresholded by the value function to get the belief-space precondition” - unclear statement Some additional information in the supplementary material could be discussed further in the main section of the paper. For example, there is a passing mention of the policy being independent of the yaw. This could be discussed in more detail. Questions For Authors: In terms of the choice of neural sampling distribution, LSDR also uses a learned distribution. Is the main difference that LSDR uses a neural network to fit the GMM parameters? Or do they learn the GMM parameters directly? What happens if we increase the number of mixtures or the expressivity of the distribution without a neural network? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and suggestions on expanding our related work and improving figure clarity, which have enhanced the overall presentation of our paper. Below we address each of your comments and questions. > The citations can be more comprehensive with more mention of domain adaptation, transfer learning and sim2real transfer. Some examples are mentioned below: We agree that these are relevant papers, and we have updated our related work to include them. > Also, the authors are requested to expand further on these lines: "Some previous works have combined domain randomization with information gathering via system identification (Ramos et al., 2019; Sagawa & Hino, 2024)" We have expanded on this discussion further in the related work section. That section of the related work now reads as follows: "Beyond training robust policies in simulation, learned sampling distributions can be tied to the real-world environmental conditions under which policies are likely to succeed. Previous works have integrated domain randomization with real-world interactions for more informed training distributions [citations] or to find the maximally effective real-world strategy [citations]. However, these methods often necessitate expensive policy retraining or data-intensive evolutionary search based on real-world feedback, posing challenges for real-time applications. Instead, we utilize our learned sampling distribution as an out-of-distribution detector within a multi-step planning framework, enabling fast and data-efficient information gathering in the real world." > The use of domain randomization is to result in domain generalization and sim2real. It is difficult to assume that domain randomization alone will achieve this. There could be some more discussion about the other approaches. We agree that domain randomization alone is not always sufficient for transfer. 
Hopefully our updated related work discussion helps highlight that real-world feedback is an important component of sim2real transfer. > In terms of the choice of neural sampling distribution, LSDR also uses a learned distribution. Is the main difference that LSDR uses a neural network to fit the GMM parameters? Or do they learn the GMM parameters directly? The LSDR baseline directly learns multivariate Gaussian parameters. They don’t learn a GMM or use neural networks, but rely on a simple Gaussian representation. > The writing is unclear in certain places. See below. Thank you for pointing these out. We have addressed all of these in the updated manuscript. Specifically, we made the following updates: 1. We clarified our statement regarding the usefulness of RL in the introduction. It now reads as follows: "Reinforcement learning (RL) is a powerful tool in robotics because it can be used to learn effective control policies for systems with highly complex dynamics that are difficult to model analytically. Unlike traditional control methods, which rely on precise mathematical models, RL learns directly from simulated or real-world experience [citations]." 2. We updated the caption of figure 5 to much more clearly state what each subfigure is. Please see our rebuttal to Reviewer aKt5 for details. 3. We elaborated on the yaw invariance property of the gear-insertion problem in the main text of the paper. It now reads as follows: "Despite an unknown yaw dimension, the robot is confident in the insertion because the flow $p_\phi$ indicates that success is invariant to the yaw dimension. This is due to the fact that success in the insertion task is defined by the distance between the bottom center of the gear and the base of the gear shaft, which is independent of gear rotation." --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my reviewer feedback. Most of the warranted fixes are minor and can be done easily.
The presentation and writing in the first draft are effective. I am currently retaining my score.
Summary: This paper proposes a normalizing flow based approach to learn sampling parameters for domain randomization. Instead of doing naive sampling for domain randomization hyper-params, the paper uses a more principled way (which has been proposed before). But different from previous works, this paper proposes using a normalizing flow based model to learn the sampling distribution and iteratively improve the policy by sampling domain parameters from this distribution. Experimental results in simple domains show the effectiveness of the proposed approach. The proposed approach is also combined with a belief space planner to show how it can be combined with traditional belief space planning to accomplish long horizon tasks. Claims And Evidence: Yes, the claims are well supported. The paper claims that the proposed domain randomization approach is more robust, which is what the experiments correctly validate and show. Methods And Evaluation Criteria: Yes, although more complex tasks/benchmarks could be created (e.g. more complex control tasks). But even the current set of tasks seems adequate. Theoretical Claims: No, there is minimal theory and the overall algorithm is sound. Experimental Designs Or Analyses: Yes. The toy experiment is a bit contrived in my opinion since there is assumed to be this complex relation in the space. Clearly, a more complex model would be able to fit it much better than other naive approaches (most baselines). For the other MuJoCo experiments it was not immediately clear how the base sampling distributions were chosen, but it seems they were chosen to highlight larger differences from the baselines. Figure 12 in the Appendix provides a clearer picture. I guess one big concern would be how realistic some of the assumptions made in the paper are. In real-world scenarios we often do Sys-ID on the robot to find a good initial set of hyperparams and then make it robust around that nominal set.
But the proposed approach (in the main paper) uses a very broad initial set, which is a bit more complex, and clearly a non-learning based approach would not work in this scenario. Supplementary Material: yes, briefly looked through it, looked at some sections more carefully. Relation To Broader Scientific Literature: Domain randomization (DR) is very important for making robots work in the real world, especially for sim2real. Most legged locomotion work relies on DR. However, in those scenarios we often do sys-ID and then use careful engineering/expertise to iterate on things. The proposed approach is more automated; however, it is unclear if it can be applied to complex real-world problems, i.e., does it obviate the need for engineering (most likely not). Overall, the problem statement is important. Essential References Not Discussed: no Other Strengths And Weaknesses: The paper focuses on an important problem. It is quite well written and clearly explains the problem and the solution. The proposed approach is not extremely novel but is well executed. I do have some concerns about the experiments. Basically, I am curious to hear how the experiments (MuJoCo ones) would change if the feasible domain regions are not that complex? Also, is the assumption made in the paper realistic? Most real world robotics works, such as OpenAI's Rubik's cube hand and recent legged robot work, often use good old-fashioned engineering and robotics to solve similar problems. How challenging or complex would it be to apply the proposed approach in such scenarios, since the base assumption is that a flow based model could learn complex sampling distributions? Why was the choice of spline based normalizing flow used to learn the sampling distribution? That part does not seem to be motivated in any way (unless I missed it). Other Comments Or Suggestions: none Questions For Authors: please see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful analysis and positive review. Below we address each of your comments and questions. > For the other mujoco experiments it was not immediately clear how the base sampling distributions were chosen We attempted to select ranges for these parameters that were large enough to capture all possible physically realistic parameter settings. > I guess one big concern would be how realistic are some of these assumptions made in the paper. In real-world scenarios we often do Sys-ID on the robot to find good initial set of hyperparams and then make it robust around that nominal set. But the proposed approach (in the main paper) uses a very broad initial set which is a bit more complex and clearly a non-learning based approach would not work in this scenario. We agree with the assessment of how sim-to-real issues are typically addressed, and the limitations of this process are some of the driving motivations for this paper. First, this manual Sys-ID to find a nominal set of hyperparameters requires some human effort on a per-task basis, which we are able to eliminate by selecting broad enough ranges. Second, the nominal parameter values and resulting distributions may not be optimal ones for task performance and robustness. Lastly, the integrated planning system is much more efficient and effective with large parameter ranges as opposed to narrower distributions around nominal values. This is because less information gathering is required by policies that are robust to a larger space of parameters. > Basically, I am curious to hear how would the experiments (mujoco ones) change if the feasible domain regions are not that complex? If the feasible domain regions are centered and regularly shaped, we see the performance of other learning-based methods increase. The experiments in Appendix A.6 demonstrate this property. > Why was the choice of spline based normalizing flow used to learn the sampling distribution? 
We did not have a specific reason to use spline flows over other models beyond their superior empirical performance compared to other architectures in the Zuko library.
Summary: The paper proposes updating environment parameters for policy training by learning a sampling distribution parameterized as a normalizing flow. Normalizing flows are known to be capable of representing expressive distributions. The distribution is trained to maximize policy performance, maximize its marginal entropy, and minimize the change from the previous distribution. The method is validated on a few simulated tasks showing improved coverage (percentage of the parameters where the policy performance is higher than some threshold) over the baselines, as well as being applied as an OOD detector for a real-world robot manipulation planner. Claims And Evidence: The paper's main claim is that the learned sampling distribution improves the overall performance in testing environments. The results are mostly shown in Fig. 3. While the results themselves look convincing, many details of the experiment setup seem missing (or should have been provided in the main text). For example, how are the initial distribution and target distribution decided? How are $\alpha$ and $\beta$ chosen in Fig. 3? More critically, it was not clear how $J_t$, the threshold for reaching target performance in any environment, is chosen. I imagine the choice can largely affect the behavior of the curves shown in Fig. 3. More discussion/justification (or additional results on varying $J_t$) should be provided. Methods And Evaluation Criteria: I think the set of tasks considered in the environments is fairly comprehensive, and the study on a real-world manipulation task is well-appreciated. However, I do feel the toy problem in Fig. 2 is too contrived given the nature of flow matching vs. the baselines. Theoretical Claims: The paper has a small amount of theoretical claims in Appendix A.2, and they look correct to me. Experimental Designs Or Analyses: Again, I think the paper is missing important details on the experimental designs as discussed above.
Supplementary Material: Yes, I reviewed the proofs, additional experiment details (still lacking the discussion/justification), and additional studies (varying hyperparameters). Relation To Broader Scientific Literature: The paper studies the effect of environment parameters for training effective control policies. Such a study is fitting as we consider more generalizable policies. However, I do think the paper lacks more discussion on how the approach can be applied in more realistic real-world tasks (for improved performance instead of just for OOD detection). Essential References Not Discussed: I think the line of work around task-driven system identification [1,2,3] is very relevant, as it also tries to identify relevant parameters for improving policy performance. [1] Muratore et al., Data-efficient domain randomization with bayesian optimization [2] Ren et al., Adaptsim: Task-driven simulation adaptation for sim-to-real transfer [3] Liang et al., Learning active task-oriented exploration policies for bridging the sim-to-real gap Other Strengths And Weaknesses: Some of the figures in the paper can be improved. I don't understand the $\theta$ part of Fig. 4. It also took me quite a while to understand Fig. 5, especially top row vs. bottom row. I think the caption can be vastly improved to provide more context and detailed explanations. Other Comments Or Suggestions: I recommend putting GoFlow as either the first or the last method in the legend in Fig. 3. Questions For Authors: Do you have intuition why quadruped works the best with $\beta=0$ as Fig. 7 shows? Code Of Conduct: Affirmed. Overall Recommendation: 2
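The three-term training objective summarized in the review above (maximize expected policy performance under the sampled parameters, maximize the distribution's entropy, and penalize drift from the previous distribution) can be sketched with a discrete stand-in distribution. The weights, returns, and categorical parameterization below are illustrative assumptions; the paper's actual sampling distribution is a normalizing flow, not a categorical.

```python
import math

def objective(p, p_prev, returns, alpha=1.0, beta=0.01):
    """Score a candidate sampling distribution p over environment-parameter bins.

    perf    : expected return under p (to maximize)
    entropy : Shannon entropy of p (to maximize, weighted by alpha)
    drift   : KL(p || p_prev), change from the previous distribution
              (to minimize, weighted by beta)
    """
    perf = sum(pi * r for pi, r in zip(p, returns))
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    drift = sum(pi * math.log(pi / qi) for pi, qi in zip(p, p_prev) if pi > 0)
    return perf + alpha * entropy - beta * drift

returns = [1.0, 0.8, 0.1]       # hypothetical returns per parameter bin
p_prev = [1 / 3, 1 / 3, 1 / 3]  # previous (uniform) sampling distribution
broad = [1 / 3, 1 / 3, 1 / 3]   # stays uniform over all bins
narrow = [0.9, 0.05, 0.05]      # collapses onto the easiest bin
# With enough entropy weight, the broad distribution is preferred.
print(objective(broad, p_prev, returns) > objective(narrow, p_prev, returns))
```

The relative size of the entropy weight controls the trade-off the review describes: with a small alpha the objective collapses onto the easiest bins, while a larger alpha keeps the distribution broad.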
Rebuttal 1: Rebuttal: Thank you for your detailed insights on experimental design and parameter selection, which have greatly helped us clarify our approach. Below we address each of your comments and questions. > … how are the initial distribution and target distribution decided? How are $\alpha$ and $\beta$ chosen in Fig. 3? The target distribution is the same across all methods, as it is a uniform distribution over a set of physical properties such as link masses, joint frictions, and object poses. We attempted to select ranges for these parameters that were large enough to capture all possible physically realistic parameter settings. The initial distribution depends on the method. For GoFlow, the initial distribution is defined by the random initialization of the network. $\alpha$ and $\beta$ were chosen through a hyperparameter selection process detailed in Appendix A.5. > More critically, it was not clear how $J_t$, the threshold for reaching target performance at any environment, is chosen. More discussion/justification (or additional results on varying $J_t$) should be provided. $J_t$ was chosen to be slightly below the optimal performance under no environment randomization. We verified that the trained policy still exhibited “qualitatively successful” performance at the target reward threshold. For example, we verified that the ant still exhibits running behavior at the chosen target threshold. We have updated the manuscript to describe this selection process. Additionally, we performed an experiment showing how coverage changes with $J_t$. Although we cannot update the manuscript during the rebuttal process, new results in appendix section A.7 show that while the coverage is highly dependent on the choice of $J_t$, GoFlow outperforms baseline methods across almost all choices of $J_t$.
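The coverage metric and its dependence on $J_t$, as discussed above, can be sketched as a simple computation over a grid of environment parameters; the expected returns and thresholds below are made up for illustration.

```python
def coverage(returns_by_param, j_t):
    """Fraction of sampled environment parameters whose expected return
    meets or exceeds the target performance threshold j_t."""
    if not returns_by_param:
        return 0.0
    successes = sum(1 for r in returns_by_param.values() if r >= j_t)
    return successes / len(returns_by_param)

# Hypothetical expected returns over a 1-D grid of environment parameters.
returns = {0.00: 950.0, 0.25: 900.0, 0.50: 820.0, 0.75: 400.0, 1.00: 100.0}
for j_t in (500.0, 850.0):
    print(j_t, coverage(returns, j_t))  # coverage shrinks as j_t grows
```

This makes the rebuttal's point concrete: the absolute coverage number moves with $J_t$, so comparisons between methods are only meaningful at a shared threshold.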
> I think the line of work around task-driven system identification [1,2,3] is very relevant We agree that these are relevant papers, and we have updated our related work [1,2] and introduction [3] to include them. > I recommend putting GoFlow as either the first or the last method in the legend in Fig. 3. We have moved GoFlow to be the first method in Figure 3. > Do you have intuition why quadruped works the best with \beta=0 as Fig. 7 shows? While $\beta$ can help with training stability, it can also cause the distribution to converge more slowly. The quadruped domain specifically was less sensitive to large swings in the sampling distribution, and therefore did not benefit from larger $\beta$. > Some of the figures in the paper can be improved. I don't understand the \theta part of Fig. 4. It also took me quite a while to understand Fig. 5, especially top row vs. bottom row. I think the caption can be vastly improved to provide more context and detailed explanations. Thank you for this suggestion. We have expanded on the captions to figures 4 and 5 to improve clarity. For figure 4, we modified the figure to remove $\theta$ and replace it with yaw, which is how it was described in the caption and elsewhere in the paper. We also defined it in the caption. For Figure 5, we rewrote the entire caption to clearly describe the meaning of each column/row of subfigures. The new caption reads as follows: > "A visual example of the precondition computation described in Section 6.2 for the gear assembly plan shown in Figure 4. The two rows show two different projections of the 3D sampling space (x position vs y position in the top row and y position vs yaw rotation in the bottom row). We apply a threshold $\epsilon$ to the sampling distribution to remove low-probability regions (column 1). Additionally, we filter the value function by retaining only the regions where the expected value exceeds a predetermined threshold $\eta$ (column 2). 
The intersection of these two regions defines the belief-space precondition, indicating where the policy is likely to succeed (column 3). Comparing the precondition to the beliefs, we can see that the belief is not sufficiently contained within the precondition at $t=0$ (column 4), but passes the success threshold $\eta$ after closer inspection at $t=4$ (column 5)."
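The precondition computation restated in the caption above (threshold the learned sampling density at $\epsilon$, threshold the value function at $\eta$, then intersect the two regions) can be sketched on a discretized grid; the density and value numbers below are invented for illustration.

```python
def precondition(density, value, eps, eta):
    """Belief-space precondition on a discretized parameter grid: cells where
    the learned sampling density exceeds eps AND the expected value exceeds eta."""
    return [d > eps and v > eta for d, v in zip(density, value)]

density = [0.30, 0.20, 0.05, 0.01]  # hypothetical flow density per grid cell
value = [0.9, 0.4, 0.8, 0.95]       # hypothetical value estimates per grid cell
print(precondition(density, value, eps=0.10, eta=0.5))  # [True, False, False, False]
```

Only the first cell survives both thresholds here: the second has high density but low value, and the last two have high value but negligible density under the learned flow.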
Can Transformers Learn Full Bayesian Inference in Context?
Accept (poster)
Summary: The paper introduces an innovative approach to full Bayesian inference using in-context learning (ICL) with transformers. By leveraging ideas from continuous normalizing flows and flow matching, the authors propose a framework that learns to approximate the posterior distribution P(z|x) directly from synthetic samples drawn from the joint distribution. Evaluations on generalized linear models (GLMs), factor analysis, and Gaussian mixture models demonstrate that the ICL method produces posterior samples comparable to those obtained via Hamiltonian Monte Carlo and state‐of‐the‐art variational inference techniques. The work is well-motivated and offers a promising new direction by uniting meta-learning ideas with Bayesian inference. **update after rebuttal** I thank the authors for their responses. Although it is understandable that not much new empirical evidence can be provided due to the restricted length of the rebuttal this year, the empirical evaluation remains the weakness of this paper. Hence I will maintain my current score. I disagree that image datasets are less relevant, since there are many works (e.g., Bayesian neural networks and VAE-related works) trying to expand Bayesian inference to the image domain. I think potential evaluations on image datasets (with Bayesian versions of vision transformers) would greatly strengthen the paper in the future. Claims And Evidence: The central claim that transformers can learn full Bayesian inference in context is well-supported by extensive experiments on GLMs, factor analysis, and GMMs—showing that the posterior samples produced by the ICL approach closely match those obtained via HMC and often outperform several VI methods on synthetic and limited real-world datasets. However, the claims on large-scale training are not well backed by experiments. Although real-world data are used, they remain limited to tabular datasets.
More advanced and large datasets like image datasets should be used to verify the effectiveness of the proposed frameworks. Also, some image generation tasks (e.g., on MNIST) can be included to test the quality of the learned posterior. Somehow, what the authors claim is related to density estimation methods. Experiments on UCI datasets and related density estimation methods would better support the claims. Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense for the problem at hand. The authors introduce a transformer-based in-context learning approach that leverages continuous normalizing flows and flow matching to approximate full posterior distributions. This method is evaluated using well-established metrics—C2ST, MMD, and Wasserstein—which are standard for comparing how close the inferred distributions are to those obtained by gold-standard methods like HMC. Furthermore, the experiments span a range of models (GLMs, factor analysis, and GMMs) and use both synthetic and curated real-world tabular datasets, which is appropriate for demonstrating feasibility across different Bayesian inference scenarios. However, while the chosen benchmarks and evaluation criteria are solid for controlled experiments, the reliance on synthetic data and moderate-scale real-world datasets raises questions about the scalability and robustness of the method when dealing with more complex, high-dimensional tasks. Also, the baselines compared are insufficient: the authors only verify comparability to existing optimization methods, but not yet practicality for different downstream tasks. Somehow, what the authors are claiming is related to density estimation methods. Experiments on UCI datasets and related density estimation methods would better support the claims. Theoretical Claims: Many theoretical results are only outlined rather than given in full detail. The results heavily follow previous work, which is well verified.
Perhaps the authors need to include the key assumptions made by these works here for better clarity. Experimental Designs Or Analyses: The experiments and designs are generally adequate and sound. Supplementary Material: I checked all experiment settings and results. Relation To Broader Scientific Literature: As mentioned before, I think the task the authors tackle is more like posterior density estimation. In this sense, including comparisons on density estimation benchmarks (e.g., UCI) would be necessary, and even image generation tasks for higher-dimensional cases. Essential References Not Discussed: A comparison to related density estimation methods is missing, e.g., Salazar, Sebastian. "VaRT: variational regression trees." Advances in Neural Information Processing Systems 36 (2023): 45681-45693. Other Strengths And Weaknesses: Strengths * Novelty: The paper is novel; it applies transformer-based in-context learning to perform full Bayesian inference—a departure from conventional MCMC or VI methods. * Flexibility: The proposed approach can handle multivariate and complex posterior distributions, showing robust performance even in cases with non-standard posterior shapes (e.g., skewed or multi-modal distributions). * Empirical Results: Comparative evaluations indicate that the ICL approach outperforms traditional VI methods in capturing the full posterior, particularly in synthetic experiments. Weaknesses * My largest concern is the scalability issues. The authors only adopt small datasets (e.g., the simulated and tabular datasets by Grinsztajn et al.). The authors should test the capability of the proposed transformer on image datasets (e.g., MNIST or ImageNet) to verify their claims on scalability. * Despite the comprehensive evaluation in the suggested settings, I think comparisons to density estimation methods (e.g., on UCI) and image generation tasks (for validating the quality of the posterior distribution) would be necessary.
The current empirical evaluation is insufficient. Other Comments Or Suggestions: N.A. Questions For Authors: In practical settings where model misspecification is a concern, how robust is the ICL approach? Have the authors explored scenarios where the generative model does not perfectly capture the true data distribution? Code Of Conduct: Affirmed. Overall Recommendation: 3
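As an aside for readers unfamiliar with the distributional metrics this review refers to, the MMD between two posterior-sample sets can be estimated directly from samples. A minimal NumPy sketch with an RBF kernel; the kernel choice and fixed bandwidth are illustrative assumptions, not the paper's actual evaluation code:

```python
# Illustrative only: unbiased MMD^2 estimate between two posterior-sample
# sets (e.g., ICL samples vs. HMC samples). The RBF kernel and fixed
# bandwidth are assumptions for this sketch, not the paper's setup.
import numpy as np

def rbf_kernel(a, b, bandwidth):
    # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    # Diagonal terms are excluded for the unbiased estimator
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
diff = mmd2_unbiased(rng.normal(size=(500, 2)), rng.normal(3.0, 1.0, size=(500, 2)))
```

Here `same` should be close to zero while `diff` is clearly positive, mirroring how the metric separates matched from mismatched posteriors.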
Rebuttal 1: Rebuttal: Thank you for taking the time to read our manuscript and for providing detailed feedback. > More advanced and large datasets like image datasets should be used to verify the effectiveness of the proposed frameworks. Please note that, to the best of our knowledge, our paper presents the first thorough investigation of in-context learning for full Bayesian inference. We think that tabular data is a very natural domain for full Bayesian inference and in particular for the latent variable models considered in our work. While fully Bayesian methods for image data exist, they are arguably substantially less prominent. Furthermore, we believe that the inherent heterogeneity of the 17 considered real-world tabular datasets, spanning domains such as superconductors, environmental pollution, or wine quality, is well suited to provide a challenging and diverse benchmark. Following your comment, we now also include an [ablation study](https://anonymous.4open.science/r/Extra-B820/Dim_Ablation.pdf) that investigates the effect of the dimensionality of the used data. > Also, some image generation tasks (e.g., on MNIST) can be included to test the quality of the learned posterior. We would like to argue that image generation is usually not considered to be within the scope of full Bayesian inference, which we investigate in this paper. > Somehow, what the authors claim is related to density estimation methods. Experiments on UCI datasets and related density estimation methods would better support the claims. We would like to emphasize that in this work, we consider Bayesian inference as the task of providing samples from the posterior. This is within the scope of what MCMC methods do and thus importantly allows us to compare our sampling-based method and the VI baselines to HMC as a gold standard. Even though Bayesian inference can also be accomplished via density estimation, we would argue that this is a different task.
> the authors only verify the comparability to existing optimization methods, but yet its practicability to different downstream tasks. We thank the reviewer for this comment. Exclusively comparing posterior samples themselves is, however, a common practice in amortized (simulation-based) inference [1]. We would also like to highlight that, in practical applications, the results of the latent variable models we study are arguably mostly analyzed in isolation rather than being used directly for downstream tasks, which motivates our choice of evaluation procedure. However, following your recommendation, we now include results for [new experiments](https://anonymous.4open.science/r/Extra-B820/Pred_Performance.pdf) where we use the posterior samples for the GLMs case for the purpose of prediction. > Perhaps the authors need to include the key assumptions by these works here for better clarity. Thank you for this point. We will include the assumptions underlying flow matching in our paper instead of just referring to the literature. > A comparison to related density estimation methods is missing, e.g., Salazar, Sebastian. "VaRT: variational regression trees." Advances in Neural Information Processing Systems 36 (2023): 45681-45693. We will add a more extensive discussion of density estimation methods, including the aforementioned paper and how they relate to our approach, to the revised version of the manuscript. > Have the authors explored scenarios where the generative model does not perfectly capture the true data distribution? Please refer to Appendix H for experimental results regarding the OOD performance of our approach and to Appendix B for a more detailed discussion on model misspecification. We will expand our explanations of the situation regarding OOD performance in a new section in the revised version of the manuscript. We created a [new plot](https://anonymous.4open.science/r/Extra-B820/plot_ood.pdf) for this purpose.
## References [1] Lueckmann, Jan-Matthis, et al. "Benchmarking simulation-based inference." AISTATS 2021.
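To make the sample-based evaluation discussed in this exchange concrete: a classifier two-sample test (C2ST) trains a classifier to distinguish one method's posterior samples from HMC samples, and held-out accuracy near 0.5 indicates the distributions are hard to tell apart. A NumPy-only sketch with a linear logistic classifier, which is an illustrative stand-in rather than the paper's implementation:

```python
# Illustrative only: classifier two-sample test (C2ST). Held-out accuracy
# near 0.5 means the two sample sets are statistically hard to distinguish.
# A plain-NumPy logistic classifier stands in for the paper's classifier.
import numpy as np

def c2st_accuracy(p_samples, q_samples, epochs=200, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.vstack([p_samples, q_samples])
    y = np.concatenate([np.zeros(len(p_samples)), np.ones(len(q_samples))])
    idx = rng.permutation(len(x))          # shuffle, then split in half
    x, y = x[idx], y[idx]
    half = len(x) // 2
    xtr, ytr, xte, yte = x[:half], y[:half], x[half:], y[half:]
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):                # gradient descent on logistic loss
        p = 1.0 / (1.0 + np.exp(-(xtr @ w + b)))
        grad = p - ytr
        w -= lr * xtr.T @ grad / len(xtr)
        b -= lr * grad.mean()
    return ((xte @ w + b > 0) == yte).mean()

rng = np.random.default_rng(1)
acc_same = c2st_accuracy(rng.normal(size=(400, 2)), rng.normal(size=(400, 2)))
acc_diff = c2st_accuracy(rng.normal(size=(400, 2)), rng.normal(2.0, 1.0, size=(400, 2)))
```

The second call uses deliberately shifted samples, so the classifier should separate them well above chance, while the first stays near 0.5.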
Summary: The paper proposes an in-context learning (ICL) approach for performing Bayesian inference over three classes of models: Generalized Linear Models (GLMs), Gaussian Mixture Models, and Factor Analysis (FA). The paper shows that their ICL method can produce similar posterior samples to Hamiltonian Monte Carlo (HMC). Claims And Evidence: * Claim 1: ICL yields posterior samples that are very similar to HMC: * This does seem to be the case from the experimental results. Looking specifically at distributional metrics for measuring similarity between samples. * Claim 2: ICL samples are preferred over popular VI techniques across the experiments: * Default hyperparameters are used for the VI approaches according to the appendix (no hyperparameter optimization) and the VI approaches do not seem to be state-of-the-art in VI. Methods And Evaluation Criteria: * The benchmarks and evaluation criteria make sense. Theoretical Claims: * This does not seem to be applicable here. I do not believe any theoretical claims are made. Experimental Designs Or Analyses: * See related comments in strengths and weaknesses. Supplementary Material: * I read through the appendix and saw that code was linked in the abstract. Relation To Broader Scientific Literature: * The paper is focused on ICL for performing Bayesian inference. The results show there is potential to leverage transformer-based architectures for performing Bayesian inference. What is less clear is how this work’s conclusions differ from the paper “Transformers can do Bayesian inference”. Essential References Not Discussed: * Not that I am aware. Other Strengths And Weaknesses: ### Strengths: * The paper is well-written and the presentation (Figure 1 etc.) is strong. * The results seem to support the claims. ### Weaknesses * The key weakness that is preventing me from providing a higher score is in the identification of the novelty of the work in comparison to Müller et al. 
I think the main difference (novelty) is in the architecture (Figure 2). The loss function appears to be comparable to Müller et al. As such, it would seem like comparing to prior-data fitted networks from that paper would really help show if the proposed ICL approach is superior. * Another component of the Müller et al. paper that is missing from this paper is a reference to how many synthetic datasets were needed to achieve the reported performances. For example, it would be good to know how many synthetic data sets were needed to get to the current performance, and what the behaviour of the performance is as the number of datasets trained on is increased. Other Comments Or Suggestions: * I could not find details of the size of the data trained on for each class of model. (If it is hiding in the appendix, I apologise for missing it.) * The experiments on the OOD performance in the appendix are interesting and probably should be promoted to the main paper. I would suggest thinking of a new way of presenting these experiments though. Perhaps as a plot where the y-axis is the metric, and the x-axis is the KL-divergence between the training and test data distributions. (This is just a suggestion.) Questions For Authors: In addition to responding to the comments regarding the weaknesses: * Is it the case that the architecture is the main novelty of the paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to read our manuscript and for providing detailed feedback. > Default hyperparameters are used for the VI approaches according to the appendix (no hyperparameter optimization) We would like to kindly point out that we investigate the role of the learning rate, which is a crucial hyperparameter for the VI methods, in Appendix J of the manuscript. We furthermore decided to use standard “out-of-the-box” variational methods as baselines to allow for a direct comparison with the in-context learner, which cannot even adapt its weights for each specific dataset. Furthermore, we include ablations in our paper (please see Appendix D) where we compare against strong diffusion-based baselines. > The key weakness that is preventing me from providing a higher score is in the identification of the novelty of the work in comparison to Müller et al. In their paper on PFNs, Müller et al. [1] focus exclusively on **univariate** posterior predictive distributions. Our approach, on the other hand, targets multivariate posteriors of latent variables. The key difference is thus in a scalar-valued distribution for PFNs versus a multivariate distribution in our approach. This is a fundamental difference comparable in scope to the difference between supervised learning (PFNs) and unsupervised learning (our method). This difference manifests itself in various aspects that include: - The tasks for which inference is learned in context: - Müller et al: Regression with Bayesian neural networks, regression with Gaussian processes, and classification of hand-written digits. - Ours: Full Bayesian inference regarding the latent variables of generalized linear models, factor analysis, and Gaussian mixture models. - The framework to learn the distributions - Müller et al: Discretization of the distribution of interest and a cross-entropy loss. - Ours: Flow matching. - The architecture - Müller et al: A slightly modified standard transformer.
- Ours: A novel setup combining a PFN-style transformer encoder with a diffusion transformer decoder. - The evaluation - Müller et al: Predictive metrics and metrics for calibration - Ours: Comparison of posterior samples from our method and the samples from HMC using measures for the similarity of distributions (MMD, C2ST, and W2 metrics) > how many synthetic datasets were needed to achieve the reported performances. Thank you for this question. We train on 37.5 million synthetic datasets, test on 30 million datasets, and validate on 7.5 million datasets. In terms of computational resources, each of our training runs takes at most three hours on an A100 graphics card. To put that into perspective, for example, TabPFN version 1 is trained for 20 hours on 8 GPUs (Nvidia RTX 2080 Ti) and TabPFN version 2 even for 2 weeks on 8 Nvidia RTX 2080 Ti. Besides the existing detailed explanation in sections A and E of the appendix, we will include this information in the paper and thank the reviewer again for this suggestion. > The experiments on the OOD performance in the appendix are interesting and probably should be promoted to the main paper. I would suggest thinking of a new way of presenting these experiments though. Perhaps as a plot where the y-axis is the metric, and the x-axis is the KL-divergence between the training and test data distributions. Thank you for your comment. We agree that the OOD performance is an interesting aspect worth highlighting in the main body of our paper. Since we focus on controlled setups (GLM, FA, GMM) to enable comparisons to baselines from standard Bayesian inference (HMC and VI), limited OOD generalization is expected — a trade-off we confirm in Appendix E. We will make this clearer in the main paper.
We also followed your great recommendation and created the [following plot](https://anonymous.4open.science/r/Extra-B820/plot_ood.pdf) which we will add to the new section in the main paper explaining the situation in terms of OOD performance. ## References [1] Müller, Samuel, et al. "Transformers Can Do Bayesian Inference." Eleventh International Conference on Learning Representations. [2] Lipman, Yaron, et al. "Flow Matching for Generative Modeling." The Eleventh International Conference on Learning Representations.
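For readers tracing the flow matching framework referenced in this rebuttal ([2], Lipman et al.), a single training step regresses a vector-field model onto a conditional target along an interpolation path. Below is a hedged sketch using the simplest linear path and a placeholder linear model; the authors' actual path, architecture, and conditioning on the in-context dataset are assumptions not reproduced here:

```python
# Hedged sketch of one conditional flow matching step with the simplest
# linear path z_t = (1 - t) z0 + t z1 and target velocity z1 - z0.
# The linear "model" and the path are illustrative assumptions; the
# authors' architecture and dataset conditioning are not reproduced.
import numpy as np

rng = np.random.default_rng(0)

def fm_loss(params, z1_batch):
    z0 = rng.normal(size=z1_batch.shape)       # base noise z0 ~ N(0, I)
    t = rng.uniform(size=(len(z1_batch), 1))   # per-example time t ~ U(0, 1)
    zt = (1 - t) * z0 + t * z1_batch           # point on the conditional path
    target = z1_batch - z0                     # conditional velocity d(zt)/dt
    W, c = params                              # placeholder model v(zt, t)
    pred = zt @ W + t * c
    return ((pred - target) ** 2).mean()       # flow matching regression loss

dim = 3
params = (np.eye(dim), np.zeros(dim))
loss = fm_loss(params, rng.normal(2.0, 1.0, size=(256, dim)))
```

In actual training, `params` would be a neural vector field updated by the gradient of this loss; here the loss is merely evaluated once to show the shape of the objective.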
Summary: The authors leverage transformers architecture to amortize Bayesian posterior estimation based on training data / observations fed in-context to the model. They conduct analysis on generalized linear models, factor analysis and mixture models and highlight that the proposed method discovers the true posterior distribution quite well, measured through classifier 2-sample test (C2ST), maximum mean discrepancy (MMD) and 2-Wasserstein ($\mathcal{W}_2$) metrics. Training such an amortized model is done through the lens of forward-KL minimization, which becomes tractable when amortized (i.e. the outer expectation w.r.t observations) and the parameterization of the approximate density is accomplished using flow matching. The paper shows that across a suite of synthetic and real world tasks, the proposed neural posterior estimation method stays competitive and outperforms reverse KL Variational inference when the metrics are computed through MCMC samples as proxy. Claims And Evidence: While the authors claim that they propose a general-purpose machinery for Bayesian posterior estimation, I will consider the contribution of the work as the application of existing approaches towards a suite of interesting and useful tasks, with a focus on studying its generalization capability to real world tasks. This is because the framework is quite similar to some of the existing works [1, 2] and is primarily the application of neural posterior estimation in simulation-based inference on the class of tasks described in the paper using flow matching and the transformer architecture. In addition, the following claims made by the authors in their contributions section are not well supported - - The authors claim that their method yields samples from the posterior distribution without parameter updates or parametric assumptions about the posterior. 
This is untrue as they do make parametric assumptions about the posterior, in the sense that the posterior is modeled through an ordinary differential equation with explicit parameters. - The authors *do not provide a general framework to* analyze the circumstances that enable learning $P^{z|x}$ purely through samples from the joint. In particular, they do not provide any theory on when such a density can be reliably learned. In addition, it is claimed in Section 3 that the proposed method does not suffer from overly or insufficiently flexible distribution assumptions as in VI. This is untrue, as one could also leverage continuous-time methods for reverse KL minimization, yielding the same family of distributions as the proposed work. [1] Wildberger, Jonas, et al. "Flow matching for scalable simulation-based inference." Advances in Neural Information Processing Systems 36 (2023): 16837-16864. [2] Mittal, Sarthak, et al. "Exploring exchangeable dataset amortization for bayesian posterior inference." ICML 2023 Workshop on Structured Probabilistic Inference {\&} Generative Modeling. 2023. Methods And Evaluation Criteria: Since the proposed method is aimed at posterior estimation, the benchmark datasets and evaluation criteria considered make sense and are relevant. However, there are a number of limitations in the evaluation procedure, which I detail below - The paper does not contain any evaluation based on predictive performance. In particular, for supervised learning problems (i.e. GLM experiments) at least, it is imperative to also provide performance metrics (e.g. $l_2$ loss or the likelihood) and compare it to directly learning the linear method through gradient descent or estimating the posterior through MCMC / VI. - Given that the work claims to use the data generating mechanism for TabPFN [1], it would be useful to get a comparative analysis in terms of predictive performance from TabPFN. 
- For the VI-style baselines, the authors consider a non-amortized setup as baseline while using an amortized model as the proposed approach. An identical formulation can be framed from the reverse KL minimization perspective, and should be considered as one of the baselines. In particular, [2, 3] show that reverse KL often outperforms forward KL methods in terms of predictive metrics especially for high-dimensional tasks. - Details regarding the dimensionality of the tasks considered are missing and extremely relevant. The authors should also provide some form of analysis into how their performance varies as a function of dimensionality of the task, at least for the GLM experiments. This is because with increasing dimensionality, it becomes harder and harder to maintain similarity to real-world tasks in the simulated data, and could lead to worsening performance on them. [1] Hollmann, Noah, et al. "Tabpfn: A transformer that solves small tabular classification problems in a second." arXiv preprint arXiv:2207.01848 (2022). [2] Mittal, Sarthak, et al. "In-Context Parametric Inference: Point or Distribution Estimators?." arXiv preprint arXiv:2502.11617 (2025). [3] Mittal, Sarthak, et al. "Amortized In-Context Bayesian Posterior Estimation." arXiv preprint arXiv:2502.06601 (2025). Theoretical Claims: The authors do not make any theoretical claims, except for Proposition 1 which is well known in the literature and is the basis for neural posterior estimation methods within the framework of simulation-based inference. Please note that not making any theoretical claims is not a critique of the paper. Experimental Designs Or Analyses: Unfortunately, the descriptions of the experimental designs were unclear and / or potentially riddled with typos. Appendix A.1 in the write-up often conflates $\mathbf{u}$, $\mathbf{z}$, and $\mathbf{x}$.
For example, the authors claim that $\mathbf{x} := (\mathbf{z}, y)$ while also claiming that $\mathbf{z} = \mathbf{\beta}$, which should not be the case. Similarly, for Factor Analysis in Algorithm 3, the authors do not include $\mathbf{W_i}, \mathbf{\psi_i}, \mathbf{\mu_i}$ in the latent. Additionally, line 6 should be $\mathbf{z}_{i,j}$. Supplementary Material: I went over most parts of the supplementary material, focusing primarily on data generating distributions, ablation experiments and baselines. Relation To Broader Scientific Literature: The key contributions of the work lie in their experimental setup, as prior works have leveraged transformers for amortization, forward KL minimization as the training signal for posterior estimation and flow matching as the parameterization of density. I think the work is well positioned and shows interesting experiments and outcomes, but needs more rigorous testing and benchmarking, as well as additional baselines and ablations. Essential References Not Discussed: The authors should consider citing [1] which essentially looks at the problem of amortized posterior inference in known likelihood models like linear models, Bayesian neural networks and Gaussian Mixture Models through the lens of both forward and reverse KL methods using a Gaussian distribution or normalizing flows as the approximate density. They should also cite [2] which, among other SBI methods, describes how forward KL can be used as a measure of divergence for training amortized posterior estimators. [1] Mittal, Sarthak, et al. "Exploring exchangeable dataset amortization for bayesian posterior inference." ICML 2023 Workshop on Structured Probabilistic Inference {&} Generative Modeling. 2023. [2] Radev, Stefan T., et al. "BayesFlow: Learning complex stochastic models with invertible neural networks." IEEE transactions on neural networks and learning systems 33.4 (2020): 1452-1466. 
Other Strengths And Weaknesses: **Strengths** - The problem considered in this work is important and relevant and the authors conduct analysis on a variety of synthetic and real-world tasks which is useful. - The work successfully shows that through posterior estimation metrics, amortization for posterior estimation uncovers the true posterior quite well. **Weaknesses** - The novelty of the work lies solely in their experimental results as the use of forward KL minimization, flow matching and amortization on transformers has been already studied considerably in related works. - Benchmarking and evaluation of the method is limited as the authors do not provide predictive metrics, or compare to amortized VI methods. - The work provides limited ablations; it could consider benchmarking the methods with different assumptions about the modeling distribution: flow matching, score-based diffusion, Gaussian approximation and discrete normalizing flows. Only two of the mentioned methods are tested. - The writing needs more work; currently the mathematical formalism in the main paper and the data generation outline in the Appendix are not consistent. Other Comments Or Suggestions: - The authors claim in line 146 that VI problems typically consider a simplified factorization of the variational density, which is not always true. The works cited (e.g. VAEs) perform this approximation as they consider latent variables corresponding to every observation (i.e. only local latent variables) while other works like Neural Processes [1,2] consider global latent variables which are shared across observations for a particular dataset. The authors should make it clear that they do not provide a more general assumption. - The writing of the draft can be significantly improved. The authors often use the notation of $\mathbf{x}$ to either mean a dataset (e.g. line 187) or a meta-dataset (dataset of datasets; line 177). 
- The authors claim that they do not make any assumptions about the latents $z \in \mathcal{Z}$ in footnote 2, regarding its decomposition into $z_i$. However, immediately afterwards in line 189 they consider a mapping $f_0$ which maps $\chi$ to $\mathcal{M}(\mathcal{Z})$ implying that $x_i$ maps to a density over $z$, which does imply that $z$ is local to each dataset. - In light of the above two points, I would highly recommend the authors to provide a clearer write-up in their draft and make sure that the mathematical objects that they are working with are consistent. In particular, they can either describe everything in the general framework of $z \in \mathcal{Z}$ or work under the factorization $P^{z_i | x_i}$, but they should be consistent. [1] Garnelo, Marta, et al. "Neural processes." arXiv preprint arXiv:1807.01622 (2018). [2] Kim, Hyunjik, et al. "Attentive neural processes." arXiv preprint arXiv:1901.05761 (2019). Questions For Authors: I found the following details to be incorrect, could the authors clarify if I am missing something? - Equation 5 appears to be wrong. The objective for learning the vector field should be $z^{(1)} - \omega z^{(0)}$ (note the lack of division). - Equation 6 needs to be pre-multiplied by $\frac{1}{N}$. - Please use $\gamma_t(z^{(1)}, z^{(0)})$ (functional notation) as it is not a density, hence should not use the conditional operator. - Line 300 introduces an additional notation of $\psi$ which is exactly same as the $\gamma$ considered before. - Have the authors considered using a nonlinear neural network for computing C2ST? Could the authors do so and highlight if they get similar performance? Code Of Conduct: Affirmed. Overall Recommendation: 2
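For context on the reviewer's correction to Equation 5: assuming the standard optimal-transport conditional path of Lipman et al. (the paper's exact parameterization may differ), the regression target written in terms of the endpoints indeed involves no division, and the reviewer's $\omega$ corresponds to $1-\sigma_{\min}$:

```latex
% Conditional OT path and its target velocity (Lipman et al.); with
% \omega := 1 - \sigma_{\min} this matches the target z^{(1)} - \omega z^{(0)}
% stated in the review.
\gamma_t\bigl(z^{(1)}, z^{(0)}\bigr)
  = \bigl(1 - (1 - \sigma_{\min})\,t\bigr)\, z^{(0)} + t\, z^{(1)},
\qquad
u_t = \frac{\mathrm{d}}{\mathrm{d}t}\,\gamma_t\bigl(z^{(1)}, z^{(0)}\bigr)
  = z^{(1)} - (1 - \sigma_{\min})\, z^{(0)}
```

A division appears only when the field is expressed as a function of the intermediate point $z_t$ rather than of the endpoints.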
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable suggestions — we extended the experiments as proposed and now prominently highlight Mittal et al. as particularly relevant related work. > The key contributions of the work lie in their experimental setup [...]. We fully agree that our core contribution lies in the experimental setup. However, we also (a) introduce a new architecture combining the PFN-type encoder and a modified version of diffusion transformers, (b) point out a sufficient condition that is central for many related amortized inference setups but has, to the best of our knowledge, not been made explicit in this form. Unlike your references [2] and [3], we (c) directly evaluate the quality of the posterior distribution. > Training such an amortized model is done through the lens of forward-KL minimization Please note that we do not use a KL-based objective, but train the in-context learner within the Flow Matching framework. > The authors claim that their method yields samples from the posterior distribution without parameter updates or parametric assumptions about the posterior. We use the term “nonparametric” in the sense that we make very weak assumptions about the underlying distribution of the data. In particular, we obtain a posterior distribution only implicitly defined through flow matching. > The authors do not provide a general framework to analyze the circumstances that enable learning purely through samples from the joint. We use the terminology “general framework” to refer to (a) our architectural framework in combination with Flow Matching, which arguably is a generic framework for learning distributions, and (b) Proposition (1) that, although a direct corollary of the law of total expectation, is to the best of our knowledge the first sufficient condition explicitly presented to describe Neural Posterior Estimation [1], PFNs, Score Matching Posterior Estimation [2] and Flow Matching Posterior Estimation [3].
>In addition, it is claimed in Section 3 that the proposed method does not suffer from overly or insufficiently flexible distribution assumptions as in VI. This is untrue, as one could also leverage continuous-time methods for reverse KL minimization, yielding the same family of distributions as the proposed work. A known benefit of in-context learners in the form of PFNs, and especially TabPFNs, is that they can flexibly adapt to the complexity of the presented data and thus do not require extensive hyperparameter tuning. This is what we mean by that claim. We will clarify this in the manuscript. > The paper does not contain any evaluation based on predictive performance. Please refer to our [new results](https://anonymous.4open.science/r/Extra-B820/Pred_Performance.pdf) regarding the predictive performance in the GLM scenarios. Those results show that forward-KL based VI and in particular MAP solutions perform strongly in terms of predictive performance, similar to results first discovered by your references [2] and [3]. > [...] it would be useful to get a comparative analysis in terms of predictive performance from TabPFN. Please see the previous answer for our new results that also include a comparison to TabPFN. > [...] An identical formulation can be framed from the reverse KL minimization perspective, and should be considered as one of the baselines. We also conducted [comprehensive experiments](https://anonymous.4open.science/r/Extra-B820/FwdKL_Gaussian.pdf) investigating the performance of using a Gaussian approximation in the forward-KL setup. >Details regarding the dimensionality of tasks considered is missing and is extremely relevant. Please refer to our [new results](https://anonymous.4open.science/r/Extra-B820/Dim_Ablation.pdf). We will also rigorously adapt the claims and limitations section of our paper in light of the new results here. 
> Unfortunately, the descriptions of the experimental designs were unclear and / or potentially riddled with typos. We thank the reviewer for pointing out those typos! We fixed all the issues you mentioned. > The authors should consider citing [...] We thank the reviewer for pointing out those missing and clearly important references. We will include a discussion of those two papers and think that especially the work by Mittal et al. will nicely complement our findings while analyzing further interesting aspects of in-context learning. > The authors claim in line 146 that VI problems typically consider a simplified factorization of the variational density, which is not always true. Thank you for this remark. We will expand the section on related work on amortized inference to include your comment. ### Questions For Authors We thank the reviewer for the very helpful suggestions regarding different conventions in our notation and for pointing out typos. We will follow the reviewer's recommendations. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing clarifications. Unfortunately, a number of my questions (re Questions For Authors) have not been addressed, especially regarding the correctness of Equation 5 and the comparison to C2ST using a nonlinear neural network. Additionally, the authors provide new results for ablating dimensions and posterior approximation but I cannot understand the results that they have in Table 1 Scenario 2 where it seems like posterior estimation improves on going from 5-dimensional tasks to 20. This seems counter-intuitive as it has been generally shown that posterior inference mechanisms get worse with increasing dimensionality of the problem. It is also surprising that the predictive performance of VI methods is low compared to that of the proposed methodology, and I suspect this is due to insufficient hyperparameter tuning for the VI methods. 
This finding is surprising because VI methods are supposed to be more mode-seeking, and should thus model a smaller volume around the mode as opposed to the full distribution. This means that they should not be very far away from the MAP-based solutions; however, the tables suggest otherwise. In particular, using the mean of DiagonalNormal for the prediction should be somewhat similar to the MAP predictions, if I understand things correctly? Could the authors clarify why it is the case that a Gaussian VI is considerably worse than the proposed method for the specific case of predictive metrics? I would be happy to raise my score if the authors address my concerns above. --- Reply to Comment 1.1.1: Comment: We apologize for not answering some of your questions in our first answer in detail and would like to take this opportunity to address your questions below: > Equation 5 appears to be wrong. Thank you very much for spotting this error. We will fix it in the revised version of the manuscript. We would like to point out, however, that this error in the formula neither affects any of the other equations of the paper nor our implementation. > Equation 6 needs to be pre-multiplied by $\frac{1}{N}$. We agree that the term “empirical risk” is commonly used to refer to equation (6) divided by $N$, and will use the term “objective function” instead. > Please use (functional notation) as it is not a density [...] We initially decided to also use the conditional operator for a vector field to stay consistent with the notation introduced in [1]. However, we thank the reviewer for pointing out that this might cause confusion and will switch to the notation proposed by you. > Line 300 introduces an additional notation [...] This is indeed a mistake. The notation should be the same as before. > Have the authors considered using a nonlinear neural network for computing C2ST?
Please refer to our extensive [new experimental results](https://anonymous.4open.science/r/Extra-B820/Ablation_CLF_for_C2ST.pdf) that confirm that using a neural network leads to overall very similar results. > Additionally, the authors provide new results for ablating dimensions and posterior approximation but I cannot understand the results that they have in Table 1 Scenario 2 [...] We agree with the reviewer that this is an interesting phenomenon that occurs in this specific scenario. However, it does not occur in any of the other 5 scenarios. To provide further evidence, we [ran the experiment](https://anonymous.4open.science/r/Extra-B820/Dim_Ablation.pdf) for the remaining GLM scenarios 1,4,6,7 where we also did not notice this behavior. While we also find that VI will typically become worse with increased dimensionality, the performance can potentially also improve in our case because of a better signal-to-noise ratio resulting from fixing the variance of the noise term while increasing the number of regression coefficients. > I suspect this is due to insufficient hyperparameter tuning for the VI methods Please note that the learning rate hyperparameter for the VI methods is chosen based on empirical results (Appendix L) and that we also do not tune the ICL method. > Especially using the mean of DiagonalNormal for the prediction should be somewhat similar to the MAP predictions [...]. While the MAP estimate does consistently outperform other VI methods (Table 1), the predictive performance is in 13/14 cases within two standard errors of the best-performing VI method. For the dimensionality ablation, we find that the MAP performance is better compared to the VI methods with high dimensionalities. 
Overall, we think that this can be explained by two common issues with variational inference that can be particularly prominent in our case due to the relatively low number of data points per dataset ($N=50$) combined with a relatively high variance of the additive noise. - A misspecified variational family to optimize over: [2] show that this incurs a variational approximation error that only vanishes if the number of samples reaches infinity. - Overfitting due to more variational parameters than actual parameters. Even the considered DiagonalNormal approximation has twice as many parameters as the Bayes-optimal model. This can lead to overfitting of VI methods [3]. The MAP has neither of those problems. > Could the authors clarify why it is the case that a Gaussian VI is considerably worse than the proposed method for the specific case of predictive metrics? We politely disagree with the statement that our results show that the Gaussian VI is **considerably** worse than the proposed method for the predictive metrics and do not think that we make this claim anywhere. For example, in our previous answer, we said that “[...] those results show that forward-KL based VI and in particular MAP solutions perform strongly in terms of predictive performance, similar to results first discovered by your references [2] and [3].” We think that the slightly better predictive performance in, for instance, GLM scenario 4, is in line with the fact that HMC gives the best predictive performance and the ICL method yields samples often closest to HMC. We sincerely appreciate the reviewer's constructive feedback. As we have made every effort to respond thoroughly and substantively to all concerns, we would be grateful if the reviewer could kindly reappraise the initial score. ### References [1] Lipman, et al. "Flow Matching for Generative Modeling." ICLR. 2023 [2] Wang, et al. "Variational Bayes under model misspecification." NeurIPS. 2019. [3] Cremer, et al. 
"Inference suboptimality in variational autoencoders." ICML. 2018.
Summary: The authors present an approach for (approximate) Bayesian inference, based on in-context learning with a combined transformer/flow model. The approach relies on access to the generative model, and generates data to learn the inverse model. The method is reasonable and performs well relative to baselines. Claims And Evidence: Overall, the claims are supported, although the baselines and experimental evaluation are relatively weak (discussed below). Methods And Evaluation Criteria: The authors should better justify applications in which this degree of amortized inference is necessary/useful. There are many inverse problems, for which simulation-based data generation is done, that could perhaps provide better motivation than what is currently provided. Theoretical Claims: The mathematical results appear to be correct, and are primarily direct application of previous results. Experimental Designs Or Analyses: The major weakness of this paper is the experimental evaluation: - The paper is missing an application that motivates a reader, and it is currently not clear that one really exists. I could imagine that this approach would be useful for situations in which some form of parameter identification is desirable and must be done repeatedly, but the authors really must add an experiment with a real application. - The authors only compare to simple VI methods and Laplace approximation. The use of LA for this problem seems somewhat strange; it is not clear to me why this would be a sensible approach to this problem. There is an existing literature on Bayesian methods for inverse problems that would provide better methods, and the authors could also compare to other amortized variational methods beyond the IAF. This is partially addressed in the diffusion objective ablation, which should probably be presented in the paper body. 
- While there is some discussion of robustness to OOD in appendix H, the model still feels quite sensitive to the data generation process and a more thorough characterization of robustness to model misspecification is likely necessary. This is especially pertinent for real world data. Supplementary Material: I briefly reviewed the supplementary material, which appeared to be fine. Relation To Broader Scientific Literature: The work exists within the literature of approximate/amortized Bayesian inference. It also overlaps with the literature on Bayesian inverse problems. The authors discuss approximate methods in Bayesian inference, and discuss TabPFN in detail. The authors could provide better motivation for why their approach may be useful than they currently do. Essential References Not Discussed: The authors may find "General-purpose in-context learning by meta-learning transformers" interesting as another example of ICL on (partially) synthetically generated data, but citing this paper isn't critical. Other Strengths And Weaknesses: Overall I found the paper to be fairly well presented, although the authors could mention what their approach is (transformer encoder + flow matching) earlier in the paper for people who are skimming the paper. Other Comments Or Suggestions: None Questions For Authors: Feel free to respond to the weaknesses I stated, but I have no other questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to read our manuscript and for providing detailed feedback. > The paper is missing an application that motivates a reader, and it is currently not clear that one really exists. We would like to point out that the question addressed in our paper is “can transformers learn full Bayesian inference in context?”, which we see and treat as a fundamentally conceptual question. Our work thus stands in line with papers such as [1-2] that investigate different aspects of in-context learning from a conceptual perspective. However, unlike the aforementioned papers [1-2], we also present results on real-world data. We see this as the next, substantially more difficult benchmark for the conceptual capabilities of the proposed in-context learning approach. However, we think that learning full Bayesian inference in context has **conceptually** great promise to alleviate important issues with traditional Bayesian inference, including (a) slow inference time of methods like MCMC. Once pre-trained, an in-context learner can provide samples quickly. (b) inferior predictive performance compared to non-Bayesian approaches: TabPFN [3] and related approaches show that carefully designing a realistic synthetic distribution to train an in-context learner can lead to Bayesian methods with excellent predictive performance. > The authors only compare to simple VI methods and Laplace approximation. The main concern of our benchmarks is to demonstrate that transformers are capable of in-context learning full Bayesian inference, akin to other recent in-context learning works analyzing whether transformers can learn optimization routines or decision boundaries of other methods in context, e.g. [1-2]. We thus compare our method against popular and commonly used methods in variational inference as well as HMC for the latent variable models we analyze. 
Please also note that the Laplace approximation, which we fully agree is an outdated method for the considered problems, serves as a simple baseline. > There is an existing literature on Bayesian methods for inverse problems that would provide better methods First, we would like to point out that approaches that also operate in the flow-matching framework are indeed state-of-the-art methods for Bayesian inference in inverse problems [5]. We further compare against the recently proposed score-matching-posterior-estimation [3] via our ablations in appendix E. Our [new results](https://anonymous.4open.science/r/Extra-B820/FwdKL_Gaussian.pdf) provide a comparison against a further method that can also be used for simulation-based inference. Please note that our goal is not to propose a new method for inverse Bayesian problems but to show that in-context learning can be effective, even compared to well-established methods for Bayesian inference that do not operate in context, which present an arguably even stronger baseline. >While there is some discussion of robustness to OOD in appendix H, the model still feels quite sensitive to the data generation process [...] We fully agree that in our setup, the data-generating process we use to train the in-context learner determines what datasets the in-context learner can provide meaningful inference on. Please note, however, that this is expected as we consider controlled and simple setups (GLM, FA, and GMM scenarios) to perform full Bayesian inference. We think that having a somewhat artificial setup of well-established latent variable models where strong traditional baselines exist (HMC, VI) is crucial for exploring whether transformers can do full Bayesian inference in context. As you correctly point out, however, this comes at the cost of inherent limitations in terms of OOD performance, which we confirm in our ablation study in appendix E. 
In principle, however, our in-context learner could be applied to complex, more realistic setups similar to what TabPFN does; but then we would lose the ability to do meaningful evaluations. > The authors may find "General-purpose in-context learning by meta-learning transformers" interesting [...] Thank you for pointing out this relevant and interesting reference. We will discuss its relation to our work in the revised related work section. > The authors could mention what their approach is [...] earlier in the paper Thank you for this very helpful remark! We will add a short explanation to the introduction. ## References [1] Garg et al. What can transformers learn in-context? NeurIPS 2022. [2] Panwar, Ahuja & Goyal. In-Context Learning through the Bayesian Prism. ICLR 2024. [3] Hollmann et al. Accurate predictions on small data with a tabular foundation model. Nature, 2025. [4] Gloeckler, Manuel, et al. All-in-one simulation-based inference. ICML 2024. [5] Ahuja et al. In-Context Learning under Distribution Shifts. Workshop on Efficient Systems for Foundation Models @ ICML 2023.
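For readers skimming these threads: the flow-matching objective of Lipman et al. (cited as [1] above), on which the transformer/flow model builds, regresses a velocity field onto straight-line interpolation targets between samples. A generic numpy sketch of the conditional flow-matching loss follows; it is an illustrative simplification, not the paper's transformer-conditioned training loss:

```python
import numpy as np

def cfm_loss(v, x0, x1, t):
    """Conditional flow-matching loss: regress a velocity field v(t, x_t)
    onto the straight-line target x1 - x0, where x_t = (1 - t) x0 + t x1."""
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    target = x1 - x0                       # velocity of the linear path
    pred = v(t, xt)
    return np.mean(np.sum((pred - target) ** 2, axis=1))
```

A model minimizing this loss can then transport base samples x0 toward (approximate) posterior samples x1 by integrating the learned velocity field over t from 0 to 1.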
Convergence of Policy Mirror Descent Beyond Compatible Function Approximation
Accept (poster)
Summary: This paper establishes an upper bound on the convergence rate of the on-policy policy mirror descent (PMD) algorithm in an agnostic learning setting for discounted MDPs. The authors replace the commonly assumed closure condition with a weaker variational gradient dominance (VGD) assumption. Under this condition, they prove that PMD achieves a convergence rate of $O(K^{-2/3})$with Euclidean regularization and $O(K^{-2/7})$ with negative entropy regularization, where $K$ is the number of iterations. Their analysis frames PMD as a non-Euclidean proximal point method, leveraging a local smoothness property of the value function defined by local norms induced by the policy’s occupancy measure. ## update after rebuttal I have reviewed the authors’ rebuttal and found their responses satisfactory. My assessment remains unchanged, and I continue to recommend acceptance. Claims And Evidence: The claims are well-supported by rigorous theoretical proofs and an illustrative example. Methods And Evaluation Criteria: The authors carefully discuss and compare their results with prior work. Theoretical Claims: I did not find any issues in the theoretical claims, though I did not verify the deferred proofs in the appendix. A key component of the analysis is Lemma 2, which establishes the local smoothness of the value function with respect to the local norm induced by the policy’s occupancy measure. The proof relies on value difference lemmas and occupancy measure properties. This local smoothness condition enables the authors to derive convergence rates independent of the state space size. Another crucial component is the reduction of the PMD setup to an optimization framework for constrained non-convex optimization of locally smooth objectives. The theoretical framework introduced is novel. 
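For orientation, the variational gradient dominance condition referenced above is, roughly, a relaxed gradient-dominance inequality of the following shape (notation schematic and assumed here for illustration; the paper's own definition should be consulted for the precise statement):

```latex
% Schematic (\nu, \epsilon_{\mathrm{vgd}})-VGD condition: for every policy \pi \in \Pi,
V(\pi) - V^\star
  \;\le\;
  \nu \, \max_{\pi' \in \Pi} \big\langle \nabla V(\pi),\, \pi - \pi' \big\rangle
  \;+\; \epsilon_{\mathrm{vgd}},
% i.e., a small best-in-class first-order improvement implies near-optimality
% up to the additive error floor \epsilon_{\mathrm{vgd}}.
```

Under such a condition, driving the first-order stationarity term to zero along PMD iterates controls suboptimality directly, without requiring the policy class to be closed under the PMD update.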
Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The discussion of related prior work is thorough. Other Strengths And Weaknesses: Overall, this paper is well-structured and provides a solid theoretical contribution to the convergence of PMD under the agnostic learning setup for discounted MDPs. Strengths: - Introducing the VGD condition and proving the convergence of PMD without closure conditions are novel contributions. - Using local smoothness of the value function to establish a convergence rate independent of the cardinality of the state space is a novel technical contribution. Weaknesses: - The paper lacks a comparison between the convergence rate obtained under the VGD condition and existing convergence results under more restrictive conditions, such as the complete policy class or the closure condition. - There is a lack of experiments to validate the theory, though this is common in RL theory papers. Other Comments Or Suggestions: - Empirical validation of the theoretical results would strengthen the paper. Implementing PMD under the VGD condition and comparing its empirical convergence with the derived theoretical bounds could provide valuable insights. - A careful comparison between the convergence rates under the VGD condition and those under the closure condition would help contextualize the benefits and limitations of the proposed framework. This could clarify whether VGD provides comparable or significantly different guarantees relative to more restrictive assumptions. Questions For Authors: How do the derived rates under VGD (e.g., $O(K^{-2/3})$) compare to known rates under closure conditions (e.g., does VGD degrade rates?)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our work and for your thoughtful comments. **“Overall, this paper is well-structured and provides a solid theoretical contribution...”** We were happy to read that the reviewer appreciates the value of our contribution, and in particular that the reviewer recognizes **“Using local smoothness of the value function to establish convergence rate independent of the cardinality of state space is a novel technical contribution.”** - we completely agree! ## Weaknesses **“The paper lacks a comparison … the closure condition.” / “A careful comparison between … more restrictive assumptions.”** This is a good point which we will address in our revision. Thanks to your comment, and questions from other reviewers, we have extended the “perfect closure implies VGD” result from our Appendix A.2 to hold generally; see “Relation of VGD and Closure” below. Also below, we have included a summary comparing assumptions and rates of ours and prior works under “Comparison with prior works”. **“Empirical validation … could provide valuable insights.”** Our contribution is primarily theoretical and empirical evaluations are outside the scope of our work. We agree that such research may be valuable and should be considered for future work. ## Questions For Authors: **“How do the derived rates … degrade rates?)?”** With constant step sizes, the best rate achievable (given closure or in the tabular, exact, complete class setup) is $O(1/K)$. With non-constant, geometrically increasing step sizes, linear rates (i.e., $O(e^{-cK})$ for some constant $c$) are attainable (again, given closure or a tabular setup). We do not expect these rates to be attainable assuming only VGD. We include additional remarks in the comparison below. # Relation of VGD and Closure We adopt the closure conditions as formalized in Alfano et al. 2023. Let $C_v$ be the concentrability coefficient (A2 in Alfano et al.) and $\nu_\star$ the distribution mismatch coefficient (A3 in Alfano et al.). 
Then the following relation holds: $\epsilon$-Approximate Closure (A1 in Alfano et al.) $\implies$ $(\nu_\star,H\nu_\star\sqrt{C_v \epsilon})-$VGD. Importantly, $\epsilon_{vgd}=H\nu_\star\sqrt{C_v \epsilon}$ is **exactly** the error floor exhibited in the results of Alfano et al. 2023 (as well as in other works based on closure). # Comparison with prior works The table below summarizes representative papers and their rates for constant step size PMD. Columns state assumptions made in different works. | Paper | VGD | Closure Assumptions | Parametric Assumptions | Realizability | Rate | | -------------------- | --- | ------------------- | ---------------------- | ------------- | ----------- | | Lan 2023 / Xiao 2022 | Yes | Yes (perfect) | Yes (Tabular) | Yes | $1/K$ | | Yuan et al. 2022 | Yes | Yes (approx) | Yes (Log-linear) | Yes | $1/K$ | | Ju & Lan 2022 | Yes | Yes (approx) | Yes (EMaP$^{[a]}$) | Yes | $1/\sqrt K$ | | Alfano et al. 2023 | Yes | Yes (approx) | Yes (EMaP$^{[a]}$) | Yes | $1/K$ | | **Our work** | Yes | No | No | No | $1/K^{2/3}$ | - $[a]$ Stands for Exact Mirror and Project ### Remarks - *Our rate.* We report our $1/K^{2/3}$ rate for the Euclidean regularization case. More generally, our rates depend on smoothness of the action regularizer, which leads to worse rates for regularizers such as the negative-entropy. We conjecture both that the base rate can be improved to $1/K$ and that dependence on regularizer smoothness can be lifted; however, this seems to require new ideas and we leave it for future work. - *Non-constant step size results.* With non-constant, geometrically increasing step sizes, linear rates (i.e., $O(e^{-cK})$ for some constant $c$) are attainable (e.g., Alfano et al. 2023, Xiao 2022, Johnson et al. 2023). We do not expect this is possible assuming only VGD. 
The reason these rates are attainable subject to closure is that the algorithm dynamics mimic those of the tabular setting, where policy iteration converges at a linear rate. Assuming only VGD, policy iteration no longer converges (at all), as the policy class loses the favorable structure allowing for convergence of such an aggressive algorithm. This should highlight the value in studying the function approximation setup without closure. - *Convexity of the policy class.* In our submission we point out the convexity of the policy class as a potential differentiator between our approach and that of Alfano et al., 2023. Following the reviewers' comments, we realized this is not the case, and that convexity requirements are similar in our work and that of Alfano et al. 2023. Given closure, our analysis does not require convexity, same as Alfano et al. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their rebuttal. Most of my concerns have been addressed, and I am satisfied with the clarifications provided. I will maintain my positive evaluation and recommend acceptance.
Summary: The paper addresses the convergence of the policy mirror descent method for generalized policy classes, i.e., policy classes with a general parametrization. Most of the prior literature considers either the tabular case, where each state can be mapped to any probability vector over the action space, or policy classes which are closed under the PMD update. The main assumption the authors rely on is the variational gradient dominance (VGD) condition (a PL inequality of sorts for the policy class) of the value function and the convexity of the policy class. The authors argue that the prior assumption of the policy class being closed is stronger than VGD, since a closed policy class implies VGD whereas the converse isn't true, making VGD a weaker assumption. Claims And Evidence: I haven't read through the appendix in detail, but the paper in its current form cannot be assessed without a detailed reading of the appendix. However, I do have some comments/concerns that I have elaborated on later. Methods And Evaluation Criteria: NA Theoretical Claims: I haven't examined the proofs in detail. Experimental Designs Or Analyses: NA Supplementary Material: No. Relation To Broader Scientific Literature: Convergence of PMD methods for generalized policy classes is indeed an important question that needs to be answered. Especially in the context of deep reinforcement learning, where the policies are parametrized in terms of the weights of the NN, the theoretical performance of these algorithms is not very well understood. Essential References Not Discussed: I am not well versed in the literature pertaining to non-tabular PMD methods, hence I am unsure of all the important references being included. Other Strengths And Weaknesses: Strengths: 1. Addresses an important problem in the literature 2. Provides bounds that are independent of the cardinality of the state space 3. 
Relaxes an assumption (from closed policy class to VGD) typically involved in the analysis of these problems Weaknesses: 1. The paper could use better presentation. In its current form it is hard to delineate what the different assumptions are in different prior works. Also, the example in Section 1.2 needs better elaboration. It is not clear at all why the policy class here is not closed. 2. In appendix C.3, Proof of Theorem 1, line 1034, in the suboptimality expression of $V$, the exploration probability $\epsilon_{expl}$ appears in the denominator. Typically, inducing an $\epsilon$ exploration in your policy class leads to a suboptimality which grows as $O(\epsilon)$, but here the greater the exploration probability, the smaller the suboptimality, which contradicts what is typically observed. I don't understand the reason for this. Also, is there a reason for setting $\epsilon_{expl} = K^{-2/3}$? Because this seems to lead to a regret that eventually increases in terms of the approximation errors (in theorem 1). Other Comments Or Suggestions: NA Questions For Authors: What does a closed policy class implying VGD mean in terms of $\epsilon_{vgd}$? Since $\epsilon_{vgd} = \frac{1}{1-\gamma}$ is trivially true for all policy classes, do closed policy classes yield an $\epsilon_{vgd} < \frac{1}{1-\gamma}$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing thoughtful comments. We were glad to hear you appreciate our work, in particular, that the reviewer recognizes our paper **“addresses an important problem in the literature”**, and **“relaxes an assumption (from closed policy class to VGD) typically involved in the analysis of these problems”**. # Weaknesses **1.** Following your comment and those of the other reviewers, we have tightened our comparison with prior works. ### Comparison of assumptions and rates The table below summarizes representative papers and their rates for constant step size PMD. Columns state assumptions made in different works. | Paper | VGD | Closure Assumptions | Parametric Assumptions | Realizability | Rate | | -------------------- | --- | ------------------- | ---------------------- | ------------- | ----------- | | Lan 2023 / Xiao 2022 | Yes | Yes (perfect) | Yes (Tabular) | Yes | $1/K$ | | Yuan et al. 2022 | Yes | Yes (approx) | Yes (Log-linear) | Yes | $1/K$ | | Ju & Lan 2022 | Yes | Yes (approx) | Yes (EMaP$^{[a]}$) | Yes | $1/\sqrt K$ | | Alfano et al. 2023 | Yes | Yes (approx) | Yes (EMaP$^{[a]}$) | Yes | $1/K$ | | **Our work** | Yes | No | No | No | $1/K^{2/3}$ | - $[a]$ Stands for Exact Mirror and Project For additional remarks, we refer to our response to Reviewer 9aB9 (“Comparison with prior works”). Regarding the formal relation between VGD and closure, see below “Relation of VGD and Closure”. ### The example in Section 2.1 The technical argument is given with full detail in Appendix E.2, where we demonstrate that a strong form of closure w.r.t. the optimal policy occupancy (bias-error) does not hold. The simplest way to see that the standard approximation-error closure does not hold globally is that the setup in question is not approximately realizable (we have $V^\star=0$ while $V^\star(\Pi)=H/2$). 
The goal with this example was not only to show that closure does not hold, but also that the bounds of prior works become vacuous, and for that, additional technical arguments are required. Following your comment, we will revise the exposition of this example, and try to find a better balance between conveying the idea while keeping technical details to the minimum. **2.** The $O(\epsilon)$ term you are missing enters through the additive $\delta$ term. We define it in the first line of the proof, line 1013, $\delta=\epsilon_{\rm vgd} + 12\epsilon_{\rm expl}C_\star H^2 A^2$. We hope it is now clear that the reason for setting $\epsilon_{expl}=K^{-2/3}$ is to balance the two terms. We will highlight the definition of $\delta$ in a display equation so that it cannot be easily missed. ## Questions for Authors: This is a good question: as we show in Appendix A.2, perfect closure with negative entropy regularization implies $\epsilon_{\rm vgd}=0$. Thanks to your comments and those of the other reviewers, we have added a simple extension of this implication to hold for the general closure conditions as formalized in Alfano et al. 2023. # Relation of VGD and Closure We adopt the closure conditions as formalized in Alfano et al. 2023. Let $C_v$ be the concentrability coefficient (A2 in Alfano et al.) and $\nu_\star$ the distribution mismatch coefficient (A3 in Alfano et al.). Then the following relation holds: $\epsilon$-Approximate Closure (A1 in Alfano et al.) $\implies$ $(\nu_\star,H\nu_\star\sqrt{C_v \epsilon})-$VGD. Importantly, $\epsilon_{vgd}=H\nu_\star\sqrt{C_v \epsilon}$ is **exactly** the error floor exhibited in the results of Alfano et al. 2023 (as well as in other works based on closure). --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and comments. 
I am not incredibly well versed with this domain to assess the contributions with a high confidence, however I am increasing my score since my questions have been addressed.
Summary: This paper studies the convergence of policy mirror descent under the variational gradient dominance condition. Claims And Evidence: Under the variational gradient dominance condition, this work obtains the first convergence rate of PMD. I have some major concerns: 1. I am confused about the claim in Section 1.2 that VGD is not comparable with the other conditions in the literature. Since this is a relaxation of the usual PL-inequality, at least one can compare the theoretical results in this paper with the prior works assuming PL or strong convexity. Following this logic, I suggest the authors provide a table showing how condition (2) is satisfied in the literature and list the corresponding convergence rate of the algorithm. At least, the comparison to [1, 2] should be presented. 2. I am concerned about the assumptions in Lemma 4. Could you show how $D$, $M$ and $\beta$ depend on the parameters of an MDP? 3. I am also concerned about Assumptions 1 & 2. Usually in the literature, we need a subroutine to have small $\varepsilon_{\rm act}$ and $\varepsilon_{\rm crit}$. How do you achieve this in your setup? 4. Seemingly, Theorem 1 is only a consequence of Theorem 3. Then what is the contribution of this paper? [1] Lan, G. (2023). Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes. Mathematical Programming, 198(1), 1059-1106. [2] Zhan, W., Cen, S., Huang, B., Chen, Y., Lee, J. D., & Chi, Y. (2023). Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence. SIAM Journal on Optimization, 33(2), 1061-1091. Methods And Evaluation Criteria: NA Theoretical Claims: NA Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: The presentation of this work should be significantly improved. 1. 
Introduce notations, assumptions and definitions only when you need them. Move any notions that are not directly relevant to your analysis to the appendix. 2. Avoid redundancy. Many results and notations are introduced twice, first time in Section 1.1 and then again in Section 2. Be precise and consistent. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and comments. **1.** We show in the paper (Appendix A.2) that perfect closure in the negative entropy case implies VGD. Following your comment and those of the other reviewers, we have extended this implication to hold for general approximate closure assumptions as formalized in Alfano et al., 2023. A summary comparing our assumptions / bounds with prior works is given below. Columns state assumptions required by the different works. Rates are for constant step size PMD. | Paper | VGD | Closure Assumptions | Parametric Assumptions | Realizability | Rate | | -------------------- | --- | ------------- | ---------------------- | ------------- | ----------- | | Lan 2023 / Xiao 2022 | Yes | Yes (perfect) | Yes (Tabular) | Yes | $1/K$ | | Yuan et al. 2022 | Yes | Yes (approx) | Yes (Log-linear) | Yes | $1/K$ | | Ju & Lan 2022 | Yes | Yes (approx) | Yes (EMaP$^{[a]}$) | Yes | $1/\sqrt K$ | | Alfano et al. 2023 | Yes | Yes (approx) | Yes (EMaP$^{[a]}$) | Yes | $1/K$ | | **Our work** | Yes | No | No | No | $1/K^{2/3}$ | | | | | | | | - $[a]$ Stands for Exact Mirror and Project We refer to our response to Reviewer 9aB9 (Under “Relation of VGD and Closure” and “Comparison with prior works”) for additional details regarding the comparison. The other paper [2] Zhan et al. 2023 you have mentioned studies the regularized MDP setup, and requires the same set of assumptions as Lan 2023/ Xiao 2022. Note that while VGD is indeed a relaxation of the PL inequality, prior works do not make the PL-inequality assumption directly. **2.** Lemma 4 is just a building block in the proof of Theorem 1. The connection between the MDP parameters and the parameters stated in the Lemma is given in the proof of Theorem 1; the proof sketch for the Euclidean case is given in the main text in the final paragraph of Section 3.3, and the detailed proof for both Euclidean and negative entropy cases is given in Appendix C.3. 
**3.** These are standard assumptions in the general function approximation setup; we expect these to be achieved by optimizing neural network models (e.g., actor and critic networks). Since we do not make any structural assumptions, the corresponding optimization problems are non-convex in the network parameter space and thus do not admit provably efficient optimization procedures. See, e.g., Alfano et al. 2023 and Xiong et al., 2024, where similar assumptions are made. The assumption on optimality conditions is implied by approximate optimality in function values. **4.** We are not sure we understand the question; the main contribution of the paper is Theorem 1. In order to prove Theorem 1, we break it into several building blocks. The central building block on the optimization side is Theorem 3. In order to obtain Theorem 1 from Theorem 3, we need to combine it with Lemma 2 (which establishes the key local smoothness property of the value function) and Lemma 4, which links Theorem 3 with the MDP setting. ## Other Strengths and Weaknesses Thank you for your comments; we will revise our exposition for the final version. Please note, however, that it is common practice to provide an overview of the results with only minimal preliminaries, as we do in Section 1.1. This is beneficial for the many readers who only want to understand the main results at a high level. That said, if you have any specific suggestions, we will be happy to hear and consider them during our revision.
Safe-EF: Error Feedback for Non-smooth Constrained Optimization
Accept (poster)
Summary: The paper establishes a convergence lower bound for the non-smooth convex distributed setting, where EF-21 and similar methods operate. Next, it proposes Safe-EF (Algorithm 1), an extension of EF14 (Seide et al., 2014) that incorporates safety constraints and bidirectional compression. Safe-EF is provably effective in non-smooth distributed settings and efficiently minimizes the objective function. The paper proves that the convergence rate of Safe-EF matches the aforementioned lower bound up to a numerical constant, assuming a constant accuracy of the server compression C₀. Furthermore, the paper studies Safe-EF in practically relevant stochastic scenarios, where exact subgradients and function evaluations are unavailable. It establishes high-probability bounds in these settings. Extensive experiments and ablation studies validate Safe-EF, demonstrating its effectiveness on the challenging task of distributed humanoid robot control. The paper contains source code for reproducibility. Claims And Evidence: No claims. The paper is strong and presents an interesting perspective for EF-type methods in the case of non-smooth convex distributed settings with constraints. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. No issues. Supplementary Material: Source code. Relation To Broader Scientific Literature: Error Feedback is a popular and immensely effective mechanism for fixing convergence issues that arise in distributed training. While EF was proposed almost a decade ago, and despite concentrated effort by the community to advance the theoretical understanding of this mechanism, there is still a lot to explore. Essential References Not Discussed: No. All is fine, with one exception. Thanks to the authors for "B. Failure of CGD and EF21 in Non-smooth Convex Setting". However, the convergence upper bound for EF21 in the smooth convex setting does not rely on a necessarily optimal selection of step size.
The work "Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants" https://openreview.net/pdf?id=Ch7WqGcGmb, https://arxiv.org/abs/2402.10774 improves the quadratic mean to the arithmetic mean. Please revisit Appendix C for the smooth convex setting under this consideration. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: (1) The convergence upper bound for EF21 in the smooth convex setting should be revisited under a better selection of meta-parameters. (2) I have observed \gamma = 1/\sqrt{T} as the step size in Figure 1, but this information is missing for the other experiments. Please be specific about the step-size selection (theoretical, tuned for a specific instance, estimated in another form) in the experiments from Figures 2-11 in the Experiments section. Please elaborate on your selection. (3) Please provide information about the number of trials in experiments with error bars. (4) Theorem 4.2 contains a reference to Appendix D. This suggests selecting \gamma and c as functions of T. Please elaborate more and provide implicit or explicit formulas in terms of the target discrepancy in the function gap, or state that T is the conceptual iteration budget. I also strongly ask the authors to elaborate on which constants should be estimated before Safe-EF can be launched. Questions For Authors: Please address suggestions (1)-(4). Under this condition, I am glad to recommend the paper for acceptance. Ethical Review Concerns: No concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. We thank the reviewer for this valuable comment and for pointing out this reference! Indeed, the workers-cloning idea described in that paper can be extended to our result in the convex (smooth) setting. We have revisited the convergence analysis of EF21 in our supplementary Theorem C.1 following the analysis from the proposed reference. The updated rate is $\mathbb{E}[f(x^T) - f(x^*)] \le \frac{R_{\mathcal{X}}^2}{\gamma T}\left(1+ \log\left(\frac{\gamma \Lambda_0 T}{R_{\mathcal{X}}^2}\right)\right),$ where $$\gamma \le \mathcal{O}\left(\frac{\delta}{\frac{1}{n}\sum_{i=1}^nL_i}\right).$$ The potential improvement compared to our Theorem C.1 lies in the difference between the average $\frac{1}{n}\sum_{i=1}^nL_i$ and the quadratic mean $\sqrt{\frac{1}{n}\sum_{i=1}^n L_i^2}$. This derivation follows directly from our analysis by replacing equation (15) in the proof of Theorem C.1 with the more refined Lemma 7 in Appendix C.1 of the suggested paper. We will include the improved derivation in the next revision and cite the missing reference. 2. We appreciate this useful suggestion. The hyperparameters used in Section 6.1 are provided in Appendix I. We perform a grid search over several choices of $\gamma$ and select the one that gives the smallest train loss at the end of training. Table 1 in Appendix I gives concrete numbers for the selected step size for each heterogeneity level $s = 0.1, 1.0, 10.0$. In the Safe RL experiment, we don’t tune the hyperparameters and use the default step size $\gamma = 10^{-4}$ from the (non-distributed) PPO implementation, which is known to work well on standard RL benchmarks. The slackness parameter $c$ is set to $0$, which is sufficient to get decent performance on both tasks. We did not try to further tune these parameters and agree that, in general, these parameters are problem-dependent. We will clarify these details in the revision, thank you for this advice! 3. Thank you for bringing this to our attention.
We highlight that we use $5$ seeds for all of our RL experiments, as also mentioned in the “Setup” paragraph of Section 6.2. 4. We emphasize that the choice of $c$ highly depends on the application. The choice of $c$ is dictated by how much constraint intolerance we can allow. From a theory perspective, the choice of hyperparameters $\gamma = \mathcal{O}(\frac{R\sqrt{\delta\delta_s }}{M\sqrt{T}})$ and $c=\mathcal{O}(\frac{MR}{\sqrt{\delta\delta_s T}})$ depends on the compression levels $\delta$ and $\delta_{s}$, the gradient bound $M$, the number of iterations $T,$ and the upper bound on the initial distance to the optimal set $R=\\|x^0-x^*\\|.$ The compression levels and the number of iterations are typically known beforehand, as it is the user who sets them. Therefore, there is no need to estimate them. To set $\gamma$ and $c$, we should provide estimates of $M$ and $R.$ For example, we can estimate $M = \max_i\\|f_i^\prime(x^0)\\|$ and set $R \geq \\|x^0\\|,$ if $x^*$ is close to zero. In general, we believe that choosing $\gamma, c = \mathcal{O}(1/\sqrt{T})$ or diminishing $\gamma, c = \mathcal{O}(1/\sqrt{t})$, $1 \leq t \leq T$, can be a good starting point. We appreciate the reviewer for their thoughtful feedback! We believe we have addressed the main concerns (1–4) and would appreciate it if you could confirm that everything is resolved and consider adjusting the score.
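As a rough numerical illustration of the theoretical parameter choices in point 4 above, a sketch (ignoring the absolute constants hidden by the $\mathcal{O}(\cdot)$ notation; all input values below are hypothetical) might look like:

```python
import math

def safe_ef_hyperparams(R, M, delta, delta_s, T):
    """Theoretical choices up to absolute constants:
    gamma ~ R * sqrt(delta * delta_s) / (M * sqrt(T)),
    c     ~ M * R / sqrt(delta * delta_s * T).
    R bounds the initial distance to the optimal set, M the subgradient
    norms; delta, delta_s are worker/server compression levels and T is
    the iteration budget (all set or estimated by the user).
    """
    gamma = R * math.sqrt(delta * delta_s) / (M * math.sqrt(T))
    c = M * R / math.sqrt(delta * delta_s * T)
    return gamma, c

# Hypothetical estimates: M from the max initial subgradient norm,
# R from ||x^0|| (assuming x* is close to zero).
gamma, c = safe_ef_hyperparams(R=10.0, M=5.0, delta=0.1, delta_s=0.1, T=10_000)
```

Both quantities decay as $1/\sqrt{T}$, matching the $\mathcal{O}(1/\sqrt{T})$ starting point suggested in the rebuttal.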
Summary: In this paper, the authors investigate a non-smooth optimization setting with bounded gradients in the context of Top-K compression. Communication compression using contractive compressors (e.g., Top-K) is commonly preferred in practice due to its efficiency; however, it can significantly degrade performance if not properly managed. The authors demonstrate a scenario in which Error Feedback (EF) can lead to divergence but also propose a method to mitigate this issue. They contribute to the theoretical understanding of EF in the canonical non-smooth convex setting by establishing new lower complexity bounds for first-order algorithms under contractive compression. To address the limitations of existing approaches, the authors introduce Safe-EF, a novel algorithm that achieves the established lower bound (up to a constant factor) while incorporating safety constraints crucial for practical applications. Furthermore, they extend their approach to the stochastic setting, enhancing its applicability. The proposed method is thoroughly evaluated through extensive experiments in a reinforcement learning setup, specifically simulating distributed humanoid robot training. The results demonstrate that Safe-EF effectively maintains safety constraints while reducing communication complexity, highlighting its potential for real-world deployment in distributed optimization scenarios. Claims And Evidence: Most of the claims in the paper are well-formulated and supported by either rigorous proofs or relevant references. The arguments are clearly articulated, contributing to the overall clarity and coherence of the work. However, I identified one claim that might be considered questionable: **Lines 102–104 (right side, page 2):** *"Despite their importance, constrained optimization with communication compression remains under-explored."* While I believe this statement is accurate, it would benefit from further elaboration. 
Providing additional context, references, or a more detailed discussion would help clarify the extent to which this area has been explored and why it remains an open challenge. This would make the claim more accessible to a broader audience and strengthen its impact. Methods And Evaluation Criteria: In this paper, the authors present theoretical convergence guarantees alongside experimental results for reinforcement learning tasks. These criteria are both relevant and well-justified, as they ensure that the proposed method is supported by rigorous mathematical analysis while also being validated through practical implementation. The combination of theoretical and empirical evaluation strengthens the credibility of the approach, demonstrating its effectiveness in real-world reinforcement learning scenarios. This dual perspective enhances the overall contribution of the paper by providing both fundamental insights and practical applicability. Theoretical Claims: In this paper, the authors establish tight convergence guarantees by providing both lower and upper bounds for the considered setting of non-smooth convex optimization with contractive compression operators. The lower bounds presented in this work are particularly valuable, as deriving them is generally more challenging than establishing upper bounds. The proposed algorithm achieves a convergence rate that matches the lower bound (Equation 9), up to numerical and logarithmic factors. While the presence of additional logarithmic factors could be perceived as a drawback, this is a relatively minor issue compared to the overall theoretical contribution of the paper. The extension to the stochastic setting further enhances the significance of the work, broadening its applicability. I have briefly reviewed the analysis, and it appears to be correct. However, there is always a possibility that some errors may have been overlooked, so I recommend a thorough double-check for accuracy. 
Experimental Designs Or Analyses: In the experimental section, the authors conduct experiments on synthetic data to illustrate their theoretical findings. Subsequently, they explore the application of Safe-EF in reinforcement learning. In this setup, each worker represents a humanoid robot that collects noisy measurements of certain utility and constraint functions to solve a constrained Markov decision process (CMDP). The experimental evaluation consists of several key components: Comparison with CGD: The authors first evaluate Safe-EF using Top-K and Rand-K sparsifiers and compare its performance with a constrained version of CGD employing a Top-K sparsifier. Constraint Satisfaction Analysis: They then analyze Safe-EF’s ability to satisfy constraints, comparing it against unsafe error feedback algorithms (EF14 and EF21). Additionally, they benchmark Safe-EF against a parallel variant of CRPO, a CMDP solver that enforces constraints through the subgradient switching method. Effect of the Number of Workers: The authors examine how the performance of Safe-EF varies with the number of available workers, providing insights into its scalability. Effect of Batch Size: Finally, they study how different batch sizes impact the method’s performance. The presented experiments are valuable and well-structured, providing strong empirical support for the proposed approach. However, exploring additional settings could further strengthen the study by demonstrating the robustness of Safe-EF across a broader range of scenarios. Supplementary Material: I have briefly reviewed the analysis, and it appears to be correct. However, there is always a possibility that some errors may have been overlooked, so I recommend a thorough double-check for accuracy. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature on optimization. 
In particular, the derivation of lower bounds is especially valuable, as such results are typically more challenging to establish and provide fundamental insights into the theoretical limits of the considered optimization setting. Essential References Not Discussed: There are two papers that discuss biased SGD and consider contractive compression in detail, both highly related to the current paper. Ajalloeian, Ahmad, and Sebastian U. Stich. "On the convergence of SGD with biased gradients." arXiv preprint arXiv:2008.00051 (2020). Demidovich, Yury, et al. "A guide through the zoo of biased SGD." Advances in Neural Information Processing Systems 36 (2023): 23158-23171. Other Strengths And Weaknesses: The paper is well-written and presents its ideas in a clear and structured manner. The theoretical contributions are strong and well-founded, making a significant impact on the understanding of the problem. Additionally, the figures and plots are carefully formatted, with appropriate use of grids and markers, which enhances readability and interpretability. The attention to detail in visualization and presentation reflects the authors’ commitment to clarity and precision. The stochastic-case extension section could be further expanded, as its current description lacks sufficient detail. Providing a more comprehensive discussion would enhance the clarity and depth of this section. Other Comments Or Suggestions: Please review the issues identified in the previous sections and ensure they are addressed appropriately. Questions For Authors: Do you have any insights on how to eliminate the additional logarithmic factors in order to achieve the optimal rate? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: 1. We appreciate the reviewer’s comment and agree that further elaboration on the lack of prior work in this area would strengthen the manuscript. To the best of our knowledge, constrained optimization with communication compression remains largely unexplored. In fact, besides the two works we cited—(Fatkhullin et al., 2021), which considers projection-based methods, and (Nazykov et al., 2024), which employs linear minimization oracles—we are unaware of any other studies that explicitly address even simple constrained problems in the presence of contractive compression. In the more limited setting with the Rand-$K$ operator (an unbiased compressor), the recent work (Laurent Condat and Peter Richtárik. Randprox: Primal-dual optimization algorithms with randomized proximal updates, 2023) has established some results in the proximal setting. However, this is still limited to constraints/proximal functions with simple structure (e.g., Euclidean ball, simplex, affine constraints, etc.). The situation becomes even more challenging when considering general constraints of the form (4) in the manuscript, as no existing work tackles such cases. Fundamentally, addressing this problem requires overcoming new technical difficulties arising from the interaction between constrained optimization and communication compression. Specifically, in addition to the usual error accumulation from compression in unconstrained settings, constrained problems introduce further complexities: (i) compression errors affect the feasibility of iterates, making it harder to ensure constraint satisfaction, and (ii) primal-dual methods introduce an additional projection step, which requires controlling the errors from the projection, the objective, and the constraint updates simultaneously. These challenges necessitate novel algorithmic techniques, which we discuss in the subsequent paragraph of our introduction by distinguishing between primal-only and primal-dual approaches.
We will revise the manuscript to clarify these points and emphasize the fundamental difficulties that remain open in this research direction. 2. We thank the reviewer for providing the references. We agree that these studies are relevant to our work, and we will incorporate them in the introduction. These papers examine a general case of SGD with biased stochastic gradients. A special case of their analyses is CGD, described in the introduction. However, their analysis does not fully align with our setting because they consider (i) a single-node training regime $n=1$, (ii) $L$-smooth functions, while we focus on a non-smooth setting, and (iii) plain SGD without a memory buffer to mitigate the bias (e.g., the compression error). Nonetheless, we agree that extending our results from unbiased gradients to biased ones is an interesting research direction for future work. 3. Currently, a detailed description of the stochastic setting is provided in Appendix F due to the page limit. We will try to reorganize the main body to fit the description in the main paper. We appreciate this suggestion. 4. The logarithmic dependency on the failure probability $\beta$ is a typical outcome of high-probability analysis that comes from the use of concentration inequalities [1,2]. To the best of our knowledge, the logarithmic dependency on the failure probability cannot be avoided, as there exists a lower bound on the sample complexity of any learning algorithm (Theorem 5.2 in [3]). We will add a note on this aspect in the revised version of the paper.
[1] Nemirovski et al., Robust stochastic approximation approach to stochastic programming, SIAM 2009 [2] Ghadimi & Lan, Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization I: A generic algorithmic framework, SIAM 2012 [3] Anthony & Bartlett, Neural Network Learning: Theoretical Foundations, Cambridge University Press, 2009 --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their responses! I appreciate the clarifications and look forward to the promised adjustments being implemented. I believe the work is of high quality and will maintain my score of acceptance. Best regards, Reviewer
Summary: The paper presents Safe-EF, an error feedback (EF) algorithm designed for non-smooth constrained optimization in distributed settings. It establishes lower complexity bounds for first-order algorithms with contractive compression and introduces Safe-EF, which matches these bounds while ensuring safety constraints. The algorithm uses bidirectional compression and a constraint-aware switching mechanism. The effectiveness of Safe-EF is demonstrated in a reinforcement learning (RL) setting, specifically in distributed humanoid robot training, where it reduces communication overhead while maintaining constraint satisfaction. Claims And Evidence: The claims about Safe-EF’s theoretical guarantees (matching lower bounds) are well-supported with clear derivations. The empirical results effectively show that Safe-EF outperforms EF21 and CGD in terms of communication efficiency and constraint satisfaction. However, the claim that Safe-EF applies to federated learning (FL) is less substantiated, as the experiments do not fully capture the typical FL challenges (e.g., heterogeneous data, decentralized aggregation). Methods And Evaluation Criteria: The problem setting and evaluation criteria are reasonable for distributed optimization. The experiments use a multi-agent RL setting, which is a valid but non-standard way to test federated learning algorithms. The humanoid training task is complex and relevant, but an additional FL benchmark would strengthen the argument that Safe-EF is useful for FL. Theoretical Claims: I reviewed the lower bound proof and the convergence analysis for Safe-EF. The derivations appear correct, and the methodology follows standard approaches in optimization theory. The results for error feedback in non-smooth settings are a useful contribution. However, the key proofs should be moved to the main text rather than being relegated to the appendix. 
Experimental Designs Or Analyses: The experiments convincingly show Safe-EF’s superiority over EF21 and CGD. However, some key experimental details are missing:
- No clear explanation for why EF14 works while EF21 fails in non-smooth settings.
- No guidance on practical choices for γ (step size) and c (constraint threshold), making Safe-EF harder to implement.
- The humanoid RL task is well-chosen, but its setup is not clearly described, making reproducibility difficult.
Supplementary Material: Reviewed the appendix, particularly the lower bound proof and convergence analysis. The proofs are detailed and appear correct, but their placement in the appendix makes them less accessible.
This would help justify its relevance to federated learning. Would Safe-EF generalize to non-convex settings? Since many real-world RL problems are non-convex, would it still maintain convergence? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: 1. We appreciate the reviewer's high evaluation of our theoretical guarantees and empirical results. For empirical evaluation, we (i) test our method on a well-controlled synthetic environment with different levels of heterogeneity and (ii) provide an extensive ablation study in more challenging continuous control environments (Cartpole and Humanoid). We believe the latter Federated Safe RL setting is one of the most relevant and challenging applications of our theory and tests the real potential of Safe-EF. The different transition dynamics $p_i$ of each worker make the losses $f_i$ different, resulting in a highly heterogeneous setting. We mentioned this aspect in Section 6.2 and Appendix I, but we will put more emphasis on the heterogeneity aspect in the revision. In our work, we focus on non-smoothness, safety constraints, and biased and bidirectional compression. Decentralized aggregation is an important topic, and we leave it to future work. 2. We test Safe-EF on the Neyman-Pearson (NP) classification problem following the work (He et al., Federated Learning with Convex Global and Local Constraints, TMLR 2024). This statistical formulation aims to minimize the type II error while enforcing an upper bound on the type I error, making it particularly relevant for applications with asymmetric misclassification costs, such as medical diagnosis. The NP classification problem is $$\min_xf(x)=\frac{1}{n_0}\sum_{i=1}^{n_0}\phi(h_{x}, z_{i,0}),\text{ s.t. }\quad g(x)=\frac{1}{n_1}\sum_{i=1}^{n_1}\phi(h_{x}, z_{i,1})\le c,$$ where $h_x$ is a classifier parameterized by $x$ (a 3-layer MLP with 64 units in each layer and ReLU activations); $\phi$ is the cross-entropy (CE) loss; $\\{z_{i,0}\\}\_{i=1}^{n_0}$ and $\\{z_{i,1}\\}_{i=1}^{n_1}$ are training samples from class 0 and class 1, respectively. The constraint ensures that the classification error for class 1 does not exceed a predefined threshold $c$. We include the results in https://ibb.co/SXYqWQtB.
This benchmark further supports the argument that Safe-EF is useful for federated learning by showing its effectiveness in a well-established classification framework. We believe Federated Safe RL remains the most relevant application, naturally formulated as a stochastic optimization of the form (72), (73) in Appendix F (unlike the standard supervised learning task with a finite dataset), but we would welcome a discussion with the reviewer on whether this new set of experiments strengthens our argument that Safe-EF is useful for FL. 3. While we show our hard instance (functions, constraints, compressors) in the main paper, we will detail key steps of the proof in the revised version of the paper. 4. The design of the EF21 algorithm can be seen as a variance reduction (VR) technique. In a non-smooth setting, gradients can change abruptly; thus, VR mechanisms such as EF21 may fail. In contrast, the design of the EF14 algorithm is based on tracking the compression error, and control of the gradient change is not needed. 5. We acknowledge the reviewer's concern. From the theory side, in the choice $\gamma=O(\frac{R\sqrt{\delta\delta_s }}{M\sqrt{T}})$ and $c=O(\frac{MR}{\sqrt{\delta\delta_s T}})$, the parameters $\delta,\delta_s,T$ are typically set by the user. Thus, there is no need to estimate them. We can estimate $M=\max_i\\|f_i^\prime(x^0)\\|$ and $R>\\|x^0\\|,$ if $x^*$ is close to zero. In general, we believe that choosing $\gamma,c=O(1/\sqrt{T})$ or $O(1/\sqrt{t})$ can be a good starting point. In our experiments, $\gamma = 10^{-4}$ is chosen based on the existing default value that is known to work well for PPO on standard RL benchmarks. The slackness parameter $c$ is set to 0, which is sufficient to get decent performance on both tasks. We did not try to further tune these parameters and agree that, in general, these parameters are problem-dependent. 6. Thank you for pointing this out. Based on your suggestion, we will add additional details on our experimental setup to Appendix I.
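A minimal sketch of the Neyman-Pearson setup from point 2, handled with a subgradient-switching rule of the kind Safe-EF uses (step on the constraint when it is violated, otherwise on the objective). The toy Gaussian data, the plain logistic model in place of the MLP, and all parameter values are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(-1.0, 1.0, size=(200, 5))  # class-0 samples -> objective f
X1 = rng.normal(+1.0, 1.0, size=(200, 5))  # class-1 samples -> constraint g

def ce_loss_and_grad(w, X, y):
    # mean logistic cross-entropy for labels y in {0, 1} and its gradient
    z = np.clip(X @ w, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

w = np.zeros(5)
gamma, c = 0.5, 0.3  # hypothetical step size and constraint level
for _ in range(300):
    f_val, f_grad = ce_loss_and_grad(w, X0, np.zeros(200))  # type II proxy
    g_val, g_grad = ce_loss_and_grad(w, X1, np.ones(200))   # type I proxy
    # switching rule: restore feasibility first, then improve the objective
    w -= gamma * (g_grad if g_val > c else f_grad)

f_final, _ = ce_loss_and_grad(w, X0, np.zeros(200))
g_final, _ = ce_loss_and_grad(w, X1, np.ones(200))
```

This single-node sketch omits the distributed workers and compression that are the actual subject of the paper; it only illustrates the constrained NP objective and the switching mechanism.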
We kindly ask the reviewer to point out more specifically which parts of the exposition should be further clarified. Additionally, we emphasize that the code for our experiments can be found at https://anonymous.4open.science/r/safe-ef-3ABC, as mentioned in Appendix I. We believe this makes our experiments fairly easy to reproduce. 8. We reference all relevant work within each subsection or paragraph of the introduction while discussing practical challenges and existing solutions. While we can divide the first section into an introduction and practical challenges, separating the related work into a standalone section would not be appropriate. Any additional related work that is not directly relevant to our main narrative is included in the appendix. 9. Investigating convergence in the non-convex setting is indeed an interesting and important question motivated by real-world RL problems. We believe our analysis can be extended to weakly convex functions, i.e., functions such that $f(x)+r\\|x\\|^2$ and $g(x)+r\\|x\\|^2$ are convex, following, e.g., (Jia and Grimmer, 2022); (Boob et al., 2023).
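To make the EF14-versus-EF21 distinction in point 4 concrete, here is a minimal single-worker sketch of the EF14 memory mechanism with a Top-K compressor on a toy least-squares problem. All sizes and step sizes are hypothetical; this is an illustration, not the paper's distributed implementation:

```python
import numpy as np

def top_k(v, k):
    # Top-K contractive compressor: transmit only the k largest-magnitude entries
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_star = rng.normal(size=20)
b = A @ x_star                      # consistent system, so the optimum has zero loss
grad = lambda x: A.T @ (A @ x - b) / 50
loss = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2 / 50

x = np.zeros(20)
e = np.zeros(20)                    # EF14 memory: accumulated compression error
gamma, k = 0.03, 5
for _ in range(5000):
    m = e + gamma * grad(x)         # error-compensated message
    d = top_k(m, k)                 # what is actually transmitted
    e = m - d                       # dropped coordinates are remembered, not lost
    x = x - d

loss0, loss_T = loss(np.zeros(20)), loss(x)
```

Note that the memory `e` only tracks the compression error; no gradient-difference bookkeeping of the variance-reduction kind used by EF21 is required, which matches the intuition in point 4 about why EF14-style updates tolerate abruptly changing (sub)gradients.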
MaskTwins: Dual-form Complementary Masking for Domain-Adaptive Image Segmentation
Accept (poster)
Summary: This paper tackles image segmentation within the unsupervised domain adaptation (UDA) framework. Rather than employing random masks, the authors develop dual-form complementary masked images to strengthen the generalization capabilities of their approach. They demonstrate that robust, domain-invariant features can be extracted through consistency learning applied to these masked images. The method's effectiveness is validated through comprehensive experiments across six distinct test cases. Claims And Evidence: To comprehensively understand the effectiveness of the proposed Complementary Mask, I recommend that the authors conduct experiments on more challenging segmentation datasets. For example, medical image segmentation involves strong domain shifts across domains. Fundus segmentation tasks such as OC and OD segmentation are a good choice. Methods And Evaluation Criteria: Since the overall framework of this paper relies on multiple components, e.g., AdaIN and EMA-based pseudo-labels, it seems over-complicated and not elegant enough from the perspective of the ML community. Meanwhile, according to the ablation study, it seems that the proposed Complementary Mask cannot work in a Complementary-Mask-only framework. This undercuts the claimed effectiveness of the proposed Complementary Mask. I suggest the authors report the result of a Complementary-Mask-only framework to demonstrate the efficacy of the proposed Complementary Mask framework. Theoretical Claims: To be honest, although I have tried to understand the envisioned theorems, I find that there is no direct connection between the sparse signal reconstruction in Assumption 1 and the corresponding generalization bound as well as the feature consistency. Therefore, I do not know how the sparse signal reconstruction theory is actually utilized. It seems that the envisioned theorems are not directly linked to claimed contribution 1.
Experimental Designs Or Analyses: I appreciate nice numerical results, but it is also necessary for the reader to vividly understand why the proposed Complementary Mask works. I suggest some feature visualizations may be added to compare with random mask scheme. Supplementary Material: I read the Supplementary Material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - This paper introduces the Complementary Mask strategy based on random mask, the empirical experiments look good. - This paper is easy to follow and well-written. - The authors provide theoretical analysis to justify the proposed complementary mask related to the sparse signal reconstruction. Weakness: - Since the overall framework of this paper relies on multiple component e.g., the AdaIN and the EMA-based pseudo labels, it seems to be over-complicated and is not elegant enough from a perspective of ML community. Meanwhile, according to the ablation study, it seems that the proposed Complementary Mask cannot work on Complementary Mask-only framework? This eliminates the effectiveness of the proposed Complementary Mask. I suggest the authors can report the result of a Complementary Mask-only framework to demonstrate the efficacy of the proposed Complementary Mask framework. - To be honest, although I try to understand the envisioned Theorems, I find that there is not a directly related connection between the sparse signal reconstruction in Assumption 1 and corresponding generalization bound as well as feature consistency. Therefore, I do not know how the actually sparse signal reconstruction theory is utilized? It seems that envision Theorems are not directly link to the claimed contribution 1. - To comprehensively understand the effectiveness of the proposed Complementary Mask, I recommend the authors can conduct experiments on some more challenging segmentation datasets. 
For example, medical image segmentation exhibits strong domain shifts across domains; fundus segmentation tasks such as OC and OD segmentation are a good choice. - I appreciate the nice numerical results, but readers also need to understand intuitively why the proposed Complementary Mask works. I suggest adding feature visualizations to compare with the random mask scheme. ===================================================================================== Thanks for the authors' responses, which have addressed my concerns. I have raised my score. Other Comments Or Suggestions: Please see weaknesses Questions For Authors: Please see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable time and comments. The main concerns are addressed below. > **W1:** Complementary Mask-only framework The proposed complementary masking strategy certainly works in a Complementary Mask-only framework, and we highlight that it does not rely on the network structure or additional internal components. Specifically, AdaIN is a classic component used in domain adaptation, and EMA commonly serves the teacher-student structure. We adopt the EMA teacher model for pseudo-label generation, following common practice in UDA methods [1]. These components are simple and require no special hyper-parameter tuning. In fact, their contributions are quite limited according to Table 4. To clear up this misunderstanding, we have supplemented Table 4 with the results of the Complementary Mask-only framework as follows. The results show that remarkable gains can be achieved by using the complementary mask alone. We highly agree that this better demonstrates the effectiveness of the proposed complementary mask framework, and we will include it in the revised paper. --- Table R1. Supplementary ablation study based on Table 4. | CL | CMask | RMask | EMA | AdaIN | mIoU | | :--: | :---: | :---: | :--: | :---: | :--: | | √ | √ | - | - | - | 76.0 | --- > **W2:** Question on theorem The sparse signal assumption in Assumption 1 provides the general background and serves as the foundation for our theoretical analysis. Although we use the image (X) in the formulas without decomposing it into the sparse signal (S) and the noise (N), the assumption holds throughout both the analysis of the generalization bound and that of feature consistency. This connection is implicit and, more importantly, intrinsic. In our analysis, the generalization bound focuses on the theoretical explanation of mask reconstruction, while feature consistency accounts for the domain adaptation setting. 
Meanwhile, we explicitly use Assumption 1 in the proofs of Theorem 5 (Signal Recovery Guarantee) in the supplementary materials. Specifically, we reframe X as a signal generated from the sparse linear model, and apply Compressed Sensing Recovery Guarantees during the proofs. > **W3:** Application on more challenging segmentation dataset While our main experiments have comprehensively covered both natural images and medical images, we agree that extending our method to fundus segmentation tasks can further validate its effectiveness. To address this, we conducted additional experiments on the RIM-ONE-r3 dataset. --- Table R2. Quantitative results of OD and OC segmentation on the RIM-ONE-r3 dataset with a CBMT backbone, using only the proposed complementary masking strategy. | | $Dice_{OD}$↑ | $ASSD_{OD}$↓ | $Dice_{OC}$↑ | $ASSD_{OC}$↓ | | ---------- | :--------------: | :-------------: | :--------------: | :-------------: | | EOAPNet | 92.61(±3.13) | 6.67(±2.91) | 74.59(±25.64) | 8.74(±5.34) | | CBMT | 93.36(±4.07) | 6.20(±4.79) | 81.16(±14.71) | 8.37(±6.99) | | **Ours** | **94.17(±2.48)** | **5.15(±2.14)** | **82.74(±9.13)** | **7.52(±6.33)** | The results show that our method improves the performance of OD and OC segmentation by 0.81%(1.58%) Dice and 1.05(0.85) ASSD, respectively. Regarding the additional experiments on fundus image segmentation, we appreciate your interest and we will include these results in the supplementary materials of the revised paper. --- > **W4:** Feature visualizations As suggested, we provide some feature visualizations compared with the random masking strategy to more comprehensively demonstrate the effectiveness of the proposed complementary masking strategy. We provide the anonymous links for online viewing. For the direct comparison with random mask scheme, Figure R1 is: https://picx.zhimg.com/80/v2-c3ade1e071de958d79964bbb69c3564f.png. The input, feature and segmentation results are presented in dual-form. 
'CMask' means using the proposed complementary masking strategy, while 'RMask' means random masking. These are the results of the model trained with the complementary masking strategy: https://picx.zhimg.com/80/v2-b6a712296ffc45377c54284a2ffb029b.png (Figure R2, last layer) and https://picx.zhimg.com/80/v2-603fcc5b6d192413c8ec988c28dffbc8.png (Figure R3, middle layer); these are the results of using random masking: https://picx.zhimg.com/80/v2-655ba0ffed4cbe7ad9168ad1e5f66e44.png (Figure R4, last layer) and https://picx.zhimg.com/80/v2-2957e2cca93f1cb93c05c95020c298e4.png (Figure R5, middle layer). --- Once again, thanks for your time and comments. We hope these concerns have been adequately addressed by the above explanations. --- **Reference** [1] Hoyer, Lukas, et al. "MIC: Masked image consistency for context-enhanced domain adaptation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
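For readers following this thread, the dual-form complementary masking discussed above can be sketched in a few lines of plain Python. This is an illustrative sketch of the idea only (function and variable names are my own, not from the paper's code): a random binary mask and its exact complement are applied to the same input, so every position is visible in exactly one of the two masked views.

```python
import random

def complementary_masks(n, ratio=0.5, seed=0):
    """Sample a random binary mask and its complement.

    Every position is visible in exactly one of the two masked views,
    unlike two independent random masks, which may both hide the same region.
    """
    rng = random.Random(seed)
    m = [1.0 if rng.random() >= ratio else 0.0 for _ in range(n)]
    return m, [1.0 - v for v in m]

x = [0.3, 0.7, 0.1, 0.9, 0.5, 0.2]          # toy "image" as a flat signal
m1, m2 = complementary_masks(len(x))
view1 = [a * b for a, b in zip(m1, x)]      # first masked view
view2 = [a * b for a, b in zip(m2, x)]      # complementary masked view

# Information preservation: the two views jointly cover the whole input.
assert all(abs(v1 + v2 - xi) < 1e-12 for v1, v2, xi in zip(view1, view2, x))
```

In the paper's framework, consistency would then be enforced between the model's predictions on the two views; the sketch only illustrates the masking itself.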
Summary: In this paper, the authors introduce MaskTwins, a UDA framework that integrates masked reconstruction into the main training pipeline. They argue that existing UDA methods leveraging masked image modeling treat masking merely as a form of input deformation and lack theoretical analysis, resulting in a superficial understanding and underutilization of its potential for feature extraction and representation learning. To address this, the authors reframe masked reconstruction as a sparse signal reconstruction problem and theoretically demonstrate that the dual form of complementary masks enhances the extraction of domain-agnostic image features. Experimental results on both natural and biological images show that MaskTwins improves domain generalization by enforcing consistency between predictions from complementary masked images, uncovering intrinsic structural patterns that persist across domains. Claims And Evidence: Most claims are valid. However, the authors state that they demonstrate the superiority of the proposed approach through extensive experiments. I disagree, as the experiments were conducted on a few relatively small datasets, and the domain gap between the source and target domains appears limited. This raises concerns about the generalizability of the proposed method. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are generally reasonable. However, their effectiveness remains to be evaluated on larger datasets with more significant source-target domain gaps to better assess their real-world applicability. Theoretical Claims: I briefly reviewed the proofs but did not conduct a thorough verification, so I cannot confirm their correctness. Experimental Designs Or Analyses: I reviewed the experimental design and found it generally reasonable. However, the small dataset sizes and limited source-target domain gap raise concerns about the generalizability of the results. 
Further evaluation on larger and more diverse datasets is needed to confirm the method’s effectiveness. Supplementary Material: Yes, I reviewed both Sections A and B. Relation To Broader Scientific Literature: This work builds upon Hoyer et al.’s approach, which masks the target image and evaluates consistency between predictions from images with and without masking. It advances UDA by introducing a complementary masking strategy and providing a formal theoretical analysis to support the proposed method. The complementary masking strategy has been shown to improve domain generalization. Furthermore, the method closely aligns with semi-supervised image segmentation, differing mainly by incorporating an additional consistency loss that quantifies the discrepancy between predictions from masked target-domain images. While semi-supervised learning often assumes that labeled and unlabeled datasets originate from the same or similar domains, I argue that this assumption is not strictly necessary for semi-supervised methods to be effective. Therefore, the connection to semi-supervised image segmentation should be discussed more explicitly. Essential References Not Discussed: Given the similarities between the proposed method and semi-supervised image segmentation, relevant works in semi-supervised image segmentation should be reviewed, and their connections and differences should be explicitly discussed. Other Strengths And Weaknesses: In addition to domain adaptation methods, I think it would be better if some semi-supervised learning methods can be used for comparison. Other Comments Or Suggestions: I have no other comments or suggestions. Questions For Authors: The paper is clearly written. I think I only have the following questions: 1. Complementary masks provide tighter bounds, but how significant is the improvement? What is the ratio between the first and second terms in the bound shown in the SM for random masks, using typical values for the relevant symbols? 2. 
The results indicate that segmentation performance improves as the mask ratio increases from 0.1 to 0.5. Are these differences statistically significant? 3. Can the same conclusions hold across different architectures, such as CNNs, Transformers, and hybrid networks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable time and comments. The main concerns are addressed below. > **W1:** Application on more challenging segmentation datasets Thank you for raising concerns about the generalizability of the proposed method. On the one hand, the SYNTHIA dataset we use is relatively large-scale, with over 20,000 synthetic images covering diverse urban scenes. The "SYNTHIA→Cityscapes" task has been extensively studied in domain adaptation [1], with a domain gap between synthetic and real-world data. On the other hand, biological image segmentation is commonly acknowledged to exhibit strong domain shifts across domains. Still, we agree that extending our method to more challenging segmentation datasets can further validate its effectiveness. To address this, we conducted additional experiments on the RIM-ONE-r3 dataset for fundus optic disc (OD) and cup (OC) segmentation. --- Table R1. Quantitative results of OD and OC segmentation on the RIM-ONE-r3 dataset with a CBMT backbone, using only the proposed complementary masking strategy. | | $Dice_{OD}$↑ | $ASSD_{OD}$↓ | $Dice_{OC}$↑ | $ASSD_{OC}$↓ | | ---------- | :--------------: | :-------------: | :--------------: | :-------------: | | EOAPNet | 92.61(±3.13) | 6.67(±2.91) | 74.59(±25.64) | 8.74(±5.34) | | CBMT[2] | 93.36(±4.07) | 6.20(±4.79) | 81.16(±14.71) | 8.37(±6.99) | | **Ours** | **94.17(±2.48)** | **5.15(±2.14)** | **82.74(±9.13)** | **7.52(±6.33)** | The results show that our method improves OD and OC segmentation by 0.81% (1.58%) Dice and 1.05 (0.85) ASSD, respectively. --- >**W2:** Connection with semi-supervised methods While we acknowledge the similarities between our method and semi-supervised image segmentation, they differ in task settings. Semi-supervised methods require labeled target data, whereas our focus is on unsupervised domain adaptation, especially under significant domain gaps. 
In the absence of labeled data, image-level masking plays a crucial role, and experiments focusing on it better validate our theoretical analysis. Still, we appreciate your interest, and we will briefly discuss the connection with semi-supervised methods in the revised paper to strengthen the comprehensiveness and robustness of our research. --- >**Q1:** Gain of complementary masks and term ratios in formulas Intuitively, the generalization bound for random masking includes an additional term compared to complementary masking. While this extra term complicates the bound, complementary masking significantly tightens the generalization bound, leading to better performance across various scenarios. Directly quantifying this improvement from the formulas is complex due to the multiple terms, but the experimental results provide clarity. As shown in Table 1, MaskTwins outperforms MIC by +2.7 mIoU, demonstrating the effectiveness of our approach. Additionally, complementary masks contribute a +1.5 mIoU improvement over random masks (Table 4), confirming their value. Regarding the ratio of terms in the bounds, we observe that the additional term in random masking is influenced by the masking strategy. Complementary masking not only improves information preservation but also reduces the variance of the generalization error, as stated in Theorem 4 of the supplementary material. Finally, "SM" does not appear in the paper, and we assume the reviewer refers to the generalization bounds section, which discusses the benefits of complementary masking. The theory shows that complementary masking offers a tighter generalization bound than random masking. --- > **Q2:** Statistical results We provide the mean and standard deviation of the experiments as follows. The results show that the standard deviation of the metrics does not vary significantly with the mask ratio. Table R2. Statistical results based on Table 5(a). "±" refers to the standard deviation over 3 random seeds. 
| Mask Ratio | mIoU | | ---------- | ---------- | | 0.1 | 72.0(±0.2) | | 0.2 | 74.6(±0.3) | | 0.3 | 75.4(±0.2) | | 0.4 | 76.5(±0.2) | | 0.5 | 76.7(±0.2) | --- > **Q3:** Question on architectures The proposed complementary masking strategy certainly works with different architectures, and we highlight that it does not rely on the network structure or additional internal components. Specifically, for natural and biological image segmentation, we use a Transformer network and a CNN network respectively, following the corresponding pipelines in [1]. The details of the architectures used in the experiments can be found in Appendix F. **Reference** [1] Hoyer, Lukas, et al. "MIC: Masked image consistency for context-enhanced domain adaptation." CVPR 2023. [2] Tang L, Li K, He C, et al. Source-free domain adaptive fundus image segmentation with class-balanced mean teacher, MICCAI 2023. --- Rebuttal Comment 1.1: Comment: 1. The authors did not directly address my question about statistical significance. Please use a t-test or Wilcoxon signed-rank test to assess significance. 2. I disagree with the claim that semi-supervised methods require labeled target data. I would like to see results from a typical semi-supervised segmentation approach as a baseline for comparison. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt feedback. We now fully and accurately understand your questions. >**1:** Statistical significance We agree that quantitative evaluation via statistical significance can further demonstrate the performance improvement of complementary masking. Specifically, we use two related samples (complementary masks vs. random masks), each of size n=5. First, we verify whether the two samples are drawn from a normal distribution by conducting the Shapiro-Wilk test. The corresponding p-values are 0.5767 and 0.5866, so we accept the normality hypothesis at the 0.05 significance level. 
Considering that the t-test is highly sensitive to the overall distribution characteristics of the data, we also perform the Wilcoxon signed-rank test in the one-sided case (the default two-sided case would result in a zero statistic value). Under both tests, the p-value is below the significance level alpha=0.05. Hence, we conclude that the performance improvement of complementary masking is statistically significant. --- Table R3. The results of the t-test and Wilcoxon signed-rank test (n=5). | | statistic | p-value | | - | - | - | | t-test | 60.56 | 4.453e-07 | | Wilcoxon signed rank test | 15.0 | 0.03125 | --- > **2:** Semi-supervised methods for comparison After thorough research, we fully understand and agree with the reviewer's perspective. To the best of our knowledge, only for the image classification task have several articles [1, 2, 3] discussed and compared both unsupervised domain adaptation (UDA) and semi-supervised learning (SSL) methods. Both UDA and SSL aim for a model trained on labeled source data to perform well on target data. Intuitively, the difference is that SSL typically assumes the source domain is identical to the target domain. Additionally, UDA methods focus on minimizing the discrepancy between the two domains based on theoretical bounds, while SSL methods are motivated by basic assumptions about the data structure. In their early development, UDA and SSL evolved independently for their specific settings. Subsequently, more and more researchers realized that there is no technical barrier to applying SSL to UDA. In recent years, UDA methods have widely adopted SSL techniques, such as the mean teacher [4] and consistency regularization [5]. As a result, the boundary between UDA and SSL has become blurred. The existing explorations [1,2,3] have been predominantly focused on image classification tasks. 
Importantly, we have evaluated the effectiveness of our proposed method on the image classification task in the supplementary materials. Therefore, based on Table 7 in the supplementary materials, we have added a typical semi-supervised classification approach [1] as a baseline for comparison to address the reviewer's concerns. --- Table R4. The comparison results with a semi-supervised method A2LP [1] on VisDA-2017 with ResNet-101. | | Acc. | | ---- | ---| | LP [1] | 73.9 | | A2LP [1] | 82.7 | | **Ours** | **87.3** | --- For image segmentation, we additionally provide the comparison with ILM-ASSL [6] which is one of the state-of-the-art semi-supervised methods on the leaderboard of the "Papers with Code" website. It follows a typical semi-supervised learning framework, combined with an uncertainty selection strategy of active learning. Compared with ILM-ASSL using 1% and 2.2% of labeled target data, our method consistently achieves higher performance, which strongly demonstrates the superiority of our proposed complementary masking strategy. --- Table R5. The comparison results with a semi-supervised method ILM-ASSL [6] on SYNTHIA-to-Cityscapes. | | mIoU | | ------ | ------ | | ILM-ASSL(1%) | 73.2 | | ILM-ASSL(2.2%) | 76.0 | | **Ours** | **76.7** | --- Once again, thanks for your time and comments. We hope these concerns can be adequately addressed through the above explanations. --- **Reference** [1] Zhang Y, Deng B, Jia K, et al. Label propagation with augmented anchors: A simple semi-supervised learning baseline for unsupervised domain adaptation. ECCV 2020. [2] Roelofs B, Berthelot D, Sohn K, et al. Adamatch: A unified approach to semi-supervised learning and domain adaptation. ICLR 2022. [3] Zhang Y, Zhang H, Deng B, et al. Semi-supervised models are strong unsupervised domain adaptation learners. arXiv preprint arXiv:2106.00417, 2021. [4] Tarvainen A, Valpola H. 
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. NeurIPS 2017. [5] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. ICLR 2017. [6] Guan L, Yuan X. Iterative loop method combining active and semi-supervised learning for domain adaptive semantic segmentation. arXiv preprint arXiv:2301.13361, 2023.
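As a side note on the significance testing discussed in this thread, the paired t statistic and the one-sided Wilcoxon signed-rank statistic can be sketched in plain Python. The paired mIoU scores below are illustrative placeholders I made up for the example, not the rebuttal's actual per-seed results:

```python
import math

def paired_t_statistic(a, b):
    """Paired t statistic for related samples: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # unbiased sample variance
    return mean / math.sqrt(var / n)

def wilcoxon_signed_rank(a, b):
    """One-sided Wilcoxon signed-rank statistic: sum of ranks (by |d|) of the
    positive differences. Ties in |d| are ignored in this sketch."""
    d = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    rank = {i: r + 1 for r, i in enumerate(order)}
    return sum(rank[i] for i in range(len(d)) if d[i] > 0)

# Hypothetical per-seed mIoU scores: complementary vs. random masking.
cmask = [76.5, 76.7, 76.6, 76.8, 76.7]
rmask = [75.1, 75.2, 75.3, 75.2, 75.0]
t = paired_t_statistic(cmask, rmask)
w = wilcoxon_signed_rank(cmask, rmask)
print(w)  # all five differences are positive, so w = 1+2+3+4+5 = 15
```

With n=5 and every difference in the same direction, 15 is the maximum possible statistic, which is why the rebuttal's one-sided Wilcoxon p-value is 1/2^5 = 0.03125.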
Summary: This paper introduces a complementary masking strategy for the semantic segmentation UDA task. Building on the existing masked image consistency (MIC) training paradigm, which relies on pseudo-labels from the teacher model, the authors propose a complementary masking loss to further enforce consistent predictions on complementary masked images. Additionally, the paper provides a theoretical analysis of the complementary masking strategy. Experiments demonstrate the effectiveness of the proposed approach. Claims And Evidence: Yes, the paper mainly proposes a complementary masking strategy for input images and provides a theoretical analysis of this approach. Methods And Evaluation Criteria: This paper mainly introduces a complementary masking loss to encourage consistent predictions on complementary masked images. The model architecture follows the baseline MIC, consisting of a student and an EMA teacher. The masked consistency learning loss also follows the MIC loss from the baseline MIC (Hoyer et al., 2023), which uses pseudo-labels of the complete target image from the teacher for supervision. Notably, complementary masking strategies have already been explored in existing multi-modal works, both at the input and feature levels (e.g., Shin et al., 2024; Yang et al., 2025). Therefore, the primary contributions of this paper are its theoretical analysis and experimental validation, which demonstrate the effectiveness of the complementary masking strategy for RGB images alone. However, the method lacks sufficient novelty and insight. Theoretical Claims: I have reviewed the theoretical claims. Experimental Designs Or Analyses: In Table 4, although the ablation study is not directly discussed against the baseline MIC, Table 1 shows that simply replacing complementary masking with random masking (Row 3 in Table 4) already outperforms MIC (75.2 mIoU vs. 74.0 mIoU). 
This indicates that the performance gains in this paper heavily rely on the masking strategy itself, even when using random masking. Therefore, while the paper reports state-of-the-art results, the improvements are less convincing. Moreover, the authors do not explain why masking strategies, including random ones, lead to such significant gains. Supplementary Material: I have reviewed the supplementary material, which contains detailed theoretical proofs. Relation To Broader Scientific Literature: 1. Model architecture: Follows the baseline MIC, consisting of a student and an EMA teacher. 2. Masked consistency learning loss: Directly follows the MIC loss from the baseline (Hoyer et al., 2023), using pseudo-labels of the complete target image from the teacher for supervision. 3. Complementary masking strategy: Already explored in existing multi-modal works at both input and feature levels (e.g., Shin et al., 2024; Yang et al., 2025). Essential References Not Discussed: I think this paper has included enough related works to provide a clear understanding. Other Strengths And Weaknesses: Advantages: 1. The paper is well-written and clear. 2. The experiments are sufficient to evaluate the proposed method. 3. The paper provides detailed theoretical analysis and proofs. Disadvantages: 1. As mentioned earlier, the method largely builds on strategies from existing baselines, with only minor modifications to achieve good performance, and lacks novel or insightful contributions. I have another key question: how would this masking strategy perform in feature space, as both input and feature masking have been studied in multi-modal tasks? Similar concerns apply to the theoretical analysis. Other Comments Or Suggestions: No other comments Questions For Authors: As mentioned in previous comments: 1. Explain why random masking alone can achieve state-of-the-art performance. 2. 
Discuss how the complementary masking strategy would perform in the feature space in this single-modal RGB setting. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable time and comments. The main concerns are addressed below. > **W1:** Misunderstanding on Complementary Masking Techniques Thank you for your comments again. However, we respectfully disagree with these justifications and would like to clarify further below: - First, the proposed complementary masking is not limited to "RGB images alone"; it can also be applied to biological datasets that contain gray-scale images. - Meanwhile, we highlight our theoretical contributions, which are an indispensable part of our work and are entirely lacking in existing works. Specifically, we have provided a theoretical foundation for masked reconstruction from the perspective of a sparse signal reconstruction problem, and have rigorously analyzed the properties of complementary masking from three aspects: information preservation, generalization bound, and feature consistency. --- > **Q1:** Explanation on the gain of random masking over MIC The reason random masking achieves an improvement over MIC (75.2 mIoU vs. 74.0 mIoU) is the addition of the dual-form masking consistency, while MIC uses only a single masked image. Masking itself is indeed significant. However, the improvement obtained this way is limited, and adding more branches of masked images leads to redundancy. In our experiments, a single masked branch reached 74.0 mIoU, two branches improved to 75.2 mIoU, and using three or more branches yields no further gains. In contrast, complementary masking achieves a significantly higher improvement by only changing the masking strategy (76.7 mIoU vs. 75.2 mIoU). This further validates the substantial performance boost brought by complementary masking and shows that the total gain does not come from the masking branches alone. 
--- > **Q2:** The performance of the complementary masking strategy in the feature space We are highly intrigued by the ideas you proposed, and accordingly we conducted the following experiments in Table R1. Since applying masking during the decoder phase may significantly impact the already-extracted high-level semantic information and degrade model performance, we perform masking on the four layers of encoded features (where 1 represents the shallowest layer and 4 the deepest) respectively. --- Table R1. The performance of the complementary masking strategy in the feature space on SYNTHIA→Cityscapes. | Complementary Masking | mIoU | | ----------------------------- | ---- | | image level | 76.7 | | feature level: 1 (shallowest) | 76.0 | | feature level: 2 | 74.6 | | feature level: 3 | 72.4 | | feature level: 4 (deepest) | 67.7 | --- It is worth noting that feature-level masking is mainly utilized in generative tasks, such as in MaskGit [1]. According to Assumption 1 in the main paper, latent-level masking may result in more information loss. Therefore, it is not suitable for understanding tasks such as image segmentation. The above results confirm this: the deepest feature layer contains the most semantic information, and applying complementary masking to it causes the most substantial loss of valuable details, leading to the most significant drop in performance. Regarding the additional experiments on feature-level masking, we appreciate your interest and will include these results in the supplementary materials of the revised paper. --- **Reference** [1] Chang H, Zhang H, Jiang L, et al. Maskgit: Masked generative image transformer[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 11315-11325. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttals to my reviews. After reviewing the responses, I think most of my concerns have been addressed. 
Therefore, I am changing my score from "2: Weak reject" to "3: Weak accept". I would like to clarify a point regarding my comment in W1. I understand that the authors have conducted experiments using grayscale images. In my previous review, I mentioned that "this paper demonstrates the effectiveness of the complementary masking strategy for RGB images alone." By "RGB images alone," I meant that the complementary masking strategy in this paper is applied solely in the input image space, rather than in the feature space. As noted in my review, "complementary masking strategies have already been explored in existing multi-modal works, both at the input and feature levels." I also raised this as a formal question in Q2 to encourage further discussion, and I appreciate the authors’ inclusion of results and discussions on feature-level masking. --- Reply to Comment 1.1.1: Comment: Thank you for your clarification and updated score! We're glad our response addressed your concerns and that we had the opportunity to discuss feature-level masking. Once again, we sincerely appreciate your time and effort in reviewing our paper and reading our comments!
Summary: This paper explores the connection between Masked Image Modeling (MIM) and consistency regularization in Unsupervised Domain Adaptation (UDA). It reframes masked reconstruction as a sparse signal reconstruction problem and theoretically proves that complementary masks can effectively extract domain-agnostic features. Based on this insight, the authors propose MaskTwins, a UDA framework that integrates masked reconstruction into the main training pipeline. MaskTwins enforces consistency between predictions of images masked in complementary ways, uncovering intrinsic structural patterns across domains and enabling end-to-end domain generalization. Extensive experiments on natural and biological image segmentation demonstrate its superiority over baseline methods, highlighting its effectiveness in extracting domain-invariant features without separate pre-training. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The key contributions of this paper relate to the broader scientific literature by advancing the understanding of masked image modeling (MIM) in the context of unsupervised domain adaptation (UDA). It builds on prior works that connect MIM and consistency regularization but goes further by reframing masked reconstruction as a sparse signal problem and proving the effectiveness of complementary masks for domain-agnostic feature extraction. This theoretical grounding differentiates it from previous methods that treat masking superficially. The proposed MaskTwins framework integrates these insights directly into the training pipeline, offering a novel approach to domain generalization. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The paper is well-organized, with a clear and intuitive research motivation. 
It provides comprehensive theoretical analysis, and the proposed MaskTwins framework is validated through extensive experimental results. Weaknesses: 1. Although the paper offers thorough theoretical analysis, the proposed MaskTwins framework lacks architectural innovation. The ideas related to Masked Image Modeling (MIM) are not novel. 2. The ablation study in Table 4 is not comprehensive. Ablation experiments should control for single variables, but the current design lacks specificity. 3. The formatting of Table 4 and Table 5 is inconsistent with other tables in the paper, such as the use of bold text with a gray background. Other Comments Or Suggestions: Some of the statements in this paper need refinement. For example, the first contribution in the introduction appears to be an overclaim, and the subsequent analysis lacks sufficient detail. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable time and comments. The main concerns are addressed below. >**W1:** Misunderstanding on Complementary Masking Techniques First, ideas related to Masked Image Modeling (MIM) can still be novel. For instance, the recent ECCV 2024 paper MICDrop [1] explored the masking of multi-modal information, indicating continuous innovation within the MIM field. MIM serves as the foundation of our approach; based on it, our work focuses on the extension to domain-adaptive tasks and the exploration of the performance upper bound using the proposed complementary masking strategy. Meanwhile, the improvements in our approach stem from both the dual form and the framework. Moreover, we conducted a series of additional experiments specifically on the masks themselves to validate our theoretical analysis. We highlight our theoretical contributions, which are an indispensable part of our work and are entirely lacking in existing works. Specifically, we have provided a theoretical foundation for masked reconstruction from the perspective of a sparse signal reconstruction problem, and have rigorously analyzed the properties of complementary masking from three aspects: information preservation, generalization bound, and feature consistency. For more theoretical details, please see the supplementary materials. --- > **W2:** More ablation experiments In Table 4, we focus on exploring the individual effect of each component by controlling for single variables. More importantly, we aim to compare with random masking to validate the effectiveness of the proposed complementary masking strategy and support our theoretical analysis. Still, to conduct more rigorous experiments, we additionally ablated the complementary-mask-only setting and the effect of EMA and AdaIN under the random masking strategy. --- Table R1. Supplementary ablation experiment results for Table 4. The last four rows are newly added experimental data. 
| CL | CMask | RMask | EMA | AdaIN | mIoU |
| :--: | :---: | :---: | :--: | :---: | :--: |
| - | - | - | - | - | 53.7 |
| √ | - | - | √ | √ | 72.8 |
| √ | - | √ | √ | √ | 75.2 |
| √ | √ | - | √ | √ | 76.7 |
| √ | √ | - | - | √ | 76.1 |
| √ | √ | - | √ | - | 76.4 |
| √ | √ | - | - | - | 76.0 |
| √ | - | √ | √ | - | 75.0 |
| √ | - | √ | - | √ | 74.6 |
| √ | - | √ | - | - | 74.3 |

---
> **W3:** Revision of the formatting of tables

Thank you for pointing it out. We will reorganize the formatting of tables carefully in the revised paper.

---
**Reference**

[1] Yang L, Hoyer L, Weber M, et al. MICDrop: masking image and depth features via complementary dropout for domain-adaptive semantic segmentation[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 329-346.
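As a toy illustration of the information-preservation property defended in the W1 reply, a minimal sketch, assuming a simple per-pixel binary mask; the helper name `complementary_masks` is illustrative and not the MaskTwins implementation:

```python
import numpy as np

def complementary_masks(shape, ratio=0.5, seed=0):
    """Sample a binary mask M and its complement 1 - M (illustrative helper)."""
    rng = np.random.default_rng(seed)
    m = (rng.random(shape) < ratio).astype(np.float32)
    return m, 1.0 - m

# A toy "image": the two complementary masked views jointly cover every pixel,
# whereas two independently sampled random masks generally do not.
x = np.arange(16, dtype=np.float32).reshape(4, 4)
m, m_comp = complementary_masks(x.shape)
view_a, view_b = x * m, x * m_comp

# Information preservation: summing the complementary views recovers x exactly.
assert np.allclose(view_a + view_b, x)
```

This is only the pixel-level accounting behind the "information preservation" claim; the paper's theoretical analysis of complementary masking goes through sparse signal reconstruction.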
SERENA: A Unified Stochastic Recursive Variance Reduced Gradient Framework for Riemannian Non-Convex Optimization
Accept (poster)
Summary: The paper presents a variance reduction framework for Riemannian non-convex optimization. The proposed framework covers many existing variance reduction algorithms on manifolds, and the theoretical analysis matches the best known results in the Euclidean space. Numerical experiments are conducted on PCA, LRMC and RC problems and benchmarked against extensive baselines. The performance of the proposed algorithm seems promising. Claims And Evidence: It is a bit unclear what the extent of this paper's contributions is. The paper seems to propose a unified framework (based on Hybrid-SGD, a method already proposed in the Euclidean space). In this regard, the contribution seems limited, given that the analysis would readily follow from that paper. On the other hand, it seems the paper also establishes a recursive variance reduction framework, which seems novel, provided no such algorithms exist in the Euclidean space. Methods And Evaluation Criteria: The evaluation criteria are standard for the problem, and the datasets/problem instances are commonly used as benchmarks for Riemannian non-convex optimization. Theoretical Claims: I briefly went through the proof in the appendix. The proof seems to be correct. Experimental Designs Or Analyses: The experiment designs are standard. However, the analyses of the experiment results lack depth and insights. For example, how does the change in beta improve the convergence? Supplementary Material: I have briefly gone through the proofs and checked the additional experimental results on the sensitivity of beta and eta. Relation To Broader Scientific Literature: Established a unified framework, which completes the variance reduction methods for Riemannian optimization. Essential References Not Discussed: All the key related works have been cited. Other Strengths And Weaknesses: The contributions of this work relative to existing works in the Euclidean space are unclear. 
Other Comments Or Suggestions: (1) The presentation of the results can be improved; the constants in the theorems can be simplified. (2) In Assumption 2.2(2), the bounded Riemannian Hessian is not written out explicitly. (3) There are too many assumptions, and it is unclear when such assumptions are satisfied. Questions For Authors: Please clarify the contributions relative to the prior variance reduction methods in the Euclidean space. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments. You are primarily concerned with the contributions of this paper compared to previous variance reduction algorithms in Euclidean space, as well as whether the assumptions made in this paper can be satisfied. Additionally, you are interested in the impact of the parameter $\beta$ on convergence and the simplification of symbols in the theorem. Below, we will address each of these points in detail. 1. Compared to previous work in Euclidean space, our contributions are as follows. (1) Firstly, we propose the SRVRG estimator, which does not exist in Euclidean space. Secondly, we present a unified theoretical analysis for the first time. In non-convex settings, the theoretical analysis of various variance reduction algorithms shows significant differences even in Euclidean space, with some requiring the construction of Lyapunov functions. (2) Our Corollary 5.4 indicates that SRVRG exhibits superiority in parameter selection. For instance, in the finite-sum case where $ n < \varepsilon^{-2} $, when the other parameter settings adhere to those specified in the Corollary, it is sufficient for $ \beta $ and $ b'$ to satisfy ${\beta}/{b'} = n^{-1} $ to achieve optimal complexity. Given that $ \beta = m^{-1} $, it follows that $ mb' = n $. For other algorithms, although some can also achieve optimal complexity, the parameters in their analyses are fixed; for example, in R-AbaSRG, $ b' = m = n^{1/2}$. Our experimental results also validate the robustness of $\beta$. We will add a remark after Corollary 5.4 to clarify the following point, for example, "Corollary 5.4 indicates that if $ \beta $ and $ b'$ satisfy $\sqrt {\frac{\beta }{{b'}}} = \max \\{ {\frac{1}{{\sqrt n }},\varepsilon } \\}$ (finite-sum case) or $\sqrt {\frac{\beta }{{b'}}} = {\varepsilon }$ (online case), our proposed algorithm can achieve optimal complexity. This highlights the advantages of our algorithm in terms of parameter selection.". 2. 
Due to space limitations, we omitted the explanation of the assumptions outlined below. (1) Assumption 2.2 (1) is to avoid a bad case in Riemannian optimization. Namely, the sequence $\{x_k\}$ may converge to an optimum $x_*$, while the connecting retraction $\{R_{x_k}(\xi_k)\}$ does not converge, where $x_{k+1}=R_{x_k}(\xi_k)$. This assumption is also employed in the papers (Zhou et al., 2021, Han \& Gao, 2021b, Kasai et al., 2018). (2) In Assumption 2.2 (2), the boundedness of the Riemannian Hessian is unnecessary, and we will remove it. Assumptions 2.2 (1) to (3) clearly hold for compact manifolds, including the sphere and the (compact) Stiefel and Grassmann manifolds. For non-compact manifolds, like SPD matrices, the assumptions hold by choosing a sufficiently small neighbourhood $\mathcal{X}$. (3) Assumption 2.2 (4) generalizes the notion of smoothness on the Euclidean space to Riemannian manifolds and is satisfied if $\frac{d^2 f(R_x(t \xi))}{d t^2} \leq L$ for all $x \in \mathcal{X}, \xi \in T_x \mathcal{M}$ with $\|\xi\|=1$. Assumptions 2.2 (5) and (6) are necessary due to our reliance on vector transport to approximate parallel transport. These two assumptions can be derived by requiring the vector transport $\mathcal{T}$ to be isometric and satisfy $\| \mathcal{T}_{x}^y u - \mathrm{D}R_x(\xi)[u] \| \leq c_0 \|\xi\| \|u\|$, where $\mathrm{D}R_x(\xi)[u]=\frac{d}{d t} R_x(\xi+t u)|_{t=0}$ is the differentiated retraction. Assumption 2.2 (6) is ensured by Taylor approximation in a compact set $\mathcal{X}$. (4) Assumption 2.3 (1) ensures that the Riemannian distance can be expressed in terms of the inverse exponential map. Assumption 2.3 (2) then relates the exponential map with the retraction. Indeed, we have $\left\|R_x^{-1}(y)\right\| \leq \mu\left\|\operatorname{Exp}_x^{-1}(y)\right\|$ and $\left\|\operatorname{Exp}_x^{-1}(y)\right\| \leq \nu\left\|R_x^{-1}(y)\right\|$. This assumption remains valid when $\mathcal{X}$ is sufficiently small. 
These assumptions are also standard, as in (Sato et al., 2019, Han \& Gao, 2021b). 3. The experimental results in Appendix C.3 indicate that the algorithm is robust to $\beta$. Theoretically, Corollary 5.4 indicates that setting $\sqrt{\beta/b'} = \varepsilon$ can ensure the optimal complexity of the algorithm, which means that $\beta$ and $b'$ can balance each other, and when a larger $b'$ is chosen, the algorithm will be more robust to $\beta$. 4. We will simplify the symbols in the theorem, such as $\mathcal{L} = \tilde{L}^2 \mu^2 \nu^2$. --- Rebuttal Comment 1.1: Comment: I thank the authors for detailed responses. Most of my concerns have been addressed and I would like to increase my rating to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback regarding our response. We are delighted to learn that you feel our reply effectively addressed your concerns. We sincerely appreciate your support and encouragement.
Summary: The paper proposes the combination of SVRG and SARAH for stochastic variance reduced non-convex optimization, and extends it to the Riemannian setting. Combinations of Riemannian stochastic methods with or without variance reduction, existing or proposed in this paper, can all be subsumed into a unified update with mix parameter beta, which was proved to converge at a rate matching the existing IFO complexity lower bound in the finite-sum or online setting. Experimental results are provided to demonstrate the slight advantage of the proposed Riemannian methods over existing ones. Claims And Evidence: I feel that the motivation for combining SVRG and SARAH for stochastic variance reduction is missing. Given a lot of existing combinations, why does the proposed combination stand out? Although experimental results demonstrate the proposed method converges faster, it is only slightly faster, and only as measured in terms of IFO. Since the update is more complicated than existing ones, I guess this method doesn't work well in practice, e.g., in terms of running time. Thus, I feel the contribution mainly lies in the theoretical investigation of the combination of SVRG and SARAH, and a unified perspective that integrates all existing combinations. Methods And Evaluation Criteria: Evaluation of the methods looks reasonable. Theoretical Claims: I didn't check proofs. Experimental Designs Or Analyses: yes. My concern on experimental design is that there are so many baseline methods. It's unclear what is a good way to set their hyper-parameters in order for a fair comparison with the proposed method. For example, in the first paragraph of Section 6, it says "We selected the same parameters for R-SRVRG and R-SVRRM. Similarly, we chose identical parameters for R-SRM and R-Hybrid-SGD for comparison". But why? Also, as I mentioned previously, it might be a totally different case if the x-axis were running time. 
Supplementary Material: No Relation To Broader Scientific Literature: The paper proposes a new combination of R-SVRG and R-SARAH, and further provides a unified perspective on Riemannian stochastic variance reduced methods. Essential References Not Discussed: Literature work seems complete. Other Strengths And Weaknesses: NULL Other Comments Or Suggestions: In Section 2, about the exponential map definition, the derivative of r(t) should be evaluated at 0. The first condition of vector transport seems incorrect. Also, why two notations for it, T_x^z and T_{\xi}(y)? Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. 1. We will increase the discussion on the motivation behind the proposed SRVRG algorithm. The motivation for proposing SRVRG is that when $u_k$ is an SVRG-type estimator, the parameter $\beta$ can be larger, which allows the variance of $v_k$ to decrease more rapidly (see the formula in the right column on lines 168-170 of the paper). Our theory also demonstrates that the range of $\beta$ is relatively wide. Corollary 5.4 indicates that setting $\sqrt{\beta/b'} = \varepsilon$ or $\sqrt{\beta/b'} = n^{-1/2}$ can ensure the optimal complexity of R-SRVRG, which means that $\beta$ and $b'$ can balance each other. In contrast, Corollary 5.10 indicates that the R-SRM algorithm requires $\beta = K^{-2/3} = \mathcal{O}(\varepsilon^{2})$ to achieve optimal complexity. This is the reason why our algorithms perform better. Experimental results also validate the robustness of $\beta$. 2. Although the updates in our proposed algorithm are somewhat more complex, the average time for a single IFO call is 0.634s, which is 3.65 times the average time for a Riemannian update and 4.7 times the average time for a vector transport. This indicates that a significant portion of the algorithm's runtime is consumed by IFO calls, especially when a larger batch size $b'$ is selected in the inner loop. Our algorithm allows for a trade-off between the inner loop size $m$ and the batch size $b'$ (because in Corollary 5.4, $m = \beta^{-1}$), meaning that for the same $m$, our algorithm can support smaller batch sizes. Therefore, even when the x-axis represents running time, our algorithm still performs well. The comparison of algorithm performance with time on the x-axis demonstrates that our algorithm is comparable to the best-performing algorithms. See https://anonymous.4open.science/r/SERENA-RiemanOptimization/. 3. For the parameters of the other algorithms, we adopted most of the settings from (Han \& Gao, 2021b). 
Additionally, we performed parameter optimization for certain algorithms, such as R-PAGE. The reason that algorithms R-SRVRG and R-SVRRM, as well as R-SRM and R-Hybrid-SGD, are selected with the same parameters is that algorithm R-SRVRG can cover R-SVRRM, and similarly, algorithm R-Hybrid-SGD can cover R-SRM. 4. Thank you very much for other comments or suggestions. The final part of the definition of the exponential mapping will be modified to $\dot{\gamma}(0)=\frac{d}{d t} \gamma(t)|_{t=0}=u$. The first condition of vector transport will be modified to ''1) $\mathcal{T}$ has an associated retraction $R$, i.e., for $x \in \mathcal{M}$ and $\iota, \xi \in \mathrm{T}\_x \mathcal{M}, \mathcal{T}\_\xi(\iota)$ is a tangent vector at $R_x(\iota)$'' with the symbol $y$ is changed to $\iota$.
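To make the estimator structure discussed in this rebuttal concrete, here is a minimal Euclidean sketch, assuming the convex-combination form $v_k = \beta u_k + (1-\beta) w_k$ with an SVRG-type $u_k$ and a SARAH-type $w_k$ (the flat-space analogue only: retraction and vector transport are omitted, and the toy least-squares problem, names, and parameter values are all illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_batch(x, idx):
    """Mini-batch gradient of f(x) = (1/(2n)) * ||A x - b||^2."""
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

def full_grad(x):
    return A.T @ (A @ x - b) / n

# One outer loop of a Euclidean analogue of the SRVRG estimator:
# v = beta * u + (1 - beta) * w, with u SVRG-type and w SARAH-type.
m, b_prime, beta, eta = 50, 10, 1.0 / 50, 0.1
x_ref = rng.standard_normal(d)    # snapshot point (x_tilde)
g_ref = full_grad(x_ref)          # full gradient at the snapshot
x_prev, x = x_ref.copy(), x_ref - eta * g_ref
v = g_ref
for _ in range(m):
    idx_i = rng.integers(0, n, size=b_prime)
    idx_j = rng.integers(0, n, size=b_prime)
    u = grad_batch(x, idx_i) - grad_batch(x_ref, idx_i) + g_ref   # SVRG-type
    w = grad_batch(x, idx_j) - grad_batch(x_prev, idx_j) + v      # SARAH-type
    v = beta * u + (1 - beta) * w
    x_prev, x = x, x - eta * v

# Variance reduction lets the iterates make stable progress on this toy problem.
assert np.linalg.norm(full_grad(x)) < 0.5 * np.linalg.norm(full_grad(x_ref))
```

The sketch only illustrates how the two estimator types are mixed through $\beta$; the Riemannian algorithms additionally transport $v$ between tangent spaces.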
Summary: In this paper, the authors provide a unified framework for several Stochastic Recursive Variance Reduced Gradient algorithms. They first propose a new algorithm that integrates recursive momentum with variance reduction techniques, called Stochastic Recursive Variance Reduced Gradient (SRVRG), and extended it to Riemannian manifolds. Next, they propose a unified framework (SERENA), which includes SRVRG and extends to other variance reduction methods. Via studying their unified formulation, they derive Incremental First-order Oracle (IFO) complexity of finite-sum problems and online problems. These complexities are shown to match the theoretical lower bound for stochastic non-convex optimization. Claims And Evidence: Claims made in the submission are well supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed framework makes sense as it unifies/generalizes a range of previous works. Theoretical Claims: I did not check the correctness of proofs for theoretical claims. Experimental Designs Or Analyses: I did not check the soundness/validity of the experimental designs. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper provides connections among several previous works, including SGD, SVRG-type, SARAH-type, Hybrid-SGD, STORM, SVRRM and the newly proposed SRVRG. (see Table 1 in the paper) Essential References Not Discussed: I am not aware of any essential references not discussed. Other Strengths And Weaknesses: Strengths: - SERENA unifies several variance reduction techniques, achieving optimal IFO complexities for both finite-sum and online settings, improving upper bound results compared to prior analysis. - The unified approach paves the way for further exploration of variance reduction techniques in manifold optimization. 
- The formulation of recursive gradient estimators adapted to the geometry of Riemannian manifolds can influence future algorithm design. Weaknesses: - While the unification and improvements are valuable, the core ideas are extensions of well-established variance reduction methods in Euclidean spaces. Other Comments Or Suggestions: I have no other comments or suggestions. Questions For Authors: I have no other questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging feedback and valuable comments. We introduce the SRVRG estimator, which is proposed for the first time in both Euclidean space and Riemannian space. Numerical experiments illustrate the superiority of the R-SRVRG algorithm. Furthermore, we present a unified framework for Riemannian stochastic variance reduction methods and provide a unified theoretical analysis by giving an upper bound on the variance of the Riemannian stochastic estimators. Our proposed algorithm can achieve optimal complexity while also offering a wide range of parameter choices. For example, when other parameter settings are as outlined in the Corollary 5.4, $\beta$ and $b'$ only need to satisfy $\sqrt {\frac{\beta }{{b'}}} = \max \{ {\frac{1}{{\sqrt n }},\varepsilon } \}$ (finite-sum case) or $\sqrt {\frac{\beta }{{b'}}} = {\varepsilon }$ (online case) for the R-SRVRG algorithm to achieve optimal complexity. Other methods often directly specify parameters in order to achieve optimal complexity, such as in the analysis of R-AbaSRG for online case, $b' = \varepsilon^{-1}$. Furthermore, experimental results demonstrate robustness of parameters.
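The parameter-balance condition restated in this rebuttal can be sanity-checked numerically. A quick illustrative check of $\sqrt{\beta/b'} = \frac{1}{\sqrt{n}}$ (finite-sum case with $n < \varepsilon^{-2}$ and $\beta = m^{-1}$, i.e. $mb' = n$); the specific values of $n$, $m$, and $b'$ are arbitrary:

```python
import math

n = 10_000  # finite-sum size, with n < eps**-2 assumed

# Several (m, b') pairs satisfying m * b' = n, i.e. sqrt(beta / b') = 1 / sqrt(n)
# with beta = 1 / m; per Corollary 5.4, all of them meet the optimality condition.
for m, b_prime in [(10_000, 1), (1_000, 10), (100, 100)]:
    beta = 1.0 / m
    assert m * b_prime == n
    assert math.isclose(math.sqrt(beta / b_prime), 1.0 / math.sqrt(n))
```

This is exactly the flexibility the rebuttal contrasts with R-AbaSRG's fixed choice $b' = m = n^{1/2}$, which is just one point on this curve.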
Summary: This paper proposes a unified stochastic recursive variance reduced gradient framework for Riemannian non-convex optimization. Claims And Evidence: The algorithm and its analysis are presented with derivations and proofs, but clarification on the assumptions and the construction of the variance reduced gradient is needed. Methods And Evaluation Criteria: The applicability of the proposed methods is fine. But it lacks a comparison with SOTA Riemannian natural gradient methods; see Hu, J., Ao, R., So, A.M.C., Yang, M. and Wen, Z., 2024. Riemannian natural gradient methods. SIAM Journal on Scientific Computing, 46(1), pp.A204-A231. Theoretical Claims: The proofs seem correct. Experimental Designs Or Analyses: As commented earlier, it lacks a comparison with Riemannian natural gradient methods. Supplementary Material: Have checked the proofs and additional numerical experiments. Relation To Broader Scientific Literature: The paper proposes a unified variance-reduced gradient estimator on manifolds by consolidating existing methods from both Euclidean and Riemannian optimization literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The optimal complexity for Riemannian variance-reduced methods, e.g., R-SRM (Han & Gao, 2021a), has already been established in prior work. This paper primarily unifies existing methods and follows standard proof techniques to recover the same complexity guarantees, offering limited new theoretical insights for the field. Other Comments Or Suggestions: - **Page 2, Line 57:** Typo – "euclidean" should be "Euclidean." - **Assumption 2.2(1):** The requirement that all iterates stay within a totally retractive neighborhood of $x_*$ seems restrictive. Can this assumption be relaxed? - **Assumption 2.2(2):** Is the assumption of a bounded Riemannian Hessian necessary for the analysis? - **Equation (3):** What does $\tilde{x}$ represent? Please clarify. 
- **Table 1:** Can you explain the meaning of each column for better clarity? - **Second Line After Equation (7):** Should "SGD" be "SG"? It seems like it should refer to "stochastic gradient" rather than "stochastic gradient descent." - **Algorithm 1:** - What are the sizes of $I_k$ and $J_k$? - How is $v_k^s$ computed? Note that Equation (7) provides the formula for $v_k$, not $v_k^s$. - **Paragraph After Algorithm 1:** - The connections between R-SRM, Riemannian SVRG, and R-PAGE with Algorithm 1 are not clearly explained. - Additionally, some of these methods follow a single-loop structure, while Algorithm 1 appears to be a two-loop framework. Please clarify these distinctions. - **Section 5:** The choice of \( S \) is missing in the convergence analysis. Please specify how it is determined. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thorough review and useful comments. Please find our responses below. 1. Methods And Evaluation Criteria: Thank you for your valuable feedback regarding the applicability of our proposed methods. First, allow us to explain why the RNGD method was not compared previously. (1) The RNGD algorithm utilizes an approximation of the Riemannian Hessian, making it a second-order method, while this paper primarily focuses on Riemannian stochastic variance reduction methods, which are first-order methods. (2) This paper provides a unified framework and theoretical analysis, which cannot encompass the RNGD algorithm. We compared our proposed R-SRVRG and R-SVRRM with RNGD on the LRMC and PCA problems, respectively. The results indicate that the performance of our algorithms is comparable to that of RNGD. In particular, when the x-axis represents running time, ours perform better in most experiments. See details in https://anonymous.4open.science/r/SERENA-RieOptimization We will replace the last sentence in the conclusion and add references. "Furthermore, Riemannian second-order methods have garnered increasing attention, such as the RNGD[1], R-SQN-VR[2], and R-SVRC[3] algorithms. We will explore the integration of this framework with second-order information to enhance the convergence rate.". 2. Other Strengths And Weaknesses: Although the optimal complexity of the Riemannian VR method has been established in previous work, R-SRVRG and R-SVRRM show better performance. Theoretically, we first provide a unified analytical framework, as the analyses of different algorithms previously varied significantly. Additionally, there are new theoretical insights; for instance, when the other parameter settings adhere to those specified in the Corollary 5.4 in the finite-sum case, our algorithms can guarantee optimal complexity as long as $mb'= \min\\{n,\varepsilon^{-2}\\}$ is satisfied, while R-AbaSRG needs $ b' = m = n^{1/2}$. 
This indicates that our algorithms exhibit superiority in parameter selection, i.e., robustness and a broader range. 3. Other Comments Or Suggestions: (1) We will correct the typo. (2) Assumption 2.2(1) follows the papers (Zhou et al., 2021, Han & Gao, 2021b, Kasai et al., 2018), and it is difficult to relax. In Assumption 2.2(2), the boundedness of the Riemannian Hessian is unnecessary, and we will remove it. For more details on the assumptions, see 2 in our rebuttal to Reviewer PqwF. (3) We will change $\tilde x$ to $\tilde x_{s-1}$ and provide the necessary explanations as follows: "$\tilde x_{s-1}$ represents the point at which the true gradient is calculated in the outer loop of the SVRG algorithm". For Equations (3) and (7), we will add the superscript $s$ and indicate that it refers to the $s$-th outer loop. In the second line after Equation (7), it is "SG". (4) We will add the caption for Table 1 as follows: Given $\beta$, the second and third columns represent the types of stochastic gradient estimators for $u$ and $w$, respectively, while the last column indicates the corresponding algorithms. (5) In Algorithm 1, $|\mathcal{I}_k| = |\mathcal{J}_k| = b'$; we will add " of size $b'$ " at the end of line 7. After Algorithm 1, we will add the following note: "For single-loop algorithms, such as R-Hybrid-SGD and R-SRM, as well as loopless algorithms like R-PAGE, $S=1$ in Algorithm 1. And we omit the superscript for these algorithms to simplify notation, as in Equations (8) and (10).". (6) R-SRM, R-SVRG, and R-PAGE can be classified as special cases of Algorithm 1. These three algorithms are highlighted because they each exemplify distinct types: R-SRM has a single-loop structure with its stochastic estimate represented as a convex combination of SARAH-type and SG stochastic estimates. In contrast, R-SVRG employs a double-loop structure, while R-PAGE can be regarded as a loopless SARAH-type algorithm. 
We present these specific forms of the stochastic estimates to facilitate theoretical analysis. (7) For our R-SRVRG and R-SVRRM, $S = \frac{K}{m} = \Theta(\frac{1}{\sqrt{mb'}\varepsilon^2})$. The optimal complexity can be guaranteed as long as $mb' = \min\\{n,\varepsilon^{-2}\\}$ is satisfied in the finite-sum case; thus $S = \Theta(\max\\{ n^{-1/2}\varepsilon^{-2}, \frac{1}{\varepsilon} \\})$. Similarly, in the online case, $mb' = \varepsilon^{-2}$ and $S = \Theta(\varepsilon^{-1})$. For R-SVRG, $S = \Theta(\varepsilon^{-2}n^{-1/3})$ in the finite-sum case and $S = \Theta(\varepsilon^{-4/3})$ in the online case. We will include the choice of $S$ in the corollary. [1] Hu, J. et al. Riemannian natural gradient methods. SIAM J. Sci. Comput., 2024. [2] Kasai, H. et al. Riemannian stochastic quasi-Newton algorithm with variance reduction and its convergence analysis. In AISTATS, 2018. [3] Zhang, D. and Davanloo Tajbakhsh, S. Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization. J. Optim. Theory Appl., 2023. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thoughtful responses and the addition of numerical tests, which address my concerns. I am happy to raise my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your recognition of our responses and your positive feedback. We are pleased to hear that you believe the modifications we made and the additional numerical tests effectively address your concerns. Your support and encouragement are greatly appreciated.
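The $\Theta$-level relation for $S$ in item (7) of this rebuttal can be checked with a quick numeric sketch (constants ignored; the values of $n$ and $\varepsilon$ are arbitrary illustrations, not from the paper):

```python
import math

eps, n = 1e-2, 10**6  # illustrative target accuracy and finite-sum size

# Finite-sum case: take m * b' = min(n, eps**-2) and S = 1 / (sqrt(m*b') * eps**2).
mb = min(n, eps**-2)
S = 1.0 / (math.sqrt(mb) * eps**2)

# Up to constants, S should equal max(eps**-2 / sqrt(n), 1 / eps).
assert math.isclose(S, max(eps**-2 / math.sqrt(n), 1.0 / eps))
```

Here $\varepsilon^{-2} = 10^4 < n$, so the online-style branch $S \approx \varepsilon^{-1}$ is active; swapping to a smaller $n$ activates the $n^{-1/2}\varepsilon^{-2}$ branch instead.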
Summary: This paper proposes a generalization of the stochastic recursive variance reduced gradient framework (SERENA) that unifies various gradient-like objects that serve as the first order information for Riemannian optimization. Based on the appropriate formulation of the gradient estimator proposed in this framework, the existing variance reduced algorithms on Riemannian optimization and the new Riemannian adaptations of existing Euclidean variance reduced algorithms are subsumed under the same framework, which established a unified theoretical analysis on convergence. The convergence results of the newly proposed Riemannian adaptations, namely the Riemannian stochastic recursive variance reduced gradient algorithm (R-SRVRG), the Riemannian stochastic variance reduced momentum algorithm (R-SVRRM) and a Riemannian hybrid stochastic gradient descent algorithm (R-Hybrid-SGD), are established, among which the R-SRVRG achieves the optimal incremental first-order oracle (IFO) complexity. The numerical experiments demonstrate superior performances of the proposed R-SRVRG and R-SVRRM in terms of IFO complexity, i.e., convergence with fewer stochastic gradient computations. Claims And Evidence: The claims made in this submission are accurately stated and supported by convincing evidence and discussions. Methods And Evaluation Criteria: The proposed evaluation criteria, the use of the optimality gap/mean squared error along with IFO complexity as evaluation criteria, is appropriate, as one of the compelling contributions of this paper is a unified theoretical analysis on IFO complexity. However, I would also expect a complexity discussion on the Riemannian update, retraction or exponential map. While an analysis on IFO complexity alone is usually sufficient for a Euclidean stochastic first order algorithm with negligible vector update, the update computation in a Riemannian setting is almost always a nontrivial primitive. 
It is important to include the impact of the Riemannian update in the complexity analysis, especially when the ratios between the IFO call and update call are different. For example, R-SRVRG (Algorithm 2) makes $4b'$ calls to IFO computation and $1$ call to the Riemannian update in the inner iteration, while R-SVRRM (Algorithm 3) makes $3b'$ calls to IFO computation and $1$ call to the Riemannian update in the inner iteration. With that being said, a thorough complexity discussion that examines the profile of IFO computation and Riemannian update would stray from the main focus and scope of this paper, as the profile significantly varies depending on the optimization problem, the Riemannian manifold and the choice of retraction map. I would prefer a simple discussion that reports the ratios between the IFO calls and update calls in different algorithms. It is recommended to also include the elapsed time (on average) for each call as a more practical reference. Theoretical Claims: Yes, I checked and confirmed the correctness of the proofs in Appendix B. In particular, I read the proof of Theorem 5.3 and Corollary 5.4 in detail. Experimental Designs Or Analyses: The experimental designs and analyses are solid to me. Other than the missing discussion on the impact of the Riemannian update, which was mentioned in the ``Methods And Evaluation Criteria'' section, I am a bit puzzled by the experiment on the Riemannian centroid. To the best of my knowledge, the Riemannian centroid (RC) problem on the SPD manifold with the affine-invariant metric is a convex optimization problem, as the SPD manifold with the affine-invariant metric is totally geodesically convex. How does that fit in this paper, which focuses on Riemannian non-convex optimization? While convex optimization is certainly a more feasible setup than a non-convex one, does any experimental result pick out the implications brought by the convexity of the RC? 
Supplementary Material: Yes, I have reviewed the material on the proofs and the detailed experiment setup in the appendix. Relation To Broader Scientific Literature: This paper proposes a Riemannian framework (SERENA) for stochastic gradient estimators that adapts the existing Riemannian variance reduced algorithms including the Riemannian SRM, Riemannian SVRG and Riemannian PAGE. The R-SVRRM and R-SRVRG derived from SERENA not only outperform (in terms of IFO complexity) the above-mentioned variance reduced algorithms, but also outperform other existing Riemannian variance reduced algorithms that are not in SERENA. In addition, SERENA provides a unified theoretical analysis on IFO complexity for the algorithms it subsumes, which provides a different angle of interpretation for the existing algorithms. Essential References Not Discussed: I am not aware of any other missing/not-cited related work that is essential to this paper. Other Strengths And Weaknesses: This submission is well written and organized. Readers can expect a fluent reading experience with it. The idea of proposing a Riemannian adaptation of a framework that unifies existing algorithms and derives new algorithms is always significant. As many details in algorithmic designs vanish in the Euclidean setting, an appropriate adaptation that leads to one or more better Riemannian algorithms sheds some light on the interpretations of the vanished algorithmic designs, which could have more implications beyond the scope of stochastic optimization. Other Comments Or Suggestions: Here is a list of minor things and suggestions. 1. As the paper mostly (if not always) works with the retraction, the discussion on manifolds and exponential mapping in the Preliminaries section 2 is too restrictive and unnecessary. 
It is not a good idea to mention the global existence of a unique geodesic: ``If there exists a unique geodesic between any two points on $\mathcal{M},\cdots$'' This is quite rare in practice (although the SPD manifold is one of those examples), so mentioning it might give the incorrect impression that the paper is limited to such cases. 2. The IFO complexity is not clearly defined. For those readers that are less familiar with stochastic optimization, like me, it could be difficult to relate IFO complexity with the counts of calls to the stochastic gradient computation via Definition 2.1, especially when Def 2.1 says IFO complexity but only defines what an IFO is. A similar issue occurs to me in the Experiment section 6, which states that ``In Figure 1, the $x$-axis represents IFO(n)''. Meanwhile, the figures say IFO/n, and Def 2.1 says an IFO takes an index and a point to return a gradient. 3. The $[n]$ in Def 2.1 is not defined before. 4. In Section 3, check Equation (2) and the equation above it for a SARAH-type estimator. The $i_k$ and $j_k$ are not consistent with each other. Equation (2) should be the Riemannian adaptation of that SARAH-type estimator; I tend to think this is a typo that will cause confusion, or correct me if there are reasons for that. 5. Appendix B.2's title, ``The proof in Section 4''; I believe it should be Section 5. Questions For Authors: I do not have a question for the authors that would likely change my evaluation of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging feedback and valuable comments and suggestions. We will respond to each point below. 1. Methods And Evaluation Criteria: Thank you for your insightful comments regarding the evaluation criteria and complexity analysis in our paper. As you mentioned, R-SRVRG makes $4b'$ calls to IFO computation and 1 call to the Riemannian update in the inner iteration, while R-SVRRM makes $3b'$ calls to IFO computation and 1 call to the Riemannian update in the inner iteration. In fact, for the R-SRVRG and R-SVRRM algorithms, the value of the parameter $b'$ is closely related to $m$ (because Corollary 5.4 indicates that setting $\sqrt{\beta/b'} = \varepsilon$ or $\sqrt{\beta/b'} = n^{-1/2}$ can ensure the optimal complexity of R-SRVRG and R-SVRRM, and $\beta = m^{-1}$). When $m=n$ or $m=\varepsilon^{-2}$, $b'$ can be set to 1; the ratio $(b+ 4mb')/(m+1)= \Theta((n+\varepsilon^{-2})/m)$ between IFO calls and update calls is then a small constant $\mathcal{O}(1)$, and our algorithm requires $S(1+m) = \Theta(\varepsilon^{-3})$ or $\Theta(n^{-1/2}\varepsilon^{-2})$ Riemannian updates. If we set $m = b'$, only $\Theta(\varepsilon^{-2})$ Riemannian updates are needed, but the ratio becomes $\Theta(\varepsilon^{-1})$ or $\Theta(n^{1/2})$. From an experimental perspective, we recorded the time taken for 1 IFO call, 1 Riemannian update, and 1 vector transport across different problems and datasets. The average time for a single IFO call is 0.634s, which is 3.65 times the average time for a Riemannian update and 4.7 times the average time for a vector transport. This indicates that a significant portion of the algorithm's runtime is consumed by IFO calls, especially when a larger batch size is selected in the inner loop. Thus we can choose a smaller batch size $b'$. Furthermore, we performed a comparison of the performance of various algorithms with the x-axis representing runtime.
The results show that our proposed algorithm remains comparable to the best-performing algorithms; see https://anonymous.4open.science/r/SERENA-RiemanOptimization/. 2. Experimental Designs Or Analyses: In the experimental design, we selected the RC problem on the SPD manifold. As you mentioned, this problem is indeed a convex optimization problem. We chose it because prior works (Han \& Gao, 2021a; Han \& Gao, 2021b; Kasai et al., 2018) tested their algorithms on this problem; thus, to facilitate comparisons between algorithms, we opted for the same problem. 3. Other Comments Or Suggestions: (1) Thank you very much for your suggestion. We will reduce the discussion on manifolds and exponential mappings in subsequent versions. (2) We apologize for any confusion caused. Due to space limitations, we provided only a simplified definition of IFO complexity in the submission. We will include a complete version and further explanations in the revised version: the definition "For problem (1), an incremental first-order oracle (IFO) takes an index $i \in \\{1,\ldots,n\\}$ and a point $x$, and returns the pair $\left(f_i(x), \operatorname{grad} f_i(x) \in \mathrm{T}_x \mathcal{M}\right)$," and the explanation "The IFO complexity effectively captures the overall computational cost of a first-order Riemannian algorithm, as the evaluations of the objective function and gradient typically dominate the per-iteration computations." (3) We will modify $[n]$ to $\\{1,\ldots,n\\}$. (4) The first line of the formula above Equation (2) denotes the stochastic estimator for the Stochastic Recursive Momentum (STORM) algorithm, while the second line corresponds to the stochastic estimator for the Hybrid-SGD algorithm. The Hybrid-SGD estimator can be interpreted as a convex combination of the SARAH-type estimator and the stochastic gradient estimator. When $i_k = j_k$, Hybrid-SGD reduces to STORM.
And we propose a Riemannian extension of Hybrid-SGD, called R-Hybrid-SGD, as outlined in Equation (2). We will modify the formula above Equation (2) as follows: $v_k = \overbrace{(1 - \beta)\left(v_{k-1} - \nabla f_{i_k}(x_{k-1})\right) + \nabla f_{i_k}(x_k)}^{\text{STORM}} \approx \overbrace{(1 - \beta)(\underbrace{v_{k-1} + \nabla f_{i_k}(x_k) - \nabla f_{i_k}(x_{k-1})}_{\text{SARAH-type estimator}}) + \beta \nabla f_{j_k}(x_k)}^{\text{Hybrid-SGD}}.$ (5) Thank you for the reminder; we will correct it.
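As a quick numerical sanity check on the relation between the two estimators discussed above, here is a toy Python sketch (illustrative only, not the authors' code; the component gradients are made up): the Hybrid-SGD estimator coincides with the STORM estimator exactly when the two sampled indices agree.

```python
# Toy illustration: Hybrid-SGD
#   v_k = (1-beta)*(v_{k-1} + grad_i(x_k) - grad_i(x_{k-1})) + beta*grad_j(x_k)
# reduces to STORM
#   v_k = (1-beta)*(v_{k-1} - grad_i(x_{k-1})) + grad_i(x_k)
# when the sampled indices coincide (i_k == j_k).

def grad(i, x):
    """Gradient of a made-up component f_i(x) = 0.5 * (i + 1) * (x - i)**2."""
    return (i + 1) * (x - i)

def storm(v_prev, x_prev, x, i, beta):
    return (1 - beta) * (v_prev - grad(i, x_prev)) + grad(i, x)

def hybrid_sgd(v_prev, x_prev, x, i, j, beta):
    sarah_part = v_prev + grad(i, x) - grad(i, x_prev)  # SARAH-type estimator
    return (1 - beta) * sarah_part + beta * grad(j, x)

beta, v_prev, x_prev, x = 0.1, 0.5, 1.0, 0.8
same = hybrid_sgd(v_prev, x_prev, x, i=2, j=2, beta=beta)
assert abs(same - storm(v_prev, x_prev, x, i=2, beta=beta)) < 1e-12
```

With distinct indices ($i_k \neq j_k$) the two estimators differ by $\beta(\nabla f_{j_k}(x_k) - \nabla f_{i_k}(x_k))$, which is why Hybrid-SGD is the strictly more general form.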
Nesterov Method for Asynchronous Pipeline Parallel Optimization
Accept (poster)
Summary: In this paper, the authors propose a Nesterov Accelerated Gradient algorithm for asynchronous pipeline parallel optimization. It uses Nesterov acceleration to reduce the negative impact of delay in asynchronous iterations. The authors first provide a convergence guarantee showing that Nesterov acceleration can help the asynchronous pipeline parallel algorithm achieve an $\mathcal{O}(1/T)$ convergence rate. Experiments on LLMs also validate the benefits of the proposed algorithm. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The reviewer has checked the theoretical results in this paper and has not found any fatal mistakes. However, one limitation of the theory is that the convergence analysis focuses on deterministic scenarios. The theoretical claims would be stronger if the authors provided an analysis of stochastic scenarios. Experimental Designs Or Analyses: The experiments have shown the benefits of the proposed algorithm. However, the reviewer suggests the authors present some results on the validation set so as to illustrate the value in practical applications. Moreover, it would be better for the authors to offer their core code. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths** 1. Good presentation and writing. 2. Detailed analysis of the insight behind the algorithm design. **Weaknesses** 1. There is a typo in Eq. (5). Please check and revise. 2. The convergence analysis is based on gradient descent without any stochastic noise. However, convergence of stochastic algorithms like SGD or Adam may be essential for LLMs due to their small batch sizes. Other Comments Or Suggestions: NA Questions For Authors: 1. Can the authors provide experimental results on a validation set? 2. Can the authors provide a theoretical analysis based on stochastic algorithms?
The reviewer may consider raising the score on overall recommendation according to the response to the weaknesses and questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for encouraging comments and address the specific points below. ## Results on the validation set We have **already provided the results on the validation set** in Table 1 (perplexity scores) and Figs. 3 and 9 (validation loss curves). Additionally, as requested by Reviewer tQWW, we have included some generated text as qualitative results in the response to tQWW. ## Theoretical analysis in the stochastic setting **We followed the standard practice in the machine learning literature of providing theoretical analysis in a simplified setting and validating it using realistic large-scale experiments**. Many seminal works can be pointed out as examples (Lyu & Li 2019, Soudry et al. 2018, Zhu et al. 2021). Furthermore, there have been instances where convergence analyses derived for non-stochastic settings were validated by experiments in stochastic settings (Wu et al. 2023, Arora et al. 2018). We believe ours is one such example. Rigorous theoretical analysis matching the practical setting of large-scale language models requires significant research effort, and we believe it would warrant a standalone publication. We emphasize that our empirical results, including the results with the 1B parameter model and the SWARM experiments, validate the effectiveness and practical usefulness of our method beyond doubt. - Lyu, Kaifeng, and Jian Li. "Gradient descent maximizes the margin of homogeneous neural networks." ICLR (2019). - Soudry, Daniel, et al. "The implicit bias of gradient descent on separable data." JMLR (2018). - Zhu, Zhihui, et al. "A geometric analysis of neural collapse with unconstrained features." NeurIPS (2021). - Wu, Yongtao, et al. "On the convergence of encoder-only shallow transformers." NeurIPS (2023). - Arora, Sanjeev, et al. "A convergence analysis of gradient descent for deep linear neural networks." ICLR (2018). ## Code As requested by the reviewer, we provide a minimal version of our code [here].
Upon publication, we intend to release the code for reproducibility and to enable future improvements. [here]: https://drive.google.com/file/d/10Alm3sgwSic3Hi3w7gbJrme5JgUcsP9j/view?usp=sharing ## Typo in Eq. 5 Thanks for pointing out the extra closing parenthesis. We will correct it. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The theoretical results in this paper are generally valuable and the reviewer recommends acceptance. However, due to the lack of convergence analysis in the stochastic scenario, the reviewer will maintain the rating. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response and recommending acceptance. We appreciate it! We agree with the reviewer that convergence analysis in the stochastic setting is an interesting research direction, which we intend to explore as future work.
Summary: This paper tells the story of overcoming a major challenge in training huge neural networks with pipeline parallelism. When models are split into stages running on different devices, asynchronous updates keep the pipeline full but introduce the problem of stale gradients, i.e., updates based on outdated information. To fix this, the authors reimagine Nesterov’s Accelerated Gradient method: they modify its look-ahead step to serve as a delay correction, effectively aligning the updates with the current gradient direction despite the delay. Their theoretical analysis shows that this approach converges reliably, and experiments on large-scale language models, even one with 1B parameters, demonstrate that it outperforms existing asynchronous methods and even beats synchronous training. Claims And Evidence: The claims are backed by both empirical and theoretical results. Methods And Evaluation Criteria: The evaluation includes several datasets for LLM evaluation. Theoretical Claims: The proposed algorithm is backed by a theoretical convergence proof. Experimental Designs Or Analyses: The experiments are conducted on several datasets and the proposed algorithm is compared to several previous works. The results include a real-world scenario and convergence speed, in terms of both time and iterations. However, the proposed modification to NAG is not well justified empirically, and the paper does not provide ablations for this modification. Supplementary Material: Section B. Relation To Broader Scientific Literature: This work extends the broader scientific literature on distributed optimization and parallel training by addressing the longstanding challenge of gradient staleness. Building on prior methods that either forecast gradients or use weight stashing and learning rate adjustments to mitigate delays, the paper uniquely repurposes the look-ahead step of Nesterov’s Accelerated Gradient method to serve as an intrinsic delay corrector.
In doing so, it offers a theoretically sound and practically effective solution that outperforms earlier approaches in large-scale, asynchronous pipeline parallel training. Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: * **Q.1.** How come the proposed asynchronous algorithm outperforms the synchronous one (GPipe) in terms of convergence speed in number of iterations? * **Q.2.** The modification to NAG can be seen as a dependency between the learning rate and the momentum coefficient, which might not hold in different settings. Would it make sense to validate this in a single-worker setting? If not, then at how many workers does this modification start making sense? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback and address the specific concerns below. ## Ablations for NAG We have **already provided the ablation experiments for our modifications in Sec. 5.6**. Specifically, the effect of the momentum coefficient for NAG is reported in Fig. 6 and the effect of the gradient discounting term (our main modification to NAG in Eq. 9) is illustrated in Fig. 7. We believe we have clearly justified our modifications to NAG; however, if the reviewer believes any specific experiment is missing, we are happy to include it. ## How can an asynchronous method be better than GPipe? This is an intriguing question and we believe further study is required to completely understand the effects of asynchronous updates. Nevertheless, we answer this from a practical point of view below. There are two points to consider when comparing two algorithms: 1) the frequency of the weight updates, and 2) the effectiveness of the weight updates. Typically, asynchronous methods perform more frequent weight updates than synchronous methods (such as GPipe) when processing the **same amount of data**. Specifically, GPipe accumulates gradients for a particular number of steps (4 in our experiments) to increase the pipeline utilization (Huang et al., 2019) and then updates the weights, whereas asynchronous methods update the weights for every microbatch. Since more frequent updates (if the updates are beneficial) translate into a faster reduction in loss, an asynchronous method can be better than GPipe. Here, the key point is “if the updates are beneficial”, as not all asynchronous methods are better than GPipe. In fact, Pipedream and Pipemare are significantly worse than GPipe despite doing 4 times more weight updates (Fig. 2). This takes us to the second point, the effectiveness of the updates.
To test this, we trained our base model (134M model on WikiText, same as in the paper) with 2 pipeline stages, updated the weights at every microbatch for GPipe, and compared against our method (same number of weight updates for both methods). Even in this case, **our method is better than GPipe**: the training loss at 50k iterations is **3.275 vs 3.323**. This confirms that *weight updates of our method are more effective than GPipe updates*, and overall, it is an effective asynchronous PP optimization method. ## At how many workers does this modification start making sense? Since we consider pipeline parallel optimization, a single-worker setting does not make sense, and at minimum, we require 2 pipeline stages. In the paper, we have tested our method by varying the number of pipeline stages from 4 to 24 (Fig. 5). We have now tested the same base model (134M model on WikiText) with 2 pipeline stages, and even in this case **our method outperforms GPipe in terms of both training and validation losses**: |Method|Train Loss| Validation Loss| |-|-|-| |GPipe|3.374|3.410| |Ours|**3.275**|**3.318**| This further confirms the effectiveness of our method even when the number of pipeline stages is small. ### NAG modification might not hold in different settings We agree with the intuition that our gradient discounting term can be seen as an interplay between the learning rate and the momentum coefficient. However, we are unclear about the reviewer’s point that “the modification might not hold in different settings”. In case our response above misses any setting that the reviewer is interested in, we can try to clarify further if the reviewer can be more specific.
However, I accept your comment about the more frequent updates and therefore I will update my score. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response and updating the score. We appreciate it! This is one of the first works to test asynchronous PP methods in large-scale settings, and as you mentioned, the behaviour of asynchronous methods is not well understood. It would be an interesting research direction to theoretically/empirically understand how/when an asynchronous method can be better than synchronous methods.
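As background for the look-ahead step at the center of this thread, a minimal sketch of standard (non-delayed) NAG on a one-dimensional quadratic is given below. This is textbook NAG for illustration only, not the paper's delay-corrected update, and all names are ours.

```python
# Generic Nesterov Accelerated Gradient (NAG) sketch on f(x) = 0.5 * x**2.
# Background illustration only; NOT the paper's delay-corrected variant.

def nag(grad, x0, lr=0.1, momentum=0.9, steps=200):
    x, v = x0, 0.0
    for _ in range(steps):
        lookahead = x + momentum * v        # the "look-ahead" point
        v = momentum * v - lr * grad(lookahead)
        x = x + v
    return x

x_final = nag(lambda x: x, x0=5.0)          # gradient of 0.5 * x**2 is x
assert abs(x_final) < 1e-6                  # converges to the minimizer 0
```

The key structural feature is that the gradient is evaluated at the extrapolated point `x + momentum * v` rather than at `x`; the discussion above concerns repurposing exactly this extrapolation to compensate for pipeline delay.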
Summary: The gradient staleness caused by existing asynchronous pipeline parallelism mechanisms hinders their practical usage in contrast to synchronous pipeline parallelism. This paper aims to tackle stale gradients in asynchronous pipeline parallelism via Nesterov accelerated gradients. The paper makes the following contributions: (1) it demonstrates that asynchronous pipeline parallelism can outperform the synchronous alternative on large language model workloads with practical setups; and (2) it provides both theoretical and empirical justification of the proposed methods. Claims And Evidence: While the paper has shown great accuracy improvement from the proposed methods, the performance evaluation is limited (Figure 5 right). It is thus obscure how the proposed methods will work in practice. The proposed methods should also be compared against other synchronous methods (e.g. PipeDream) in terms of performance to claim they are better than other synchronous methods. Methods And Evaluation Criteria: The paper is evaluated with commonly used language models, and the experimental setups align with the problem settings by comparing the training loss curves against other baselines. Theoretical Claims: I do not find any problems in the theoretical analysis in the paper, but my expertise is limited. Experimental Designs Or Analyses: While it is great to see the training curves of the proposed methods, it is obscure how the method could affect the generated outputs in practice. I would recommend that the authors add the inference results (e.g. the generated text from language models) of the models trained with different baselines. Supplementary Material: I have reviewed all parts of the supplementary material.
Relation To Broader Scientific Literature: The paper could also include a discussion of how the proposed methods could be applied to training with low-precision floating point numbers, since these are becoming more and more prevalent in training modern DNNs. Essential References Not Discussed: I think the paper has already discussed all essential related works. Other Strengths And Weaknesses: I find the evaluation of the paper robust since it has been carried out on multiple widely used datasets. However, it would also be interesting to see how the proposed methods perform on other types of networks (e.g. convolutional networks). Other Comments Or Suggestions: The paper's writing is in good shape. However, I would suggest that the authors consider moving the related work section (Section 4) to be the second-to-last section of the paper. Questions For Authors: * How scalable and sensitive is the proposed method with respect to multiple GPU servers? * How much speedup can the proposed method achieve compared to its synchronous alternative? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for encouraging comments and address the specific points below. ## Scalability and sensitivity to multiple GPU servers Our **SWARM experiments were conducted on multiple GPU servers** (24 to be exact) in GCP, confirming that our method works seamlessly in such scenarios (Fig. 8 and Appendix B.1). We will include this information in the main paper. Additionally, in the paper, we have tested the scalability of our method by increasing the parameter count (Fig. 3) and by increasing the number of pipeline stages (Fig. 5). In both cases, our method **shows superior scalability**. ## Speed-up compared to its synchronous alternative Asynchronous methods offer 100% pipeline utilization by construction. In contrast, the pipeline utilization of synchronous methods depends on the bubble size (Huang et al., 2019). For GPipe, the pipeline utilization is estimated as $\frac{N}{N+P-1}$ (Yang et al., 2021), where $N$ is the number of microbatches and $P$ is the number of pipeline stages. Typically, $N = P$ and therefore, the pipeline utilization of GPipe is about 50%. Based on this, **asynchronous methods would be twice as fast compared to GPipe** in a homogeneous environment with full GPU utilization. In a heterogeneous, bandwidth-constrained setup, the speed-up can be higher as asynchronous methods completely mask the communication bottleneck. ## Training time compared to synchronous methods As noted by the reviewer, we have **already provided the training time comparison in Fig. 5**, where it is clearly visible that *GPipe becomes exponentially slower* when increasing the number of stages compared to our method. The reviewer’s confusion might be due to misinterpreting Pipedream as a synchronous method. However, we would like to clarify that **Pipedream is an asynchronous method** and the *training time is the same for all asynchronous methods* including Ours, Pipedream, and Pipemare. The training curves for the 1B parameter model in Fig.
10 in the appendix illustrate this, and show that the synchronous **GPipe is about 1.5x slower than the asynchronous methods** in this case. Nevertheless, as mentioned in the paper, the absolute wall-clock times are not comparable between asynchronous methods (ours, Pipedream, and Pipemare) and the synchronous method (GPipe), due to differences in the underlying implementation: asynchronous methods are based on the six-year-old third-party Pipedream codebase, whereas GPipe is an official PyTorch implementation. ## Inference results We have **provided perplexity scores on the validation set in Table 1** as a quantitative measure of inference results. Additionally, the validation loss curves are provided in Fig. 9 and Fig. 3 (right). As requested by the reviewer, we have now provided some generation results for GPipe and our method below as qualitative results: |Prompt|GPipe Generation|Ours Generation| |-|-|-| |Cyclone|Iloʿ Trespass was a little slower organized than initially, but rapidly strengthened to become a hurricane … |Tropical Storm Bonnie (Harak 10) made landfall on the Outer Banks of North Carolina, dropping moderate rainfall … | |Largest country in the world|Hair bars at Festival Day Student Association events ...|France ...| |Wikipedia has a variety of articles in topics like|The regime constitutes a more moral dependence on Wikipedia than … |vernacular romances and classical history …| |Some of the popular sports in the world include|[Wurgar Canoe] (meaning football game) Wurgar Canoe, the city hall, … |Cricket and ice hockey (football, soccer, ice hockey, ice hockey, men's and women's hockey ) are the sports …| Note, this is the output of pretraining the 1B model on the WikiText dataset, which is relatively small and limited for training useful LLMs compared to larger datasets like C4 or FineWebText. Furthermore, post-training plays a major role in incorporating instruction-following capabilities.
Nevertheless, both methods generate plausible English sentences relevant to the prompt, and our method seems to be better for some prompts. Also, as seen in Figs. 2 and 9, the models are not yet saturated and can be trained further to improve the generation quality. ## Convolutional networks To test the generality of our method, we trained a Resnet50 model on the TinyImageNet image classification dataset with 8 pipeline stages. In short, the conclusions in the paper hold: |Method|Top-1 Validation| |-|-| |GPipe|55.70| |Pipedream|55.02| |Ours|**55.79**| This is a small network and dataset on which Pipedream yields similar results to GPipe even without any delay correction, highlighting that asynchrony is not a major pain point in these tasks. This observation aligns with previously reported results (Yang et al., 2021). ## Low-precision training We see no restrictions on using low-precision training with our method. We expect it to affect asynchronous and synchronous methods in the same way; however, experiments are needed to validate this. --- Rebuttal Comment 1.1: Comment: Thank you for your response and the clarification provided. After reviewing the details, I have decided to maintain my original scores, as they already accurately reflect my evaluation. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response and recommending acceptance. We appreciate it!
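The GPipe utilization estimate $\frac{N}{N+P-1}$ quoted earlier in this thread is easy to check numerically; a small illustrative Python sketch (the function name is ours, not from the paper):

```python
def gpipe_utilization(num_microbatches, num_stages):
    """Estimated pipeline utilization of GPipe: N / (N + P - 1)."""
    return num_microbatches / (num_microbatches + num_stages - 1)

# With N == P (the typical choice), utilization is P / (2P - 1), which
# approaches 50% from above as P grows; a fully utilized asynchronous
# pipeline would then be roughly 2x faster.
for p in (4, 8, 24):
    u = gpipe_utilization(p, p)
    assert 0.5 < u <= p / (2 * p - 1)
assert abs(gpipe_utilization(1000, 1000) - 0.5) < 1e-3
```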
Heavy-Tailed Linear Bandits: Huber Regression with One-Pass Update
Accept (poster)
Summary: This paper proposes a one-pass algorithm for stochastic linear bandits with heavy-tailed noise, reducing the per-round computational cost from $\mathcal{O}(t \log T)$ to $\mathcal{O}(1)$ using an online mirror descent framework. Unlike existing methods that require storing and processing all past data, the proposed approach updates using only the current round's data while achieving a near-optimal, variance-aware regret bound. The authors also derive the regret bound for the proposed algorithm, which is nearly optimal. Claims And Evidence: I read the technical sketch in the main paper but didn't look closely at the details in the appendix. I think the claims and theorems make sense to me. Methods And Evaluation Criteria: The experimental setting and evaluation criteria are aligned with the state-of-the-art evaluation framework in bandits. Theoretical Claims: I think the theoretical claims are correct. I will look to other reviewers' opinions for further reference. However, the current method cannot handle the case when $\nu_t$ is unknown, which weakens the theoretical contributions of this work. Experimental Designs Or Analyses: 1. It seems that the experiments only consider the case when the noise has fixed $\nu_t$, which may be too weak. It would be better if the authors could show that the algorithm yields robust performance under $\nu_t$ varying across time $t$. 2. It seems that several existing methods such as CRTM, CRMM and HeavyOFUL can yield competitive or even better empirical performance with less computational cost. This raises doubts about the advantage of the proposed approach. Supplementary Material: Yes, I took a look at the proofs of the main results in this work, but I didn't verify all the details, so I will defer to other reviewers' judgement for the technical evaluation.
Relation To Broader Scientific Literature: This paper integrates adaptive robust estimation (Huber regression), computationally efficient online learning (OMD), and variance-aware regret analysis into a single unified framework. As the authors point out in Section 5, the proposed methodology has potential in other problems such as the linear MDP problem and online adaptive control. Essential References Not Discussed: I think most of the notable references have been sufficiently discussed in this work. One minor suggestion: since I am not familiar with Huber regression and its applications, I needed to read some literature to follow this work. I suggest the authors add a little more detail on the derivation of the Huber loss and its current applications in bandit and reinforcement learning problems. For example, it would be better if the authors could explain some details of [Sun et al. 2020]. I notice the work [Kang & Kim 2023] also used the same Huber regression loss for heavy-tailed bandits, so the authors should highlight the similarities and differences between [Kang & Kim 2023] and this work, which is missing now. I also found [1] and [2] on Huber regression in bandits/RL, and the authors should discuss these two works in the revision: [1] studies the matrix linear contextual bandit with Huber regression for heavy-tailed rewards, which is an extension of SLB and obtains a similar regret bound. [2] considers Huber regression in distributed reinforcement learning, which also considers a similar framework for RL parameter adaptation. [1] Kang et al. Low-rank Matrix Bandits with Heavy-tailed Rewards [2] Malekzadeh et al. A Robust Quantile Huber Loss With Interpretable Parameter Adjustment In Distributional Reinforcement Learning Other Strengths And Weaknesses: Some typos: 1. line 39 right part: "cannot not". 2. line 231 left part: "both of" should be changed to "both". 3. line 443: "none of" instead of "none". 4.
The estimator in line 193, left part: $\hat\theta_{t+1}$ appears in both equations, which is a little confusing. Although I could eventually figure it out, it would be better to change $\hat\theta_{t+1}$ to something else in the first equation. 5. There are some non-English words in the submitted code in the supplementary files, which is not very professional and may violate ICML regulations. Other Comments Or Suggestions: I think this paper has its own merit. I will reconsider my rating on this work during the rebuttal phase, and potentially raise or lower my rating based on the discussion with the authors and other reviewers. Questions For Authors: 1. Can the approach generalize to the generalized linear bandit setting, or is it strictly limited to the linear case? 2. How sensitive is the proposed method to hyperparameter tuning (e.g., Huber threshold selection)? I do not see an ablation study on parameters or the hyperparameter settings in the manuscript. 3. I feel the regret curves for most algorithms in your experiments are linear instead of sublinear, which seems to contradict the theoretical results. Can you explain that? 4. In [Kang & Kim 2023] and [Kang et al., Low-rank Matrix Bandits with Heavy-tailed Rewards] (mentioned above), the Huber loss function does not contain the variance normalization factor $\sigma_s$. Is that because they assume the variance is fixed? Therefore, if the upper bound $\nu$ is known, can your Huber loss be free of the normalization factor as well? Code Of Conduct: Affirmed. Overall Recommendation: 3
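For readers unfamiliar with Huber regression, as this review mentions, the standard Huber loss is quadratic inside a threshold $\tau$ and linear outside, which is what caps the influence of heavy-tailed residuals. A minimal sketch (illustrative only; the paper's variant additionally applies the normalization factor $\sigma_s$ discussed in the rebuttal below):

```python
def huber(r, tau):
    """Standard Huber loss: quadratic for |r| <= tau, linear beyond."""
    if abs(r) <= tau:
        return 0.5 * r * r
    return tau * abs(r) - 0.5 * tau * tau

# Inside the threshold it matches the squared loss; outside it grows
# only linearly, bounding the influence of heavy-tailed residuals.
assert huber(0.5, 1.0) == 0.5 * 0.5 * 0.5          # quadratic region
assert huber(10.0, 1.0) == 1.0 * 10.0 - 0.5        # linear region
assert huber(-10.0, 1.0) == huber(10.0, 1.0)       # symmetric in r
```

Note the two pieces meet continuously at $|r| = \tau$ (both equal $\tau^2/2$ there), so the loss stays convex and differentiable, which is what makes it usable inside regression and OMD-style updates.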
Rebuttal 1: Rebuttal: Thanks for your helpful comments. Below, we address your main questions regarding the technical contributions, extensions to GLB, experiments, and related works. For other minor issues (e.g., typos), due to limited space, we will directly revise the paper according to your suggestions. --- **Q1.** "cannot handle the case when $\nu_t$ is unknown, which weakens the theoretical contributions" **A1.** Thanks for the comment. We'd like to point out that knowledge of the moment $\nu_t$ is assumed by both our work and previous works on heavy-tailed linear bandits, and this issue is orthogonal to our main contributions. - **Relying on the moment $\nu_t$**: Note that existing works including the current SOTA [Huang et al., 2024] require the moment information $\nu_t$, even though they can use an offline MLE estimator. Similarly, our work also faces this issue; however, in contrast, we can ensure one-pass efficiency. Handling unknown $\nu_t$ is indeed important and challenging for the community, and is left as future work to explore. - **Theoretical contribution**: Even with known $\nu_t$, designing a one-pass algorithm remains highly non-trivial due to several challenges, including using one-pass OMD to approximate the full-batch MLE, and handling the Huber loss and heavy-tailed noise. We kindly request the reviewer to refer to **A2 for Reviewer McYB** for more details. Thanks! --- **Q2.** "...generalize to the generalized linear bandit (GLB)..." **A2.** In fact, there is prior work that applies OMD to logistic bandits (an important class of GLB) with sub-Gaussian noise, while our paper uses OMD for linear bandits with heavy-tailed noise. This suggests a potential combination. Nonetheless, an evident challenge is how to design a suitable Huber-type regression loss tailored to the nonlinear link functions of GLB for the offline MLE estimator. We leave this as an exciting direction for future work. Thanks!
--- **Q3.** Experiments on several aspects ("time-varying $\nu_t$"; "the empirical advantage of HvtLB"; "parameter sensitivity"; "linear regret curves", etc.) **A3.** Thanks for those careful comments! To address your concerns, we have conducted additional experiments and comparisons, summarized at the anonymous URL https://default-anno-bucket.s3.us-west-1.amazonaws.com/HvtLB_revised_exp.pdf These include varying $\nu_t$ (Figure 2), the empirical advantage (Figure 4), and the linear-regret issue (Figure 5). Regarding parameter sensitivity, key parameters in the algorithm, such as the Huber threshold, are not chosen by tuning but are derived from the theoretical analysis; as a result, they can indeed be relatively sensitive. We will add these results in the revised version. --- **Q4.** More details on [Sun et al. 2020], and differences from [Kang & Kim] **A4.** Thanks for the helpful suggestion. In the revised version, we will include more details on [Sun et al. 2020]. As for [Kang & Kim], the similarity in titles may cause confusion. However, there are key differences in both setting and methods. - **Setting**: We focus on linear bandits with an **infinite arm set**, where the chosen context $X_t$ depends on the previous $\\{X_s\\}_{s<t}$. Our goal is to design a one-pass algorithm with optimal regret. In contrast, [Kang & Kim] studies linear contextual bandits with a **finite arm set**, where $X_t$ is **i.i.d. sampled**. Their method is based on offline MLE and is *not* online. - **Algorithm**: Even with a similar Huber regression, the designs differ fundamentally. We use OMD with a carefully designed recursive normalization factor, while [Kang & Kim] uses offline MLE with a forced exploration mechanism. Please refer to **A5** for more details. --- **Q5.** "In [Kang & Kim] and [Kang et al.], ... if the upper bound $\nu$ is known, then your Huber loss can be free of the normalization factor as well?" **A5.** Thanks for the question. Our Huber loss still requires this factor, even if $\nu$ is known.
The normalization factor $\sigma_s$ not only ensures variance-awareness but also guarantees that the denoised data lies within the quadratic region of the Huber loss. This ensures that the Hessian has a lower bound, which is crucial for the analysis. We achieve this by designing a recursive factor, where $\sigma_t$ depends on the last-round confidence bound $\beta_{t-1}$ (see line 5 in Algorithm 1). While these two works avoid this factor, they introduce additional algorithmic mechanisms and assumptions to achieve a similar goal: (i) [Kang & Kim] introduces a forced exploration strategy and Assumption 4 to ensure that the Hessian has a lower-bounded minimum eigenvalue; (ii) [Kang et al.] adopts the local adaptive majorize-minimization method and Assumption 3.1 to establish the Local Restricted Strong Convexity condition (Definition A.3), which guarantees a lower bound on the Hessian. --- We are grateful for your careful review! If our clarifications have adequately addressed your concerns, please consider updating your score. Thanks! --- Rebuttal Comment 1.1: Comment: Thank you for your responses. After reading the authors' rebuttal, I feel this work has its merit and is an interesting exploration of heavy-tailed linear bandits. Could you please include the arguments about the difficulty of being agnostic to $\epsilon$ and $\nu_t$, and the detailed comparison with Sun et al. 2020, Kang & Kim 2023, and Kang et al. 2024 (responses to my Q4 and Q5) in the next revision? I think that would be very helpful for readers like me who are not familiar with Huber regression. --- Reply to Comment 1.1.1: Comment: Thank you once again for your thoughtful review and for appreciating our work! We will certainly incorporate these discussions (**A4** and **A5**) into the revised version and improve the presentation based on your suggestions to make the paper clearer and easier to understand.
Moreover, we thank the reviewer for pointing out the related works [Kang et al., 2024] and [Malekzadeh et al., 2024], which were previously missing from our discussion. After a careful check, we found that [Kang et al., 2024] is highly relevant to our work in terms of both the problem setting and methodology. - Problem setting: both works consider heavy-tailed noise with a finite $(1+\epsilon)$-moment. The main difference lies in the reward model: our work considers vector-valued context and parameter, i.e., $X_t, \theta_* \in\mathbb{R}^d$, while theirs considers matrix-valued context and parameter, i.e., $X_t, \theta_* \in\mathbb{R}^{d_1\times d_2}$. - Methodology: both works adopt the Huber loss. Our approach adopts a one-pass update scheme, while theirs relies on offline MLE. As discussed in **A5**, both methods incorporate additional algorithmic designs aimed at the same goal: ensuring a lower-bounded Hessian. This suggests that our approach may have the potential to enable one-pass updates in their setting as well. We will include a discussion of these works in the revised version. Finally, we sincerely appreciate your recognition of our technical contributions. As we will continue to refine the presentation and further enrich the discussion of related works, we would be deeply grateful if you could kindly consider raising your score to further support our paper. Thank you again for your valuable feedback!
Summary: This work introduces a new algorithm, inspired by [1], which resolves the computational issues of the algorithms in [1]. Claims And Evidence: see Other Strengths And Weaknesses Methods And Evaluation Criteria: see Other Strengths And Weaknesses Theoretical Claims: see Other Strengths And Weaknesses Experimental Designs Or Analyses: see Other Strengths And Weaknesses Supplementary Material: see Other Strengths And Weaknesses Relation To Broader Scientific Literature: see Other Strengths And Weaknesses Essential References Not Discussed: see Other Strengths And Weaknesses Other Strengths And Weaknesses: Weaknesses: 1- In Eq. (7), replace the notation with $ z_t(\theta) $. 2- Compared to [1], the contribution of this work is incremental. 3- It would be beneficial to elaborate on the main theoretical contributions in relation to [1]. 4- In the experiments, for a Student's t-distribution with a degree of freedom greater than 2, the second moment is bounded. It would be valuable to explore the case where the degree of freedom satisfies $ 1 < df \leq 2 $. 5- The current experiments are insufficient, as they are conducted solely on synthetic datasets. 6- The present definition of heavy-tailed distributions does not encompass the sub-Gaussian case. Therefore, I suggest discussing only the heavy-tailed case in Section 4 to avoid potential confusion. ---- References: [1]: Huang, J., Zhong, H., Wang, L., and Yang, L. Tackling heavy-tailed rewards in reinforcement learning with function approximation: Minimax optimal and instance-dependent regret bounds. NIPS, 2024. Other Comments Or Suggestions: see Other Strengths And Weaknesses Questions For Authors: see Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your helpful suggestions. Below we will address your main concerns regarding the technical contributions and report additional experiments. --- **Q1.** about the contributions ("Compared to [1], the contribution of this work is incremental", "elaborate on the main theoretical contributions in relation to [1]") **A1.** Thank you for the comments. We believe there may have been a misunderstanding due to our insufficient emphasis on the technical contributions, particularly the challenges of applying OMD to heavy-tailed bandits. We will revise the paper to ensure this aspect is presented more clearly. Here, we'd like to take this opportunity to clarify these challenges and further emphasize our contributions. - **Challenge 1: Using one-pass OMD to approximate full-batch MLE.** The offline MLE estimator leverages the entire history of data, enabling its estimation error to be well-controlled. In contrast, OMD updates the one-pass estimator using only the current data, making it inherently challenging to ensure the final regret remains unaffected. While OMD is known for its effectiveness in minimizing regret, adapting this capability into a good statistical estimator is far from straightforward. In fact, as shown in Section 4.1, even under Sub-Gaussian noise (a significantly easier scenario), deploying OMD requires: (i) carefully designing the local norm to collaborate with self-normalized concentration, and (ii) properly using the negative term in the regret analysis to control the generalization gap. - **Challenge 2: Handling the Huber loss and heavy-tailed noise.** In the heavy-tailed setting, deploying OMD as a one-pass estimator is even more challenging. The primary difficulty arises from the curvature of the Huber loss, which is partially linear (the undesired region) and partially quadratic (the desired region), necessitating careful control of the threshold.
Furthermore, in the estimation error analysis, the bias introduced by the one-pass OMD update manifests as an additional stability term that can grow as $\mathcal{O}(\sqrt{T})$. It is critical to carefully design both the adaptive normalization factor of the Huber loss and the learning rate in OMD so that, together, they effectively manage the estimation error. We will revise the paper to ensure these challenges and our technical contributions are clearly elaborated. --- **Q2.** About experiment "explore the case where the degree of freedom satisfies $1<df\leq 2$" **A2.** Thank you for the suggestions. We have conducted additional experiments to address your concerns. We summarize all the details and results in the following anonymous URL: https://default-anno-bucket.s3.us-west-1.amazonaws.com/HvtLB_revised_exp.pdf Please refer to Figure 2 in this anonymous URL for details, where we use $df = 1.7$ for the Student's t-distribution. Additionally, we also tested other heavy-tailed noise distributions and ensured they satisfy the condition that there exists an $\epsilon \in (0, 1)$ such that the $(1+\epsilon)$-th moment is bounded; see Figure 1. We will add these results in the revised version. --- **Q3.** About experiment "insufficient, as they are conducted solely on synthetic datasets" **A3.** Thanks for your comments. We'd like to emphasize that our paper primarily focuses on the theoretical side, whose technical contributions we have elaborated in detail (please refer to **A1**). The synthetic data are well-suited for setting different configurations to support our theoretical findings. On the practical side, it might be more interesting to explore online MDPs and adaptive control, given the fundamental connection between linear bandits and decision-making problems. We believe that our proposed algorithm (with suitable modifications) could be highly useful.
We leave these extensions for future investigation and prefer not to devote more space to them in this paper, to avoid diverting readers from the main focus. --- **Q4.** "In Eq.(7), replace the notation with $z_t(\theta)$" **A4.** Thank you for pointing out the confusion; we will make the presentation clearer in the revised version. --- **Q5.** "heavy-tailed distributions does not encompass the sub-Gaussian case...I suggest discussing only the heavy-tailed case in Section 4 to avoid potential confusion" **A5.** Thank you for this important suggestion. We intended to use the sub-Gaussian case as a simplified example to demonstrate how to deploy the OMD estimator in linear bandits. However, your suggestion is very reasonable, and we will revise the paper to avoid this confusion. --- Thanks for your helpful suggestions. We will add more experiments and improve the paper's writing to more clearly emphasize the technical contributions. If our responses have properly addressed your concerns, please consider updating your score. Thanks! --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in the rebuttal. Conducting experiments on real datasets is crucial. Is it possible to apply another loss function in this scenario? For example, check tilted risk minimization [1]. [1]: Li, Tian, et al. "Tilted empirical risk minimization." arXiv preprint arXiv:2007.01162 (2020). --- Reply to Comment 1.1.1: Comment: Thanks for your appreciation of our rebuttal and suggestions regarding real datasets! We plan to incorporate more experiments on real datasets in the next version based on your suggestions. **Regarding the possibility of other loss functions.** Thank you for the insightful question and the provided reference. Both tilted empirical risk minimization (TERM) [1] and Huber loss aim to mitigate the impact of outlier losses during empirical risk minimization (ERM).
While Huber loss is specifically designed for the squared loss, TERM provides a more general weighting mechanism applicable to a wide range of loss functions. This suggests that TERM can effectively handle offline linear regression under heavy-tailed noise and has the potential to achieve sound theoretical guarantees, as evidenced by its empirical advantage over Huber loss (see Section 5.1 of [1]). ***However, in the online learning scenario, from our current understanding, TERM may lack sufficient flexibility to support one-pass updates as effectively as Huber loss.*** - Huber loss defines its penalty locally: the loss of each new sample can be independently determined to follow either a quadratic or linear regime. Adding a new data point does not affect the penalty of previous samples, making Huber loss particularly well-suited for the online scenario. - TERM, on the other hand, relies on a log-sum-exp aggregation over *all* sample losses, i.e., $\frac{1}{t} \log (\frac{1}{N} \sum_{i=1}^N e^{t f(x_i ; \theta)})$. This means that the weight of each sample is determined relative to the rest of the dataset. Adding a new sample alters the overall distribution of losses and changes the relative weighting across all data points. This global coupling implies recomputing the entire objective for each new sample, making TERM potentially unsuitable for one-pass updates in online learning, or at least making its extension to online learning non-trivial. We will include a discussion of this alternative loss function for handling outliers [1] in the revised version. If our clarifications have properly addressed your concerns, please consider updating your score. Thanks!
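To make the locality argument above concrete, here is a minimal numerical sketch (an illustration only, not the paper's estimator or the exact TERM optimizer): a Huber-type objective decomposes across samples, so a streaming total admits $\mathcal{O}(1)$ updates, while the TERM weights are coupled through the log-sum-exp and reshuffle whenever a sample is added.

```python
import numpy as np

def huber(r, tau=1.0):
    """Huber penalty: quadratic for |r| <= tau, linear beyond."""
    a = np.abs(r)
    return np.where(a <= tau, 0.5 * a**2, tau * a - 0.5 * tau**2)

def term_objective(losses, t=1.0):
    """Tilted aggregate from [1]: (1/t) * log( mean( exp(t * loss) ) )."""
    return np.log(np.mean(np.exp(t * losses))) / t

rng = np.random.default_rng(0)
losses = rng.exponential(size=200)

# Huber-type objectives decompose across samples: each new residual's
# penalty is determined locally, so a running total is updated in O(1).
running = 0.0
for l in losses:
    running += huber(l)
assert np.isclose(running, huber(losses).sum())

# TERM couples all samples: the effective weight of sample i is
# exp(t * l_i) / sum_j exp(t * l_j), so appending one new loss
# changes the weight of every other sample.
w = np.exp(losses) / np.exp(losses).sum()
w_new = np.exp(np.append(losses, 5.0))
w_new /= w_new.sum()
assert not np.allclose(w, w_new[:-1])
```

The two assertions pass: the Huber total matches its batch value sample by sample, while the TERM weights shift globally after a single insertion.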
Summary: This paper studies the heavy-tailed linear bandits problem. Doing OMD on the Huber loss, the authors yielded an algorithm with near-optimal $\tilde{\mathcal O}(d T^{1/(1+\epsilon)})$ regret which only needs $\mathcal O(1)$ computation per round (instead of doing a huge Huber regression as previous ones in the literature). Furthermore, the algorithm also attains a "variance-aware" property that automatically scales with the $(1+\epsilon)$-th moments. Claims And Evidence: The theorems are complemented with easy-to-follow sketches. Methods And Evaluation Criteria: The setup and notation are standard and consistent with previous works in the literature. Theoretical Claims: I didn't check the details of the proof, but the sketches are convincing enough. Experimental Designs Or Analyses: The numerical illustration is valid. Supplementary Material: No Relation To Broader Scientific Literature: Can be interesting for other problems with heavy-tailed distributions, but is mostly tailored towards heavy-tailed linear bandits. Essential References Not Discussed: Looks good. See questions for some not-so-related ones. Other Strengths And Weaknesses: Strengths: 1. The writing is pretty clear, introducing previous ideas on heavy-tailed linear bandits and highlighting the technical differences / innovations. 2. The analysis sketch is well-written, making it easy to follow. Weakness: 1. The algorithm requires knowledge of $\epsilon$ and $\nu_t$. I feel it's acceptable, as it is a common challenge for heavy-tailed linear bandits (see Q1 & Q2). Other Comments Or Suggestions: No Questions For Authors: 1. It looks like all previous heavy-tailed linear bandit algorithms require exact knowledge of $\epsilon$ and $\nu_t$. Recently this has been relaxed in the heavy-tailed multi-armed bandit literature: [1] for the adversarial case, [2] for the stochastic case, and [3] for both ("best-of-both-worlds").
I know the techniques here and there are substantially different, but it'd be great to still give some discussion. 2. Also, regarding the knowledge of $\nu_t$, there have been Empirical Bernstein inequalities which remove this for standard linear bandits [4, 5]. Is there any similar self-bounding concentration in the world of heavy-tailed distributions? If yes, could you briefly explain what's the main issue when doing it for the learning setup? [1] Jiatai Huang, Yan Dai, and Longbo Huang. "Adaptive best-of-both-worlds algorithm for heavy-tailed multi-armed bandits." ICML 2022. [2] Gianmarco Genalti, Lupo Marsigli, Nicola Gatti, and Alberto Maria Metelli. "(ε,𝑢)-Adaptive Regret Minimization in Heavy-Tailed Bandits." COLT 2024. [3] Yu Chen, Jiatai Huang, Yan Dai, and Longbo Huang. "uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs." ICLR 2025. [4] Zihan Zhang, Jiaqi Yang, Xiangyang Ji, and Simon S. Du. "Improved variance-aware confidence sets for linear bandits and linear mixture MDP." NeurIPS 2021. [5] Yeoneung Kim, Insoon Yang, and Kwang-Sung Jun. "Improved regret analysis for variance-adaptive linear bandits and horizon-free linear mixture MDPs." NeurIPS 2022. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the valuable feedback and appreciation of our work! We will revise the paper accordingly. Below, we answer your questions. --- **Q1.** More discussion of algorithms without knowledge of $\epsilon$ and $\nu_{t}$ in advance. **A1.** Thanks for the suggestion. Relaxing this assumption is indeed an important direction for future research on heavy-tailed bandits. The literature and techniques you mentioned for removing the need for known $\nu_t$ could potentially be adapted to the heavy-tailed setting. We will include a more detailed discussion of these works in the revised version. --- **Q2.** "Is there any similar self-bounding concentration in the world of heavy-tailed distributions? If yes, could you briefly explain what's the main issue when doing it for the learning setup?" **A2.** Thanks for the question. As far as we know, existing results based on empirical variance typically require either a bounded-norm or sub-Gaussian assumption, which is not applicable to heavy-tailed distributions: in the heavy-tailed setting (e.g., when only the $(1+\epsilon)$-moment is bounded), the empirical variance itself can be unbounded. Besides, after our submission, we noticed a recent paper published on arXiv that studies the heavy-tailed bandit setting (with $\epsilon = 1$) and claims to eliminate the need for knowing the variance using a peeling-based algorithm, under certain assumptions [1]. Although we have not studied the paper in detail yet, we believe its result is noteworthy. [1] Ye et al., Catoni Contextual Bandits are Robust to Heavy-tailed Rewards. --- We are grateful for your careful review! We will provide more discussion of the literature you suggested in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you. I like this paper overall. I encourage the authors to incorporate the above discussions into Related Works or Conclusions in the revision.
--- Reply to Comment 1.1.1: Comment: Thanks for your appreciation of our work! We will incorporate these discussions into the revised version.
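The point in A2 above, that the empirical variance itself can diverge when only a $(1+\epsilon)$-moment is bounded, can be illustrated numerically. A minimal sketch (an informal illustration, not part of the paper's experiments) uses a Student's t distribution with $df = 1.7$: the variance is infinite since $df \leq 2$, yet $\mathbb{E}|X|^{1+\epsilon}$ is finite whenever $1+\epsilon < df$, so the empirical $(1+\epsilon)$-moment stabilizes while the sample variance tends to keep drifting upward with the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
df, eps = 1.7, 0.5   # Var is infinite for df <= 2, but E|X|^{1+eps} < inf since 1+eps < df

moments, variances = [], []
for n in [10**3, 10**5, 10**7]:
    x = rng.standard_t(df, size=n)
    moments.append(np.mean(np.abs(x) ** (1 + eps)))  # stabilizes as n grows
    variances.append(x.var())                        # no stable limit exists
    print(f"n={n:>8}:  (1+eps)-moment {moments[-1]:8.3f}   sample variance {variances[-1]:12.1f}")
```

This is why variance-based self-bounding concentrations have nothing stable to latch onto in this regime, whereas moment-based quantities remain well-behaved.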
Summary: The paper proposes a method for linear bandits with heavy-tailed noise that efficiently estimates the bandit parameter with a single pass through the data and doesn't require processing the complete historical data. Claims And Evidence: It is claimed that the method can be adapted to more generalized decision-making scenarios, such as reinforcement learning, but no theoretical or experimental evidence is provided, even though a discussion is provided. Methods And Evaluation Criteria: The evaluation criteria are standard, as regret bound and time complexity are the common metrics in the literature. Theoretical Claims: The theoretical claims are built towards a single, important theorem: the regret bound. The proof sketch provided in Section 4 makes the proof process easier to follow. The major part of the theoretical work focuses on the estimation error of the bandit parameter, which is essential for a regret analysis. Experimental Designs Or Analyses: The experiment setup is very limited and insufficient to prove the claims of the authors. 1. One of the claims of the paper was that previous studies assumed a fixed action set, while the proposed method doesn't require such an assumption. But in the experiments, the action set is a fixed set of 50 arms. 2. The same can be said for the case of reinforcement learning or other decision-making scenarios. 3. There is a vast range of heavy-tailed distributions, but only the Student's t-distribution is tested in the experiments, which is quite similar to the normal distribution; hence good performance on the t-distribution alone is not enough. Supplementary Material: The supplementary material consists of the theoretical analysis and detailed proofs of the lemmas, the estimation error theorem, and the regret bound. Relation To Broader Scientific Literature: The contribution of the paper is not significant and essential.
Compared to Heavy-OFUL, which is a strong SOTA in the literature, the only contribution is the replacement of the batch GD with the OMD algorithm, which allows an online per-sample update. The idea of the Huber loss and its application to heavy-tailed linear bandits have been thoroughly discussed in Heavy-OFUL, and OMD is a well-known optimization algorithm. Yet the theoretical analysis of the combination of OMD with the Huber objective can be viewed as the only contribution of the paper. Essential References Not Discussed: The literature on offline bandit learning could be explored more, especially off-policy evaluation, where MOM and truncation methods are provided to tackle heavy-tailed noise and outlier effects. For example, the following papers could be studied and reviewed. Sakhi, Otmane, et al. "Logarithmic smoothing for pessimistic off-policy evaluation, selection and learning." arXiv preprint arXiv:2405.14335 (2024). Behnamnia, Armin, et al. "Batch Learning via Log-Sum-Exponential Estimator from Logged Bandit Feedback." ICML 2024 Workshop: Aligning Reinforcement Learning Experimentalists and Theorists. Other Strengths And Weaknesses: The contributions of the paper are weak. Altogether it provides a theoretical analysis of Huber-based linear bandit parameter estimation with OMD optimization, with insufficient experimental evidence. Other Comments Or Suggestions: I don't have any particular suggestion. Questions For Authors: The paper's structure is clear. I don't have any questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your suggestions on experiments and the literature. We will revise the paper accordingly. However, there is an important misunderstanding regarding our technical contributions that we need to clarify. --- **Q1.** "contribution is not significant...only contribution is the replacement of the batch GD with OMD..." **A1:** We respectfully disagree with the comments. The SOTA method (Heavy-OFUL) [Huang et al., 2024] adopts an offline MLE estimator and achieves optimal regret for heavy-tailed linear bandits. However, ensuring the "one-pass" property is crucial for online algorithms to ***update efficiently without storing the entire historical data***. Our work builds on Heavy-OFUL and develops an OMD-based one-pass algorithm, while also ensuring that the final regret remains unaffected. **This is technically highly non-trivial.** We believe it is unfair to describe this as merely "replacing the batch GD with the OMD algorithm for online per-sample updates". Below, we outline the technical challenges in more detail. - **Challenge 1: Using one-pass OMD to approximate full-batch MLE.** The offline MLE estimator leverages the entire history of data, enabling its estimation error to be well-controlled. In contrast, it is intuitively non-trivial to develop a one-pass estimator (which uses only the current data) while still ensuring that the estimation error remains as good as that of the offline MLE estimator. In fact, as shown in Section 4.1, even under Sub-Gaussian noise (a significantly easier scenario), deploying OMD requires: (i) carefully designing the local norm to collaborate with self-normalized concentration, and (ii) properly using the negative term in the regret analysis to control the generalization gap. - **Challenge 2: Handling the Huber loss and heavy-tailed noise.** In the heavy-tailed setting, deploying OMD as a one-pass estimator is even more challenging.
The primary difficulty arises from the curvature of the Huber loss, which is partially linear (the undesired region) and partially quadratic (the desired region), necessitating careful control of the threshold. Furthermore, in the estimation error analysis, the bias introduced by the one-pass OMD update manifests as an additional stability term that can grow as $\mathcal{O}(\sqrt{T})$. It is critical to carefully design both the adaptive normalization factor of the Huber loss and the learning rate in OMD so that, together, they effectively manage the estimation error. Overall, we believe these technical contributions are not only interesting to the bandits community but also have a broad impact on the audiences of online MDPs and online adaptive control. We will improve the paper's writing to emphasize these points more clearly. Thanks! --- **Q2.** "offline bandit learning could be explored more" **A2.** Thanks for pointing out the literature on offline bandit learning, which relates to managing extreme values caused by importance weighting. We are happy to incorporate a discussion in the revision. --- **Q3.** "Experiment setup is very limited and insufficient to prove the claims" **A3.** We'd like to emphasize that our paper primarily focuses on the theoretical side, whose technical contributions we have elaborated in detail (please refer to **A1**). Experiments are intended to support our theoretical findings. We are happy to conduct additional tests to better address your concerns; new results and plots are summarized at the following anonymous URL: https://default-anno-bucket.s3.us-west-1.amazonaws.com/HvtLB_revised_exp.pdf Concretely, we add experiments with various heavy-tailed noise distributions (including Pareto, Lomax, and Fisk); a time-varying $\nu_t$; a varying arm set; and more comparisons regarding the empirical advantage of our method. We will include these additional tests in the revised version. Hope these will address your concerns.
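The curvature issue in Challenge 2 can be pictured with a toy sketch (illustrative only: the residuals and the stand-in normalization below are hypothetical, and the paper's actual recursive factor $\sigma_t$, which depends on the last-round confidence bound $\beta_{t-1}$, is more delicate). The Huber loss has unit curvature inside the threshold and zero curvature outside it, and dividing a residual by a large enough factor pulls it back into the quadratic region.

```python
import numpy as np

def huber_grad(r, tau=1.0):
    """Derivative of the Huber loss: identity inside [-tau, tau], clipped outside."""
    return np.clip(r, -tau, tau)

def huber_curvature(r, tau=1.0):
    """Second derivative: 1 in the quadratic region, 0 in the linear region."""
    return (np.abs(r) <= tau).astype(float)

tau = 1.0
r = np.array([-3.0, -0.5, 0.2, 2.0])      # hypothetical residuals
print(huber_curvature(r, tau))            # -> [0. 1. 1. 0.]: curvature lost on large residuals

# Dividing the residual by a factor sigma >= |r| / tau pulls it back into the
# quadratic region, restoring a positive lower bound on the curvature.
sigma = np.abs(r) + 1.0                   # a crude stand-in for the recursive sigma_t
print(huber_curvature(r / sigma, tau))    # -> [1. 1. 1. 1.]
```

The analysis needs the Hessian lower bound that the second print illustrates; once a residual lands in the linear region, its curvature contribution vanishes entirely.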
--- **Q4.** "It is claimed that the method can be adapted to more generalized decision-making ... but no theoretical or experimental evidence is provided" **A4.** Without a doubt, the linear bandit model is fundamental in online learning. Our work makes an important step for linear bandits with heavy-tailed noise. As mentioned, our technical contributions are already substantial enough to gain interest from the community. Given the importance of linear bandits, the discussion in Section 5 is intended to highlight the potential of our Hvt-LB algorithm in broader decision-making scenarios. However, we prefer not to devote more space to those extensions, to avoid diverting readers from the main focus, i.e., linear bandits. We believe our techniques will be of interest to audiences working on online linear MDPs and adaptive control. --- We appreciate your suggestions and will add additional experiments and related works accordingly. We will also emphasize the technical contributions more clearly. If our clarifications have adequately addressed your concerns, please consider updating your score. Thanks! --- Rebuttal Comment 1.1: Comment: I thank the authors for the additional experiments. I appreciate the authors' work on the design of an OMD algorithm based on the Huber loss. The algorithm altogether is indeed novel, and hence so is the theoretical analysis. I can summarize the contribution as proposing an online method that can do as well as the well-known offline Heavy-OFUL method in the presence of heavy-tailed rewards (which is part of the contribution of Heavy-OFUL itself). Are there any experimental designs for which only the one-step historyless approaches are computationally feasible? --- Reply to Comment 1.1.1: Comment: Thank you for your recognition of the novelty and technical contributions of our work!
Your summary accurately captures the essence of our contribution: designing a novel online algorithm (which relies only on current data) to achieve an optimal regret guarantee matching that of the MLE-based offline algorithm (which uses the entire historical data). **Regarding the question about experimental settings with computational feasibility:** a feasible online algorithm should have a per-round update complexity independent of the iteration count (i.e., historyless). Otherwise, as the online data stream grows, the per-round computational cost will increase, eventually making the update infeasible (since it requires using the entire historical data). To provide a concrete example: completing an 18,000-round online estimation takes **approximately 3 hours** with Heavy-OFUL, whereas our proposed one-pass method (and other one-pass baselines) finishes in under **1 minute** (as shown in Figures 1 & 2 in the paper). This significant difference becomes even more problematic when running repeated experiments or tuning hyperparameters. The computational infeasibility of the offline algorithm becomes even more severe in realistic applications, where online estimation often requires timely updates that may lose their value if delayed. We plan to include experiments on real data in a future version to better illustrate the practical limitations of MLE-based offline approaches, according to your suggestion. --- Finally, we sincerely appreciate your recognition of our contributions. As we continue to add more experiments and discussions on related works, we would be deeply grateful if you would consider raising your score to further support our submission. Thank you again for your valuable feedback!
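The per-round cost gap described above can be sketched with a toy example. This is a hedged illustration only: it uses plain ridge least squares with a rank-one (Sherman-Morrison) update rather than the paper's Huber-loss OMD estimator, and all sizes and names are made up. It shows the structural point: a one-pass update does $\mathcal{O}(d^2)$ work per round regardless of history, while refitting on the full history each round makes total work grow quadratically in $T$.

```python
import numpy as np
import time

rng = np.random.default_rng(0)
d, T = 5, 2000
theta_star = rng.normal(size=d)
X = rng.normal(size=(T, d))
y = X @ theta_star + rng.standard_t(3.0, size=T)  # heavier-than-Gaussian noise

# One-pass: per-round cost is O(d^2) via a rank-one (Sherman-Morrison)
# update of (lambda*I + sum x x^T)^{-1}; no history is stored.
A_inv = np.eye(d)                  # inverse of lambda*I with lambda = 1
b = np.zeros(d)
t0 = time.perf_counter()
for x, r in zip(X, y):
    Ax = A_inv @ x
    A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
    b += r * x
theta_onepass = A_inv @ b
one_pass_sec = time.perf_counter() - t0

# Full-batch: refit on the entire history every round, so per-round cost
# grows with t and cumulative work is quadratic in T.
t0 = time.perf_counter()
for t in range(1, T + 1):
    theta_batch, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
batch_sec = time.perf_counter() - t0

print(f"one-pass: {one_pass_sec:.3f}s   full-batch refits: {batch_sec:.3f}s")
```

Both routes recover essentially the same estimate here; the difference is purely in how the work scales with the length of the data stream, which is the gap the 3-hours-versus-1-minute comparison reflects.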
Summary: This paper proposes a novel, computationally efficient algorithm for heavy-tailed linear bandits that achieves the best-known regret bound in the literature. Claims And Evidence: The claims are clear and supported by convincing evidence from the literature. Methods And Evaluation Criteria: The methods and evaluation criteria follow standards in the literature. Theoretical Claims: The proofs are based on standard techniques for heavy-tailed distributions and online mirror descent. Experimental Designs Or Analyses: While the experimental design is standard, I suggest the authors provide results for varying $\nu_t$ to support the claim that the algorithm is adaptive to $\nu_t$. Supplementary Material: I checked Sections A, B, and C for the proofs of the theoretical claims. Relation To Broader Scientific Literature: As the authors mentioned in the discussion section, their results are applicable to linear Markov decision processes and online adaptive control. Because online estimation for heavy-tailed distributions is not well established in bandits or reinforcement learning, I believe this work will have a wide impact on those literatures. Essential References Not Discussed: All the essential related references are discussed in the paper. Other Strengths And Weaknesses: Strengths: The paper is well-organized, with a thorough, easy-to-follow discussion and clear motivation. Weaknesses: The novelty seems limited, as the challenges of combining online mirror descent with heavy-tailed bandits are not discussed clearly. Other Comments Or Suggestions: In Lemma 1, $\sigma_{\min}$ is used without being defined beforehand. ============== After Rebuttal ================ I appreciate the authors' detailed responses. I acknowledge that this work has a novel contribution on heavy-tailed bandits. Please highlight the responses in the revision for clarity, especially on the novelty.
Questions For Authors: Q1: How will the empirical performance of the algorithm vary with changing $v_t$? Q2: Is there a way to remove the requirement for knowledge of the horizon time $T$? Q3: Could the authors provide specific challenges in deriving the results of applying online mirror descent to heavy-tailed bandits? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your careful review! In the following, we will address your main concerns and report additional experiments. --- **Q1.** "How will the empirical performance of the algorithm vary with changing $\nu_t$? I suggest the authors provide results for varying $\nu_t$ to support the claim that the algorithm is adaptive to $\nu_t$." **A1.** Thanks for the suggestion. We have conducted additional experiments of time-varying $\nu_t$ to further support the variance-aware property of our method. We summarize all the details and results in the following anonymous URL: https://default-anno-bucket.s3.us-west-1.amazonaws.com/HvtLB_revised_exp.pdf Please refer to Figure 2 of this anonymous URL for details. We will add these results in the revised version. --- **Q2.** " The novelty seems limited, as the challenges of combining online mirror descent with heavy-tailed bandits are not discussed clearly. Could the authors provide specific challenges in deriving the results of applying online mirror descent to heavy-tailed bandits?" **A2.** Thank you for the comments. We believe there may have been a misunderstanding due to our insufficient emphasis on the challenges of applying OMD to heavy-tailed bandits. We will revise the paper to ensure this aspect is presented more clearly. Here, we'd like to take this opportunity to clarify these challenges. - **Challenge 1: Using one-pass OMD to approximate full-batch MLE.** The offline MLE estimator leverages the entire history of data, enabling its estimation error to be well-controlled. In contrast, OMD updates the one-pass estimator using only current data, making it inherently challenging to ensure the final regret remains unaffected. While OMD is known for its effectiveness in minimizing regret, adapting this capability into a good statistical estimator is far from straightforward. 
In fact, as shown in Section 4.1, even under Sub-Gaussian noise (a significantly easier scenario), deploying OMD requires: (i) carefully designing the local norm to collaborate with self-normalized concentration, and (ii) properly using the negative term in the regret analysis to control the generalization gap. - **Challenge 2: Handling the Huber loss and heavy-tailed noise.** In the heavy-tailed setting, deploying OMD as a one-pass estimator is even more challenging. The primary difficulty arises from the curvature of the Huber loss, which is partially linear (undesired region) and partially quadratic (desired region), necessitating careful control of the threshold. Furthermore, in the estimation error analysis, the bias introduced by the one-pass OMD update manifests as an additional stability term that can grow as $\mathcal{O}(\sqrt{T})$. It is critical to carefully design both the adaptive normalization factor of the Huber loss and the learning rate in OMD together to effectively manage the estimation error. We will revise the paper to ensure these challenges are clearly elaborated. --- **Q3.** "In Lemma $1$, $\sigma_{\min }$ is used without being defined beforehand." **A3.** Thank you for the detailed review! We will correct this in the revised version; specifically, $\sigma_{\min }$ is a small positive constant to avoid singularity, and we provide the setting of $\sigma_{\min}$ in Theorem 1. --- **Q4.** "Is there a way to remove the requirement for knowledge of the horizon time $T$?" **A4.** Thanks for the question. The requirement of a known time horizon $T$ can be removed using the doubling trick. Specifically, we let the algorithm restart at time steps $2^1, 2^2, 2^3, 2^4, \ldots, 2^M$. If the time horizon is $T$, then the total number of intervals is $M \approx \log T$. When the time horizon is known, the regret bound is $\widetilde{\mathcal{O}}(T^{\frac{1}{1+\epsilon}})$.
Since the length of each interval $2^m$ is known in advance, we can apply the same bound within each interval. Then the overall regret can be calculated by: $$\widetilde{\mathcal{O}}\left(\sum_{m=1}^M\left(2^m\right)^{\frac{1}{1+\epsilon}}\right)=\widetilde{\mathcal{O}}\left(\sum_{m=1}^M 2^{\frac{m}{1+\epsilon}}\right)=\widetilde{\mathcal{O}}\left(2^{\frac{1}{1+\epsilon}} \frac{2^{\frac{M}{1+\epsilon}}-1}{2^{\frac{1}{1+\epsilon}}-1}\right)=\widetilde{\mathcal{O}}\left(\frac{2^{\frac{1}{1+\epsilon}}}{2^{\frac{1}{1+\epsilon}}-1}\left(T^{\frac{1}{1+\epsilon}}-1\right)\right) = \widetilde{\mathcal{O}}(T^{\frac{1}{1+\epsilon}}).$$ In this way, we are able to remove the requirement of known $T$ in advance, while still achieving a regret bound of $\widetilde{\mathcal{O}}(T^{\frac{1}{1+\epsilon}})$, up to a constant factor overhead. --- Thanks again for your insightful comments. We will add more experiments and improve the presentation in the revised version. --- Rebuttal Comment 1.1: Comment: I appreciate the authors comments and efforts for the additional experiments. On Question 2, could the authors point out the novel technical analysis or results that overcomes the mentioned challenges? --- Reply to Comment 1.1.1: Comment: Thanks again for your careful review and your appreciation of our efforts! In the following, we explain how we address the two mentioned challenges: one arising from one-pass update and the other from the heavy-tailed noise. Specifically, **(i) One-pass Update.** In the estimation error analysis, the one-pass update introduces the additional stability term compared to offline algorithms, specifically, $\sum\_{s=1}^t \\| \nabla \ell\_s(\widehat{\theta}\_s) \\|\_{V\_s^{-1}}^2$, where $\ell_s$ is the Huber loss function and $V_s$ is the local norm matrix. In the presence of heavy-tailed noise, the gradients $\nabla \ell_s(\widehat{\theta}_s)$ can be significantly corrupted by noise, making prior techniques inapplicable. 
Moreover, the introduction of a normalization factor causes the stability term to grow as $\mathcal{O}(\sqrt{T})$ (as shown in line 316-328 right column), which is undesirable. We observe that standard OMD analyses include a negative term typically used to control the generalization gap, and we found that this negative term can also be exploited to partially cancel the $\mathcal{O}(\sqrt{T})$ growth of the stability term. By carefully designing the learning rate of OMD, we are able to effectively eliminate this growth. For more details, please refer to lines 745-751 in Appendix B.3 and lines 898-916 in Appendix B.5. **(ii) Heavy-tailed noise.** The Huber loss may assign a linear penalty even to normal (non-outlier) data, which can reduce its effectiveness. To address this issue, we design a *recursive normalization factor* to ensure that denoised data lies within the quadratic region of the loss function, thereby making better use of clean data. Specifically, the normalization factor at round $t$ is set based on the estimation error from the previous round, i.e., $\sigma_t \sim \sqrt{\beta_{t-1}}$ (shown in line 5 in Algorithm 1). Intuitively, when the current estimation error is large, a larger factor $\sigma_t$ helps improve the estimation accuracy. From a theoretical perspective, we leverage a high-probability bound on the estimation error, $\\|\widehat{\theta}\_t - \theta\_\*\\|_{V\_{t-1}} \leq \beta_t$, which ensures that $\left| \left( X\_t^\top \widehat{\theta}\_t - X\_t^\top \theta\_* \right) / \sigma\_t \right| \leq \tau\_t / 2$. Additional efforts have also been made to achieve the high-probability bound, which are not expanded upon here. For more details, please refer to lines 634–659 in Appendix B.2, the proof of Lemma 5 in Appendix B.4, and lines 916–961 in Appendix B.5. In the revised version, we will include more experiments and improve the presentation to better emphasize the challenges and technical contributions. 
Thank you again for your valuable feedback!
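The geometric-series calculation in A4 above can be checked numerically. The sketch below (my own illustration, not code from the paper) sums the per-interval bounds $c\,(2^m)^{1/(1+\epsilon)}$ under the doubling trick, compares the sum with the closed form from the rebuttal, and verifies that the total stays within a constant factor of $T^{1/(1+\epsilon)}$:

```python
import math

def doubled_regret(T, eps, c=1.0):
    """Sum the per-interval bounds c * (2^m)^{1/(1+eps)} over intervals
    of length 2, 4, ..., 2^M, where M = ceil(log2 T) covers horizon T."""
    M = max(1, math.ceil(math.log2(T)))
    return sum(c * (2 ** m) ** (1.0 / (1.0 + eps)) for m in range(1, M + 1))

def closed_form(T, eps, c=1.0):
    # Geometric series with ratio r = 2^{1/(1+eps)}, as in the rebuttal.
    M = max(1, math.ceil(math.log2(T)))
    r = 2.0 ** (1.0 / (1.0 + eps))
    return c * r * (r ** M - 1.0) / (r - 1.0)

for T in (1_000, 100_000):
    for eps in (0.2, 0.5, 1.0):
        s = doubled_regret(T, eps)
        assert abs(s - closed_form(T, eps)) < 1e-6 * s
        # Total is O(T^{1/(1+eps)}): bounded by (2r/(r-1)) * T^{1/(1+eps)}.
        r = 2.0 ** (1.0 / (1.0 + eps))
        assert s <= 2.0 * r / (r - 1.0) * T ** (1.0 / (1.0 + eps))
```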
Balanced Learning for Domain Adaptive Semantic Segmentation
Accept (poster)
Summary: The paper proposes Balanced Learning for Domain Adaptation (BLDA) to address class bias in unsupervised domain adaptation (UDA) for semantic segmentation. BLDA analyzes logits distributions to assess prediction bias and introduces an online logits adjustment mechanism to balance the class learning in both source and target domains. Experimental results demonstrate consistent performance improvements when integrating BLDA with various methods on standard UDA benchmarks. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method makes sense for the problem and experiment results demonstrate its effectiveness. Theoretical Claims: I have checked the correctness of any proofs for theoretical claims. The paper states that "the distribution of logits predicted by the network can assess the degree of class bias", which is demonstrated by Fig.1(c) and Fig.1(d). Experimental Designs Or Analyses: I have checked the experimental results. What are the experimental results of unsupervised domain adaptation at Cityscapes to ACDC[1]? [1] ACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding. ICCV 2021 Supplementary Material: I have reviewed the supplementary material. It includes the derivation of formulae, evaluation metrics, implementation details of online logits distribution estimation, the influence of parameters setting, and more experimental results. Relation To Broader Scientific Literature: BLDA analyzes logits distributions to assess prediction bias and introduces an online logits adjustment mechanism to balance class learning in both source and target domains. It inspires more researchers to utilize logit distributions to assess prediction bias. 
Essential References Not Discussed: There are no related works that are essential to understanding the context for the key contributions of the paper but are not currently cited/discussed in the paper. Other Strengths And Weaknesses: The writing is excellent, the charts are beautiful, and the formulas are exciting! The experiments are slightly lacking, and I would like to see more experimental results for unsupervised domain adaptation settings such as Cityscapes to ACDC. Other Comments Or Suggestions: No Questions For Authors: What are the experimental results of unsupervised domain adaptation at Cityscapes to ACDC [1]? [1] ACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding. ICCV 2021 Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive and constructive feedback. We appreciate your recognition of **motivation**, **theoretical intuition**, and **clear writing**, as well as your kind comments on our visualizations and formula design. --- Regarding your suggestion to include more experimental results for the Cityscapes → ACDC setting, we agree that evaluating BLDA under more diverse and challenging UDA scenarios is important; we have already conducted extended experiments on image classification and video semantic segmentation in Appendices G and H. We have now conducted additional experiments on the Cityscapes → ACDC benchmark and present the results below. As shown in the following tables (∗ denotes the reproduced result), integrating BLDA consistently improves performance across multiple baselines, including DACS, DAFormer, and MIC, on both mIoU and mAcc metrics. Importantly, BLDA is designed as a plug-and-play module that is model-agnostic and can be seamlessly integrated into most existing UDA baselines without modifying their core architectures or training objectives. Since class bias is a pervasive issue across UDA settings, our method provides a general mechanism to mitigate this imbalance during training. As such, BLDA is able to consistently improve performance across a wide range of models and target domains, as demonstrated in our experiments on both standard benchmarks and the newly added Cityscapes → ACDC setting.

**Cityscapes → ACDC (IoU, %)**

| Method | Arch. | Road | Sidewalk | Building | Wall | Fence | Pole | Light | Sign | Veg | Terrain | Sky | Person | Rider | Car | Truck | Bus | Train | Motor | Bike | **mIoU** | **std** |
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
| DACS* | C | 78.5 | 36.3 | 73.4 | 28.7 | 17.0 | 42.2 | 57.1 | 45.6 | 68.3 | 26.7 | 75.0 | 51.3 | 24.5 | 75.6 | 37.8 | 43.1 | 41.3 | 28.8 | 26.8 | 46.0 | 19.3 |
| **+BLDA** | C | 74.6 | **41.9** | 67.3 | **31.5** | **19.8** | **46.9** | 54.8 | **50.9** | **71.7** | **29.5** | 69.4 | **54.6** | **26.6** | 74.2 | **41.9** | **45.0** | **49.8** | **31.1** | **27.9** | **47.9** | **17.1** |
| DAFormer* | T | 65.5 | 52.8 | 79.1 | 39.8 | 37.2 | 56.4 | 57.6 | 51.3 | 72.2 | 37.3 | 59.9 | 54.7 | 25.0 | 83.6 | 69.6 | 68.0 | 72.9 | 39.6 | 33.2 | 55.5 | 16.3 |
| **+BLDA** | T | 63.8 | 50.7 | 73.4 | **40.2** | **40.7** | 53.3 | 54.6 | 50.0 | 70.7 | **40.3** | **64.3** | **58.9** | **33.9** | 83.5 | **74.3** | **75.5** | **78.9** | **45.1** | **37.6** | **57.4** | **15.2** |
| MIC* | T | 89.5 | 63.0 | 86.3 | 56.7 | 46.2 | 64.2 | 65.8 | 64.2 | 75.5 | 46.9 | 83.2 | 67.9 | 45.6 | 87.7 | 85.9 | 92.3 | 88.7 | 54.0 | 55.4 | 69.4 | 15.9 |
| **+BLDA** | T | 88.7 | **66.8** | **87.8** | **61.1** | **48.9** | 63.3 | 65.2 | **67.3** | 73.7 | **50.5** | 82.4 | **68.2** | **49.5** | 86.7 | 80.5 | 91.1 | **91.0** | **54.6** | **61.8** | **70.5** | **14.2** |

**Cityscapes → ACDC (Acc, %)**

| Method | Arch. | Road | Sidewalk | Building | Wall | Fence | Pole | Light | Sign | Veg | Terrain | Sky | Person | Rider | Car | Truck | Bus | Train | Motor | Bike | **mAcc** | **std** |
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
| DACS* | C | 96.7 | 52.9 | 81.9 | 39.2 | 36.3 | 51.8 | 73.1 | 55.0 | 90.7 | 35.0 | 76.1 | 60.1 | 33.3 | 84.5 | 44.3 | 44.2 | 59.7 | 35.8 | 39.8 | 57.4 | 20.0 |
| **+BLDA** | C | 96.1 | **66.7** | **82.7** | **40.7** | **36.6** | **58.9** | 66.4 | **67.4** | 84.1 | **41.3** | 70.7 | **68.4** | **36.3** | **84.8** | **48.9** | **46.5** | **59.9** | **47.4** | **41.4** | **60.3** | **17.7** |
| DAFormer* | T | 98.2 | 63.4 | 86.7 | 51.2 | 42.0 | 70.3 | 77.3 | 65.1 | 93.4 | 52.6 | 60.6 | 60.3 | 53.7 | 90.7 | 89.3 | 70.8 | 90.3 | 57.8 | 42.7 | 69.3 | 17.4 |
| **+BLDA** | T | **98.8** | 61.3 | 85.5 | **55.8** | **45.4** | **73.2** | **82.5** | **72.8** | 92.0 | 51.7 | **65.0** | **69.2** | 51.5 | 90.5 | **89.5** | **80.0** | **92.8** | 60.6 | **51.1** | **72.1** | **16.4** |
| MIC* | T | 99.4 | 69.2 | 92.2 | 76.0 | 57.7 | 73.5 | 89.5 | 80.9 | 94.9 | 62.8 | 89.3 | 82.7 | 57.5 | 97.6 | 91.9 | 96.9 | 96.6 | 60.2 | 72.6 | 81.1 | 14.2 |
| **+BLDA** | T | 98.8 | **72.3** | **93.8** | **86.1** | **59.7** | **73.6** | **90.8** | **89.9** | 93.2 | **73.9** | **90.8** | 81.4 | **68.3** | 96.3 | **94.8** | **97.0** | **97.6** | **65.3** | 74.1 | **84.1** | **12.1** |

We will include these results in the revised version of the paper. --- We hope our response can resolve your concern. Please do not hesitate to let us know if you have further questions.
Summary: This paper presents Balanced Learning for Domain Adaptation (BLDA), an innovative approach to address class imbalance and distribution shifts in Unsupervised Domain Adaptation (UDA) for semantic segmentation. Specifically, it identifies over-predicted and under-predicted classes through the analysis of predicted logits and employs a post-hoc approach to align logits distributions across different classes using shared anchor distributions. During self-training, BLDA estimates logits distributions online and incorporates correction terms into the loss function to ensure unbiased pseudo-label generation. Extensive experiments on standard UDA benchmarks have demonstrated that BLDA consistently improves performance, particularly for under-predicted classes, when integrated with various existing methods. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There is some confusion about Eq. (4). As $\mathbb{P}(\arg max_{c' \in [C]}f_{\theta}(x)[c']=l\|y=c)$ represents the probability of predicting class $c$ as $l$ and $c \neq l$, why does the positive bias $\mathrm{Bias}(l)$ indicate over-prediction? Given the condition $y=c$, how do we explain the representation of the summation over $c' \in [C]$? As I understand it, it may be $\mathbb{P}(\arg max_{l \in [C]}f_{\theta}(x)=l\|y=c)$. Experimental Designs Or Analyses: This paper has constructed extensive experiments on the standard UDA benchmarks to demonstrate the effectiveness of the proposed post-hoc method. Supplementary Material: The supplementary material has provided sufficient experiment results, discussions about equations, implementation details, and comparisons with existing methods for the proposed method. Relation To Broader Scientific Literature: The class imbalance problem is widely studied in semantic segmentation, cross-domain semantic segmentation, and other perception fields.
This paper proposes a post-hoc method to balance over-predicted and under-predicted classes during domain adaptation, which is straightforward and makes sense. However, the proposed method may cost a lot of time and computational resources due to the multiple GMMs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper is generally well-written, well-structured, and easy to follow. 2. The experiments are comprehensive, covering three transfer tasks for segmentation, an additional image classification task (included in the supplementary materials), and extensive qualitative analyses. Weaknesses: 1. The paper claims to address the class-imbalance problem with the proposed BLDA. However, it seems that the proposed method profoundly depends on the baseline models. For example, the performance of "Train" in Table 1 is close to 0 for DACS and DAFormer (C), which shows that the proposed method may still suffer from the class-imbalance problem. 2. Concerns regarding the fairness of the comparisons. In Tables 1 and 2, the experiments seem to leverage high-quality pseudo-labels for self-training, which may inherently provide an advantage over existing methods, such as DAFormer, CDAC, HRDA, and MIC. This raises questions about whether the improved performance is due to the proposed method's unique contributions or simply a result of the enhanced quality of the pseudo-labels used. 3. It would be more appropriate to compare the performance with the existing methods beginning with the same source-only model. Other Comments Or Suggestions: N/A Questions For Authors: The major question is about Eq. 4. As $\mathbb{P}(\arg max_{c' \in [C]}f_{\theta}(x)[c']=l\|y=c)$ represents the probability of predicting class $c$ as $l$ and $c \neq l$, why does the positive bias $\mathrm{Bias}(l)$ indicate over-prediction? Given the condition $y=c$, how do we explain the representation of the summation over $c' \in [C]$?
As I understand it, it may be $\mathbb{P}(\arg max_{l \in [C]}f_{\theta}(x)=l\|y=c)$. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable feedback and thoughtful comments. We appreciate the recognition of our **clear writing**, **comprehensive experiments**, and **extensive qualitative analyses**. We address each of your concerns point by point. --- **Q1: Clarification on Eq. (4) and the Definition of Positive Bias** **A1:** Sorry for any misunderstanding. In Eq. (4), the expression $\arg\max_{c' \in [C]} f_\theta(x)[c']$ refers to the predicted class for input $x$. $P(\arg\max_{c' \in [C]} f_\theta(x)[c'] = l \mid y = c)$ represents the probability that a sample from class $c$ is predicted as class $l$. There is no constraint that $c \ne l$ in Eq. (4). The summation in this equation is taken over the conditioning variable $c$, i.e., it averages the conditional probabilities $P(\arg\max_{c' \in [C]} f_\theta(x)[c'] = l \mid y = c)$ across all classes $c \in [C]$. By summing over all $c$ and taking the average, we obtain the expected probability that a sample from any class (including $l$ itself) is predicted as class $l$. Under an unbiased model, this expectation should be approximately $1/C$. Therefore, a positive bias indicates that class $l$ is over-predicted, as its average prediction probability exceeds the uniform expectation. --- **Q2: Dependence on Baseline Models** **A2:** Our method is designed as a plug-and-play module that can be integrated into any self-training-based UDA framework. Naturally, its performance is influenced by the underlying baseline, especially when estimating the target-domain logits distribution, which relies on the pseudo-labels generated during self-training. Hence, the quality of pseudo-labels affects the accuracy of our distribution estimation and, consequently, the effectiveness of the class-aware adjustment. However, we emphasize that BLDA consistently reduces class-level prediction variance across all baselines. 
As reported in the main paper, the standard deviation of per-class IoU and Acc drops significantly when BLDA is applied. This demonstrates that our method consistently yields more balanced predictions with reduced class bias, regardless of the baseline. --- **Q3: Fairness of Comparisons and Pseudo-Label Quality** **A3:** We would like to clarify that our method adheres to the standard UDA setting and does not modify the pseudo-label generation process of any baseline. As described in Section 3.2, we employ the self-training framework as implemented in existing works such as DACS, DAFormer, CDAC, HRDA, and MIC. All these methods adopt an online self-training protocol: the model is trained from scratch using labeled source data and unlabeled target data with pseudo-labels generated during training. BLDA is inserted into this process as a lightweight module that adjusts the logits distributions based on estimated class-wise prediction behavior. The pseudo-label generation and training pipeline of each baseline remains unchanged. Therefore, the comparisons are conducted under fair and consistent settings, where each method follows the same online self-training paradigm (note that all baselines train the model from scratch, without using a source-only pretrained model). --- **Q4: Computation Overhead** **A4:** We provide a detailed analysis of the computational overhead in Appendix I, including actual resource usage and training time across various baselines. In summary, the additional cost introduced by our proposed components stems from three main operations. We have implemented them efficiently to minimize overhead: 1. **GMM Implementation**: Instead of using off-the-shelf libraries that update Gaussian parameters sequentially, we store the parameters of all $C \times C \times K$ components as tensors in PyTorch and update them in parallel using matrix operations. 2. 
**CDF Computation**: We approximate the cumulative distribution function using the Abramowitz-Stegun formula, which allows efficient polynomial evaluation. 3. **Inverse CDF Computation**: We use interpolation techniques within the estimated value range, which avoids costly numerical inversion. All the above operations can be efficiently performed using simple matrix operations on tensors. Moreover, the storage of Gaussian component parameters and the additional regression head (a $1\times 1$ conv) introduced by our method are lightweight. Overall, our method demonstrates high efficiency in both training time and GPU memory, as reported in Table 13 of Appendix I. Moreover, our method introduces no additional overhead during inference. --- We hope our response can resolve your concern. Please do not hesitate to let us know if you have further questions.
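The bias definition clarified in A1 above can be made concrete with a small sketch. The confusion-matrix values below are hypothetical (not data from the paper); the function simply averages the conditional prediction probabilities over the true class, as in Eq. (4):

```python
import numpy as np

def prediction_bias(conf):
    """conf[c, l] = P(argmax f(x) = l | y = c); each row sums to 1.
    Bias(l) = (1/C) * sum_c conf[c, l] - 1/C: positive means class l
    is over-predicted, negative means it is under-predicted."""
    C = conf.shape[0]
    return conf.mean(axis=0) - 1.0 / C

# Hypothetical 3-class example: class 0 absorbs mass from classes 1 and 2.
conf = np.array([[0.90, 0.05, 0.05],
                 [0.40, 0.50, 0.10],
                 [0.30, 0.10, 0.60]])
bias = prediction_bias(conf)
# bias[0] > 0 (over-predicted), bias[1] < 0 and bias[2] < 0 (under-predicted);
# the biases of all classes always sum to zero.
```

Note that the summation includes the diagonal term $c = l$, which is why a class that attracts predictions from other classes (here, class 0) ends up with a positive bias even though its own row is accurate.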
Summary: The paper proposes Balanced Learning for Domain Adaptation (BLDA) for addressing class bias in unsupervised domain adaptive semantic segmentation. The authors identify class imbalance and distribution shifts as major obstacles in UDA and propose techniques to analyze logit distributions to assess prediction bias. The method introduces post-hoc logits adjustment and online logits adjustment to mitigate class bias and improve balanced learning across classes. Additionally, cumulative distribution estimation is used as domain-shared structural knowledge. The paper demonstrates significant performance improvements on standard UDA benchmarks. Claims And Evidence: 1. The claim that logit distribution differences correlate with class bias is well-supported by the experimental visualizations in Fig. 6. 2. The effectiveness of online logits adjustment and post-hoc adjustment is supported by ablation studies. 3. The effectiveness of balanced learning is supported by consistent improvements in mIoU and mAcc metrics across several baselines and benchmarks. Methods And Evaluation Criteria: 1. The proposed methods, including post-hoc logits adjustment and online logits adjustment, are well-suited to address the stated problem of class bias in UDA. 2. The use of standard UDA benchmarks ensures fair and meaningful evaluation. Theoretical Claims: No major theoretical claims were presented beyond the statistical modeling of logits distributions and their alignment. Experimental Designs Or Analyses: The experimental designs are sound, with appropriate baselines and thorough ablation studies. The use of mIoU and mAcc metrics is effective for assessing both overall accuracy and balanced performance across classes. Supplementary Material: The derivations and additional experiments in the supplementary material were reviewed.
Relation To Broader Scientific Literature: The idea of using logits distribution analysis is related to class-imbalanced learning and focusing on distribution shifts between domains. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The method is versatile and can be integrated with various self-training-based UDA frameworks. 2. The authors provide comprehensive experiments, including multiple benchmarks and ablation studies. 3. The paper provides a clear theoretical explanation for logit distribution alignment and its connection to class bias, grounding the proposed method in established statistical principles. Weakness: 1. The GMM-based logits modeling may be computationally expensive, especially for larger datasets. 2. The reliance on pre-defined anchor distributions may not generalize well to all datasets or domain shifts. The paper lacks discussion on alternative approaches, such as learned or adaptive anchors. Other Comments Or Suggestions: 1. The scalability of the method should be discussed, especially in terms of computational overhead introduced by GMMs. Questions For Authors: 1. How does the method scale to larger datasets or real-time applications, given the computational cost of GMM-based logits modeling? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive and constructive feedback. We appreciate your recognition of **our method’s versatile design**, the **thoroughness of our experimental validation**, and the **clarity of our theoretical explanation**. We address each of your concerns point by point. --- **Q1: Computational Cost and Scalability of GMM-based Modeling** **A1:** We provide a detailed analysis of the computational overhead in Appendix I, including actual resource usage and training time across various baselines. In summary, the additional cost introduced by our proposed components stems from three main operations. We have implemented them efficiently to minimize overhead: 1. **GMM Implementation**: Instead of using off-the-shelf libraries that update Gaussian parameters sequentially, we store the parameters of all $C \times C \times K$ components as tensors in PyTorch and update them in parallel using matrix operations. 2. **CDF Computation**: We approximate the cumulative distribution function using the Abramowitz-Stegun formula, which allows efficient polynomial evaluation. 3. **Inverse CDF Computation**: We use interpolation techniques within the estimated value range, which avoids costly numerical inversion. All the above operations can be efficiently performed using simple matrix operations on tensors. Moreover, the storage of Gaussian component parameters and the additional regression head (a $1\times 1$ conv) introduced by our method are lightweight. Overall, our method demonstrates high efficiency in both training time and GPU memory, as reported in Table 13 of Appendix I. Moreover, our method introduces no additional overhead during inference. For larger datasets, i.e., with more classes, the only change is that the GMM module needs to store a proportionally larger number of Gaussian components, which remains tractable in practice. 
--- **Q2: Generalization of Pre-defined Anchor Distributions** **A2:** We provide a detailed discussion of the anchor distribution design and its alternatives in Appendix J. A summary is provided below: 1. **Definition and Role of Anchor Distributions:** We use the global positive and negative logits distributions from the source domain to estimate a shared anchor distribution for both the source and target domains. This shared anchor allows the target logits distribution to gradually align with the source, serving two purposes: - It provides a reference to balance learning progress across classes within each domain. - It acts as a bridge to align the source and target domains in terms of class-wise logits behavior. 2. **Different Selection Criteria for Anchor Distributions:** While the anchor distribution is estimated from the source domain, we acknowledge that some discrepancy may exist between this estimation and the true distribution of the target domain. However, our analysis in Appendix B shows that the relative pos/neg bias is more important than the absolute value of the logits, and such relative structure tends to be preserved across domains due to the observation that the biases of positive and negative logits are coupled in Fig.1 (c). This explains why the impact of distributional mismatch is limited in practice. We also conduct ablation studies (Table 14) on different anchor selection strategies, including: Global source distribution (default); Global target distribution (oracle); Two biased classes (*building* and *fence* in Fig. 1(d)). The results show that different anchor choices lead to only marginal differences in performance, further demonstrating the robustness of our method to anchor selection and its ability to generalize across domain shifts. 3. **Generalization Across Datasets and Domain Shifts:** Our logits adjustment mechanism can be interpreted as a cross-entropy loss with an adaptive margin. 
As discussed in Appendix B, this margin is implicitly determined by the relative difference between the positive and negative logits distributions. There are two key cases: - If both positive and negative logits in the anchor increase or decrease simultaneously, the margin remains stable. - If the relative gap between pos/neg distributions in the target differs from that in the anchor, the margin adapts accordingly to guide alignment. This behavior enables the method to generalize across domain shifts. As demonstrated in Table 15, the adaptive margin mechanism drives the target logits distribution to progressively align with the anchor distribution over training, confirming the intended behavior of our method. --- We hope our response can resolve your concern. Please do not hesitate to let us know if you have further questions.
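The anchor-alignment idea discussed above, mapping class-wise logits onto a shared anchor distribution through CDF composition, can be sketched generically. This illustration uses single Gaussians rather than the paper's GMMs, and a bisection-based inverse CDF as a stand-in for the interpolation the authors describe, so it should be read as the general mechanism only:

```python
import math

def gauss_cdf(z, mu, sigma):
    # Standard Gaussian CDF via the error function.
    return 0.5 * (1.0 + math.erf((z - mu) / (sigma * math.sqrt(2.0))))

def gauss_icdf(u, mu, sigma, lo=-50.0, hi=50.0):
    # Inverse CDF by bisection; the paper instead interpolates over the
    # estimated value range, but bisection keeps this sketch short.
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if gauss_cdf(mid, mu, sigma) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def align_to_anchor(z, class_mu, class_sigma, anchor_mu, anchor_sigma):
    """Map a class-wise logit z onto the shared anchor distribution:
    z' = F_anchor^{-1}(F_class(z)), which preserves the quantile of z."""
    return gauss_icdf(gauss_cdf(z, class_mu, class_sigma),
                      anchor_mu, anchor_sigma)

# A logit at the class mean maps to the anchor mean (both are the 0.5 quantile).
z_aligned = align_to_anchor(2.0, class_mu=2.0, class_sigma=1.5,
                            anchor_mu=0.0, anchor_sigma=1.0)
```

Because the mapping preserves quantiles, classes whose logits distributions differ only in location and scale become directly comparable after alignment, which is the sense in which the anchor balances learning progress across classes.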
Summary: The study conducts an in-depth investigation into class bias within UDA scenarios, demonstrating that this bias stems from simultaneous shifts in both the label and data distributions, which complicates the domain adaptation process. To address this challenge, the authors introduce a novel approach that evaluates and reduces class bias through a class-balanced learning method. This approach derives adjustment factors from the distribution of logits, thereby overcoming the constraints inherent in conventional imbalanced-class techniques. Notably, the proposed solution is implemented as a versatile plug-and-play module, suitable for broad applications in UDA. It begins by estimating distribution parameters, then applies dynamic logit adjustments to monitor the model's learning progression, ensuring balanced class performance. Extensive experiments confirm that the method consistently yields significant improvements in performance, highlighting its effectiveness and flexibility. ## update after rebuttal Thanks for the authors' detailed response. The response has addressed most of my concerns. I hope the authors can revise their paper according to the reviewers' suggestions. I will keep my original rating. Claims And Evidence: The experimental claims are supported by thorough empirical results. However, my concern is that the authors need further justification for using logits distributions as a proxy for class imbalance. Additionally, the method needs a comparison against alternative distribution estimation methods beyond Gaussian Mixture Models. Methods And Evaluation Criteria: The selected metrics, mIoU and mAcc, are well-suited for assessing performance, as they capture both the precision of segment overlap and overall classification accuracy. Additionally, the method itself is thoughtfully designed, with a clear and sound intuition underpinning its architecture and approach.
Theoretical Claims: The theoretical claims about the connection between logits distributions and class biases are well-defined. The authors clearly depict their relationships and provide convincing theoretical justifications. Experimental Designs Or Analyses: The experiments systematically demonstrate the improvements introduced by BLDA. The authors conduct extensive ablations on each model and various configurations, which strongly support the design. Supplementary Material: The supplementary material adequately details the methodology and experiments. Relation To Broader Scientific Literature: The work is well situated within the domain-adaptive semantic segmentation literature, clearly pointing out the limitations of prior class-balancing methods such as re-weighting and re-sampling. Essential References Not Discussed: There are plenty of domain-adaptive semantic segmentation methods. I know it is not possible for the authors to plug their method into each of them to show its effectiveness, but some typical methods representing different technical solutions need to be compared, for example:

[1] Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018.
[2] Confidence Regularized Self-Training, ICCV 2019.
[3] Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation, NeurIPS 2020.

Other Strengths And Weaknesses: Strengths:
- Clearly written paper.
- Clearly identified problem of class imbalance unique to UDA.

Weaknesses:
- Lack of analysis concerning computational overhead and scalability.
- Further justification is needed for using logits distributions as a proxy for class imbalance.
- Further comparison against distribution estimation methods other than Gaussian Mixture Models is needed.

Other Comments Or Suggestions: No other comments. Questions For Authors: Please refer to Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and constructive feedback, as well as for acknowledging our contributions, including the **clear problem formulation**, the **clarity of the writing**, and the **effectiveness of empirical validation**. We address each of your concerns point by point. --- **Q1: Computation Overhead** **A1:** We provide a detailed analysis of the computational overhead in Appendix I, including actual resource usage and training time across various baselines. In summary, the additional cost introduced by our proposed components stems from three main operations. We have implemented them efficiently to minimize overhead: 1. **GMM Implementation**: Instead of using off-the-shelf libraries that update Gaussian parameters sequentially, we store the parameters of all $C \times C \times K$ components as tensors in PyTorch and update them in parallel using matrix operations. 2. **CDF Computation**: We approximate the cumulative distribution function using the Abramowitz-Stegun formula, which allows efficient polynomial evaluation. 3. **Inverse CDF Computation**: We use interpolation techniques within the estimated value range, which avoids costly numerical inversion. All the above operations can be efficiently performed using simple matrix operations on tensors. Moreover, the storage of Gaussian component parameters and the additional regression head (a $1\times 1$ conv) introduced by our method are lightweight. Overall, our method demonstrates high efficiency in both training time and GPU memory, as reported in Table 13 of Appendix I. Moreover, our method introduces no additional overhead during inference. --- **Q2: Why use logits distributions** **A2:** In UDA, explicit class distribution priors are often unavailable, especially in the target domain, where distribution shift is severe. Therefore, we propose leveraging the logits distributions as an online proxy to assess a model’s class-wise prediction bias. 
We justify this design choice from both theoretical and empirical perspectives: 1. Theoretical Justification: As shown in Definition 2, the prediction bias $\mathrm{Bias}(l)$ is directly related to the probability of predicting class $l$ across all ground-truth classes, which in turn depends on the relative distributions of logits. We further show in Eq. (5) that under the assumption of independent logits distribution, this probability can be estimated by comparing the positive and negative class logits. A sufficient condition for unbiased prediction is that these distributions are aligned. 2. Empirical Evidence: In Figure 1(d), we demonstrate a clear linear correlation between the prediction bias and the differences in logits distributions across classes. This supports our hypothesis that logits distributions serve as an effective proxy to capture class imbalance in the network’s behavior. --- **Q3: Alternative Distribution Estimation Methods** **A3:** Thank you for this valuable suggestion. We chose GMMs primarily for their balance between modeling capacity and computational efficiency, which is critical for our online training setup. Specifically: - GMMs can approximate a wide range of 1D distributions with a small number of parameters. - They allow closed-form CDF and inverse CDF computation (using polynomial approximations), enabling efficient matrix-based implementation for large-scale training. - The distributions we model are scalar-valued logits for each class pair $(c, l)$, making GMMs a sufficiently expressive and computationally tractable choice. Additionally, as shown in Appendix M, GMMs empirically fit the logits distributions well, and as discussed in Appendix E, they converge quickly during training. While more complex estimators (e.g., kernel density estimation or deep density models) could be considered, they often introduce significant overhead and do not offer closed-form CDFs, which are essential for our framework. 
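To make the closed-form CDF and inverse-CDF machinery mentioned in A1 and A3 concrete, here is a minimal NumPy sketch of a 1D Gaussian-mixture CDF built on the Abramowitz-Stegun polynomial approximation of erf (formula 7.1.26, max absolute error about 1.5e-7), with the inverse CDF obtained by interpolating the monotone CDF on a value grid. This is an illustrative reconstruction under stated assumptions, not the authors' PyTorch implementation; function names, grid sizes, and the 6-sigma value range are our own choices.

```python
import numpy as np

def erf_approx(x):
    # Abramowitz-Stegun 7.1.26 polynomial approximation of erf.
    a1, a2, a3, a4, a5 = 0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429
    p = 0.3275911
    sign, ax = np.sign(x), np.abs(x)
    t = 1.0 / (1.0 + p * ax)
    poly = ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t
    return sign * (1.0 - poly * np.exp(-ax * ax))

def norm_cdf(x):
    # Standard normal CDF via the erf approximation.
    return 0.5 * (1.0 + erf_approx(np.asarray(x, dtype=float) / np.sqrt(2.0)))

def gmm_cdf(x, weights, means, stds):
    # CDF of a 1D Gaussian mixture: weighted sum of component CDFs.
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(weights * norm_cdf((x - means) / stds), axis=-1)

def gmm_icdf(q, weights, means, stds, num_grid=4096):
    # Inverse CDF by interpolating the monotone CDF on a grid over the value range.
    lo, hi = np.min(means - 6 * stds), np.max(means + 6 * stds)
    grid = np.linspace(lo, hi, num_grid)
    return np.interp(q, gmm_cdf(grid, weights, means, stds), grid)
```

Because the mixture CDF is monotone, `np.interp` over a dense grid gives a cheap, vectorized inverse without numerical root-finding, which mirrors the efficiency argument above.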
--- **Q4: References** **A4:** Thank you for pointing this out. These references are indeed important and representative works in UDA for semantic segmentation. We will include them in the Related Work section in the revised version and compare with them in our experiments. --- We hope our response can resolve your concern. Please do not hesitate to let us know if you have further questions.
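To illustrate the "parallel update via matrix operations" described in A1 above, the following sketch stores the parameters of all per-class-pair 1D GMMs as arrays of shape $(C, C, K)$ and performs one EM step for all of them simultaneously. The sizes, the symmetric initialization, and the plain batch-EM update are illustrative assumptions for this sketch, not the authors' online PyTorch implementation.

```python
import numpy as np

C, K, N = 3, 2, 400   # illustrative: C x C class pairs, K components, N samples per pair

def em_step(x, mu, var, w, eps=1e-8):
    """One EM update for all C*C one-dimensional GMMs in parallel.
    x: (C, C, N) logit samples; mu, var, w: (C, C, K) component parameters."""
    xk = x[..., None]                                     # (C, C, N, 1)
    muk, vark, wk = mu[..., None, :], var[..., None, :], w[..., None, :]
    log_pdf = -0.5 * ((xk - muk) ** 2 / vark + np.log(2 * np.pi * vark))
    resp = wk * np.exp(log_pdf)                           # responsibilities, (C, C, N, K)
    resp /= resp.sum(axis=-1, keepdims=True) + eps
    Nk = resp.sum(axis=-2) + eps                          # effective counts, (C, C, K)
    mu_new = (resp * xk).sum(axis=-2) / Nk
    var_new = (resp * (xk - mu_new[..., None, :]) ** 2).sum(axis=-2) / Nk
    return mu_new, np.maximum(var_new, eps), Nk / x.shape[-1]

rng = np.random.default_rng(0)
# Synthetic logits: each (c, l) pair mixes two Gaussians centered at -2 and +2.
x = np.concatenate([rng.normal(-2.0, 0.5, (C, C, N // 2)),
                    rng.normal(2.0, 0.5, (C, C, N // 2))], axis=-1)
mu = np.broadcast_to(np.array([-1.0, 1.0]), (C, C, K)).copy()
var = np.ones((C, C, K))
w = np.full((C, C, K), 1.0 / K)
for _ in range(30):
    mu, var, w = em_step(x, mu, var, w)
```

The point of the sketch is that no Python loop over the $C \times C$ pairs is needed: broadcasting updates every mixture in one pass, which is the same bookkeeping idea as the tensorized implementation described in A1.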
Clustering Items through Bandit Feedback: Finding the Right Feature out of Many
Accept (poster)
Summary: This paper studies a problem of clustering $n$ items into two groups using bandit feedback. This setting considers an $n \times d$ matrix $M$, where each row represents an item's feature vector. The $n$ rows are partitioned into two unknown groups, such that items within the same group share the same feature vector. The learner sequentially and adaptively selects an item $I_t \in [n]$ and a feature $J_t \in [d]$, and gets a noisy observation drawn from an unknown distribution with mean $M_{I_t,J_t}$. The goal is to recover the correct item partition with minimal observations. The authors propose an algorithm called `BanditClustering`, which operates in two steps: (1) identifying two representative items from different groups, and (2) selecting a discriminative feature to classify all items. The authors provide theoretical results such as a tight upper bound on the required budget and a matching instance-dependent lower bound. Numerical experiments validate the efficiency of the method compared to non-adaptive clustering approaches.

## update after rebuttal

Overall, I appreciate the technical novelty of this paper. Therefore, I decided to maintain my positive score. Claims And Evidence: All claims are well-supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: I have not checked all the proofs in detail but did not identify any obvious errors. Experimental Designs Or Analyses: The experiments rely solely on synthetic data and consider only uniform sampling and K-means as baselines. It would be better to compare against other adaptive clustering methods, such as the ones mentioned in the related work. Also, the comparison does not examine performance in highly imbalanced scenarios (when $\theta$ is small). Supplementary Material: I reviewed the source code in the supplementary material and found no obvious issues.
Relation To Broader Scientific Literature: This work extends the bandit clustering problem studied in prior works, such as Thuot et al. (2024) and Yang et al. (2024). Unlike these studies, where the entire noisy feature vector of an item is observed at each step, the setting introduced in this paper allows only a single noisy feature to be observed per step. This introduces a new and inherently more challenging problem. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths 1. This paper is well-organized and easy to follow. 2. This paper studies a new bandit clustering problem where at each step, only a single noisy feature of a given item is observed. This setting is different from previous studies where the entire noisy feature vector is observed. 3. The authors provide rigorous upper and lower bounds on sample complexity, ensuring that their method is provably efficient. Weaknesses 1. The algorithms and theoretical analysis assume only two clusters, which is a strong limitation. While the authors suggest that they could "straightforwardly extend" their methodology to multiple clusters, this extension is not formally developed or analyzed. 2. The paper primarily focuses on theoretical contributions and lacks strong motivation. Although the introduction includes a motivating example, it does not clearly articulate the significance of the problem. Other Comments Or Suggestions: Quotation marks in LaTeX should be formatted using `` and '' instead of " ". The legend in Figure 1 is unclear. What does the algorithm "Cluster" refer to? Also, it seems that the uniform sampling algorithm is not explicitly labeled. Furthermore, what does the shaded area indicate? Questions For Authors: 1. Why do the authors restrict their analysis to only two clusters? What challenges arise when extending the approach to multiple clusters? 2. 
How does the proposed algorithm perform when the data is imbalanced (i.e., $\theta$ is small), and how does it compare to the baselines? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First, we would like to thank you for your time and effort in reviewing our paper. Below, we address the key remarks and questions raised in your review:

- **Extension to $K>2$ clusters** In the paper, we analyze the case of two clusters, as even in this simpler setting, significant challenges arise in obtaining optimality. It turns out that our algorithms can be used as subroutines to handle the extension to $K$ clusters. To avoid redundancy, we invite you to read the answer to reviewer RdCv, where this extension is discussed in detail. When dealing with $K>2$ groups, it is significantly more challenging to find a strategy that adapts optimally to the relative positions of the centers of the $K$ groups; we leave this question for future work.

- **Dependency on the balancedness $\theta$** Thank you for the question. From a theoretical point of view, we prove in the paper that the balancedness parameter $\theta$ affects only the first step of our procedure, i.e., the identification of representatives. Furthermore, combining Proposition C.1 (analysis of Algorithm 2) with the lower bound in Lemma E.1, we are able to prove that our procedure exhibits the optimal dependency of the budget with respect to $\theta$. The main difficulty with very unbalanced clusters is that it is hard to detect a row in the smallest cluster. We will add further discussion about the dependency of our method on $\theta$ in the final paper. We have also run numerical experiments to investigate the influence of $\theta$; they will be included in the paper. In a first setup, we observe that $\theta$ indeed mainly influences the CandidateRow step of our clustering procedure and that its influence on the budget of this step is comparable to a factor $1/\theta$, which is consistent with our theoretical findings.
Furthermore, we can see that in this setup, our algorithm clearly outperforms uniform allocation strategies for very small values of $\theta$, while being competitive for larger values of $\theta$.

- **Suggestions** Thank you for your suggestions; we will correct all quotation marks and improve the explanation and legend of Figure 1.

- **Motivation** We thank the reviewer for pointing out that we should extend the discussion of our motivating example, which comes from image labeling. Recent work on distinguishing "doppelganger" animals uses expert annotators to ensure correct labeling (see [Herde, Marek, et al. "dopanim: A Dataset of Doppelganger Animals with Noisy Annotations from Multiple Humans." Advances in Neural Information Processing Systems 37 (2024): 51085-51117.]). This works because experts know which features are relevant for a correct distinction. If no experts are available, our approach still provides a solution, because we can find the relevant features without prior knowledge and use non-expert workers for the classification task.

- **Numerical experiments** As mentioned above, we will illustrate the dependency on $\theta$ in an additional numerical experiment. In addition, we will compare our method with the method from [Ariu et al. (2024), "Optimal clustering from noisy binary feedback"].
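Since the discussion above centers on procedures built from Sequential Halving, here is a generic best-arm-identification sketch of that subroutine. This is a textbook version with simplified budget splitting, not the paper's CSH variant; `pull`, `K`, and `budget` are illustrative names.

```python
import numpy as np

def sequential_halving(pull, K, budget):
    """Best-arm identification: halve the candidate set each round,
    spending roughly budget / ceil(log2 K) pulls per round.
    pull(k) returns one noisy reward of arm k."""
    arms = list(range(K))
    rounds = max(1, int(np.ceil(np.log2(K))))
    for _ in range(rounds):
        if len(arms) == 1:
            break
        t = max(1, budget // (len(arms) * rounds))   # pulls per surviving arm
        means = [np.mean([pull(k) for _ in range(t)]) for k in arms]
        keep = np.argsort(means)[::-1][:max(1, len(arms) // 2)]
        arms = [arms[i] for i in keep]
    return arms[0]
```

A quick usage example: with 8 arms whose means are all 0 except one arm with mean 1 (Gaussian noise, standard deviation 0.5), a budget of a few thousand pulls identifies the best arm with high probability.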
Summary: This paper investigates the problem of clustering items via bandit feedback. The items can be partitioned into two unknown groups that share the same feature vector within each cluster. The authors propose a sequential and adaptive setting where the learner can only select one item-feature pair per round. The objective is to accurately recover the correct partition while minimizing the number of observations. The paper presents an algorithm that identifies a relevant feature for clustering by leveraging the Sequential Halving algorithm. With probability at least $1-\delta$, the algorithm achieves accurate recovery of the partition, and the authors derive an upper bound on the required budget. Additionally, they establish an instance-dependent lower bound that is tight in certain relevant cases. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I checked the main content of the paper. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature:

## Contributions:

1. Introduction of a new bandit cluster model where only one item-feature pair can be selected in each round, distinguishing it from existing works that select an item with all features.
2. Development of the first lower bound that characterizes the fundamental difficulty of the bandit clustering problem.
3. Design of a near-optimal bandit clustering algorithm with theoretical guarantees.

Essential References Not Discussed: There is another line of work on online clustering of bandits. For example:

1. Gentile, Claudio, Shuai Li, and Giovanni Zappella. "Online clustering of bandits." In International Conference on Machine Learning, pp. 757-765. PMLR, 2014.
2. Li, Shuai, Wei Chen, and Kwong-Sak Leung. "Improved algorithm on online clustering of bandits." arXiv preprint arXiv:1902.09162 (2019).
3. Liu, Xutong, Haoru Zhao, Tong Yu, Shuai Li, and John CS Lui. "Federated online clustering of bandits."
In Uncertainty in Artificial Intelligence, pp. 1221-1231. PMLR, 2022.
4. Li, Zhuohua, Maoli Liu, Xiangxiang Dai, and John Lui. "Demystifying Online Clustering of Bandits: Enhanced Exploration Under Stochastic and Smoothed Adversarial Contexts." arXiv preprint arXiv:2501.00891 (2025).

Other Strengths And Weaknesses:

## Weaknesses:

1. In Section 6, the authors discuss two important extensions: increasing the number of groups ($K>2$) and handling heterogeneous groups, which would better model real-world applications. However, the discussion provides only vague ideas without specific results for each extension. More concrete analysis or preliminary results would strengthen the paper's practical relevance.
2. Regarding the upper bound result, there appears to be a factor of $d$ in the regret upper bound in Equation (3). It seems possible that fixing one feature to explore could remove this $d$ factor, potentially at the cost of a larger $\Delta$ term. The paper would benefit from a more thorough discussion of this trade-off, especially in cases where the $d$ improvement might dominate any negative effects on the $\Delta$ term.

Other Comments Or Suggestions:

## Minor Issues:

There are several typographical errors throughout the paper. For example, on line 230, a sentence is incomplete.

Questions For Authors: Please comment/discuss the weakness part. I am happy to raise my score if these questions are well discussed/solved.

-------------Post-rebuttal----------------

The reviewer thanks the authors for their detailed responses, including the discussion on extending to more than two groups ($K > 2$), addressing heterogeneous groups, elaborating on the trade-offs between $d$ and $\Delta$, and discussing the missing references. My concerns have been resolved, and I would like to raise my score. I encourage the authors to incorporate these discussions into the final version to improve clarity and completeness of the current work. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: First, we thank you for your time and effort in reviewing our paper. We will correct the typographical errors you identified. Below, we discuss the different weaknesses that you identified. We will provide intuitions and thorough discussions on these points in the final version of the paper.

- **Extension to $K>2$ clusters** Our algorithm identifies a discriminative feature to separate two groups. To achieve this, we arbitrarily fix a first item and find an item from a different cluster (Algorithm 2). Then, we identify a feature that separates the two clusters well, balancing the cost of identifying such a feature with the cost of classification (Algorithm 3). We proved that this requires a budget of $H(\mu^a-\mu^b)\log(1/\delta)$ (see Thm 3.1 and Eq. (3)). Here is a way to extend the method to $K>2$ clusters, assuming $K$ is known. We first identify $K$ representatives from different clusters, one at a time. If $k<K$ items have already been identified from $k$ different clusters, we can run $k$ calls of Algorithm 1 in parallel to identify an item whose mean vector differs from all the previously identified representatives. Repeating this operation $K$ times yields $K$ representatives. Then, we use these representatives to learn $K(K-1)/2$ discriminative features (one for each pair of representatives), ultimately classifying all items. Compared to the case $K=2$, the total budget then scales by a factor of $K^2 \times \min_{k\ne k'} H(\mu^k-\mu^{k'})$, where $\mu^1,\dots,\mu^K$ are the mean vectors of the $K$ clusters. This approach would be order-optimal if $K$ is considered a constant. We will add this procedure for $K>2$ clusters in the Appendix of the final version, along with this (non-optimal) bound. However, finding a distribution-dependent optimal strategy, taking into account the relative positions of all means $\mu^1,\dots, \mu^K$, remains highly non-trivial and is far beyond the scope of our paper.
- **Robustness to heterogeneity within the clusters** Actually, our method can be adapted to deal with problems where there exists some heterogeneity inside the groups. We invite you to read the answer in the rebuttal to reviewer AdRS.

- **Intuition on the complexity $H$ for sparse vectors** We would like to provide more intuition on the trade-off $H$ (see Eq. (3)). In particular, we will explain why a dependency on the dimension $d$ is unavoidable and why $H$ is the optimal trade-off between $d$ and $\Delta$. Most of our intuition about the problem lies in the simple case (Corollary 3.2), where the gap vector is $s$-sparse with a constant magnitude $h$. In this case, the squared $\ell_2$ norm of the gap vector $\Delta$ (appearing in $H$) becomes $\|\Delta\|_2^2=sh^2$. Before explaining the intuition behind this rate, we emphasize that the lower bound of Theorem 4.1 matches in this setting, thereby showing that our budget is indeed optimal. To perform the clustering task, we need to select a feature that discriminates the two groups. Once one item from each cluster has been identified, this problem is equivalent to finding a good arm of reward $h$ among $d$ arms. In this case, a simple algorithm -- which is at the core of our approach -- proceeds as follows.

- First, sub-sample $\frac{d}{s\theta}\log(1/\delta)$ entries from the matrix, so that, with high probability, a nonzero entry is selected. If $s=d$, the problem reduces to one dimension; if $s=1$, all features must be explored.
- Second, sample each of these entries $\frac{1}{h^2}\log(1/\delta)$ times, select the entry with the largest (absolute) mean, and store the associated feature.
- Finally, sample each item $\frac{1}{h^2}\log(1/\delta)$ times on this feature and classify the items.

In the paper, our algorithm (especially through the use of sequential halving) adapts to the unknown parameters $s$ and $h$, and leads to a budget $\frac{d}{\theta sh^2}\log(1/\delta)^2+\frac{n}{h^2}\log(1/\delta)$.
We can argue that all these steps cannot be improved with respect to $d,s,h$. We will provide more intuition on this optimal trade-off in the final version. Also, you can find a discussion on the optimality of $H$ for general shapes of $\Delta$ in the rebuttal of reviewer z8W5.

- **Literature on online clustering of bandits** We appreciate your suggestion regarding references on the problem of online clustering of bandits and its extensions. This problem is indeed related to our setting, as it involves exploring a bandit environment where the items exhibit an underlying clustering structure. We will include a discussion on the similarities and differences between our approach and this interesting line of research in the literature review of our paper. Still, we would like to point out that there are two major differences from our problem: (1) the learner has no control over which items are presented at each time step, and (2) the algorithms (such as CLUB and its extensions) are evaluated in the cumulative regret setting.
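As a sanity check of the simple three-step procedure for the $s$-sparse, magnitude-$h$ case sketched in the rebuttal above, here is a toy NumPy simulation. The two representatives are assumed already found by the first phase, and all constants (the factor-2 and factor-20 inflations, the instance sizes) are illustrative choices for the demo, not the paper's tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s, h, sigma = 40, 200, 10, 1.0, 1.0   # items, features, sparsity, gap, noise
delta = 0.01

# Hidden instance: two groups whose mean vectors differ by h on s features.
labels = rng.integers(0, 2, size=n)
support = rng.choice(d, size=s, replace=False)
M = np.zeros((n, d))
M[np.ix_(labels == 1, support)] = h

def observe(i, j):
    # One noisy bandit observation of entry (i, j).
    return M[i, j] + sigma * rng.standard_normal()

# Representatives from distinct groups (assumed given by the first phase).
i0 = int(np.flatnonzero(labels == 0)[0])
i1 = int(np.flatnonzero(labels == 1)[0])

# Step 1: subsample ~ (d/s) log(1/delta) features so a nonzero-gap one appears w.h.p.
m = min(d, 2 * int(np.ceil((d / s) * np.log(1 / delta))))
cand = rng.choice(d, size=m, replace=False)

# Step 2: estimate the gap on each candidate ~ (1/h^2) log(1/delta) times
# (constant inflated for this toy demo) and keep the best feature.
reps = 20 * int(np.ceil((1 / h**2) * np.log(1 / delta)))
gap_est = np.array([
    np.mean([observe(i1, j) for _ in range(reps)])
    - np.mean([observe(i0, j) for _ in range(reps)])
    for j in cand
])
jstar = cand[np.argmax(np.abs(gap_est))]

# Step 3: classify every item by proximity to the representatives on feature jstar.
mu0 = np.mean([observe(i0, jstar) for _ in range(reps)])
mu1 = np.mean([observe(i1, jstar) for _ in range(reps)])
est = []
for i in range(n):
    mi = np.mean([observe(i, jstar) for _ in range(reps)])
    est.append(int(abs(mi - mu1) < abs(mi - mu0)))
est = np.array(est)
accuracy = max(np.mean(est == labels), np.mean(est != labels))  # up to label swap
```

With these (generous) sample counts the recovered partition should match the hidden one, illustrating why the budget is driven by the subsampling term in $d/(s\theta h^2)$ plus the classification term in $n/h^2$.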
Summary: This paper addresses the problem of clustering items based on bandit feedback in a sequential and adaptive setting. Each of $n$ items is characterized by a $d$-dimensional feature vector, and the items are partitioned into two unknown groups where items within the same group share the same feature vector. The learner sequentially selects an item and a feature, observes a noisy evaluation, and aims to recover the correct partition with a small number of observations. The main contribution is the BanditClustering procedure, which operates in three steps:

- Identifying two items from distinct groups
- Finding a discriminative feature between them
- Using this feature to cluster all items.

The authors provide non-asymptotic bounds on the budget required and prove their algorithm's optimality through matching lower bounds in certain cases. The approach leverages techniques from best arm identification (Sequential Halving) and adaptive sensing strategies. The paper presents a theoretically sound algorithm for clustering with bandit feedback that achieves optimal sample complexity by efficiently identifying discriminative features. The paper includes theoretical analysis characterizing the sample complexity based on the difficulty of the clustering task, which is quantified by the difference between feature vectors of the two groups. Experimental results demonstrate the advantage of their adaptive approach over uniform sampling, particularly in sparse regimes. Claims And Evidence: The main claims of the paper are supported by convincing theoretical and empirical evidence:

- The upper bound on sample complexity (Theorem 3.1) is rigorously proven and demonstrates how the algorithm adapts to the sparsity pattern of the gap vector.
- The lower bounds (Theorem 4.1) establish the optimality of the approach in certain regimes, particularly when the gap vector takes only two values.
- The experimental results in Section 5 support the theoretical findings, showing the algorithm's superior performance compared to uniform sampling in sparse regimes.

The claims about the algorithm's adaptivity to unknown structures are well-supported by the theoretical analysis, which shows how the budget scales with problem-dependent parameters without requiring prior knowledge of these parameters. Methods And Evaluation Criteria: The methods and evaluation criteria are generally sound, though there might be room for improving sample efficiency:

- The three-stage approach, with its primary focus on identifying a single feature, fits well with the narrative of sparse feature-distinction vectors.
- The budget (number of observations) is an appropriate metric for evaluating efficiency in the bandit setting.
- The theoretical analysis properly characterizes the algorithm's performance in terms of relevant problem parameters (gap magnitude, sparsity, balancedness).

However, there appears to be potential inefficiency in the sampling strategy, particularly regarding samples from row 1. Perhaps better bookkeeping of which samples are already present would help? Theoretical Claims: I skimmed through the details of the proofs; they appeared correct. My primary focus was on:

- Lemma B.1 (guarantees for the CSH algorithm)
- Theorem 3.1 (upper bound)
- Theorem 4.1 (lower bound)

The theoretical analysis is generally sound. I noticed an issue in Section 3, where there appears to be a typo: $μ_{ij}-μ_{ij}$ should likely be $μ_{ic,j}-μ_{1,j}$ or similar? Experimental Designs Or Analyses: Summary of Experimental setup:

- Experiment 1 examines how sparsity affects budget requirements, comparing BanditClustering against uniform sampling with K-means.
- Experiment 2 studies how the budget scales with problem dimension when the sparsity is fixed.

These experiments support the theoretical claims about adaptivity to sparsity and scaling with problem dimension.
The comparison with uniform sampling is informative, demonstrating clear advantages of the adaptive approach. However, the experimental section is relatively brief compared to the theoretical analysis. As a proof-of-concept it works, though more detailed experiments wouldn't be a bad addition. Supplementary Material: The main sections in the supplemental material correspond to:

- Notation
- Proofs of the performance statements of Algorithms 1, 2, and 3
- Lower bound proof

I have tried to go through the supplemental material and it seems correct (at least no obvious mistakes), but it is possible that I might have missed some crucial detail. Relation To Broader Scientific Literature: The paper appropriately positions its contributions within several related research areas:

- **Best arm identification literature:** The paper builds on Sequential Halving algorithms and extends these techniques to the clustering context.
- **Adaptive sensing strategies:** The three-phase approach is definitely a nice, novel addition to the sparse sensing literature.
- **Dueling bandits:** The clever isolation of the 1-d dueling bandit setup as a sub-sampler is a good addition.
- **Other bandit clustering problems:** Limited impact here, since the setup is of a bipartite-graph kind.

The authors claim their approach allows for more efficient budget allocation by focusing on the most relevant features, leading to lower observation budgets than previous methods, particularly in high-dimensional settings. Essential References Not Discussed: The paper does a good job of comparing with other bandit clustering frameworks but could potentially elaborate more on the quantitative differences in sample complexity. I don't identify any critical omissions in the literature review. Other Strengths And Weaknesses: **Strengths:** The paper's main strengths lie in its theoretical rigor and adaptive approach, while limitations include the restricted problem setup and questions about practical applicability.
- Clearly written paper with well-structured arguments and explanations.
- The paper provides a clear theoretical characterization of the sample complexity, showing how it depends on the sparsity pattern of the gap vector.
- The algorithm is adaptive to unknown structures without requiring prior knowledge of the problem parameters.
- The theoretical analysis is rigorous, with matching upper and lower bounds in certain regimes.
- The three-step approach (finding representatives, identifying discriminative features, clustering) is conceptually clear and intuitive.
- The experimental results convincingly demonstrate the advantage over uniform sampling approaches.

**Weaknesses:**

- The setup is limited to binary partitioning (just two clusters) with identical feature vectors within clusters. How the computational complexity and regret scale with more clusters is an important open question.
- The algorithm seems to oversample from row 1, raising questions about sample efficiency. The paper doesn't explore potential downstream effects or implications for regret bounds.
- The practical impact of the theoretical findings on real-world applications is not extensively discussed.
- The experimental section is relatively brief compared to the theoretical analysis. It would be valuable to see how the algorithm performs on a real-world dataset where the feature sparsity pattern is naturally occurring rather than artificially constructed.

Other Comments Or Suggestions:

- The paper would benefit from a more intuitive explanation of why the specific trade-offs in the algorithm design are optimal, perhaps with a simplified example.
- The paper mentions budget comparison with other approaches in the introduction but doesn't provide detailed quantitative comparisons in the main text. (Maybe I missed this.)

Questions For Authors:

- Is there a way to use the samples from row 1 more efficiently? The current approach seems to involve significant oversampling from this row.
Could this sampling strategy be optimized, and what would be the impact on the theoretical guarantees? - How would the algorithm perform in settings with more than two clusters? Would the approach of finding discriminative features scale well, or would the sample complexity increase significantly? - The paper focuses on the case where items within the same group have identical feature vectors. How robust is the approach to small variations within groups? Could the analysis be extended to allow for some heterogeneity, e.g., of the form where there is a linear separator between the two groups? There is some discussion, but I would like to know more. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would first like to thank you for your time and effort in reviewing our paper, and for your insightful questions. We now overview the remarks and questions that you formulated in your review:

- **Oversampling from row 1** First, we would like to emphasize that improving the budget compared to non-active settings requires over-sampling certain rows, which we refer to as representatives, following [Thuot et al. (2025) "Clustering with bandit feedback: breaking down the computation/information gap"]. This step is crucial for accurately estimating the means $\mu_1$ and $\mu_2$ and, ultimately, for accelerating the clustering task. This is not, per se, a weakness of our procedure but rather an essential aspect for achieving order-wise budget optimality (up to logarithmic factors). Notably, we chose row 1 as the first representative, but by symmetry, any randomly selected row could have served the same purpose. Second, it may indeed be possible to slightly reduce the budget allocated to the first row by reusing previously sampled data. For instance, if an index pair $(i,j)$ is identified such that $|\mu_{ij}-\mu_{1j}|$ is large, we can allocate $c\frac{n}{\hat{\Delta}^2}\log(n/\delta)$ samples to estimate $\mu_{1j}$ and $\mu_{ij}$ once and for all. These estimates could then be reused for classifying the remaining rows, reducing the confidence bounds required for perfect classification by a constant factor. However, applying such refinements in other parts of the algorithm would significantly reduce its transparency, both in terms of implementation and theoretical analysis. Moreover, the resulting improvement in the budget would be limited to a factor of 2. For the final version, we will keep our method as it is, but we will clarify our motivations and highlight potential ways to improve numerical constants.
- **Extension to $K>2$ clusters** In this manuscript, we focused on the case of two groups because it serves as an informative baseline for analyzing the optimal trade-off for the problem. We will add to the appendix an algorithm that extends our method to the general case ($K>2$), together with an analysis of its budget, which scales with a factor of $K^2$. However, obtaining the optimal dependency with respect to $K$ is significantly more challenging. We invite you to read the detailed discussion on this point in our response to Reviewer RdCv. - **Robustness to heterogeneity within the clusters** As mentioned on page 8, our algorithm can be adapted to handle some degree of heterogeneity by appropriately adjusting certain tuning parameters. For instance, assume that we have prior knowledge that, for any feature $j\in[d]$, the within-group variation satisfies $\max_{g(i)=g(i')} |\mu_{ij}-\mu_{i'j}|\leqslant c\min_{g(i) \ne g(i')} |\mu_{ij}-\mu_{i'j}|$. If $c<1/4$, our algorithm remains correct -- searching for a single discriminative feature is still meaningful and enables classification. However, if the within-group heterogeneity is comparable to the inter-group differences in some features, our method could fail, and further investigation would be required. Observe also that if $c=1/2$, the problem is not identifiable anymore. We will expand on this discussion in the final version. - **Discussion on the optimal trade-off $H$** We appreciate your suggestion and will provide more intuition about the complexity term $H$ (see Eq. (3)), along with a discussion on why we believe it is optimal. Also, we will base our discussion, as for the upper bound, on the simple example of sparse vectors. You can find a discussion on the intuitions about $H$ in the sparse setting in the rebuttal of review RdVc, and a discussion on its optimality for general values of $\Delta$ in the rebuttal of review z8W5.
- **Budget comparison with the literature** We acknowledge that further quantitative comparisons with other approaches and existing literature are missing. We will add this discussion in the final version; in particular, we will discuss the benefits of our model in high-dimensional settings. Thank you for pointing this out. - **Real-world applications** While we described in the introduction that image labeling might be a very natural application for our algorithm, it seems challenging to process existing datasets into the form required by the bandit framework. We agree that the paper would benefit from real-world experiments, but since this work is intended to focus on an in-depth theoretical investigation, we decided to only consider synthetic data. A possibility could be to set up experiments using crowdsourcing platforms or a group of experts. One could use image data (as done in [Herde, Marek, et al. "dopanim: A Dataset of Doppelganger Animals with Noisy Annotations from Multiple Humans." Advances in Neural Information Processing Systems 37 (2024): 51085-51117.]), but ask experts about specific features of depicted animals instead of the concrete class. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed responses, especially regarding my oversampling query. I encourage the authors to incorporate these discussions (especially oversampling and budget comparison) into the final version to improve clarity and completeness of the current work. My concerns have been resolved.
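To make the $c<1/4$ robustness condition from the heterogeneity discussion above concrete, here is a small illustrative simulation (our own sketch with a hypothetical helper name, not the paper's procedure). It ignores observation noise, which the actual algorithm handles via confidence bounds, and checks only the geometric argument: a midpoint threshold on one discriminative feature classifies perfectly when within-group variation stays below $c$ times the between-group gap.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_feature_separates(mu1, mu2, c, n=200, rng=rng):
    """Check that thresholding one discriminative feature at the
    midpoint classifies perfectly when within-group variation is
    at most c * |mu2 - mu1| (assumes mu2 > mu1, noiseless means)."""
    gap = abs(mu2 - mu1)
    g = rng.integers(0, 2, n)                            # true groups
    centers = np.where(g == 0, mu1, mu2)
    means = centers + rng.uniform(-c * gap, c * gap, n)  # heterogeneity
    pred = (means > (mu1 + mu2) / 2).astype(int)         # midpoint rule
    return bool(np.all(pred == g))
```

With $c=0.2<1/4$ the within-group spread ($0.2\cdot\mathrm{gap}$) never reaches the midpoint margin ($0.5\cdot\mathrm{gap}$), so perfect classification holds deterministically.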
Summary: This paper considers the problem of classifying $ n $ items into two categories based on bandit feedback. Each item is associated with $ d $ features that vary depending on the class it belongs to. In each observation, an item and a feature are selected, and the corresponding feature value is observed with noise. Under this setting, the paper focuses on the *fixed confidence* regime, evaluating the minimum number $T$ of observations required to achieve a given accuracy level. It proposes an algorithm along with an upper bound and a nearly matching lower bound. Additionally, the proposed algorithm is evaluated through numerical experiments. Claims And Evidence: The main contributions of this paper are theoretical, and all appear to be supported by correct proofs. Methods And Evaluation Criteria: The evaluation metric used in this paper is natural and reasonable. Theoretical Claims: I have briefly checked the proofs of Theorem 3.1 and Theorem 4.1. No particular issues were found. Experimental Designs Or Analyses: I was not able to spend much time verifying the correctness of the numerical experiments. Supplementary Material: I have briefly checked the proofs of Theorem 3.1 and Theorem 4.1. Relation To Broader Scientific Literature: The problem setting considered in this paper is a special case of the problem studied in (Ariu et al., 2024). However, to the best of my knowledge, this is the first paper to provide theoretical guarantees and analysis for this type of problem. Essential References Not Discussed: I am not aware of any particular relevant literature not discussed. Other Strengths And Weaknesses: As mentioned in Section 4, one limitation is that the matching upper and lower bounds are restricted to specific cases of $ \Delta $. Additionally, the fact that the number of groups is limited to two poses a practical constraint. 
While Section 6 outlines a general approach for extending the method to three or more groups, obtaining tight upper and lower bounds in such cases would likely require nontrivial further analysis. That being said, this paper serves as a strong starting point for exploring broader problem settings. Overall, the paper is written in a very clear and readable manner. Other Comments Or Suggestions: There seems to be an extraneous period at the end of *Extension to a larger number of groups.* in Section 6. Questions For Authors: At the end of Section 4, it is stated, *"For more general $ \Delta $, we conjecture that the trade-off in $ H $ in (3) is optimal and unavoidable."* Could you provide any supporting facts or intuitions that lead to this conjecture? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First, we would like to thank you for your time and effort in reviewing our paper. We corrected the small typo in Section 6 that you pointed out. Besides this, we would like to provide more insights regarding the two limitations you mentioned. - **Discussion on the optimal trade-off $H$ (Eq. (3)) for general $\Delta$** In Corollary 3.2, we prove that our procedure is optimal when the gap vector is $s$-sparse with a constant magnitude $h$. You can find a discussion on the intuitions behind $H$ in this sparse setting in the rebuttal to reviewer RdVc. For a general gap vector $\Delta$, we have good reasons to think that understanding the optimality of this trade-off in this simple example allows us to understand (at least intuitively) the optimality for general vectors. Our upper bound consists of two terms. The first term, $\frac{d\log(1/\delta)}{\theta||\Delta||^2}$, was proved optimal (see Thm 4.1) for the sub-problem of identifying an item in the second group. The second term is the minimum $\min_{s\in[d]} \left[\frac{d}{s\Delta_{(s)}^2} + \frac{n}{\Delta_{(s)}^2}\right] \log(1/\delta)$. Indeed, the term $\frac{n}{\Delta_{(s)}^2} \log(1/\delta)$ is the price of clustering if we use a feature with a gap $|\Delta_{(s)}|$, while $\frac{d}{s\Delta_{(s)}^2}\log(1/\delta)$ corresponds to the price of selecting a feature with magnitude at least $|\Delta_{(s)}|$. Define the effective sparsity $s^*$ as the value of $s$ at which the minimum is attained, and the effective magnitude as $\Delta_{(s^*)}$. Intuitively, we can argue that entries significantly larger than $\Delta_{(s^*)}$ are too rare to be detected (otherwise, $s^*$ would be smaller), and entries much smaller than $\Delta_{(s^*)}$ are too weak to be used for classification.
Our insight is that the problem is as hard as if the gap vector were $s^*$-sparse with a constant magnitude $\Delta_{(s^*)}$, a setting where we have matching lower and upper bounds (see Corollary 3.2), leading to our complexity. We believe that formally proving this optimality is difficult in general. We currently have a proof sketch in which we reduce the problem to a good-arm identification problem -- we prove that any algorithm solving the problem must be able to identify a feature larger (up to a constant) than $\Delta_{(s^*)}$ in the gap vector. However, we would then need an instance-dependent lower bound for such a problem. Unfortunately, unlike best-arm identification, such a lower bound does not yet exist in the literature (see [Zhao et al. (2023), Chaudhuri and Kalyanakrishnan (2019), Katz-Samuels and Jamieson (2020), De Heide et al. (2021)]). We are currently working on such a lower bound; we believe this would be a more valuable contribution in a separate paper fully dedicated to good-arm identification. - **Extension to $K>2$ clusters** In this manuscript, we focused on the case of two groups because it serves as an informative baseline for analyzing the optimal trade-off in clustering with bandit feedback. This problem had not been studied before, even in the binary setting. Naturally, extending the approach to $K>2$ groups is an important and practical direction. We will add to the appendix an algorithm that extends our method to the general case where $K>2$, together with an analysis of its (non-optimal) budget. However, as you observed, obtaining the optimal dependency with respect to $K$ is significantly more challenging. We invite you to read the detailed discussion on this point in our response to Reviewer RdCv.
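For intuition, the trade-off defining $H$ can be evaluated numerically. The sketch below (our own helper name, constants and problem-specific factors dropped) computes the two terms of the upper bound and returns the effective sparsity $s^*$ at which the minimum is attained.

```python
import numpy as np

def complexity_H(delta, n, theta, conf=0.05):
    """Illustrative budget complexity in the spirit of Eq. (3):
    the cost of identifying an item of the second group, plus a
    minimum over sparsity levels s trading off feature search
    against clustering. Constants are dropped; intuition only."""
    d = len(delta)
    log_term = np.log(1.0 / conf)
    # Delta_(s): entries of the gap vector by decreasing magnitude
    mags = np.sort(np.abs(np.asarray(delta, dtype=float)))[::-1]
    term1 = d * log_term / (theta * np.sum(np.asarray(delta) ** 2))
    s = np.arange(1, d + 1)
    # d/(s*Delta_(s)^2): price of finding a feature of magnitude
    # at least Delta_(s); n/Delta_(s)^2: price of clustering with it
    tradeoff = (d / (s * mags ** 2) + n / mags ** 2) * log_term
    s_star = int(s[np.argmin(tradeoff)])
    return term1 + float(tradeoff.min()), s_star
```

For a gap vector with two large entries among many tiny ones, the minimum is attained at $s^*=2$: weaker entries are too small to be used for classification, matching the intuition above.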
BinauralFlow: A Causal and Streamable Approach for High-Quality Binaural Speech Synthesis with Flow Matching Models
Accept (poster)
Summary: This paper explores the streaming generation of high-quality binaural audio from monaural audio, considering the spatial positions of both the speaker and listener. Specifically, the task is approached as a generative problem, utilizing a flow matching model to generate stochastic binaural details absent in the monaural input. The paper also proposes a continuous inference pipeline to support streaming rendering. Both objective and subjective metrics demonstrate the superiority of the proposed approach. ## Update after rebuttal Thank you for your rebuttal response. I appreciate the added experimental results regarding inference speed. I suggest incorporating these into the main paper. Though, I still have reservations about the method motivation and the technical distinction between the proposed method and the previous works. Therefore, I will maintain my original score. Claims And Evidence: The paper claims to achieve high-quality and realistic binaural audio rendering with faster inference speed. The evidence provided includes performance comparisons on an in-house test set. However, the evidence is not sufficiently convincing for several reasons. First, the evaluation results on a public dataset (Appendix F) show that the proposed approach performs on par with the baseline, except for the Wave L2 metric, which is significantly worse than the baseline. This suggests that the overall performance improvement may be limited. Second, necessary details about the baseline model are missing. For example, it is unclear whether the baseline model was trained on the same dataset or tested in an out-of-domain scenario. Additionally, the model size is not reported, which is essential for evaluating inference speed, as model size directly impacts computational efficiency. These details are crucial for a fair comparison and to validate the claims of improved performance and speed. 
Methods And Evaluation Criteria: The enhancements in effectiveness and efficiency are indeed valuable for the binaural audio rendering task and practical applications, though a more thorough exploration of the underlying motivations of the proposed methods would help clarify the rationale and ensure the contributions are fully understood. Theoretical Claims: This work primarily focuses on practical application rather than theoretical development. I specifically reviewed the formulas related to flow matching, and they seem to be correctly formulated. Experimental Designs Or Analyses: Some questions about the experimental design are mentioned in the “Claims and Evidence” section. Additionally, I would like to understand the rationale behind the choice of the SGMSE baseline, since its original paper may not explicitly address the binaural audio task. I am also confused by the omission of SGMSE results in Table 4. Lastly, I suggest clarifying the reasons for the inconsistent scales used in the L2 error reporting (1e-5 in Table 1 vs. 1e-3 in Table 4). Supplementary Material: Yes. I reviewed the supplementary material and appendix. Relation To Broader Scientific Literature: The key contributions of this paper build on prior work by continuing the paradigm of framing this task as a generative problem. The authors focus on adapting the model architecture and some implementation details to enhance both performance and efficiency. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper demonstrates notable improvements in subjective testing, which underscores the effectiveness of the proposed approach. Overall, however, the technical contributions appear relatively incremental. The paper could benefit from a more thorough discussion of the motivations behind the proposed improvements. To some extent, the work reads more as a technical report, which, while valuable, may limit its broader impact. Other Comments Or Suggestions: No.
Questions For Authors: 1. The discussion on the differences between this work and simplified flow matching in Section 3.2 is somewhat perplexing, particularly regarding the inclusion of task-related conditions (such as mono audio). The current text appears to emphasize experimental results, such as the impact of noise scale variations on training stability, rather than clearly articulating the distinctions between the two methods. Could the authors further clarify these distinctions to provide a clearer understanding of their respective approaches and implementations? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for your constructive comments. We will address your concerns below and will revise our paper following your suggestions. **Performance on the public dataset** (Claims And Evidence) We agree that the proposed method performs comparably to the baseline (BinauralGrad) on some metrics, but we respectfully clarify that our approach outperforms the baseline on **key perceptual metrics** (PESQ and MRSTFT). These perceptual metrics are often prioritized in audio synthesis tasks as they correlate better with human perception of quality and intelligibility, whereas minor waveform-level differences (L2) may not impact perceived audio quality. **Implementation details of baselines** (Claims And Evidence) For all baselines, we trained the models from scratch on our new dataset and tested them on the same dataset. We did not perform pretraining or cross-dataset evaluation. We will include additional implementation details in our revision. **Model size and speed** (Claims And Evidence) We thank the reviewer for pointing out this issue. We present the model size and inference speed for all baselines below. We test the inference speed on a single 4090 GPU. The audio sampling rate is 48 kHz, and the audio length is 683 ms. As shown in the table, our model achieves the fastest inference speed among generative models. Our model achieves a more favorable trade-off between performance (Table 1) and inference speed compared to the baseline approaches.
|Methods|Type|NFE|Speed (ms)|Model Size (MB)|
|:-|:-:|:-:|:-:|:-:|
|SoundSpaces 2.0|-|1|-|-|
|2.5D Visual Sound|R|1|1.1|82.0|
|WaveNet|R|1|21.0|32.7|
|WarpNet|R|1|21.9|32.8|
|BinauralGrad|G|6|221.1|52.9|
|SGMSE|G|30|770.2|273.6|
|BinauralFlow (Ours)|G|6|163.0|314.5|

**Method motivation** (Methods And Evaluation Criteria, Other Strengths And Weaknesses) Real-time, high-quality spatial audio is essential for immersive applications such as gaming, VR/AR, and cinematic experiences. These applications motivate the design of a high-quality, efficient, and streaming binaural audio generation framework. For **quality**, we frame the binaural audio rendering task as a generative problem and introduce a high-fidelity generative model. To improve inference **efficiency**, we choose flow matching models over diffusion models because they render high-fidelity spatial audio with fewer inference steps. We further reduce inference steps by using the midpoint solver and an early skip strategy. To satisfy the **streaming** requirement of real-world applications, we design our causal U-Net architecture and continuous inference pipeline that processes audio input in chunks. Our method is designed to meet real-world needs, and our model successfully addresses these challenges. **Use of SGMSE baseline** (Experimental Designs Or Analyses) Although SGMSE is not intended for binaural audio rendering, we included it as a baseline due to its U-Net-based diffusion model. While BinauralGrad also uses a diffusion model, it employs a WaveNet architecture. We used SGMSE to examine the impact of different architectures on the results. **Omission of SGMSE results in Table 4** (Experimental Designs Or Analyses) Since SGMSE was not evaluated on the public dataset, it was not included in Table 4. To expand our experiment, we test SGMSE on the public dataset and report its performance below.
We will include these results in the revision.

|Methods|PESQ$\uparrow$|MRSTFT$\downarrow$|Wave L2$\downarrow$|Amplitude L2$\downarrow$|Phase L2$\downarrow$|
|:-|:-:|:-:|:-:|:-:|:-:|
|SGMSE|2.256|1.352|0.230|0.033|0.983|
|BinauralFlow (Ours)|2.806|1.252|0.192|0.030|0.918|

**L2 error scale** (Experimental Designs Or Analyses) The difference in scale is due to the audio volume in the two datasets not being the same. Since the audio volume in our collected dataset is lower than in the public dataset, we used a different scaling factor during testing. **Difference between BinauralFlow and simplified flow** (Questions For Authors) The key difference between our approach and the simplified flow lies in the flow function and vector field. Our flow function is defined as $\phi_t(\mathbf{z}) = t\mathbf{y} + (1-t)\mathbf{x} + (1-t)\sigma \mathbf{\epsilon}$, where $\mathbf{x}$ and $\mathbf{y}$ are mono and binaural audio, respectively, $\mathbf{\epsilon}$ is Gaussian noise, $t$ is the time step, and $\sigma = 0.5$ in our experiments. The simplified flow uses $\phi_t(\mathbf{z}) = t\mathbf{y} + (1-t)\mathbf{x} + \sigma \mathbf{\epsilon}$ with $\sigma$ near zero (e.g., $1e{-}4$), reducing to a deterministic interpolation $\phi_t(\mathbf{z}) = t\mathbf{y} + (1-t)\mathbf{x}$. Consequently, our vector field $\mathbf{y}-\mathbf{x}-\sigma\mathbf{\epsilon}$ introduces stochasticity, enabling variability in generative tasks, unlike the deterministic $\mathbf{y}-\mathbf{x}$ in the simplified flow.
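The flow function and its vector field described above can be written out directly; a minimal numpy sketch of the construction (function names are ours, not the paper's code):

```python
import numpy as np

def flow_point(x, y, t, eps, sigma=0.5):
    """phi_t(z) = t*y + (1-t)*x + (1-t)*sigma*eps: the stochastic
    interpolation from mono audio x toward binaural audio y."""
    return t * y + (1 - t) * x + (1 - t) * sigma * eps

def flow_target(x, y, eps, sigma=0.5):
    """d/dt phi_t = y - x - sigma*eps: the vector field the network
    regresses onto during flow matching training."""
    return y - x - sigma * eps
```

At $t=1$ the flow point is exactly $\mathbf{y}$, and setting $\sigma$ near zero recovers the deterministic target $\mathbf{y}-\mathbf{x}$ of the simplified flow.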
Summary: This paper proposes a streaming binaural speech synthesis method using a causal architecture design and flow matching models. It introduces a flow matching model to generate binaural speech from a single-channel input. Additionally, it adopts a causal architecture to predict the next frames based on past information for streaming inference. To efficiently estimate the vector fields, buffers for the features at each sampling step are stacked, and only the current frame is computed using them. The results show better performance than other baselines. ## Update after rebuttal I remain positive about the paper and will keep my score. Claims And Evidence: [Flow Matching for Binaural Audio Generation] Based on previous work [1], [2], [3], flow matching models have already been verified to effectively generate high-quality waveform signals. The related works on flow matching-based waveform generation are not discussed in the current manuscript. [1] https://openreview.net/forum?id=tQ1PmLfPBL [2] https://openreview.net/forum?id=gRmWtOnTLK [3] https://openreview.net/forum?id=uxDFlPGRLX [Future Frame Generation based on the past information] In terms of generative tasks, this paper adopts future frame generation using flow matching based solely on past information. This approach has potential for streaming generation in real-time applications. However, the current paper does not describe the real-time capabilities; it only discusses streaming generation in terms of a smaller NFE and its causal design. Please evaluate the real-time factor for streaming generation, as it is important to demonstrate real-time streaming ability if the paper claims to support streaming generation. [Causal Model Design with Buffer] I like the proposed causal design for streaming generation using buffers at each sampling step. However, this structure was already proposed in CosyVoice 2, which employs a chunk-aware causal flow matching model for streaming synthesis. 
Furthermore, the model structure is identical to the Matcha-TTS-based U-Net architecture. However, this paper does not refer to Matcha-TTS. Please discuss the differences between the proposed models and those in Matcha-TTS and CosyVoice 2. CosyVoice 2 was released in Dec. 2024, so I consider this a concurrent work. Feel free to add the discussion about this. [4] Mehta, Shivam, et al. "Matcha-TTS: A fast TTS architecture with conditional flow matching." ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. [5] Du, Zhihao, et al. "Cosyvoice 2: Scalable streaming speech synthesis with large language models." arXiv preprint arXiv:2412.10117 (2024). [Comparison with Parallel Model] It would be better if an ablation study with a parallel model were added. [Early Skip] This paper only empirically claims that the early skip strategy could improve performance. Also, the ratio of 0.5 was chosen heuristically. I recommend using the Sway Sampling of F5-TTS with different coefficients. [6] Chen, Yushen, et al. "F5-tts: A fairytaler that fakes fluent and faithful speech with flow matching." arXiv preprint arXiv:2410.06885 (2024). Methods And Evaluation Criteria: Evaluation metrics are limited in supporting the streaming ability. Please add more evaluation metrics regarding the real-time factor, and compare the performance with or without the buffer to support the efficient causal model design. Theoretical Claims: . Experimental Designs Or Analyses: The evaluation appears to be conducted on a speech dataset. It would be beneficial to include results from a sound dataset as well. Supplementary Material: Please add a demo page. It is very difficult to listen to the audio files when they are only available in the supplementary material. I think an audio paper should have a demo page. Relation To Broader Scientific Literature: .
Essential References Not Discussed: [1] https://openreview.net/forum?id=tQ1PmLfPBL [2] https://openreview.net/forum?id=gRmWtOnTLK [3] https://openreview.net/forum?id=uxDFlPGRLX [4] Mehta, Shivam, et al. "Matcha-TTS: A fast TTS architecture with conditional flow matching." ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. [5] Du, Zhihao, et al. "Cosyvoice 2: Scalable streaming speech synthesis with large language models." arXiv preprint arXiv:2412.10117 (2024). Other Strengths And Weaknesses: . Other Comments Or Suggestions: The concept of this paper is appealing to me. However, the details for real-time streaming generation and the survey of related work are limited. Please discuss related work further and evaluate the real-time factor to demonstrate the concept of this paper. In the future, GPUs will become much faster, so iterative sampling methods for waveform generation will become more practical. However, I hope the streaming-ability claims are not overstated in the current state of the work. Please discuss the real-time ability. Questions For Authors: [Q1: GAN-based models] Have you compared the model with GAN-based models? Although CFM-based models can generate high-quality waveforms, GAN-based models are still effective for waveform generation. Specifically, PeriodWave-Turbo [7] fine-tuned the CFM models with adversarial feedback to improve performance and reduce the number of sampling steps. For real-time applications, this model still requires a midpoint method with six NFEs, resulting in higher latency. [Q2: Reshape and Linear Projection methods instead of STFT/iSTFT] Have you tried using different input features instead of STFT components? I recommend using the waveform directly with a reshape method and linear projection based on WaveNeXt [8]. This approach does not require reflected padding before the STFT. [7] Lee, Sang-Hoon, Ha-Yeong Choi, and Seong-Whan Lee.
"Accelerating High-Fidelity Waveform Generation via Adversarial Flow Matching Optimization." arXiv preprint arXiv:2408.08019 (2024). [8] Okamoto, Takuma, et al. "WaveNeXt: ConvNeXt-based fast neural vocoder without iSTFT layer." 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2023. Ethical Review Concerns: . Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your positive feedback on our work. We will address your questions in the responses below. **Related work** (Claims And Evidence) We thank the reviewer for pointing out these related works. PeriodWave designs a multi-period flow matching model for high-fidelity waveform generation. FlowDec introduces a conditional flow matching-based audio codec to noticeably reduce the postfilter DNN evaluations from 60 to 6. RFWave proposes a multi-band rectified flow approach to reconstruct high-fidelity audio waveforms. These works are all related to flow matching models and show their effectiveness in generating high-quality waveform signals. We will include our discussion in the revised paper. **Real-time factor** (Claims And Evidence, Methods And Evaluation Criteria) We calculate the real-time factor of our model for different numbers of function evaluations on a single 4090 GPU. The audio sampling rate is 48 kHz, and the audio length is 0.683 seconds. As shown in the table, when NFE is set to 6, the real-time factor is 0.239. If we sacrifice some performance for faster inference, setting NFE to 1 results in an RTF of 0.04. Our model demonstrates potential for real-time streaming generation.

| NFE | Inference Time (sec) | Real-Time Factor |
|:-:|:-:|:-:|
| 1 | 0.027 | 0.040 |
| 2 | 0.055 | 0.081 |
| 4 | 0.109 | 0.160 |
| 6 | 0.163 | 0.239 |
| 8 | 0.217 | 0.318 |
| 10 | 0.271 | 0.397 |

**Discussion on Matcha-TTS and CosyVoice 2** (Claims And Evidence) We carefully reviewed the papers and code of these two works. Matcha-TTS employs a 1D U-Net model with 1D ResNet layers and Transformer Encoder layers. Neither the ResNet layers nor the Transformer Encoder layers are causal, which means that Matcha-TTS does not achieve time causality or support streaming inference. In contrast, our model is fully causal and supports streaming inference.
CosyVoice 2 introduces a chunk-aware causal flow matching model that uses causal convolution layers and attention masks to enable causality. However, the CosyVoice 2 model does not include feature buffers for each causal convolution layer, which may result in audio interruptions and discontinuities during streaming inference in real-world scenarios. We will include our discussion of Matcha-TTS and CosyVoice 2 in our revision. **Experiment of parallel model** (Claims And Evidence, Methods And Evaluation Criteria) We compare the model's performance with and without buffers to examine the effectiveness of our causal model design. The results show that BinauralFlow with buffers achieves higher quality than the model without buffers. Additionally, in Figure 7 in the main paper, we show that excluding buffers causes noticeable artifacts in the generated spectrograms.

| Methods | L2 $\downarrow$ | Mag $\downarrow$ | Phase $\downarrow$ |
|:-|:-:|:-:|:-:|
| BinauralFlow w/ buffer | $\mathbf{1.00}$ | $\mathbf{0.0071}$ | $\mathbf{1.33}$ |
| BinauralFlow w/o buffer | 13.25 | 0.0398 | 1.34 |

**Sway Sampling** (Claims And Evidence) As suggested by the reviewer, we use Sway Sampling with different coefficients ranging from -1 to 1 to systematically evaluate our model. The results are shown in the table below. Changing the coefficients does not lead to significant changes in the quantitative results. However, we observe that setting coefficients greater than 0, which shifts the time steps to the second half, results in better qualitative outcomes. Specifically, background noise becomes more realistic when the coefficient is increased. These results support the rationale behind our early skip strategy.
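The role of the feature buffers can be illustrated with a generic streaming causal 1-D convolution (a sketch of the buffering idea only, not the paper's actual layers): carrying the last kernel-width-minus-one inputs across chunk boundaries makes chunked output identical to full-sequence output, which is exactly the discontinuity issue the buffers avoid.

```python
import numpy as np

class StreamingCausalConv1d:
    """Causal 1-D convolution that keeps a left-context buffer across
    chunks, so chunked streaming output matches full-sequence output."""
    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=float)
        self.buffer = np.zeros(len(self.kernel) - 1)  # implicit zero padding

    def __call__(self, chunk):
        x = np.concatenate([self.buffer, chunk])
        k = len(self.kernel)
        out = np.array([x[i:i + k] @ self.kernel for i in range(len(chunk))])
        self.buffer = x[len(chunk):]  # keep last k-1 samples for next chunk
        return out
```

Processing a signal in arbitrary chunk sizes gives the same result as processing it in one pass, because each chunk sees the true left context rather than fresh zero padding.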
| Coefficients | L2 $\downarrow$ | Mag $\downarrow$ | Phase $\downarrow$ |
|:-|:-:|:-:|:-:|
| -1.0 | 1.06 | 0.0070 | 1.29 |
| -0.8 | 1.10 | 0.0070 | 1.29 |
| -0.4 | 1.00 | 0.0069 | 1.29 |
| 0 | 1.02 | 0.0069 | 1.29 |
| 0.4 | 1.03 | 0.0070 | 1.31 |
| 0.8 | 1.04 | 0.0071 | 1.32 |
| 1.0 | 1.02 | 0.0072 | 1.33 |

**Sound dataset** (Experimental Designs Or Analyses) Thank you for the suggestion. We plan to conduct this experiment in our future work. **Demo** (Supplementary Material) As recommended by the reviewer, we have created a demo page and will release it following the acceptance of our paper. **GAN-based models** (Questions For Authors) Utilizing GAN-based models, such as PeriodWave-Turbo, to enhance inference efficiency is a promising direction. We plan to discuss this possibility in our paper and explore it further in future work. **Reshape and linear projection** (Questions For Authors) We tried using waveform directly without STFT/iSTFT but it did not lead to superior performance. We are still interested in waveform-based approaches. We will explore the reshape method and linear projection proposed in WaveNeXt and discuss them in our paper.
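For reference, Sway Sampling in F5-TTS warps a uniform variable $u\in[0,1]$ into a time step via $t = u + s\,(\cos(\pi u/2) - 1 + u)$; a positive coefficient $s$ shifts sampled steps toward $t=1$, i.e., the second half of the trajectory, consistent with the qualitative observation above. A minimal sketch (function name ours):

```python
import math

def sway_sample(u, s):
    """Sway Sampling from F5-TTS: maps uniform u in [0, 1] to a
    flow-matching time step t. Positive s concentrates steps near
    t = 1; negative s concentrates them near t = 0."""
    return u + s * (math.cos(math.pi * u / 2) - 1 + u)
```

The warp fixes the endpoints ($t(0)=0$, $t(1)=1$) and only redistributes the interior steps, so the total number of function evaluations is unchanged.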
Summary: They sought to address the binaural speech synthesis task by using mono-channel audio to generate binaural speech. To support streaming and produce audio aligned with a given pose, they employed a flow-matching-based generative model with a causal structure. In the process, they introduced streaming STFT/ISTFT and a buffer bank to enable seamless streaming. Furthermore, to enhance per-chunk generation speed in the flow-matching-based approach, they adopted a solver designed to improve sampling speed and integrated a noise-skip strategy. ## update after rebuttal I have reviewed the authors' response. They provided sufficient experimental results and explanations addressing my previous concerns. Therefore, I maintain my positive assessment of this submission. Claims And Evidence: At least, there do not appear to be any issues with the authors’ claims. Methods And Evaluation Criteria: There also seem to be no issues with the methodology or the evaluation. Theoretical Claims: It appears that, rather than putting forth a separate theoretical argument, they have primarily adopted existing theoretical backgrounds (e.g., flow matching, midpoint solver), which seems reasonable. Experimental Designs Or Analyses: The experimental design and analysis also appear to be reasonable. Supplementary Material: I examined the data generation section and also listened to the accompanying audio and video samples. Relation To Broader Scientific Literature: They appear to have achieved similar or even improved quality compared to previous studies, while also providing streaming support. Essential References Not Discussed: I do not see anything else noteworthy in that regard. Other Strengths And Weaknesses: **[S1]** The figures and explanations are well-presented, making it easy for individuals who are not familiar with binaural speech synthesis to understand. 
**[S2]** It is also commendable that the authors collected new data for verification, and that they provide results on publicly available datasets (as mentioned in the Appendix).

**[S3]** Moreover, enabling streaming capability further enhances the model's practical utility.

I will outline my questions below.

Other Comments Or Suggestions: I will outline my questions below.

Questions For Authors:

**[Q1]** I am curious whether you plan to make the collected dataset publicly available.

**[Q2]** Could you provide any results comparing the midpoint solver with other solvers?

**[Q3]** Since I am not very familiar with this domain, I wonder why you use STFT-based complex spectrograms rather than mel spectrograms, which are more common in typical speech synthesis (TTS).

**[Q4]** Finally, I would like to know more about the latency of the streaming model and how it compares in terms of performance with the non-streaming version.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your positive feedback on our work. We will address your concerns and include your suggestions in the revision.

**Dataset** (Questions For Authors) Regarding open-sourcing the dataset, we fully understand the importance of reproducibility and transparency in research. However, due to privacy constraints and participant confidentiality, we are unable to publicly release the full dataset. To ensure the reproducibility of our work, we will release **all implementation code, training scripts, pretrained model weights, and a test subset** that has been carefully curated to exclude any personal information.

**Different solvers** (Questions For Authors) Besides the Midpoint solver, we test the Euler and Heun solvers. The Euler solver is first-order, while the Midpoint and Heun solvers are second-order. We set the number of function evaluations (NFE) to 6 and present the results below. Although the Euler solver yields lower error values than the Midpoint solver, it fails to generate realistic background noise. Setting NFE to 6 is insufficient for the Heun solver, which requires 30 steps to achieve comparable error values. In conclusion, the Midpoint solver provides the best trade-off between error values, qualitative results, and inference efficiency.

| Solver Type | NFE | Quality | L2 $\downarrow$ | Mag $\downarrow$ | Phase $\downarrow$ |
|:-|:-:|:-:|:-:|:-:|:-:|
| Euler | 6 | Medium | 0.90 | 0.0066 | 1.24 |
| Midpoint | 6 | High | 1.00 | 0.0071 | 1.33 |
| Heun | 6 | Low | 16.86 | 0.0499 | 1.44 |
| Heun | 30 | Medium | 1.27 | 0.0087 | 1.36 |

**STFT-based complex spectrograms** (Questions For Authors) A mel spectrogram is derived by mapping the STFT-based complex spectrogram to the mel scale. In this process, only the magnitude is retained, while the phase is discarded. Spatial audio rendering relies on precise phase information to capture interaural time differences (ITD) and interaural phase differences (IPD).
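For readers unfamiliar with the midpoint solver discussed above, its integration step can be sketched in a few lines (a minimal, self-contained Python example; the velocity field `v` is a purely illustrative stand-in for a trained flow-matching network, not the authors' model):

```python
def midpoint_sample(v, x0, nfe=6):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with the midpoint rule.

    Each midpoint step costs two evaluations of v, so nfe=6 means 3 steps.
    """
    steps = nfe // 2
    h = 1.0 / steps
    x, t = x0, 0.0
    for _ in range(steps):
        k1 = v(x, t)                        # slope at the interval start
        x_mid = x + 0.5 * h * k1            # half-step to the midpoint
        x = x + h * v(x_mid, t + 0.5 * h)   # full step with midpoint slope
        t += h
    return x

# Sanity check on dx/dt = x, whose exact solution at t=1 is x0 * e ~ 2.71828:
approx = midpoint_sample(lambda x, t: x, 1.0, nfe=6)
```

Even in this toy setting, three second-order steps land within about 1.5% of the exact value, which is the kind of accuracy-per-evaluation trade-off the rebuttal cites against the first-order Euler and the more expensive Heun solver.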
These phase cues are essential for accurately conveying spatial positioning in binaural audio, ensuring a realistic 3D auditory experience. Therefore, we use STFT-based complex spectrograms, which retain both magnitude and phase information.

**Latency** (Questions For Authors) We test the inference latency of our streaming model on a single 4090 GPU. The audio sampling rate is 48 kHz and the audio length is 0.683 seconds. We vary the NFE from 1 to 10 and report the corresponding inference times below. We also report the real-time factor (RTF), calculated by dividing the processing time by the audio duration; if the RTF is less than 1, the system runs faster than real time. With NFE set to 6, the real-time factor is 0.239. Reducing NFE to 1 improves inference speed at the cost of some performance, yielding an RTF of 0.040. These results demonstrate our model's potential for real-time streaming generation.

| NFE | Inference Time (sec) | Real-Time Factor |
|:-:|:-:|:-:|
| 1 | 0.027 | 0.040 |
| 2 | 0.055 | 0.081 |
| 4 | 0.109 | 0.160 |
| 6 | 0.163 | 0.239 |
| 8 | 0.217 | 0.318 |
| 10 | 0.271 | 0.397 |

**Streaming vs non-streaming models** (Questions For Authors) For a sequence of audio chunks, the streaming model buffers intermediate features to enable seamless streaming inference, while the non-streaming model processes each audio chunk independently without buffering. We visualize the spectrograms generated by both the streaming and non-streaming models in Figure 7 of the main paper, where the non-streaming model generates noticeable artifacts between audio chunks. Below, we present a quantitative comparison between the two. The streaming model achieves better audio quality than the non-streaming version, producing smoother and more continuous audio generation.

| Methods | L2 $\downarrow$ | Mag $\downarrow$ | Phase $\downarrow$ |
|:-|:-:|:-:|:-:|
| Streaming | $\mathbf{1.00}$ | $\mathbf{0.0071}$ | $\mathbf{1.33}$ |
| Non-streaming | 13.25 | 0.0398 | 1.34 |
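The phase argument in the rebuttal above can be illustrated numerically (a toy numpy sketch; the window size, hop, test tone, and 8-sample delay are all illustrative choices, and `stft` is a bare-bones stand-in, not the authors' streaming implementation): a pure interaural time difference leaves STFT magnitudes, and hence mel spectrograms, essentially unchanged, but shows up directly as an interaural phase difference in the complex STFT.

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    # Minimal complex STFT with a Hann window (illustrative only).
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.stack(frames), axis=-1)

# Toy binaural pair: the right ear hears the left signal delayed by
# 8 samples, i.e. an interaural time difference of ~0.17 ms at 48 kHz.
sr, f0, delay = 48000, 440.0, 8
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * f0 * t)
right = np.sin(2 * np.pi * f0 * (t - delay / sr))

L, R = stft(left), stft(right)
frame = 5
k = int(np.argmax(np.abs(L[frame])))  # dominant frequency bin

# Magnitude (all a mel spectrogram keeps) barely distinguishes the ears,
# while the cross-spectrum phase recovers the time shift as an IPD.
mag_gap = abs(np.abs(L[frame, k]) - np.abs(R[frame, k]))
ipd = np.angle(L[frame, k] * np.conj(R[frame, k]))
expected_ipd = 2 * np.pi * f0 * delay / sr  # about 0.46 rad
```

Discarding phase, as the mel scale does, would erase exactly the cue (`ipd`) that encodes the source's lateral position.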
C2IQL: Constraint-Conditioned Implicit Q-learning for Safe Offline Reinforcement Learning
Accept (poster)
Summary: This paper considers the offline CMDP problem, where one needs to maximize the expected cumulative reward subject to the constraint that the expected total cost is below a certain threshold. The paper then proposes constraint-conditioned implicit Q-learning for safe offline RL. The main novelty is redefining the reward function so that the policy receives reward only when it is safe. Empirical results show that their method is better than the state-of-the-art CDT in most cases.

Claims And Evidence: The paper seeks to show the value of constraint-conditioned implicit Q-learning for finding safe policies with good reward. The main difficulty is how to train the new model. In particular, they use a novel reward and a novel cost reconstruction based on different discount factors. The claim that they achieve a better policy compared to the state-of-the-art approaches has been backed up by empirical results.

Methods And Evaluation Criteria: They used detailed experimentation on well-known benchmark problems to validate their claims. The evaluation criteria are the rewards attained and whether the policy is safe or not. I do not find any weakness in the evaluation criteria.

Theoretical Claims: The two theoretical results are proved. They are pretty straightforward and there is no issue.

Experimental Designs Or Analyses: The experimental design is solid. However, the improvement over the CDT approach is very minimal. See the questions for a detailed explanation.

Supplementary Material: I checked the appendix.

Relation To Broader Scientific Literature: The paper seeks to contribute to the space of safe offline RL algorithms. However, the paper has not compared with one of the most prominent works in the literature. Hence, the contribution is not clear.

Essential References Not Discussed: The main drawback of this paper is that essential references have not been discussed.
In particular, the paper did not compare with the following paper [A1], which also proposed a novel approach for safe offline RL and has shown better results compared to CDT. Further, [A1] considered the following objective $V_r^{\pi}+\lambda (V_{c}^{\pi}-b)_+$; hence, only when the policy is feasible does it add to the objective, otherwise it does not, which is in a similar flavor to the approach proposed in the paper. It is true that this paper uses implicit Q-learning; however, [A1] shows provable safety and a sub-optimality gap. **Overall, the contribution part is not clear.**

[A1]. Wei, Honghao, Xiyue Peng, Arnob Ghosh, and Xin Liu. "Adversarially Trained Weighted Actor-Critic for Safe Offline Reinforcement Learning." In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Other Strengths And Weaknesses: Weakness: The improvement compared to CDT seems to be marginal. Hence, the benefits of the proposed approach are not clear. The cost reconstruction part is not clear. The paper did not provide any theoretical guarantee on the safety or the sub-optimality gap.

Other Comments Or Suggestions: **Post-Rebuttal** I am happy with the responses provided by the authors and am more confident about the contributions. Thus, I have raised my score. I would encourage the authors to add those discussions (especially the need for the reconstruction of cost, and the augmentation of the state) in the final version. It would be super important.

Questions For Authors:
1. How did the paper reconstruct the cost by finding the value function corresponding to different discount factors?
2. The overall idea is to solve equation (4). However, how is it related to (3)? Should the reward be $0$ if the policy is not feasible? In particular, the indicator function should be multiplied with the reward as well.
3. The paper has 4 different loss functions, which might be tedious for training.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

### Q1: The main novelty is redefining the reward function that only gets rewards when the policy is safe. The contribution part is not clear.

In SORL, the most important concern is the OOD problem, while existing methods can only mitigate it via policy constraining but cannot avoid it. The first novelty of this paper is proposing C2IQL to completely avoid the OOD problem in SORL. The second novelty is that we may be the first to propose and analyze the problem of the discounted cost value formulation in detail. The contribution is illustrated in line 87, page 2, while redefining the reward function is the novelty of CPQ (2022). For better clarity, we will add new content to line 88, page 2 and line 100, page 2: "We first propose Constrained IQL (CIQL) to address the OOD problem, which can only be mitigated by existing RL methods in constrained settings." "As far as we know, we may be the first to propose and analyze the problem of the discounted cost value formulation in detail."

### Q2: Reference [A1] is not discussed/compared.

Thank you for bringing this paper to our attention. We agree that [A1] is relevant to SORL, and we will include it in the related work section and as a baseline in the experiments. Specifically, we will add the following to line 205, page 4: "... problem. WSAC (Wei et al., 2024) focuses on the policy inferior problem and proposes a refined adversarial objective function. OASIS ..." Additionally, we compared WSAC with C2IQL across all environments. The results are summarized below, where bold results are unsafe:

|Algo||AR|BR|CR|DR|AC|BC|CC|DC|Avg|
|-|-|-|-|-|-|-|-|-|-|-|
|WSAC|R|0.25|0.804|0.86|0.659|0.40|0.69|0.61|0.024|0.537|
||C|0.179|**1.98**|0.40|**2.518**|0.98|0.78|0.51|0.45|0.97|
|C2IQL|R|0.74|0.59|0.95|0.71|0.66|0.72|0.74|0.78|0.74|
||C|0.94|0.95|0.08|0.73|0.76|0.85|0.93|0.85|0.76|

C2IQL achieves better performance while satisfying constraints on all tasks.
### Q3: The improvement compared to the CDT seems to be marginal.

The performance gain only appears marginal because **the unsafe results under smaller constraints are averaged with safe results under larger constraints**, since Table 1 is averaged over 3 constraint thresholds following the style of most existing work. Thus we add Figure 2 as a supplement to Table 1 to illustrate that:

1. CDT **cannot achieve a safe policy for small constraints (like L<30)** in some cases. This is a fatal disadvantage, since satisfying the constraint is the foundation of safety.
2. CDT **cannot achieve reward maximization for large constraints (like L<70)** compared to C2IQL.
3. C2IQL achieves the best and safe performance for all three constraints.

This indicates C2IQL provides substantial benefits in handling a wider range of constraints and maximizing reward accordingly.

### Q4: The cost reconstruction part is not clear. How did the paper reconstruct the cost by finding the value function corresponding to different discount factors?

We would like to provide the following clarifications: The cost value function is linear in the costs, with $\gamma$ as a constant and the $c(\cdot,\cdot)$ as variables:

$V_c^\pi(s_t)=\sum_{j=0}^T\gamma^jc(s_{t+j},a_{t+j}\sim\pi)$

The goal is to reconstruct the non-discounted cost value:

$\hat C^{\pi}(s_t)=\sum_{j=0}^Tc(s_{t+j},a_{t+j}\sim\pi)$

To approximate the non-discounted cost value, we construct a set of cost value functions with different discount factors:

$V_c^{\pi,i}(s_t)=\sum_{j=0}^T\gamma^j_ic(s_{t+j},a_{t+j}\sim\pi),\quad i=1,...,m$

Using these multiple discounted value functions, we train a supervised learning model to estimate the non-discounted cost value, where the inputs are the estimated $V_c^{\pi,i}(s_t)$ and the output is $\hat C^{\pi}(s_t)$.

### Q5: Should provide theoretical guarantee.

Please refer to **Q2 of reviewer MiHo** for theoretical analysis due to the character limit. The mathematical proofs are out of the scope of this paper.
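The reconstruction described in Q4 can be sketched with a toy linear model (a hedged numpy example; the discount factors, horizon, and least-squares regressor are illustrative choices, not the paper's neural CRM). With as many discount factors as timesteps, the non-discounted sum is an exact linear function of the discounted values; for longer horizons the map is only approximate, which is why a supervised model is trained in practice.

```python
import numpy as np

def discounted_values(costs, gammas):
    # V_c at t = 0 under each discount factor gamma.
    return np.array([sum(g**j * c for j, c in enumerate(costs))
                     for g in gammas])

rng = np.random.default_rng(0)
gammas = [0.5, 0.8, 0.99]   # illustrative discount factors
T = len(gammas)             # horizon chosen so linear recovery is exact

# Training set: features are discounted cost values, targets are the
# non-discounted cumulative costs that the CRM is meant to estimate.
trajs = [rng.uniform(0, 1, T) for _ in range(200)]
X = np.stack([discounted_values(c, gammas) for c in trajs])
y = np.array([c.sum() for c in trajs])
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear stand-in for the CRM

c_test = rng.uniform(0, 1, T)
pred = discounted_values(c_test, gammas) @ w  # recovers c_test.sum()
```

The exactness here comes from the Vandermonde-like map $c \mapsto (V_c^{\pi,i})_i$ being invertible for distinct $\gamma_i$; once the horizon exceeds the number of discount factors, that inverse no longer exists and a learned regressor takes its place.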
Besides, C2IQL has good empirical results, including additional experiments on SafetyGymnasium from **Q1 of reviewer ojm5**.

### Q6: Question about equations (3) and (4).

Yes, equation (3) sets the infeasible reward Q-value to 0 and equation (4) indicates that only a feasible policy is updated, which are the key ideas of CPQ (2022). However, this is not what C2IQL should be focused on. While your suggestion to multiply the indicator function with the reward is reasonable and could improve CPQ, it is outside the focus of our work, as we adopt CPQ's framework without modification. Investigating this idea further would be interesting future work.

### Q7: About 4 different loss functions.

Compared with existing safe RL algorithms, we only add one loss to train the cost reconstruction model (CRM). In safe RL, the value loss, cost value loss, and policy loss are essential. The CRM is trained separately with the dataset and some randomly generated data. It is used in C2IQL after its loss is minimized and stable. Thus the training stage is not influenced.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses. The new results seem promising. I have a few more comments:

1. It seems that the approach is based on CPQ. The main claim of this paper is that CPQ cannot address the OOD actions. The paper then considers implicit Q-learning, proposed for unconstrained cases, in the constrained setting. Hence, the algorithmic contributions seem to be limited. Can the authors highlight that?
2. The standard CMDP approach admits a Markov optimal policy. However, the CPQ approach seems to be like an augmented MDP where the Q-function for the constraint is augmented with the state space in order to find a policy. Hence, conceptually, it is more complicated. Thus, it is not clear why one needs such complicated state augmentation to solve the CMDP problem.

---

Reply to Comment 1.1.1: Comment: We sincerely thank reviewer `qPih` for the engagement.
We are encouraged by the review, which described our work as novel, with solid experiments and promising new results. For the additional comments, we would like to provide the following clarifications:

### Q1:

Thank you for your question. First, we would like to highlight that addressing the OOD issue is not our only contribution. Additionally, we study the problem of the discounted cost formulation in detail and then develop a novel approach for reconstructing non-discounted costs. Regarding the OOD avoidance, we address the following three key technical challenges:

- **First, how to update the constrained reward value/Q-value function in IQL style?** To address this problem, CIQL formulates a constraint-penalized reward Q-value function following CPQ and utilizes a value function with expectile regression to approximate the maximized Q-value function in the Bellman backup procedure.
- **Second, how to update the cost value function under the same implicit policy, since it is hidden in the reward value function?** To address this problem, we rederive CIQL and obtain the formulation of the implicit policy in Theorem 1 following IDQL, and then derive the formulation of the cost value function with this implicit policy.
- **Third, how to extract the policy?** We extract the policy in an expectile way following Equation 18.

Besides, we think the clarifications in Q2 further answer your questions about Q1 very carefully. We will illustrate why the idea of CPQ is needed and what is novel in building CIQL rather than simply converting IQL to the constrained setting.

### Q2:

This is a good question and we would like to provide the following discussion: **The main claim of this paper is that most SORL methods cannot avoid the OOD problem, including but not limited to CPQ.
This motivates us to propose a method that can completely avoid the OOD problem**, so we resort to IQL, an algorithm that avoids the OOD problem in unconstrained settings. **However, IQL cannot be simply extended to constrained settings due to the following challenges:** IQL proposes to approximate $\max_aQ(s,a)$ by a value function with expectile regression, without any explicit policy. The key idea for avoiding the OOD problem is that the policy is implicitly hidden in the value function during training. After the value function is trained well, IQL utilizes a policy extraction method to obtain the final explicit policy. As we know, safe RL usually contains two value functions: a reward value function and a cost value function following the same policy. Thus the main gap is that IQL utilizes an implicit policy hidden in the reward value function, but the cost value function should follow the same implicit policy in constrained settings. We address this gap in our paper by answering: **How to make sure both value functions follow the same policy without facing the OOD problem?** Existing standard CMDP methods, such as primal-dual optimization (PDO), cannot address this problem. In PDO (like BCQ-Lag), an explicit policy is needed to make sure both primal and dual objectives follow the same policy. However, an explicit policy will result in the OOD problem in offline RL during the Bellman backup. Thus, explicitly extracting the policy of IQL and then utilizing this policy to update the cost value function is not reasonable, because the OOD problem is introduced for the cost value function. **This motivates us to find a method that satisfies: (1) both value functions follow the same policy and (2) both value functions follow the implicit policy.**
Thus we first formulate the constraint-penalized reward value function (**Equations 11, 12 and 13**) for CIQL following the idea of CPQ. Then the problem becomes how to update the cost value function under the same implicit policy hidden behind the reward value function. To address this problem, we rederive the representation of CIQL into a generalized AC structure (**Equation 14**) via the first-order optimality condition (**Appendix A.1**) to obtain the representation of the implicit policy. **Following the implicit policy, we construct the update of the cost value function under the same implicit policy (Equations 15, 17) without extracting it explicitly.** We hope these responses and additional results provided in our rebuttal address your concerns and encourage you to consider a more favorable evaluation of our paper. Thank you again for the time you invested in evaluating our paper.
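As background for the expectile regression invoked throughout this exchange, a minimal numerical sketch (a toy numpy example; the Q-values and $\tau = 0.9$ are illustrative): fitting $V(s)$ with an asymmetric L2 loss at expectile $\tau > 0.5$ pushes $V$ toward the upper range of $Q$ over dataset actions, approximating $\max_a Q$ without ever querying out-of-distribution actions.

```python
import numpy as np

def expectile_loss(diff, tau=0.9):
    """Asymmetric L2 loss used in IQL-style value updates.

    diff = Q(s, a) - V(s) over dataset actions; tau > 0.5 penalizes
    underestimating Q more heavily than overestimating it.
    """
    weight = np.where(diff > 0, tau, 1.0 - tau)
    return np.mean(weight * diff**2)

# Q-values of two in-distribution actions at some state (toy numbers).
q = np.array([0.0, 1.0])

# Minimize the loss over candidate V(s) by grid search: the minimizer
# is the tau-expectile of q, which approaches max(q) as tau -> 1.
vs = np.linspace(0.0, 1.0, 1001)
v_star = vs[int(np.argmin([expectile_loss(q - v) for v in vs]))]
# v_star = 0.9 here: far closer to max(q) = 1 than the mean 0.5.
```

Because the loss is only ever evaluated on `diff` computed from dataset actions, the value target never requires sampling from an explicit policy, which is the property both rebuttal answers rely on.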
Summary: The paper introduces Constraint-Conditioned Implicit Q-Learning (C2IQL), a novel approach for Safe Offline Reinforcement Learning (SORL) that improves constraint satisfaction while maximizing rewards. The key innovations include a Cost Reconstruction Model (CRM), which estimates non-discounted cumulative costs to improve safety, and constraint-conditioned learning, which allows policies to dynamically adapt to different safety thresholds. The authors provide theoretical results supporting their approach and conduct experiments on Bullet-Safety-Gym tasks, demonstrating that C2IQL outperforms existing safe RL baselines in terms of both constraint satisfaction and reward maximization.

## update after rebuttal

I appreciate the authors' detailed response and the additional experiments. The new SafetyGymnasium results and the clarifications on the theoretical derivations (particularly how Theorem 1 supports cost value updates in Theorem 2) help address my main concerns. The runtime and baseline setup explanations are also helpful. In short, I find the core contributions meaningful for the SORL community. I maintain my score.

Claims And Evidence:
1. The claim that C2IQL improves constraint satisfaction is supported by experiments where it achieves better safety-performance tradeoffs than baselines.
2. The claim that CRM improves cost estimation is justified through its design, but there is no formal analysis of how errors in cost reconstruction affect policy learning.
3. The paper suggests that C2IQL mitigates the Out-of-Distribution (OOD) problem, but there is no theoretical guarantee on how well it generalizes to unseen cost distributions beyond the dataset.

Methods And Evaluation Criteria:
1. The use of Bullet-Safety-Gym tasks is reasonable for evaluating safety-constrained RL.
2. The choice of baselines (BCQ-Lag, BEAR-Lag, COptiDICE, CPQ, FISOR, CDT, and VOCE) is appropriate for comparing constraint satisfaction and reward maximization.

Theoretical Claims:
1.
Theoretical results provide constraint satisfaction guarantees and introduce a cost reconstruction model (CRM) to improve estimation of non-discounted cumulative costs. The approach enhances policy learning by refining constraint enforcement without requiring explicit policy extraction.
2. However, the method relies on the assumption that cost reconstruction provides sufficiently accurate estimates, and the paper does not formally analyze how errors in this model may propagate through policy learning. A deeper analysis of how inaccuracies in cost estimation impact safety guarantees could strengthen the theoretical foundation.

Experimental Designs Or Analyses:

Advantages:
1. The experiments demonstrate that C2IQL outperforms baselines in both reward maximization and constraint satisfaction, showing its effectiveness in safe offline RL settings.
2. The evaluation covers a range of environments from Bullet-Safety-Gym, providing insights into how C2IQL handles safety constraints in practical scenarios.

Limitations:
1. While the results show improvements, the study does not analyze how well the cost reconstruction model generalizes to unseen cost distributions.
2. The results in Table 1 are comprehensive but a bit difficult to interpret, as they only provide final values without learning curves. A visualization of policy evolution would improve clarity.

Supplementary Material: The supplementary material provides extended proofs and experimental details, improving clarity.

Relation To Broader Scientific Literature:
1. The paper builds on prior work in offline RL (IQL, COptiDICE) and safe RL (CDT, VOCE).
2. The CRM approach is novel compared to prior pessimistic offline RL approaches.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
1. The impact of cost reconstruction errors on policy learning is not analyzed, making it unclear how estimation inaccuracies influence performance.
2.
There is no evaluation of sample efficiency, leaving the question of how much data C2IQL requires compared to baselines unanswered.

Other Comments Or Suggestions:
1. Algorithm 2 is a bit difficult to interpret, as the equations are dense. The authors could consider placing key equations in an appendix or breaking them down incrementally within the main text to improve clarity. A more structured explanation before introducing the full algorithm would help readers understand its components more intuitively.
2. I wonder if the heavy use of abbreviations in the abstract and introduction is necessary, as it affects readability.

Questions For Authors: See the above sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

### Q1: An experiment on how errors/inaccuracies in cost estimation impact safety guarantees is needed.

Thank you for your suggestion. To address your concern, we have conducted additional experiments to analyze the impact of cost reconstruction error on policy performance and safety guarantees. To simulate increasing cost reconstruction error, we introduced noise sampled from a normal distribution, $N(0,\text{Noise})$, to the input of the cost reconstruction model. When the added noise is small, the performance (reward) slightly improves as the noise level increases. This occurs because small amounts of noise act similarly to data augmentation, which enhances the generalization capability of the cost reconstruction model. This is analogous to adding noise to input data in image processing to improve robustness. As noise increases from 1.0 to 4.0, the performance becomes progressively more conservative. Despite this, the reward and cost metrics degrade gracefully, indicating the robustness of our method.

| Noise | 0.0 | 0.1 | 0.2 | 0.3 | 0.5 | 1.0 | 2.0 | 4.0 |
| - | - | - | - | - | - | - | - | - |
| Error | 0.05 | 1.05 | 1.33 | 1.50 | 1.83 | 2.1 | 2.4 | 2.8 |
| R $\uparrow$ | 434.01 | 435.07 | 442.08 | 435.67 | 435.60 | 421.82 | 408.77 | 401.89 |
| C $\downarrow$ | 46.60 | 47.20 | 48.30 | 47.30 | 47.89 | 42.10 | 34.90 | 36.20 |

### Q2: The paper suggests that C2IQL mitigates the OOD problem, but there is no theoretical guarantee on how well it generalizes to unseen cost distributions beyond the dataset.

Thank you for your question. First, we would like to clarify that C2IQL **avoids the OOD problem rather than merely mitigating it**. In SORL, the OOD problem is defined as "since the policy may produce OOD actions, the Q-value may be wrongly estimated". Previous methods address it by constraining the target policy to be close to the behavior policy, thus only mitigating the problem.
In contrast, C2IQL, following IQL, completely avoids the OOD problem by training the value function using expectile regression, which inherently focuses on in-distribution data during training. Second, **C2IQL does not require generalization to unseen cost distributions** because: 1) C2IQL avoids the OOD problem entirely by ensuring that the training process operates strictly within the dataset distribution. When training, C2IQL avoids relying on any distributions outside the dataset. Even during the testing phase, C2IQL naturally tends to select actions that align with the dataset distribution, as it is trained to do so. 2) As for the cost reconstruction model (CRM), it will not encounter unseen cost distributions because the CRM is only used during training. In the testing phase, the CRM is no longer utilized; only the policy is used to generate actions based on the current state and constraint conditions.

### Q3: The results in Table 1 are a bit difficult to interpret without learning curves.

Thank you for your suggestions. We will add some more learning curves to the appendix. Table 1 follows a common style in most SORL papers (e.g., CDT, FISOR, OASIS), where results are presented as final values after training. Additionally, we use color coding to improve interpretability. The primary reason for not including learning curves is the nature of offline RL. In SORL, the agent is trained on a pre-collected and fixed dataset without interacting with the environment during training. As a result, monitoring performance during intermediate training steps is not typically necessary, since the agent's performance is only evaluated after the training process is complete. That said, we have included some learning curves in Figure 3 to provide a visualization of policy evolution.

### Q4: Evaluation of sample efficiency is needed.

Thank you for your suggestions. In offline RL, the agent is trained on a fixed and pre-collected dataset without any interactions with environments.
This indicates that all methods (C2IQL and baselines) require the same amount of data, without any new data added into the dataset. The dataset in DSRL usually contains thousands of trajectories.

### Q5: Algorithm 2 is a bit difficult to interpret; its clarity could be improved.

Thank you for your suggestion. Algorithm 2 is indeed a collection of equations previously introduced in Sections 3.1–3.4, but now extended to the constraint-conditioned version. We realize that we did not explicitly indicate their connections to the earlier sections, which may have caused confusion. To improve clarity, we will add explanations in Algorithm 2 pointing to the corresponding sections and equations:

- Line 6: Cost Reconstruction -> Obtain discounted cost values and reconstruct the non-discounted value with $R^c$ in Section 3.3
- Line 9: Constraint Penalization -> Obtain the constraint-penalized Q-value function with Equation (11)
- Line 11: Update value function -> Update value function with Equation (12)
- Line 14: Update cost value function -> Update cost value function with Equation (17)

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed and thoughtful rebuttal. I appreciate the additional experiments and clarifications provided, particularly the analysis of cost reconstruction errors and the explanation of how C2IQL handles the OOD issue. Your responses addressed many of my concerns, and I am looking forward to the planned improvements.

---

Reply to Comment 1.1.1: Comment: We sincerely thank reviewer `QM2i` for the engagement. We are delighted that the reviewer recognized the novelty and effectiveness of our proposed method in their initial review. In response to reviewer `QM2i`'s suggestions, we plan to incorporate the following updates in the next version:

1. Adding experimental analysis on the impact of cost reconstruction error on policy performance and safety guarantees.
2. Including additional learning curves to improve clarity.
3.
Refining the manuscript by incorporating our rebuttal changes and addressing the following issues:
- Improve Algorithm 2 for clarity:
  - Line 6: Cost Reconstruction -> Obtain discounted cost values and reconstruct the non-discounted value with $R^c$ in Section 3.3
  - Line 9: Constraint Penalization -> Obtain the constraint-penalized Q-value function with Equation (11)
  - Line 11: Update value function -> Update value function with Equation (12)
  - Line 14: Update cost value function -> Update cost value function with Equation (17)
- Fewer abbreviations, for example:
  - Delete "SORL" in lines 16 and 21, page 1 in the abstract, because the first sentence already determines the scope as SORL
  - Delete "ACPO" in line 70, page 2
  - Delete "CMDP" in line 72, page 2
Summary: This work focuses on offline safe RL, where existing baseline methods often suffer from OOD issues (as in general offline RL). To address this, the paper proposes the C2IQL method, which employs a cost reconstruction model to derive non-discounted cumulative costs from discounted values and incorporates a flexible, constraint-conditioned mechanism to accommodate dynamic safety constraints. This work also provides empirical evidence on the Bullet-Safety-Gym environments.

## update after rebuttal

My concerns have been addressed by the authors and I have updated my score.

Claims And Evidence: The experimental results are less convincing. Please see the weaknesses and questions below.

Methods And Evaluation Criteria: The benchmark environments are commonly considered in OSRL. However, the authors use Bullet-Safety-Gym only. Other, more complicated environments like SafetyGymnasium and MetaDrive should be considered as well.

Theoretical Claims: The theorems look reasonable, but I didn't carefully check the proofs.

Experimental Designs Or Analyses: Some of the experimental results are less convincing and show limited improvements over the baselines. Please see the weaknesses below.

Supplementary Material: I reviewed the whole supplementary material except the proofs.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
1. The idea is interesting.
2. I think the main weakness is that the proposed method is only evaluated on Bullet-Safety-Gym, which is relatively simple. DSRL has other, more complicated environments like SafetyGymnasium and MetaDrive.
3. There are no discussions/comments on the Theorems.
4. How do the authors obtain the results of the baselines, e.g., in Table 1? a) The proposed method has very limited improvement to be honest.
b) The proposed method does not beat the baselines mentioned in other OSRL work; e.g., see Table 5 of the following work:
Gong, Z., Kumar, A., & Varakantham, P. (2024). Offline Safe Reinforcement Learning Using Trajectory Classification. arXiv preprint arXiv:2412.15429.

Other Comments Or Suggestions: 1. I suggest that the authors focus on C2IQL; there may be no need to say that much about CIQL. In particular, perhaps don't treat it as a new method but as part of C2IQL.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

### Q1: A more complex benchmark is needed.
Thank you for your suggestion. To address it, we have incorporated additional experiments on the SafetyGymnasium benchmark to diversify the test tasks. Specifically, we have selected 4 Point tasks and 3 Velocity tasks as additional benchmarks different from Bullet-Safety-Gym. Given limited time, we have included results for C2IQL and two strong baselines (FISOR and CDT). Bold results indicate **unsafe** cases.

|Env||C2IQL|CDT|FISOR|
|-|-|-|-|-|
|PointCircle1|R|0.666 (0.09)|0.562 (0.04)|0.651 (0.04)|
||C|0.772 (0.11)|0.628 (0.23)|**1.230 (0.44)**|
|PointGoal1|R|0.745 (0.01)|0.689 (0.01)|0.612 (0.01)|
||C|0.924 (0.05)|**1.033 (0.26)**|0.474 (0.03)|
|PointPush1|R|0.343 (0.02)|0.252 (0.02)|0.244 (0.02)|
||C|0.464 (0.05)|0.447 (0.12)|0.114 (0.08)|
|PointButton1|R|0.343 (0.09)|0.519 (0.02)|0.063 (0.01)|
||C|0.913 (0.33)|**1.631 (0.87)**|0.018 (0.02)|
|HalfCheetahVelocity|R|0.985 (0.02)|0.975 (0.02)|0.855 (0.01)|
||C|0.512 (0.11)|0.061 (0.04)|0 (0)|
|HopperVelocity|R|0.791 (0.04)|0.715 (0.02)|0.153 (0.02)|
||C|0.344 (0.08)|0.422 (0.24)|0.029 (0.02)|
|AntVelocity|R|0.996 (0.01)|0.986 (0)|0.832 (0)|
||C|0.575 (0.18)|0.46 (0.12)|0 (0)|

The additional evaluation on SafetyGymnasium demonstrates that our method, C2IQL, achieves nearly a 10% improvement in performance for most tasks while satisfying constraints, further validating its superior performance.

### Q2: No discussion of the theorems.
Thank you for your suggestion. We will add an explanation of Theorem 1 in line 255 to illustrate how it is used in Theorem 2: "Theorem 1 provides us with a relationship between the constraint-penalized reward function and the corresponding formulation of the implicit policy.
When the implicit policy formulation follows Equation (14), the update of the value function under the implicit policy $\pi_{imp}(a|s)$ is equivalent to the update formulation of the value function in Equation (13) under the behavior policy (the policy for collecting the dataset). Thus we can utilize the formulation of the implicit policy obtained in Theorem 1 to derive how to update the cost value function based on its definition in Theorem 2."

### Q3: How do the authors obtain the results of the baselines?
For BCQ-Lag, BEAR-Lag, COptiDICE, CPQ, and CDT, we use the code provided by the OSRL library. For FISOR and VOCE, we use the official code released by the respective papers. To ensure a fair comparison, we run each method across 10 evaluation episodes, using 5 random seeds and 3 different constraint thresholds.

### Q4: The proposed method has very limited improvement.
The performance gain only appears marginal relative to CDT because unsafe results under smaller constraints are averaged with safe results under larger constraints, since Table 1 averages over 3 constraint thresholds following the style of most existing work. Thus we add Figure 2 as a supplement to Table 1 to illustrate that:
1. CDT **cannot achieve a safe policy for small constraints (like L<30)** under some tasks. This is a fatal disadvantage, since satisfying the constraint is the foundation of safety.
2. CDT **cannot achieve reward maximization for large constraints** (like L<70) compared to C2IQL.
3. C2IQL achieves **the best and safe performance for all three constraints**.

This indicates C2IQL provides substantial benefits in handling a wider range of constraints and in reward maximization accordingly.

### Q5: The proposed method does not beat baselines in other OSRL work [1].
Thank you for bringing up this concurrent work.
We would like to address the reviewer’s concern in the following aspects:

**Baseline Algorithm Selection Criteria:** We carefully selected strong-performing methods (e.g., CDT (2023), VOCE (2023), FISOR (2024), WSAC (2024) from reviewer qPih) from recent publications in the SORL field to ensure a comprehensive comparison. Extensive experiments on the Bullet-Safety-Gym and SafetyGymnasium benchmarks have demonstrated the superior performance of our proposed C2IQL. Regarding the concurrent work [1], we excluded it from our comparison because it was only posted on arXiv in 12/2024 and has not undergone the peer review process.

**Comparison with [1]:**
1. **Performance**: By directly comparing the results from Table 5 in [1] and Table 1 in our paper, we observe that C2IQL provides stronger performance than [1]:
- C2IQL achieves better results on AR, AC, BR, BC, CC, DR, and DC, demonstrating its strong performance across a variety of tasks.
- C2IQL exhibits slightly worse performance on CarRun compared to [1].
2. **Method**: [1] addresses issues with poorly behaved policies under global cost constraints. In contrast, C2IQL focuses on the OOD problem and is the first method designed to avoid the OOD problem in SORL. Besides, we also address the discounted cost formulation problem, which further improves constraint satisfaction and policy effectiveness.

[1] Offline Safe Reinforcement Learning Using Trajectory Classification.

---
Rebuttal Comment 1.1: Comment: Thank you for your response and for your additional experiments. I have a few comments below.
1. How many random seeds do the authors consider in the additional experiments?
2. As indicated by Table 1, FISOR is not a strong baseline. BCQ-Lag has a very good average reward even though it slightly violates the constraint (I assume it may work for the new SafetyGymnasium tasks). Even COptiDICE and CPQ outperform FISOR as shown in Table 1.
3.
I didn't mean that the authors had to compare exactly with the paper "Offline Safe Reinforcement Learning Using Trajectory Classification". This is just an example indicating that the baseline methods in other papers could be better than the baseline results shown in this paper. Moreover, the improvement of this work is limited. It is then hard to see whether this work really outperforms the current SOTA methods.

---
Reply to Comment 1.1.1: Comment: We sincerely thank reviewer `ojm5` for the engagement and appreciation of our interesting idea. We provide the following clarifications to address the remaining questions:

### Q1:
We use 5 random seeds for each case, which is consistent with Table 1 in our paper. As the results show, the performance gain of C2IQL clearly exceeds the variance margin, demonstrating significant improvement over existing baselines. Details in: https://anonymous.4open.science/r/rebuttal-B9B9/README.md

### Q2:
Thank you for your comments. We want to clarify that:
1. **FISOR is indeed a strong baseline**, as it achieves safety across all tasks, which is a fundamental requirement for SORL algorithms.
2. BCQ-Lag's higher average performance comes at the cost of safety violations. Specifically, BCQ-Lag violates constraints in 4/8 scenarios, indicating insufficient performance in safety-critical environments. We would like to emphasize that **comprehensive evaluation of SORL algorithms requires examining both safety satisfaction and reward performance.**
3. BCQ-Lag is added to the link and performs unsafely in SafetyGymnasium.

### Q3:
We agree on the importance of fair and comprehensive comparisons with current SOTA methods in SORL to demonstrate the superior performance of C2IQL. We believe we have adequately done so by:
1.
**Including relevant SOTA methods:** We conducted a comprehensive survey of recent publications (including ICML, ICLR, NIPS) in 2023-2024 and incorporated **a sufficient set of methods with promising results [2-6]** as baselines in Table-R1. Extensive experiments on Bullet-Safety-Gym and SafetyGymnasium show the superior performance of C2IQL, establishing it as the new SOTA.
2. **Ensuring faithful baseline reproduction**: We utilized official implementations or reliable third-party code to evaluate baseline performance. We also cross-compared our reproduced results with those originally reported (please see Table-R2) and found that our reproduction achieves comparable or better performance for the baseline methods, ensuring a fair comparison.
3. **Incorporating the additional baselines** suggested by reviewers `ojm5` and `qPih`. Results show C2IQL outperforms [1] and WSAC, further validating our method's effectiveness.

**Additional baseline reproduction details:**

Table-R1. Published papers in the SORL field and the corresponding baselines compared in each paper.

|Paper/Baseline|BCQ-L|BEAR-L|COptiDICE|CPQ|CDT|FISOR|VOCE|WSAC|
|-|-|-|-|-|-|-|-|-|
|This paper|Y|Y|Y|Y|Y|Y|Y|Y|
|CDT(2023)[2]|Y|Y|Y|Y|Y|N|N|N|
|VOCE(2023)[3]|Y|N|Y|N|N|Y|Y|N|
|FISOR(2024)[4]|Y|N|Y|Y|Y|Y|N|N|
|WSAC(2024)[5]|Y|Y|Y|Y|N|N|N|Y|
|OASIS(2024)[7]|Y|Y|Y|Y|Y|Y|N|N|
|[1] (2024)|Y|Y|Y|Y|Y|N|N|N|

Table-R2 below cross-references our reproduced results with those in recent publications:

Table-R2. Reported results comparison of baselines in recently published papers. For methods not evaluated on Bullet-Safety-Gym, we use "N". For those tested on Bullet-Safety-Gym, we list the performance shown in the original paper.
|Baseline|BCQ-L||BEAR-L||COptiDICE||CPQ||CDT||FISOR||VOCE||WSAC||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|**Paper**|R|C|R|C|R|C|R|C|R|C|R|C|R|C|R|C|
|C2IQL|0.69|1.05|0.49|1.49|0.53|0.80|0.41|0.83|0.71|0.91|0.31|0.06|0.42|1.80|0.54|0.97|
|CDT|0.79|2.67|0.59|1.2|0.50|2.96|0.76|2.84|0.83|0.72|N|N|N|N|N|N|
|VOCE|N|N|N|N|N|N|N|N|N|N|N|N|N|N|N|N|
|FISOR|0.71|10.63|N|N|0.55|7.64|0.32|5.28|0.63|2.09|0.39|0.1|N|N|N|N|
|WSAC|0.51|1.12|0.52|1.43|0.36|1.44|0.36|1.63|N|N|N|N|N|N|N|N|
|OASIS|0.78|3.21|0.65|4.38|0.64|2.30|0.34|9.07|0.63|2.44|0.41|0.48|N|N|N|N|
|[1]|0.74|3.11|0.48|3.8|0.55|2.55|0.33|1.12|0.68|1.04|N|N|N|N|N|N|

Performance variations across papers can be attributed to:
1. Different constraint threshold settings (FISOR employs nearly zero constraints, resulting in relatively large normalized costs)
2. Task selection differences (CDT omits the complex AntCircle (S=34, A=8, T=500))
3. Different problem focuses (OASIS addresses dataset mismatch problems and omits AntRun and AntCircle; it is essentially BCQ-Lag + diffusion-based data augmentation)

We hope these responses and the additional results provided in our rebuttal address your concerns and encourage you to consider a more favorable evaluation of our paper. Thank you again for the time you invested in evaluating our paper.

[2] Liu, Z. et al., 2023. Constrained decision transformer for offline safe reinforcement learning. ICML
[3] Guan, J. et al., 2023. VOCE: Variational optimization with conservative estimation for offline safe reinforcement learning. NIPS
[4] Zheng, Y. et al., 2024. Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model. ICLR
[5] Wei, H. et al., 2024. Adversarially Trained Weighted Actor-Critic for Safe Offline Reinforcement Learning. NIPS
[6] Xu, H. et al., 2022. Constraints penalized q-learning for safe offline reinforcement learning. AAAI
[7] Yao, Y. et al., 2024. OASIS: Conditional distribution shaping for offline safe reinforcement learning. NIPS
Summary: Offline RL has gained popularity recently as it can be trained with offline batched data without interacting with a simulation environment. Constrained offline RL further extends the idea by adding a threshold penalty on the entire trajectory cost. While offline RL and constrained on-policy RL are relatively well-studied problems, constrained offline RL still remains a challenging problem. The paper proposes to solve this problem by taking ideas from IQL and IDQL (to address the OOD problem) and introducing a cost reconstruction model to address the problem with discounted cost in the offline setting. Experimental results on the Bullet-Safety-Gym toy simulation environment demonstrate that the proposed C2IQL can outperform other baseline methods in terms of improving the reward values while maintaining the cost budget.

Claims And Evidence: Constrained offline RL is a challenging problem. The paper claims to solve this problem with cost reconstruction and the IQL method. The major claims of improving the reward and cost satisfaction have been proven theoretically and experimentally.

Methods And Evaluation Criteria: The proposed evaluation criteria in the safety-gym environments are standard practice in offline RL problems. The evaluation criteria also make sense to me. My only concern is that the threshold penalty is chosen randomly in the experiments. Adding experiments on real-world problem settings or a simulated version of some real problem would have been appreciated.

Theoretical Claims: Theorems 1 and 2 (extended from IQL) give reasonable insights on how to derive the algorithm, but quality guarantees and convergence proofs are missing.

Experimental Designs Or Analyses: Experiments are designed carefully with standard practice on safety-gym environments. C2IQL is also compared with several SOTA techniques. Ablation studies were conducted to demonstrate the value of the different steps proposed in C2IQL.
Overall, the results seem complete, but adding some results on real-world problems with noisy offline transition data would have been appreciated.

Supplementary Material: I have read the supplementary material at a high level and might have missed some of the mathematical proofs.

Relation To Broader Scientific Literature: Constrained offline RL is not a well-explored problem yet, and the real-world applicability seems to be in niche areas. Therefore, it would be of interest to a limited scientific community.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: Constrained offline RL is a challenging problem, and the paper's treatment of the main challenges, such as the OOD problem and discounted cost reconstruction, is a valuable contribution. Experimental results also seem to cover the key areas of evaluation. Here are some concerns though:
1. The motivation behind constrained offline RL is somewhat missing. Adding a few real-world examples in the introduction would be helpful.
2. Experiments are only conducted in a simulation environment with randomly generated cost trajectories and thresholds. Therefore, it is hard to predict how such techniques will perform in real-world scenarios with noisy transition sample trajectories.
3. Convergence proofs and computational complexity or runtime analysis are largely missing.

Other Comments Or Suggestions: Overall, the paper is written well and easy to follow. However, it is hard to follow the theoretical results and algorithms, so adding a brief high-level summary would be helpful to readers.

Questions For Authors:
1. Please provide a few real-world examples of constrained offline RL.
2. Have you done any runtime analysis of C2IQL? Can you provide the results for the different methods considered in the experiments?
3. Have you finetuned the parameters of the different baseline algorithms, or did you just take some random configuration?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

### Q1: My only concern is that the threshold penalty is chosen randomly. Adding experiments on real-world problem settings would be appreciated.
Thank you for your suggestion. The threshold penalty is chosen based on small (<50%), middle (50%), and large (>50%) fractions of the maximum cost, which is comprehensive enough to cover real-world constraint settings. Besides, we sincerely agree that addressing real-world problems is important. However, in most existing SORL benchmarks [1, 2], there are nearly no real-world simulators, since SORL is a newly rising field. Thus we will consider it in our future work. To address your concern, we have expanded experiments to other environments in [2]. Please refer to Q1 of **reviewer ojm5** for detailed results, due to the character limit.

[1] Gronauer, S. (2022). Bullet-safety-gym: A framework for constrained reinforcement learning.
[2] Ji, J., Zhang, B., Zhou, J., Pan, X., Huang, W., Sun, R., ... & Yang, Y. (2023). Safety gymnasium: A unified safe reinforcement learning benchmark. *Advances in Neural Information Processing Systems*, *36*, 18964-18993.

### Q2: Theorems 1 and 2 miss quality guarantees and convergence proofs.
We appreciate the reviewer recognizing the insights provided by Theorems 1 and 2, which highlight our contributions of avoiding the OOD problem and addressing the discounted formulation problem in constrained settings. Regarding the quality guarantees and convergence proofs, we believe our method shares the same theoretical results as CPQ. A sketch of the proof is given as follows: First, as the expectile parameter $\kappa$ increases to 1, $V^\pi_r(s)$ can approach $\max_a Q^\pi_{r|c}(s,a)$ because the expectile regression loss is a convex function. Second, convergence proofs of our algorithm can follow similar steps as those in CPQ, because CPQ utilizes $\max_a Q^\pi_{r|c}(s,a)$ while C2IQL utilizes $V^\pi_r(s)$.
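To make the first step of this sketch concrete, here is a minimal NumPy illustration (our own, not the authors' code; the `expectile` helper and the toy sample are assumptions for illustration) showing numerically that the $\kappa$-expectile of a sample moves from the mean toward the maximum as $\kappa \to 1$:

```python
import numpy as np

def expectile(x, kappa, iters=100):
    """Compute the kappa-expectile of samples x by bisection.

    The kappa-expectile t solves kappa*E[(x-t)_+] = (1-kappa)*E[(t-x)_+].
    kappa = 0.5 recovers the mean; as kappa -> 1 it approaches max(x).
    """
    lo, hi = x.min(), x.max()
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        # (Negative of the) derivative of the asymmetric squared loss at t.
        g = kappa * np.maximum(x - t, 0).mean() - (1 - kappa) * np.maximum(t - x, 0).mean()
        if g > 0:  # t is still too small
            lo = t
        else:
            hi = t
    return 0.5 * (lo + hi)

x = np.array([0.0, 1.0, 2.0, 10.0])
print(expectile(x, 0.5))   # the mean, 3.25
print(expectile(x, 0.99))  # close to max(x) = 10
```

This is only a numerical analogue of the claim: in C2IQL/IQL-style methods the expectile is fit by a neural value function rather than computed in closed form.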
### Q3: Adding a few real-world examples to motivate constrained offline RL in the introduction.
Thank you for your suggestion. We add a few real-world examples on the right part of line 24, page 1, second paragraph, to further strengthen our motivation: "… in safety-critical scenarios. **For example, unsafe operations could harm patients in healthcare, unsafe driving styles may lead to accidents, and unsafe decisions may incur additional costs in financial investments.** In these situations, …"

### Q4: A brief high-level summary would help readers follow the theoretical results and algorithms.
Thank you for your suggestion. We will add a high-level summary at the beginning of Section 3.1: "To derive a concrete CIQL algorithm, we need to answer three questions: **First, how do we update the constrained reward value/Q-value function in IQL style?** To address this problem, CIQL formulates a constraint-penalized reward Q-value function following CPQ and utilizes a value function with expectile regression to approximate the maximized Q-value function in the Bellman backup procedure. **Second, how do we update the cost value function under the same implicit policy, since it is hidden in the reward value function?** To address this problem, we rederive CIQL and obtain the formulation of the implicit policy in Theorem 1 following IDQL, and then derive the formulation of the cost value function under this implicit policy. **Third, how do we extract the policy?** We extract the policy in an expectile way following Equation 18."

### Q5: Runtime analysis is needed.
We record the training time of our proposed C2IQL and the baselines in the AntCircle scenario in the following table:

| Algorithm | BCQ-Lag | Bear-Lag | COptiDICE | CPQ | FISOR | VOCE | CDT | C2IQL |
| ------------- | ------- | -------- | --------- | ------- | ------- | ------- | -------- | ------- |
| Training Time | 4h23min | 5h08min | 2h57min | 3h46min | 2h05min | 2h33min | 11h32min | 5h41min |

Overall, the training cost of C2IQL is reasonable when compared to other methods. While it takes slightly more time than methods like FISOR and VOCE, it is still significantly faster than CDT, which has the highest training time. Notably, C2IQL achieves a remarkable balance between computational efficiency and performance. The additional training time is justified by the significant performance gains provided by C2IQL, making it a practical and effective solution.

### Q6: About the baseline configuration?
All baseline algorithms follow the default parameters, which are already finetuned in the OSRL [1] project for best performance. As for FISOR and VOCE, we finetuned the parameters to make sure the performance is consistent with the original papers. However, under the same environment with different constraints, we keep the same parameters for all methods, including our proposed C2IQL.

[1] Liu, Z., Guo, Z., Lin, H., Yao, Y., Zhu, J., Cen, Z., ... & Zhao, D. (2023). Datasets and benchmarks for offline safe reinforcement learning. *arXiv preprint arXiv:2306.09303*.
Outsourced Diffusion Sampling: Efficient Posterior Inference in Latent Spaces of Generative Models
Accept (poster)
Summary: This paper presents a new method for posterior sampling from a wide variety of generative models (GANs, flows, VAEs) that can be expressed as a deterministic transformation x = f(z) for z sampled from a simpler distribution like Gaussian noise. The key idea is to train an auxiliary diffusion model to produce an initial point z' which, when passed through f(z'), results in samples x' from the desired posterior in the data space p(x | y).

Claims And Evidence: There are 3 main claims: 1) it's applicable to a wide range of prior models; 2) it's effective under a variety of domains; 3) it's more efficient than MCMC-type methods. I think the claims are well supported. Results are shown on a variety of models and a lot of different tasks. One thing that wasn't clear to me is whether we need to train a different model for each posterior task. If so, then the efficiency is questionable, because MCMC posterior sampling methods can generally substitute in different likelihood constraints p(y | x) at inference without retraining. Since the R constraint is in the loss function (Eq. 4), it seems like the outsourced diffusion model is trained for each specific task.

Methods And Evaluation Criteria: In general, the method is well explained and makes sense. I like some of the core evaluations on the ImageReward examples that show how the proposed method improves over the baselines. I wonder why the examples are mostly 256 x 256. Is it harder to train in the latent space of larger models? Given that the first couple of figures are toy examples (the Swiss roll pictures) and the experiments are on small images, it seems like there may be more work needed to make this method truly applicable.

Theoretical Claims: I checked most of the math closely (excluding some of the proofs in the appendix). Everything seemed to follow logically, and I appreciate the authors interpreting many of the detailed equations in more intuitive/layman terms.
Experimental Designs Or Analyses: The broad experiments were great. There were many different generative models, constraints, and datasets used. As mentioned earlier, it would be nice to see some higher-resolution examples if possible.

Supplementary Material: I reviewed some of the network design and experiment details.

Relation To Broader Scientific Literature: I think this paper ties together posterior sampling across many different types of generative models nicely. It also positions itself in contrast to MCMC posterior sampling methods, which have generally been extremely slow and required many steps to converge. I like that it is positioned well with respect to all the posterior methods that the community uses. It is also a novel idea from my understanding.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: The paper is very clear. I found the method section easy to follow. I worry about the cost of training an auxiliary model for posterior sampling. A major advantage of using generative models for posterior sampling is the ability to handle many tasks (inpainting, super-resolution, etc.) with a single pretrained generative model. If the outsourced diffusion sampler has to be retrained for different R functions, then this method becomes much more constrained. In fact, I think this factor (the retraining of the outsourced sampler for different constraints) should be discussed more.

Other Comments Or Suggestions: I think there should be one nice teaser figure at the top. Starting the paper by only seeing the toy Swiss roll examples makes me think the practicality is already limited. The casual reader would greatly benefit from seeing a clear, compelling example up front, like maybe the cat-on-the-llama example.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and constructive feedback, as well as the positive comments on the exposition and evaluations. To address the questions:

### __Efficiency of training a model for each task__
It is true that the outsourced diffusion sampler must be trained separately for each likelihood constraint $p(y\mid x)$. However, for the diffusion sampler, the inference-time cost per sample is much lower than for MCMC (i.e., the sampling cost is amortized). In fact, the total cost of generating samples for evaluation was much lower using our method (**including** training time) compared to MCMC; for instance:
- The CIFAR-10 evaluation required generating 1000 samples, which took 10 hours with Hamiltonian Monte Carlo (using multiple chains in parallel), while the outsourced diffusion sampler took 5 hours for both training and sampling (of which sampling was a negligible fraction).
- A similar pattern held for MCMC in the protein experiments: the entire experiment took 8 hours for MCMC and 4 total hours for our method.

Additional details on the timing comparisons between MCMC baselines and our method are available in Appendix B.

Regarding the **need to retrain for new constraints**, we point out that the regular functionality of generative models remains intact when using our method to sample the latent space. Namely, we can still apply *approximate* training-free methods for guidance to solve intractable sampling problems (e.g., for inpainting or linear inverse problems), with the only difference being that the noise components are obtained from our diffusion sampler. Our method offers a computational trade-off whereby additional compute can be spent to sample more accurately from a given posterior distribution, compared to the generative model with standard noise.
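As a rough illustration of this amortization argument (our own back-of-the-envelope calculation, not from the paper), plugging in the CIFAR-10 timings quoted above gives the break-even sample count beyond which training a sampler once is cheaper than running MCMC, under the stated assumption that per-sample cost after training is negligible:

```python
# Break-even analysis using the CIFAR-10 timings quoted above:
# 10 h of Hamiltonian Monte Carlo for 1000 samples, vs. a one-off
# 5 h of diffusion-sampler training with negligible per-sample cost.
mcmc_hours_per_sample = 10.0 / 1000  # 0.01 h per MCMC sample
train_hours = 5.0                    # one-off amortized cost

break_even = train_hours / mcmc_hours_per_sample
print(break_even)  # about 500 samples; beyond this, the amortized sampler wins
```

Since the evaluation required 1000 samples, the amortized sampler comes out ahead by this simple model as well.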
### __Size of example images: Can we use larger models?__ The largest latent space we explore in our experiments is with Stable Diffusion 3, featuring a latent resolution of $16 \times 64 \times 64$, corresponding to an image resolution of $3 \times 512 \times 512$. We consider this to be a reasonably high resolution for practical applications: most image generative models, save those with excessive hardware and sampling time requirements, use similar latent dimensions. (Although diffusion models trained in pixel space with higher dimension exist, they typically require far more sampling steps than ones that work in a latent space.) Generally, larger latent spaces do increase training times, but our experiments demonstrate that our method scales effectively even with high-fidelity models like SD3. We note that for many generative models like GANs, the noise space is of much lower dimension than the data space (e.g., 512 vs. $3\times256\times256$ for the FFHQ StyleGAN3 we consider), which is further motivation to perform outsourced sampling. Finally, thank you for your suggestions on reorganizing the figures. We will incorporate this feedback into our final submission. **Thank you again for your review, and please do not hesitate to let us know if there is anything more we can clarify in the second response phase.** --- Rebuttal Comment 1.1: Comment: thank you for your detailed responses. I didn't realize the images were 512 x 512. This is very reasonable resolution, and should be made clear by including a larger figure earlier on. I maintain my recommendation of 4 (accept).
Summary: This paper addresses the posterior inference problem using diffusion sampling. By comparing their approach with existing MCMC methods and amortized inference methods, the authors demonstrate that their proposed outsourced diffusion sampling (ODS) method, optimized through the trajectory balance objective, is both efficient and effective. They evaluate its performance across three application domains: conditional image generation, text-to-image generation, and protein structure generation.

Claims And Evidence: This paper makes the following three claims. First, ODS is agnostic to the form of the prior, which is demonstrated through the experiments. Second, ODS is an effective posterior inference method that can be applied across multiple domains. While ODS shows improved results in certain application domains given the baselines the authors provided, some important baselines have not been considered, which I will discuss in detail in the Experimental Designs section. In addition, certain results are a bit far from the state-of-the-art methods, e.g., the FID of ODS with the I-CFM prior in the conditional image generation task. The experiments cover three application domains, which I consider sufficient. The third claim concerns efficiency, where the authors claim that ODS is more efficient than amortized inference methods and MCMC methods. I suggest the authors make this more explicit: is it training-time efficiency or sampling-time efficiency? In training time, ODS is better than Adjoint Matching (Domingo-Enrich et al., 2024). But the NFEs at sampling time are not compared between ODS and Adjoint Matching on the conditional CIFAR-10 data.

Methods And Evaluation Criteria: The proposed method is a proper fit to the problem and also to the applications that the authors are trying to target. In the Conditional High-Resolution Face Generation and Text-to-Image RLHF experiments, generated image quality metrics like FID are not reported.
Theoretical Claims: N/A

Experimental Designs Or Analyses: Class-conditional sampling with CIFAR-10 and Text-to-Image RLHF were carefully checked. For the first, a simple training-free baseline such as classifier guidance is missing from Table 3. Additionally, I'm unsure whether mentioning distillation on ODS is relevant. For the second, is the choice of SD3 instead of SD1.5 as the prior the reason for not comparing with Fan et al. (2023) and Venkatraman et al. (2024)? Both methods are closely related to the proposed method. Although the paper mentions that a CNF is not a diffusion model, recent work [1] demonstrates that flow-based models can be viewed as diffusion models with different noise schedules and parameterizations. Therefore, comparisons should also include closely related methods from the diffusion literature.

[1] Gao, R., Hoogeboom, E., Heek, J., De Bortoli, V., Murphy, K. P., & Salimans, T. (2024). Diffusion meets flow matching: Two sides of the same coin.

Supplementary Material: Section B of the Supplementary Material was checked for experiment details.

Relation To Broader Scientific Literature: Posterior inference is an important problem for the broader scientific field. Traditional MCMC methods may yield better results, but they suffer from slow sampling times. Amortized inference offers an alternative approach; however, it requires obtaining massive numbers of samples from the posterior for training. Therefore, developing an efficient solution to this problem is important, which is what this paper targets.

Essential References Not Discussed: The discussion of the relevant works is sufficient.

Other Strengths And Weaknesses:
- Strength: The proposed method offers a promising solution to the posterior inference problem at scale and is agnostic to the prior distribution. In addition, the proposed method does not require the reward or constraint function to be differentiable. The paper also includes applications from different domains.
- Weakness: My main concerns with this work are the writing and the experiments. The details of the proposed method in Section 4.2 are missing, and there is no algorithm block describing the overall method. This leaves an audience with no prior knowledge of the trajectory balance line of work struggling to differentiate between what already exists in the literature and what is newly proposed in this work. So I suggest that instead of introducing what a VAE/GAN is, the authors set up the audience with more details, like the "off-policy" divergences approach and the TB objective. Some of the weaknesses are mentioned in the Experimental Designs or Analyses section. Some of the results, such as the FID scores on conditional CIFAR-10, are far from those of state-of-the-art methods, which makes it hard to judge whether ODS can obtain high-quality posterior samples.

Other Comments Or Suggestions: Figure 1 comment: there is a typo, "a mixture of two Gaussians centered **an an** observation".

Questions For Authors: In Section 4.2, which parts of the proposed method exist in the literature and which parts are new in this work? Did the authors consider training-free posterior sampling methods as a proper baseline? Why does classifier guidance not appear in the conditional CIFAR-10 experiment? Could the authors compare ODS with Fan et al. (2023) and Venkatraman et al. (2024) on SD1.5 to demonstrate its effectiveness? Is the training stable, given that ODS needs to backpropagate through the entire sampling chain according to Eq. (4)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your good questions and constructive feedback. We have responded to your points below. ### __Algorithm details__ We agree that the addition of an algorithm block would help readers understand how to implement the core training loop, and added this to __Algorithm 1 in page 2 of the [linked pdf](https://anonymous.4open.science/r/results-FED2/results.pdf).__ ### __FID scores not state-of-the-art__ While FID scores are indeed worse than adjoint matching on CIFAR-10, we highlight that ODS is a general approach applicable to any generative model, while adjoint matching is specific to flow models (and not all flow models, see end of §3.1) and requires differentiating the reward. The FID scores reported in [Venkatraman et al.] are an order of magnitude lower than those presented in our paper. On examining the underlying codebases we discovered that the discrepancy arises from the FID packages used. Specifically, [Venkatraman et al.] uses [this codebase](https://github.com/marcojira/fld), while we use [this one](https://github.com/GaParmar/clean-fid). The distinction lies in the handling of image resolution: CIFAR images are smaller than ImageNet images, and only the latter implementation (which we use) accounts for this difference appropriately. ### __Efficiency__ Our efficiency claim refers to lower training time of ODS relative to finetuning methods, while having much cheaper inference than MCMC. There is a small (25 step) inference overhead relative to Adj. matching (for reducing this, see Appendix C.1). ### __Which part of the proposed method is new?__ Trajectory balance has previously been employed for training diffusion samplers from unnormalized densities (notably in [Lahlou et al.] and [Sendera et al.]), and a variant of it, relative TB, is a diffusion-specific fine-tuning method in [Venkatraman et al.]. 
Our novel contribution is the general capability to train a sampler (using TB) of the Bayesian posterior in noise space (§3.2) for any generative model, providing a general-purpose framework for amortized posterior inference. The exact way in which ODS generalizes relative TB is detailed in Appendix A.1. ### __Training-free baselines and classifier guidance__ Unbiased **classifier guidance** requires a *time-dependent* likelihood gradient: $\nabla \log p(y \mid x_t) = \nabla \log \mathbb{E}_{p(x_0 \mid x_t)}[p(y \mid x_0)]$ This necessitates training a classifier on noised inputs, as using one trained on clean data renders the gradient intractable. Such methods therefore make stronger assumptions than our setting, which assumes a black-box classifier on clean data. Training-free methods like DPS are biased approximations to classifier guidance but work well in practice. DPS uses the approximation $\nabla \log p(y \mid x_t) \approx \nabla \log p(y \mid \mathbb{E}[x_0 \mid x_t])$, and requires the same assumptions as our proposed algorithm making it an appropriate baseline. We’ve added results for DPS (Diffusion Posterior Sampling, [Chung et al.]), a strong training-free baseline for approximate posterior sampling with diffusion and flow models using the I-CFM prior on CIFAR-10. __Results are in Table 1 of the response to Reviewer qmBQ and also [here](https://anonymous.4open.science/r/results-FED2/results.pdf).__ DPS achieves high reward but poor FID, reflecting highly biased sampling. ### __Additional experiments with SD1.5__ We conducted additional experiments with Stable Diffusion 1.5 to compare against DDPO [Black et al.], DPOK [Fan et al.], and RTB [Venkatraman et al.], as requested. We also included RTB with the I-CFM prior as a baseline for the CIFAR-10 class-conditional sampling task. 
__Results are shown in Tables 2 and 1 of our response to reviewer qmBQ and are available [here](https://anonymous.4open.science/r/results-FED2/results.pdf).__ On SD1.5, ODS achieves a strong balance between reward and sample diversity. ### __Stability of training__ ODS uses TB, an off-policy RL objective that does **not** require backpropagating through the full sampling chain. Instead, it only needs local gradients of log-likelihoods for each sampling step (see [Venkatraman et al.], Appendix H.1). Such off-policy optimization has three benefits, according to prior work (see, e.g., [Nüsken & Richter], [Sendera et al.]): - The reward function need not be differentiable, since TB avoids using log-reward gradients; - TB avoids instability and mode collapse seen in methods that backprop through the full SDE trajectory (e.g., PIS [Zhang & Chen]); - Memory usage is minimal thanks to a gradient accumulation strategy (included in our code), avoiding storage of the full computation graph. In our experiments, we found training to be very stable for all tasks except for the protein diversity task, where policy collapse sometimes occurs. **Thank you again for your review, and please do not hesitate to let us know if there is anything more we can clarify in the second response phase.**
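The per-trajectory trajectory balance objective that this rebuttal refers to can be sketched minimally. This is an illustrative sketch, not the authors' implementation; the function name and interface are hypothetical, and in practice `log_Z` is a learned scalar and the step log-likelihoods come from the forward and backward policies:

```python
import numpy as np

def tb_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
    """Trajectory balance loss for one sampled trajectory:
    (log Z + sum_t log P_F - log r(x) - sum_t log P_B)^2.
    The reward enters only through its log-value, so it need not be
    differentiable, and no gradient flows through the full sampling chain."""
    delta = (log_Z + float(np.sum(log_pf_steps))
             - log_reward - float(np.sum(log_pb_steps)))
    return delta ** 2

# A trajectory whose forward and backward flows exactly balance gives zero loss.
print(tb_loss(0.0, [-1.0, -2.0], [-1.5, -1.5], 0.0))  # 0.0
```

Because the loss is a squared residual per trajectory, it can be minimized off-policy on trajectories drawn from any behavior policy, which is the property the rebuttal invokes for training stability.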
Summary: This work targets the problem of generating samples from the posterior $p(x | y) \propto p(x) r(x, y)$, where the prior $p(x)$ is a (pre-trained) generative model and $r(x, y)$ is a reward function. The authors argue that (most) generative models can be formulated as the application of a pushforward $f(z) = x$ to a simple latent distribution $p(z)$ (e.g. Gaussian). In this case, the initial problem is equivalent to generating from the posterior $p(z | y) \propto p(z) r(f(z), y)$, which the authors argue is easier to tackle. The authors train a *diffusion sampler* $p^\phi(z | y)$ using the trajectory balance (TB) objective (Malkin et al., 2022), which only requires access to the unnormalized density $p(z) r(f(z), y)$. The authors demonstrate on 3 text-to-image benchmarks and 1 protein design benchmark that their method is competitive against a few alternatives. ### update after rebuttal I thank the authors for their rebuttal and for taking my comments into account. I appreciate the additional discussion, and hope that they will be able to add it to the manuscript. My concerns regarding the evaluation of the posterior (calibration) somewhat remain, but I agree that proper evaluation remains a challenge in high dimension, especially in the absence of data pairs $(x, y)$. In this light, I will be raising my score to 4 (accept). Claims And Evidence: / Methods And Evaluation Criteria: Yes, the method is sound, the evaluation tasks are relevant, and several baselines are considered. However, 1. In the case where $p(x)$ is a flow-matching/diffusion model, an important baseline should be to fine-tune the generative model using the trajectory balance (TB) objective directly, which would validate the authors' claim that $p(z | y)$ is an easier target than $p(x | y)$ (line 74.5). This approach is taken by Venkatraman et al. [1]. 2. The assessment of the quality of the inferred posterior distributions is not sound. 
$\mathbb{E}[\log r(f(z), y)]$ is maximized by a collapsed distribution around $z^* = \arg\max_z r(f(z), y)$. The diversity is maximized by a maximal entropy distribution. Although these quantities are relevant for evaluating the inferred posterior $p^\phi(z | y)$, they are not sufficient. In fact, I suspect the method presented in this paper to be subject to mode collapse, and the current evaluation does not rule out this hypothesis. For example, you can observe in Figure 4.e that most dogs are white and facing the camera, which is not the case in the CIFAR-10 dataset. Similarly, in Figure 6, there is a collapse of the image composition (tabby cat on the llama, meteor falling bottom left, horse on the left). There is an extensive literature on the evaluation of posterior distributions, notably emerging from the SBI [2] community. For example, in scientific applications, the calibration of the posterior distribution is extremely important [3-5]. Another sensible metric could be the (reverse) KL divergence between $p^\phi(z | y)$ and $p(z | y)$. This KL can be computed up to the normalizing constant $Z(y) = \int p(z) r(f(z), y) dz$, and therefore can be used to compare different approximations of $p(z | y)$. In my opinion, a proper evaluation of the inferred posteriors is a *sine qua non* for a paper claiming to perform posterior inference. I would be happy to raise my score if the authors address this concern. [1] Amortizing intractable inference in diffusion models for vision, language, and control (Venkatraman et al., 2024) [2] The frontier of simulation-based inference (Cranmer et al., 2020) [3] Validating Bayesian Inference Algorithms with Simulation-Based Calibration (Talts et al., 2018) [4] A Trust Crisis In Simulation-Based Inference? 
Your Posterior Approximations Can Be Unfaithful (Delaunoy et al., 2021) [5] Sampling-Based Accuracy Testing of Posterior Estimators for General Inference (Lemos et al., 2023) Theoretical Claims: Yes, I quickly checked the proof of Prop. 3.1 and it seemed valid. The work is mainly applicative, as it consists of applying already existing methods (mainly TB) to a widespread problem. Experimental Designs Or Analyses: See Methods and Evaluation criteria. Supplementary Material: Nothing but the proofs. Relation To Broader Scientific Literature: The literature review in this work is complete and extensive, without being overwhelming. Kudos to the authors. Essential References Not Discussed: There is an extensive literature on the evaluation of posterior distributions [3-5] (this is a very limited selection), which this work does not consider/discuss. Other Strengths And Weaknesses: 1. The paper is very well written. The figures are clear. The experiments are relevant. 2. The evaluation is lacking a proper assessment of the posteriors' quality. 3. In my opinion, the methodological contribution of this work is modest with respect to previous works. The impact, however, could be significant. Unfortunately, this cannot be assessed without a proper evaluation. Other Comments Or Suggestions: 1. In Table 1, diffusion models should have $d_{data}$ as noise dimension, if they are to be considered as deterministic functions. The decoder of latent diffusion models is typically considered deterministic (even by the authors), leading to a noise dimension $d_{latent}$. This is consistent with Table 2. 1. In Figure 1, "two Gaussians **and** an observation" 1. In Figure 1, the bottom right plot should read $p(x | y)$ 1. Line 240, "**the one** defined by the target" 1. Line 241, "noising kernel" is undefined 1. Line 246, TB "loss" was introduced as "objective" in Eq. (4) 1. In Eq. (2), I would avoid the notation $\sigma_t$ as it has another meaning in the diffusion model literature 1. 
Line 312.5 $z \in \mathbb{R}^{128}$ Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the detailed review and constructive feedback, as well as pointing out that the paper is very well written. You raise some important concerns, which have helped us improve the paper and which we hope to address: ### __Posterior evaluation__ We agree that more comprehensive posterior evaluations are important for a thorough assessment. First, we emphasize that the FID scores reported in our original CIFAR-10 experiments capture both reward and coverage/diversity, and serve as a proxy metric for closeness to the true posterior. FID remains a widely accepted and robust metric for evaluating the sample quality of image generative models. For the CIFAR-10 experiments (comparing our amortized sampler to ground truth posterior samples), we tried posterior evaluation with TARP [Lemos et al., 2023] and PQMass [Lemos et al., 2024], an unconditional two-sample variant of TARP, but were not able to obtain results that seemed meaningful. For example, running TARP frequently showed prior samples as perfectly calibrated and unbiased relative to the conditional dataset, which is obviously false -- but the discrepancy is captured correctly in the FID scores. (Published code from those papers was used; to ensure the results do not contain an error, we checked that our code could reproduce values in line with those reported on CIFAR-10 in [Lemos et al., 2024].) For other experiments, ground truth samples from the posterior are not available, complicating evaluation (for instance, FID cannot be computed), which is the reason we resort to other metrics. For prompt-conditioned sampling we instead report the average CLIP feature distance, following the approach used by [Venkatraman et al.]. 
In our Stable Diffusion 1.5 experiments, the CLIP feature diversity scores exhibit a consistent and interpretable trend: the prior achieves the highest diversity, while DDPO, which performs greedy reward maximization without KL regularization, yields the lowest diversity. These trends support the reliability of our evaluation metrics in capturing meaningful aspects of sample diversity. At the reviewer's suggestion, we also report the ELBO (i.e., the lower bound on the log-normalizing constant $\log Z$, where the ELBO-to-log-likelihood gap is the reverse KL divergence). This is a widely used metric in the diffusion samplers literature, including [Sendera et al.], and is maximized by a perfect sampler of the target density. We note, however, that this metric cannot be used to evaluate inference-time baselines or MCMC. __Addition of the ELBO metrics for CIFAR-10 and SD3 are included in Tables 1 and 3 respectively, within the [linked PDF](https://anonymous.4open.science/r/results-FED2/)__. ### __Additional baselines__ We include experimental results for the RTB baseline proposed by [Venkatraman et al.] for flow models trained with independent coupling, which can subsequently be fine-tuned as diffusion models. This evaluation is done for RTB using the I-CFM prior on CIFAR-10; we also include new results comparing our method against RTB, DPOK [Fan et al.] and DDPO [Black et al.] with Stable Diffusion 1.5 as a prior with the same setup used by [Venkatraman et al.]. __These are included in Tables 1 and 2 of the response to Reviewer qmBQ, and also in the [linked pdf](https://anonymous.4open.science/r/results-FED2/results.pdf)__. We note that for CIFAR-10, the RTB baseline is significantly more unstable to train: requiring LoRA fine-tuning [Hu et al.], as well as lower learning rates to avoid a quick policy collapse. This aligns with design choices in the RTB paper [Venkatraman et al.]. 
Even with this, the training remains somewhat unstable for the flow model architecture, being very sensitive to training hyperparameters. Within the training time allocated to ODS, the model achieves a modest improvement in reward; however, the generated samples often fail to consistently belong to the target class and frequently exhibit visual artifacts. This degradation in quality is further reflected in poorer posterior metrics (FID, ELBO). We argue the training instability is caused by the flow prior, which uses a different noise schedule as well as much fewer inference steps compared to experiments in [Venkatraman et al.]. This instability could lead to "reward-hacking" behavior where reward improves at the cost of sample quality. By contrast, ODS improves largely within the first 2-3 GPU hours, and only marginally afterwards. The general difficulty of training RTB with the flow prior reinforces the claim that the latent posterior $p(z|y)$ is a simpler target than the image posterior $p(x|y)$. Finally, **thank you for your helpful suggestions on improving the notation and writing clarity**, as well as for pointing out the typos. We will implement this feedback for our final submission. **Thank you again for your review, and please do not hesitate to let us know if there is anything more we can clarify in the second response phase.**
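The ELBO metric introduced in this rebuttal (a lower bound on $\log Z$ whose gap to $\log Z$ is the reverse KL divergence) can be estimated by simple Monte Carlo over samples from the learned sampler. A minimal illustrative sketch, not the authors' evaluation code; the function name and interface are hypothetical:

```python
import numpy as np

def elbo_estimate(log_target_unnorm, log_q):
    """Monte Carlo ELBO over samples z ~ q:
    E_q[log p(z) r(f(z), y) - log q(z)] <= log Z,
    with gap equal to KL(q || posterior).
    Both arguments are per-sample log-densities for the same samples."""
    return float(np.mean(np.asarray(log_target_unnorm) - np.asarray(log_q)))

# A perfect sampler of an already-normalized target attains ELBO = log Z = 0.
print(elbo_estimate([-1.2, -0.7], [-1.2, -0.7]))  # 0.0
```

This makes concrete why the metric is maximized by a perfect sampler of the target density, and why it requires tractable sample log-likelihoods under $q$, which inference-time baselines and MCMC do not provide.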
Summary: ## Summary * This paper proposes a more general approach to posterior inference for generative models with a Gaussian prior (e.g. GAN, flow, diffusion models). More specifically, it proposes to learn a non-Gaussian prior (noise space z). * The paper is evaluated on various models and at various scales, from toy size to Stable Diffusion 3. Solid empirical results support the claims. Claims And Evidence: Yes. The authors claim a general approach to achieve conditional sampling by learning the prior p(z) of various generative models. They verify their approach on different kinds of models such as flow and diffusion. Methods And Evaluation Criteria: The evaluation spans from the toy CIFAR dataset to practical-scale image generation. The metrics could be organized in a better way, but common metrics such as FID and ImageReward are shown. Theoretical Claims: There is no theoretical claim in this paper. Experimental Designs Or Analyses: ## Experimental designs * The flow matching model in SD3 is a special type of flow matching model that starts from Gaussian noise. This type of flow is not much different from diffusion models. * For some models such as GANs and diffusion, there exist tailored methods to sample from the posterior, such as DPS [diffusion posterior sampling for general noisy inverse problems] and GAN inversion [PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models]. As the authors propose a general approach for different types of generative models, it would strengthen the paper if the authors showed the performance of their approach against model-specific approaches. Supplementary Material: Yes, I reviewed additional experimental results. Relation To Broader Scientific Literature: * The authors propose a general approach to learn an amortized conditional generative model by learning the prior. 
This paper is also related to a recent hot topic: golden noise [Not All Noises Are Created Equally: Diffusion Noise Selection and Optimization] and test-time scaling in text-to-image diffusion models [Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps]. Although this paper's method is amortized, it would be better to discuss its relationship to those non-amortized works. Essential References Not Discussed: See Relation To Broader Scientific Literature. Other Strengths And Weaknesses: ## Weakness * The authors are encouraged to organize the results in a better way. At present, too many results are shown, from image to protein generation. Different datasets, methods, and metrics are scattered across the main text and appendix. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful review. We address the weaknesses and questions you raised in our response below. ### __Comparison with model-specific approaches__ We thank the reviewer for pointing out the additional model-specific baselines with which to compare our general approach. - We note that for GANs, we compare to Hamiltonian Monte Carlo (HMC), a latent space exploration baseline similar to PULSE. - We have added the DPS baseline to the CIFAR-10 experiments. - We have added RTB as a diffusion-specific baseline for CIFAR-10 and Stable Diffusion-1.5 experiments. __The additional results are included in Tables 1 and 2 below, and also in the [linked pdf](https://anonymous.4open.science/r/results-FED2/results.pdf).__ Table 1: CIFAR-10 posterior sampling results, averaged over 10 classes | Model | Sampler | $\mathbb{E}[\log p(\textbf{y} \mid \textbf{x})]$ $(\uparrow)$ | FID $(\downarrow)$ | ELBO $(\uparrow)$ |
|-------|----------------------|------------------------------------------------------------|--------------------|--------------------------------|
| I-CFM | Prior | -5.88 | 84.79 | -24.04 |
| | DPS | -2.22 | 84.96 | - |
| | RTB | -4.20 | 90.77 | -147.69 |
| | Latent HMC | -2.80 | 46.69 | - |
| | Adj. Matching | -3.09 | 19.45 | -17.23 |
| | **Outsourced Diff.** | -3.35 | 34.28 | -20.36 |

In the CIFAR-10 results we note that DPS obtains a higher reward than Outsourced Diff., with a worse FID score (and ELBO -- see the response to Reviewer nZ5P). This points to possible reward hacking. Additionally, the added RTB baseline for this experiment proved much more unstable during training than our method, reflected in its worse FID (and ELBO) scores, while improving the reward relative to the prior. Some RTB runs underwent policy collapse, requiring early stopping for the reported results. Table 2: SD 1.5 fine-tuning results. DDPO, DPOK and RTB results taken from [Venkatraman et al.] 
| Sampler | $\mathbb{E}[\log r(\textbf{x}, \textbf{y})] (\uparrow)$ | CLIP diversity $(\uparrow)$ | |----------------------|--------------------------------------------------------|-----------------------------| | Prior | -0.17 | 0.18 | | DDPO | 1.37 | 0.09 | | DPOK | 1.23 | 0.13 | | RTB | 1.4 | 0.11 | | **Outsourced Diff.** | 1.26 | 0.14 | For the SD1.5 experiment, we see that our method achieves a slightly lower reward than RTB, while obtaining a higher diversity over all baselines (other than the prior). ### __Relationship to non-amortized methods (golden noise and test-time scaling)__ We agree that incorporating a discussion comparing our proposed amortized sampler with these inference-time optimization methods would strengthen the paper. Notably, the latent space HMC baseline used in our CIFAR-10 and FFHQ experiments shares conceptual similarities with these inference-time noise optimization approaches. While the methods in prior work do not asymptotically sample from the true Bayesian posterior over the noise space, they effectively bias the sampling process toward high-reward regions, often producing qualitatively similar results. In contrast, our method frames prior-regularized reward fine-tuning as posterior inference, whereas these inference-time techniques are more aligned with direct reward maximization. We will integrate this discussion into the appendix of the final version. ### __Organization of results__ We thank the reviewer for their feedback on the organization of results. We will merge some of the results that share similar metrics and baselines (such as CIFAR and FFHQ), and reorganize the tables to make the presentation more clear for the final submission. **Thank you again for your review, and please do not hesitate to let us know if there is anything more we can clarify in the second response phase.**
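The CLIP feature diversity reported in Table 2 can be computed as an average pairwise distance between feature embeddings of generated samples. A plausible minimal sketch (the exact metric used by [Venkatraman et al.] may differ; the function name is hypothetical, and `features` stands in for CLIP embeddings of the samples):

```python
import numpy as np

def clip_diversity(features):
    """Mean pairwise cosine distance between L2-normalized feature
    vectors of generated samples; higher means more diverse."""
    f = np.asarray(features, dtype=float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    sims = f @ f.T                      # pairwise cosine similarities
    i, j = np.triu_indices(len(f), k=1)  # each unordered pair once
    return float(np.mean(1.0 - sims[i, j]))

print(clip_diversity([[1.0, 0.0], [1.0, 0.0]]))  # 0.0: identical samples
print(clip_diversity([[1.0, 0.0], [0.0, 1.0]]))  # 1.0: orthogonal features
```

Under this kind of metric, a mode-collapsed sampler scores near zero while a broad prior scores highest, matching the trend the rebuttal describes (prior > ODS > RTB > DDPO).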
Ehrenfeucht-Haussler Rank and Chain of Thought
Accept (poster)
Summary: The paper studies the expressivity of one-layer Transformers with hard attention (in particular for Boolean functions), where it's shown that the Ehrenfeucht-Haussler rank of a Boolean function is equal to the number of (continuous) chain-of-thought (CoT) tokens that a single-head model has to produce to express the target label. The paper further extends this finding to multi-head single-layer Transformers and more general alphabets by defining appropriate ranks. ## update after rebuttal I maintain my positive assessment for the paper. Claims And Evidence: The claims are supported by theoretical arguments and proofs. However, I have some concerns about the mathematical modeling of decoder-only Transformers in this paper. In particular, this paper assumes that CoT tokens are continuous arbitrary vectors and not predefined tokens. So I wonder if the results hold if we somehow limit the Transformer to a constant-size (e.g., {0, 1, EoL}) alphabet. Methods And Evaluation Criteria: See above. Theoretical Claims: Yes, the theoretical arguments seem valid to me. See above for remarks on modeling. Experimental Designs Or Analyses: No experiments. Supplementary Material: I have read the supplementary material, but I didn't verify all the proofs carefully myself. Relation To Broader Scientific Literature: This paper studies the relation between CoT complexity and the rank of Boolean functions for a specific class of Transformer functions. The paper does not provide any broad insight into the working mechanisms of Transformers or large language models and is pretty shallow on the machine learning side. So I think the paper would mainly be of interest to a niche community of ML theory people. Essential References Not Discussed: I think the original chain-of-thought and scratchpad papers should be cited. Other Strengths And Weaknesses: The paper uses different notions and techniques from theoretical computer science. This makes the paper interesting but also hard to read. 
Nevertheless the writing is quite clear. Other Comments Or Suggestions: - When discussing the related work, some results depend on the size and depth of the models. It would be nice to discuss them in more detail. - Elaborating on the questions below could be helpful. Questions For Authors: - What is the central message of the paper for the ML community? - Can the arguments work with a decoder model that predicts discrete tokens belonging to a constant-size alphabet? (See 'Claims and Evidence' part) - I think the decision trees could be defined and explained better. Do trees always have exactly $n$ queries? Is it always the same query at a particular depth? What is the meaning of the equation on line 121? - In line 209 what is $u_{i_h}$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you a lot for your comments and questions. We respond to all three below. *Q1: What is the central message of the paper for the ML community?* The ability of Transformers to perform function composition has garnered increasing attention in recent years, as understanding this capability sheds light on the computational resources they require to infer implicit knowledge from a given set of facts. Peng et al. demonstrated that single-layer, soft-attention Transformers without Chain-of-Thought (CoT) reasoning are fundamentally incapable of function composition. However, when CoT is introduced, they can achieve iterated composition—albeit at the cost of requiring a growing number of steps, which depends on both vector dimensionality and feature precision. Our work precisely quantifies the number of steps needed for t-th iterated composition and establishes that, under the idealized assumption of hard-attention, the number of required CoT steps is exactly t. This finding underscores a key insight: while CoT enables function composition, it does so incrementally—one step at a time. We believe this to be the central message for the ML community. *Q2: Can the arguments work with a decoder model that predicts discrete tokens belonging to a constant-size alphabet? (See 'Claims and Evidence' part)* Our argument works with discrete chain-of-thought tokens as well. Indeed, in our construction, CoT tokens essentially are used to maintain the one-hot encoding of the current node of the decision tree. In place of one-hot encoding, we can store these nodes directly in discrete tokens. We do not know how to obtain our results with discrete tokens, belonging to the constant-size alphabet. Note that in our setting, even input tokens do not necessarily belong to a constant-size alphabet (for example, in t-comp, input tokens come from the alphabet {1, …, n}). *Q3: In line 209 what is $u_{i_h}$?* This should be $x_{i_h}$. Thank you for noticing this typo. 
--- Rebuttal Comment 1.1: Comment: Thank you for the response. In agreement with other reviewers, I think it would be beneficial to explain the relation of this work with other complexity measures. Also, I think the alphabet should be discussed more thoroughly (for example, if we use discrete tokens, how many of them would be needed?)
Summary: The paper shows that the minimal number of CoT steps a single-layer hard-attention Transformer needs to compute a function exactly corresponds to the function's EH rank. In essence, it proves that the EH rank equals the minimum depth of decision trees (over assignment queries) that simulate the Transformer's computation process. Two directions are established: one where Transformer iterations mimic decision tree steps to resolve attention choices, and another where a Transformer decoder is constructed to replicate a decision tree of a given rank. Key canonical functions are analyzed. For iterated composition, it’s shown that the EH rank equals the number of compositions, while for the kth-one identification problem, the rank exactly equals k when the input size is sufficiently large. These results are supported by communication complexity and combinatorial arguments. Additionally, the paper extends these findings to multi-head attention by defining an H-head rank, proving that even with parallel a-queries, the sequential steps required for tasks like composition or counting remain unchanged. Overall, the work bridges classical complexity (EH rank) with modern Transformer architectures, highlighting both their computational power and the limitations of parallelism in structured tasks. Claims And Evidence: The claims are supported by clear theoretical evidence within the paper’s scope (single-layer hard-attention Transformers). The proofs for equivalence and lower bounds are systematic, leveraging combinatorial fixation, communication complexity, and inductive self-reducibility. However, the reliance on hard attention and idealized input fixation limits practical applicability. No evident gaps or errors are present in the core arguments. 1. Equivalence of EH Rank and Decoder Depth: $\mathrm{rk}(f) = \mathrm{dd}^{(1)}(f)$ for any function $f$. 1. Rank ≤ Decoder Depth: Simulates Transformer steps via decision trees over a-queries. 
Each CoT step corresponds to resolving one a-query (Appx. A.1). 2. Decoder Depth ≤ Rank: Builds a Transformer decoder emulating rank-$r$ decision trees (Sec. 4). Positional encodings track decision paths, ensuring equivalence. The bidirectional reduction is explicit, with detailed embeddings and attention mechanisms (Sec. 4.1). 2. Tight Bounds for Canonical Functions $\rightarrow$ for t-Comp, $\mathrm{rk}(t\text{-Comp}_n) = t$ for $n > 2t$. The authors show a combinatorial argument via input fixation (Prop. 2.3). Fixing $t-1$ values forces at least $t$ a-queries (Appx. A.2). 3. The authors also show a multi-head generalization, claiming $\mathrm{rk}^{(H)}(f) = \mathrm{dd}^{(H)}(f)$; for $t$-Comp, this reduces to pointer chasing (PC) with $\Omega(t)$ rounds (Cor. 5.5), and for $k$-thOne, uses induction and unfixed intervals (Thm. 5.6). --- A few limitations that I found: 1. Results apply to idealized hard attention, not softmax-based models. The paper acknowledges this but does not explore extensions (Sec. 6). 2. Arguments for $k$-thOne (Appx. A.3) rely on intricate partial fixations. While valid, the self-reducibility step assumes $f(n)$ grows sufficiently without explicit bounds. 3. The reduction from PC assumes Bob-first protocols require $\Omega(n)$ communication, but the paper does not re-prove this result, relying on Duris et al. (1987). Methods And Evaluation Criteria: 1. The equivalence between EH rank and decoder depth is established via explicit bidirectional reductions (decision trees ↔ Transformers). This provides a formal foundation for analyzing CoT steps. Constructive embeddings (Sec. 4) ensure equivalence, while combinatorial fixation (Prop. 2.3) and communication complexity (Cor. 5.5) enforce tight bounds. 2. Reduction to pointer chasing (PC) with Ω(t) communication rounds (Duris et al., 1987) is valid and aligns with established complexity theory. Partial fixation arguments (Appx. 
A.3) enforce sequential resolution of 1s, leveraging inductive self-reducibility. Limitations: 1. The analysis is restricted to single-layer decoders. Modern Transformers use multiple layers, which may reduce CoT steps via hierarchical processing—unaddressed in the paper. 2. The decoder depth equivalence assumes custom positional encodings (Sec. 4.1). It is unclear if results hold for standard embeddings (e.g., sinusoidal, learned). 3. While the paper proves multi-head rank equivalences (Thm. 5.2), it does not identify functions where additional heads reduce steps. The claim that “multi-head attention cannot circumvent inherent sequential steps” applies only to t-Comp/k-thOne—not general tasks. 4. One thing which I was hoping to find in this paper: its applicability to real-world Transformers remains unproven. Future work should address soft attention and multi-layer models while incorporating standard NLP benchmarks. Theoretical Claims: All the theoretical claims look to be correct; just one doubt on Theorem 5.2 -> Multi-Head Rank Equivalence claims $ \mathrm{rk}^{(H)}(f) = \mathrm{dd}^{(H)}(f) $, each head’s a-query is parallelized via $ H $-degree queries. Positional encodings and matrices generalize the single-head case. $W_O$ concatenates head outputs, and $ W_1, W_2 $ ensure state transitions. Assumes ReLU correctly prunes invalid paths. Is this valid only if matrix dimensions align with the expanded coordinate system? Experimental Designs Or Analyses: The paper is purely theoretical, focusing on proving equivalences and lower bounds via combinatorial/communication complexity arguments. Since it contains no empirical experiments, there are no traditional "experimental designs" or statistical analyses to critique. 1. All results assume idealized hard attention (argmax over tokens). While acknowledged, this limits practical relevance to real-world softmax-based Transformers. 2. Lower bounds rely on adversarial input constructions (worst-case analysis). 
No consideration of average-case or probabilistic inputs. 3. In Thm 4.1, matrices $W_1, W_2$ are described via logic (e.g., “ReLU correctly prunes invalid paths”) but lack explicit parameterization. While the logic holds, explicit matrices would strengthen rigor. 4. The $t\text{-Comp}$ lower bound (Cor 5.5) depends on Duris et al.’s $ \Omega(n) $ bound for PC. While this is a standard reference, the paper does not validate if $t\text{-Comp}$’s structure fully aligns with PC’s assumptions (e.g., cyclic dependencies). Supplementary Material: There are no supplementary materials provided for this paper Relation To Broader Scientific Literature: The paper advances three major threads: 1. Bridging Complexity Theory and Transformers: Connects EH rank (PAC learning) to CoT steps (Transformer theory). 2. Multi-Head Limitations: Shows parallelism cannot circumvent sequential rank bounds, contrasting with softmax-based analyses. 3. Combinatorial Proof Techniques: Introduces novel fixation/self-reducibility arguments for positional tasks, complementing communication complexity methods. These results refine the understanding of Transformer’s inherent limitations, providing theoretical grounding for empirical observations about CoT’s necessity in complex reasoning. Essential References Not Discussed: The paper does not cite several recent works that provide critical context for its contributions. Below are key omissions: ## 1. 1.1 Li et al. (ICLR 2024): ["Chain of Thought Empowers Transformers to Solve Inherently Serial Problems"](https://openreview.net/forum?id=3EWTEy9MTM) proves that transformers with CoT can simulate polynomial-size circuits, resolving whether CoT fundamentally increases expressivity. This work establishes that CoT allows transformers to overcome parallel computation limits (e.g., processing **TC⁰** vs **AC⁰**), which directly contextualizes the current paper’s focus on sequential steps and rank equivalence. 
1.2 The current paper’s claim that CoT steps correspond to EH rank aligns with Li et al.’s conclusion but lacks a broader complexity-theoretic framing. ## 2. Feng et al. (2023): ["Transformers as Neural Solomoff Inducers"](https://arxiv.org/abs/2310.10691)* connects CoT to circuit complexity, showing transformers with CoT can solve problems outside NC (parallelizable classes). The current paper’s combinatorial proofs for k-thOne and t-Comp could be strengthened by contrasting with Feng et al.’s circuit-depth arguments. ## 3. Hu et al. (2024): ["Stepwise Self-Consistent Training for Language Agents"](https://arxiv.org/abs/2402.03286) demonstrates empirically that CoT steps reduce estimation error in multi-step reasoning, even with noisy intermediate tokens. The current paper’s theoretical analysis of hard attention could benefit from engaging with Hu et al.’s findings on robustness. ## 4. Chen et al. (2024): ["Theoretical Limitations of Multi-Layer Transformer"](https://arxiv.org/abs/2412.02975) proves dimensional lower bounds for soft attention in deep transformers. While cited briefly in Sec. 6, this work is critical for understanding why the current paper’s single-layer, hard-attention analysis cannot trivially extend to multi-layer models. The paper does not contrast its rank-based bounds with Chen et al.’s dimension-dependent constraints. ## 5. Bhattamishra et al. (2024): ["Transformers Learn Higher-Order Programs"](https://arxiv.org/abs/2403.00732) shows transformers with CoT can learn program induction in-context. The current paper’s focus on decision trees could be enriched by discussing how program induction aligns with EH rank’s sequential resolution. Other Strengths And Weaknesses: The paper is a theoretically rigorous contribution that advances our understanding of Transformers’ computational limits. While its scope is narrow, its originality and formal insights lay groundwork for future research on CoT and neural architectures. 
Other Comments Or Suggestions: No other questions or comments for the authors. Questions For Authors: **Question 1: Hard Attention vs. Practical Softmax Models** The paper establishes equivalences under hard attention, but real-world Transformers use softmax attention. **How do the authors anticipate their results generalizing to soft attention?** --- **Question 2: Combinatorial Fixation Assumptions** The lower bound for $k$-thOne relies on adversarial input fixation (Lemma A.2). **Do these bounds hold for inputs with probabilistic structure (e.g., i.i.d. 1s) or only worst-case inputs?** --- **Question 3: Multi-Layer Architectures** The paper focuses on single-layer decoders. **Can the EH rank framework extend to multi-layer architectures, given Chen et al.’s (2024) dimensional lower bounds for multi-layer soft-attention models?** --- **Question 4: Explicit Matrix Definitions** Theorem 4.1’s Transformer construction lacks explicit parameterization of $W_1, W_2$. **Can the authors provide a full specification of these matrices (e.g., via block-diagonal structures or sparse encodings)?** --- **Question 5: Relation to CoT’s Circuit-Theoretic Power** The paper does not cite Li et al. (ICLR 2024), which shows CoT enables Transformers to solve **P**-complete problems. **How does EH rank align with CoT’s role in overcoming parallel computation limits (e.g., TC⁰ vs. AC⁰ separations)?** Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you kindly for all your comments and inspiring questions! *The claim that “multi-head attention cannot circumvent inherent sequential steps” applies only to t-Comp/k-thOne—not general tasks.* The following function, with H-head rank 1 and 1-head rank H, establishes tightness of Proposition 5.3. [Dahiya, Mahajan, On (Simple) Decision Tree Rank] gives the function OR_H \comp AND_m, defined as the disjunction of H conjunctions, where each conjunction is taken on m disjoint variables. The aforementioned paper shows that its normal rank is H. On the other hand, its H-head rank is 1 because each head can compute one conjunction. *All the theoretical claims looks to be correct, just one doubt on Theorem 5.2 -> Multi-Head Rank Equivalence (...) This is valid if matrix dimensions align with the expanded coordinate system?* Yes, this is still valid. In particular, by expanding the number of coordinates, we create $H$ independent blocks to encode assignments. This allows us to one-hot encode the assignments $a_1, .., a_H$, the results of the $H$ a-queries of the current step, each within one of the blocks. In the matrix $W_1$, in each row (corresponding to a potential node $v_{t+1}$), there will be precisely one 1 per assignment block, indicating the unique tuple of answers $a_1, .., a_H$ that leads to the node $v_{t+1}$ from $v_t$. There is a small typo here: we must have $-H$ before the special coordinate instead of $-(H-1)$. This is because we want the expression $\mathrm{ReLU}(b_0 + b_1 + \dots + b_H - H)$ to be $1$ if and only if $b_0 = b_1 = \dots = b_H = 1$, if and only if $b_0 + b_1 + \dots + b_H = H + 1$. We will fix this in the revised version. Re: *the paper does not validate if $t\text{-Comp}$’s structure fully aligns with PC’s assumptions (e.g., cyclic dependencies).* Let us clarify that for this lower bound, we consider phi’s that map numbers from the first half to the second half, and numbers from the second half to the first half.
Such mappings are decomposable into two independent functions $g:\{1, .., n/2\} \rightarrow \{n/2+1, \dots, n\}$ and $h:\{n/2+1, \dots, n\} \rightarrow \{1, .., n/2\}$, which fully aligns with the assumptions of the pointer chasing problem. Functions that are not decomposable in this way are not needed. *Q1:* The exact relation between hard and soft attention constitutes a major open problem in the area. Recently, there has been some progress with regard to simulating hard attention with softmax, see e.g. Yang et al. 2024 [Simulating Hard Attention Using Soft Attention]. Then there are examples of tasks, like PARITY, which can be done with softmax but not with hard attention. At the same time, in experiments softmax transformers struggle to learn PARITY. So lower bounds for hard attention do seem to predict well what a softmax transformer can do in practice, even if these lower bounds do not always generalize in theory. *Q2:* Great question! By Yao’s principle, this is equivalent to asking whether there is a randomized transformer that, on any input, solves the task with probability of error at most 1%. We do not immediately see how to extend our lower bound to work against randomized transformers, but we hope that some argument exists. Indeed, this question is relevant from a practical point of view, as real language models generate tokens by sampling. *Q3:* We hope that some ``recursive'' version of EH rank can capture multi-layer hard attention decoders. For instance, we can think of level-2 decision trees as low-rank decision trees that, in their leaves, instead of just 0/1, can compute a low-rank decision tree. For instance, the palindrome function can be computed as a disjunction of conjunctions, which is a level-2 decision tree of rank 1, as disjunctions and conjunctions are normal rank-1 functions. We find it plausible that some variation of this notion will capture 2-, 3-, 4-, …-layer decoders, but of course, this is a future-work direction.
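The ReLU correction described in the Theorem 5.2 response above (using $-H$ rather than $-(H-1)$ before the special coordinate) is easy to sanity-check numerically. A minimal sketch of our own with hypothetical 0/1 indicator values — the scalar gate only, not the paper's full construction:

```python
def gate(bits, offset):
    """ReLU(b_0 + ... + b_H - offset) for 0/1 indicators b_i."""
    return max(0, sum(bits) - offset)

H = 3
all_ones = [1] * (H + 1)   # b_0 = ... = b_H = 1
one_zero = [1, 1, 0, 1]    # a single indicator is off

# Corrected offset H: the gate outputs 1 exactly when every indicator is 1.
assert gate(all_ones, H) == 1
assert gate(one_zero, H) == 0

# Typo'd offset H-1: the gate fires on both inputs, so it cannot prune invalid paths.
assert gate(all_ones, H - 1) == 2
assert gate(one_zero, H - 1) == 1
```

This mirrors the rebuttal's point: with offset $H$ the sum $b_0 + \dots + b_H = H+1$ is the only input that survives the ReLU with value 1.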
*Q4:* The matrix W_2 is set to be the identity matrix, as stated on Page 6, line 285. As for W_1, to increase the readability of the proof, we have defined it as the matrix of a linear transformation, defined in (6-8). We believe it is easy to deduce the explicit description of W_1 from the description of the corresponding linear transformation. Namely, (W_1)_{ij} is equal to the value of the i-th coordinate of the image of our linear transformation on the vector that has 1 in the j-th coordinate and 0s elsewhere. *Q5* We actually cite this paper, but we have to fix the citation; thank you for pointing this issue out to us. By the results of Li et al., to solve P-complete problems, transformers need polynomially many iterations of CoT. In turn, EH rank allows us to precisely characterize what functions are computable in any given fixed number of iterations (1, 2, 3, and so on). The results are thus formally incomparable and complement each other in different regimes of the number of CoT steps. --- Rebuttal Comment 1.1: Comment: I thank the authors for taking the time to respond to the questions. All the questions are clearly explained. I would greatly appreciate it if the authors could respond to the limitations as well. --- Reply to Comment 1.1.1: Comment: Thank you. With regard to the limitations not addressed in our previous response: Limitation 1. Although our result holds for just a single layer, we believe that it could inspire similar results for multiple layers. For instance, some generalization of the notion of rank could potentially come into play here. Limitation 2. Our result demonstrates that with learnable positional encoding, there always exists a choice of positional-encoding parameters that computes a given function in a number of steps equal to its rank. We do not immediately see how to obtain our result with sinusoidal positional encoding; this is a great question for future work.
Limitation 4. Let us also point out that the tasks considered in this paper have practical motivation, in particular function composition. Previous papers, like https://arxiv.org/abs/2311.13314, observed that LLMs struggle with prompts that can be viewed as compositions of functions (e.g. “When is Frederic Chopin’s father’s birthday?”). Peng et al. (https://arxiv.org/abs/2402.08164) introduced t-Comp in this context exactly as a model of a compositional task, which could be used to explain why LLMs struggle with compositional tasks. We will elaborate on this in the revised version.
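As a toy illustration of the compositional tasks mentioned above: t-Comp asks for the t-fold application of a function given as a lookup table, and each loop iteration below corresponds to resolving one composition — the single a-query per CoT step that the lower bounds in this thread say cannot be avoided. This is our own simplified encoding, not the paper's exact formalization:

```python
def t_comp(phi, start, t):
    """Apply the mapping phi t times; one iteration per composition step."""
    x = start
    for _ in range(t):
        x = phi[x]
    return x

# Hypothetical relational example in the spirit of the Chopin prompt:
# resolving "father of" and then "birthday of" is a 2-fold composition.
phi = {1: 3, 2: 1, 3: 2}
assert t_comp(phi, 1, 2) == 2   # phi(phi(1)) = phi(3) = 2
```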
Summary: This paper characterizes the notion of rank for Boolean and non-Boolean functions computed with a one-layer transformer. Specifically, they show that the rank of a function is equivalent to the minimum number of chain-of-thought steps required by a single-layer Transformer with hard attention to compute the function. They also generalize the definition to H-head attention. For some compositional tasks, they show $k$-fold function composition necessitates exactly $k$ CoT steps. Claims And Evidence: The theoretical claims are supported by proofs. Methods And Evaluation Criteria: They evaluate the proposed rank for the simple compositional tasks t-Comp and k-thOne. Theoretical Claims: I didn't check the proofs, but the theoretical results make sense to me. Experimental Designs Or Analyses: There are no experiments in the paper. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper might help improve the understanding of the expressive power of transformers. Essential References Not Discussed: The paper discussed several related works, but I think it would be good to discuss the relation with these works. For example, what is the relation between the rank and different circuit complexity classes? What is the complexity of t-Comp and k-thOne? Other Strengths And Weaknesses: ### Strengths - I think the connection between the rank of a function and the number of CoT steps is quite interesting. It might help us to have a better understanding of the CoT steps required for different problems. - The proposed connection between rank and decoder depth allows us to have a finer analysis of the task complexity at hand. ### Weaknesses - The specific tasks considered, t-Comp and k-thOne, are very simple. It would be great to consider some more practical tasks, e.g. the arithmetic tasks and Dynamic Programming in [1].
Or maybe give more examples of functions with different ranks so that the reader can have a better sense of the complexity of different tasks. - The fact that t-Comp and k-thOne need t or k CoT steps seems not very surprising since every step can only compute a single composition/iteration and the multi-head can only add different functions together but not compose functions. - It would be great to discuss the relations with previous works. What is the relation between the rank and circuit complexity? [1] Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. Towards revealing the mystery behind chain of thought: A theoretical perspective. NeurIPS, 2023. Other Comments Or Suggestions: - line 209 $u_{i_h}$ is not defined. - line 344 should be the product of $f$ and $g$? Questions For Authors: See questions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for all your valuable comments. Below we respond to some of them: 1. *The specific tasks considered, t-Comp and k-thOne, are very simple. It would be great to consider some more practical tasks, e.g. the arithmetic tasks and Dynamic Programming in [1]. Or maybe give more examples of functions with different ranks so that the reader can have a better sense of the complexity of different tasks* Thank you for the suggestions. We will study [1] to apply our technique to the tasks considered there. We can also add more simple examples. For instance, [Hahn, 2020] gives examples of functions (Parity, Majority, Dyck) that are not doable with a constant number of hardmax layers (without CoT). We can show that these functions have rank linear in the input length, meaning that they require a linear number of CoT steps for 1-layer hardmax decoders. [1] Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. Towards revealing the mystery behind chain of thought: A theoretical perspective. NeurIPS, 2023. Let us also point out that the tasks considered here also have practical motivation. Arguably, compositional/relational tasks are a very important part of human linguistic skills. Previous papers, like https://arxiv.org/abs/2311.13314, observed that LLMs struggle with prompts that can be viewed as compositions of functions (e.g. “When is Frederic Chopin’s father’s birthday?”). Peng et al. (https://arxiv.org/abs/2402.08164) introduced t-Comp in this context exactly as a model of a compositional task, which could be used to explain why LLMs struggle with compositional tasks. We will elaborate on this in the revised version. *2. The fact that t-Comp and k-thOne need t or k CoT steps seems not very surprising since every step can only compute a single composition/iteration and the multi-head can only add different functions together but not compose functions.* We agree with the reviewer that the lower bounds on t-Comp and k-thOne are intuitive.
However, we think that this is one of the strengths of our paper: we provide a formal rigorous proof. With the proof, we can now say with 100% certainty that CoT cannot do anything better than compute a single composition per iteration, even with a constant number of attention heads. *3. It would be great to discuss the relations with previous works. What is the relation between the rank and circuit complexity?* Great question. Ehrenfeucht and Haussler have shown that functions with constant rank have polynomial decision-tree size, which also implies they have polynomial circuit size. By the result of Liu et al., that implies that problems with constant rank are solvable with polynomially many CoT steps. Our results sharpen this by showing that only a constant number of steps suffices in this case. We will also add a reference to the following recent paper [On (simple) decision tree rank, Dahiya, Mahajan], which relates the rank to other complexity measures. Regarding some minor comments: *line 209 $u_{i_h}$ is not defined.* This is a typo; it has to be x_{i_h}, thanks for noticing! *line 344 should be the product of $f$ and $g$?* Sorry, the phrase ``Namely, by the product of g : A → B and h: A → C'' should be ``Namely, by the product of f : A → B and g: A → C''. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I would like to increase my score to 3. I encourage the authors to include the discussion and more examples in the revised paper.
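On the reviewer's request for more rank examples: for tiny Boolean functions, decision-tree rank can be computed by brute force with the standard recursion — a constant function has rank 0, and querying a variable with branch ranks r0, r1 combines them as max(r0, r1) if they differ and r0 + 1 otherwise, minimized over the queried variable. A small sketch of our own (truth tables indexed by bitmask, variable 0 taken as the most significant bit):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def rank(tt):
    """Minimum decision-tree rank of the Boolean function with truth table tt."""
    if len(set(tt)) == 1:            # constant function: a leaf, rank 0
        return 0
    n = len(tt).bit_length() - 1     # number of variables
    best = None
    for i in range(n):               # try querying each variable at the root
        bit = n - 1 - i
        tt0 = tuple(v for j, v in enumerate(tt) if not (j >> bit) & 1)
        tt1 = tuple(v for j, v in enumerate(tt) if (j >> bit) & 1)
        r0, r1 = rank(tt0), rank(tt1)
        r = r0 + 1 if r0 == r1 else max(r0, r1)
        best = r if best is None else min(best, r)
    return best

assert rank((0, 0, 0, 1)) == 1               # AND_2: conjunctions have rank 1
assert rank((0, 1, 1, 0)) == 2               # XOR_2
assert rank((0, 1, 1, 0, 1, 0, 0, 1)) == 3   # Parity_3: rank grows with n
```

The outputs match claims made in these threads: conjunctions/disjunctions are rank-1 (as in the OR_H ∘ AND_m example), while Parity has rank linear in the input length.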
Summary: The authors study the expressivity of single-layer transformers with hard attention by studying the question: how many chain-of-thought steps are required to compute particular functions that map strings of length n to some finite output set? The Ehrenfeucht-Haussler rank of a Boolean function measures the complexity of Boolean decision trees that compute the function. The authors introduce a generalisation of the Ehrenfeucht-Haussler rank for non-Boolean functions and obtain various results that relate the number of chain-of-thought steps required to compute a function to its rank. They first establish that the rank of a function is identical to the number of chain-of-thought steps required to compute the function. They also give a generalisation of this result to transformers with multiple attention heads by defining a corresponding H-head rank of a function. For particular examples, they establish that the ranks of the n-fold composition function and of the function computing the position of the n-th 1 of an input are n. Finally, they show that the H-head rank can be at most H times smaller than the 1-head rank, and that the H-head ranks of the n-fold composition function and of the function computing the position of the n-th 1 of an input are n. Hence, for these functions, adding more heads does not decrease the number of chain-of-thought steps required. ## update after rebuttal I maintain my previous assessment. Claims And Evidence: The paper is well written and presented. The claims made in the paper are supported by clear intuitive explanations and formal proofs. Methods And Evaluation Criteria: NA. Theoretical Claims: The proof ideas seem plausible. I did not identify any technical issues, but I did not check all the technical details. Experimental Designs Or Analyses: NA. Supplementary Material: I did not read the appendix in detail.
Relation To Broader Scientific Literature: The authors do a good job in setting the scene of the paper and in positioning their results with respect to literature. Essential References Not Discussed: I am not aware of related works that should be cited. Other Strengths And Weaknesses: This is a solid paper. The study of the expressivity of transformers is an important and timely topic, and this paper takes an interesting angle to this by relating to complexity of decisions trees needed to compute corresponding functions. The results are interesting and non-trivial. Other Comments Or Suggestions: Please in the future use two-column line numbering. -l.145, right. Typo: "and out task". -l.175. It is a bit out of place to define [n] here, since the notation has been used already many times before in the paper. -l.277. Typo: "the t-the" Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive review and for noticing some typos!
Self-Consistency Preference Optimization
Accept (poster)
Summary: This paper introduces SCPO, a novel training method for large language models (LLMs) that leverages self-consistency to improve performance on complex reasoning tasks without requiring gold labels. SCPO iteratively trains models to prefer consistent answers over inconsistent ones by generating multiple responses, selecting the most and least consistent ones as preference pairs, and optimizing a weighted loss function based on the model's confidence in these pairs. The method is evaluated on GSM8K, MATH, and ZebraLogic datasets, showing significant improvements over existing unsupervised baselines and closing the gap with supervised training methods. Key findings include that SCPO outperforms models trained with external reward models and self-improvement methods, and it can further enhance results when combined with supervised learning. The approach demonstrates the effectiveness of using self-consistency as a training signal for improving both the accuracy and consistency of LLMs on reasoning tasks. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence. The authors provide extensive experimental results across multiple datasets (GSM8K, MATH, ZebraLogic) demonstrating performance improvements over baselines. Methods And Evaluation Criteria: The proposed methods (SCPO) and evaluation criteria make sense for the problem of improving LLM reasoning without gold labels. SCPO's approach of using self-consistency to generate training signals aligns well with the challenge of lacking labeled data. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs are mostly sound but could benefit from including additional relevant baselines such as [1, 2]. These would provide more comprehensive comparisons and strengthen the validation of SCPO's effectiveness. 1. Chen, Changyu, et al. "Bootstrapping language models with dpo implicit rewards." arXiv preprint arXiv:2406.09760 (2024). 2. 
Kim, Dongyoung, et al. "Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment." The Thirteenth International Conference on Learning Representations. 2025. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Some recent work on self-improving/self-annotating alignment also utilizes LLMs for preference annotation, similar to the approach. These references could support the rationale behind this work. For example: 1. Chen, Changyu, et al. "Bootstrapping language models with dpo implicit rewards." arXiv preprint arXiv:2406.09760 (2024). 2. Kim, Dongyoung, et al. "Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment." The Thirteenth International Conference on Learning Representations. 2025. Other Strengths And Weaknesses: Strengths: - The paper has intuitive motivation and relatively solid experimental validation. Weaknesses: - Limited baselines in experiments; should include more relevant comparison methods. - No ablation studies on the number of responses k, which is crucial for the reweighting term w(x). - The importance of the weighted SCPO loss is demonstrated in Table 5, but the discussion lacks references to specific related works like [1, 2, 3, 4, 5] where selecting high-quality data or weighting techniques are already common in DPO-based methods. [1] $\beta$-dpo: Direct preference optimization with dynamic $\beta$. NeurIPS, 2024. [2] Preference optimization with the pairwise cringe loss. arXiv preprint arXiv:2312.16682. [3] Reward difference optimization for sample reweighting in offline rlhf. EMNLP 2024. [4] Rs-dpo: A hybrid rejection sampling and direct preference optimization method for alignment of large language models. EMNLP 2024. [5] Filtered direct preference optimization. EMNLP 2024. Other Comments Or Suggestions: see `Strengths And Weaknesses` Questions For Authors: see `Strengths And Weaknesses` Code Of Conduct: Affirmed. 
Overall Recommendation: 3
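For readers unfamiliar with the weighting idea discussed in this review: DPO-style methods minimize a logistic loss on the gap between the policy/reference log-probability ratios of the chosen and rejected responses, and a per-instance weight simply scales each term. A schematic sketch using the standard DPO formula; the paper's exact w(x) is not reproduced here, so the weight below is only a placeholder:

```python
import math

def weighted_dpo_loss(logp_chosen, logp_rejected,
                      ref_logp_chosen, ref_logp_rejected,
                      beta=0.1, weight=1.0):
    """Standard DPO loss -log(sigmoid(beta * margin)), scaled per instance."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -weight * math.log(1.0 / (1.0 + math.exp(-beta * margin)))

full = weighted_dpo_loss(-5.0, -9.0, -6.0, -8.0)            # weight 1.0
down = weighted_dpo_loss(-5.0, -9.0, -6.0, -8.0, weight=0.5)
assert full > 0 and abs(down - 0.5 * full) < 1e-12
```

Down-weighting a pair (weight < 1) shrinks its gradient contribution, which is the mechanism a confidence-based w(x) exploits.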
Rebuttal 1: Rebuttal: We thank you for your detailed review and comments. We are also glad to see you found our paper to have “intuitive motivation” as well as “extensive experimental results”. Please find our in-depth response to your comments below. > Additional relevant baselines: Thanks for pointing these papers out to us; we contrast ScPO with these works as follows: - **Chen et al. 2024**: We argue that they are focused on **general instruction following, and their method is not proven to work on reasoning tasks** (as we do in ScPO) like math etc. Also, note that their main technical contribution is to add a length-control factor to DPO’s implicit reward term. While “length bias” has been an issue with general instruction-following tasks ([Dubois et al. 2024](https://arxiv.org/abs/2404.04475)) as models spuriously prefer longer generations, preference evaluation for reasoning domains is more straightforward – preferring correct answers and rejecting the rest – so we expect length-controlled DPO to perform similarly to the standard DPO algorithm used in ScPO. - **Kim et al. 2025**: At its core, this paper uses the **model's confidence on two generations corresponding to the same prompt to construct preference pairs**, with additional preference-label smoothing for the bottom 10% of annotations that are expected to be noisy. This stands in contrast to ScPO, which uses self-consistency of predictions to generate preference annotations, and we compare the quality of the two metrics (confidence vs. self-consistency) for curating preference data below to demonstrate the superiority of ScPO on reasoning tasks: - *Setup:* We incorporate Kim et al.’s strategy for preference data creation by using the model’s confidence. Specifically, we compute the confidence for each generation (from the same sample set used to compute SC) and create preference pairs by choosing the most confident response and rejecting the least confident response. Further, following Kim et al.
(2025), we filter out the last 10% of training instances with the lowest difference in confidence of chosen and rejected responses (indicative of noise in annotation), and compare its Somers' D correlation with accuracy with that of self-consistency (Table 6).

| Dataset/Metric | Confidence (Kim et al. 2025) | Self-Consistency (ours) |
| :---- | :---- | :---- |
| GSM8K | 0.11 | 0.80 |
| ZebraLogic | - | 0.93 |

- *Results:* On ZebraLogic, we find that the accuracy of the most confident answer is in fact 0, rendering Kim et al.’s method completely ineffective, whereas SC on the other hand correlates strongly with correctness. Similarly, on GSM8K we find that the confidence score of a generation shows little correlation with correctness, and is most likely not effective for generating preference training data. **The results indicate that across datasets self-consistency offers a better estimate of the correctness of an answer, and therefore yields higher-quality preferences for training.** These findings are consistent with prior work that uses self-consistency to estimate model confidence for reasoning tasks ([Xiong et al. 2023](https://arxiv.org/abs/2306.13063), [Kabra et al. 2023](https://arxiv.org/pdf/2311.09553)).

> Ablation with number of samples (K)

**We measure the impact of the number of samples used to measure self-consistency on its Somers' D correlation with correctness (as done in Table 6) for K=2,4,8,16.**

| Dataset | K=2 | K=4 | K=8 | K=16 |
| :---- | :---- | :---- | :---- | :---- |
| GSM-8K | 0.39 | 0.65 | 0.80 | 0.89 |
| ZebraLogic | 0.66 | 0.82 | 0.92 | 0.93 |

The results indicate that (i) lower values of K (e.g.
K=2/4) have lower correlation with correctness, which we find is due to fewer instances where any answer receives multiple votes; (ii) while larger values such as K=16 yield slightly higher correlations, we prioritize computational efficiency in the data generation phase (L100-122), and use a sufficiently large value of K=8 in addition to filtering (L118-121) and a weighted loss (L141-149) for ScPO training. > related works like \[1, 2, 3, 4, 5\] where selecting high-quality data or weighting techniques are already common in DPO-based methods Thank you for pointing out these works. We haven’t discussed them in our paper because they are about weighting prompts based on their quality, removing noisy or unhelpful questions. In contrast, our work is about weighting solutions, giving less importance to solutions that are likely to be wrong. However, it could be helpful to discuss those works and their differences, which we will do in the final revision. We hope these additional results and explanations address your questions and will allow you to revisit your score. --- Rebuttal Comment 1.1: Comment: Thank you for your efforts and response. I will increase my score accordingly. Additionally, I believe the final version should include a discussion and comparison of weighting methods in preference learning, as this would further strengthen the manuscript. --- Reply to Comment 1.1.1: Comment: Thank you for your continued engagement and for revisiting your score. We will add a discussion and comparison to the papers you pointed out in the final paper.
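The self-consistency preference-pair construction discussed in this thread can be sketched as follows. This is our own simplification: the paper's actual filtering and weighted-loss term are more involved, and the vote-margin weight below is only a hypothetical stand-in for w(x):

```python
from collections import Counter

def sc_preference_pair(responses):
    """responses: list of (reasoning, final_answer) samples for one prompt.
    Chosen = a response with the most common final answer;
    rejected = one with the least common final answer."""
    votes = Counter(ans for _, ans in responses)
    top = max(votes, key=votes.get)
    bot = min(votes, key=votes.get)
    if top == bot:        # all samples agree: no usable pair, filter the prompt
        return None
    chosen = next(r for r in responses if r[1] == top)
    rejected = next(r for r in responses if r[1] == bot)
    # Hypothetical vote-margin weight: larger gap -> more confident pair.
    weight = (votes[top] - votes[bot]) / len(responses)
    return chosen, rejected, weight

samples = [("cot a", "7"), ("cot b", "7"), ("cot c", "7"), ("cot d", "5")]
chosen, rejected, w = sc_preference_pair(samples)
assert chosen[1] == "7" and rejected[1] == "5" and w == 0.5
```

With K=8 samples, as in the rebuttal above, the same voting logic applies; prompts whose samples all agree yield no training signal and are filtered.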
Summary: This paper considers self-alignment of LLMs, where the data consist of only prompts but not the ground truth. The authors propose to use self-consistency to choose winning and losing samples, where the responses corresponding to the most/least common final answers are considered as the winning/losing samples. Claims And Evidence: The major claim is that self-consistency preference optimization improves self-alignment for reasoning tasks. This claim is quite intuitive and plausible. Their results serve as empirical evidence for this claim. Methods And Evaluation Criteria: The method itself is quite intuitive and simple, and is likely to be effective for unsupervised self-alignment, as self-consistency is well-recognized for filtering samples. The method and baselines are evaluated on popular reasoning datasets, GSM8K and MATH, which is quite standard; hence I have no question about the evaluation. Theoretical Claims: N/A Experimental Designs Or Analyses: The baselines are reward-model fine-tuning and language model self-improving (LMSI) [1]. The choices are to a certain extent reasonable; one concern here is that the self-improvement baseline, LMSI, is not quite up-to-date. Considering self-alignment as a quite active area, there should be more follow-ups suitable for serving as a baseline. [1] Large Language Models Can Self-Improve. https://arxiv.org/abs/2210.11610 Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Another concern is that this work is very similar to [2]. See their Figure 2 for their method with self-consistency feedback. Since [2] was made public last Nov, I consider it as concurrent work to this paper. My positive evaluation is based on the consideration that both works are concurrent (so it did not hurt the score of my evaluation). [2] Preference Optimization for Reasoning with Pseudo Feedback.
https://arxiv.org/abs/2411.16345v1 Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your review and for appreciating the “intuitive and simple” design of our method as well as our evaluation setup. > The choices are to a certain extent reasonable; one concern here is that the self-improvement baseline, LMSI, is not quite up-to-date. Considering self-alignment as a quite active area, there should be more follow-ups suitable for serving as a baseline. We are not aware of a more up-to-date follow-up of the LMSI baseline that is applicable to our unsupervised setting, but please let us know if we missed something you had in mind. Furthermore, we note that two variants of IRPO as well as the 8B RM are fairly up to date since **IRPO was published at NeurIPS 2024, and the ArmoRM (also released in mid 2024\)** was among the best performing 8B reward models as per the RewardBench leaderboard at the time of development. > Another concern is that this work is very similar to \[2\]. See their Figure 2 for their method with self-consistency feedback. Since \[2\] was made public last Nov, I consider this as a concurrent work to this paper. While the “pseudo feedback from self-consistency” idea in \[2\] is concurrent and related to ScPO, we would like to identify the following key differences: - As shown in Fig 1 (left) and Sec 2 (L 81-99), we show that ScPO can be used to augment the initial seed training data with additional questions sampled from the same model. Furthermore, we show that self-consistency plays a crucial role in filtering for well-formed and answerable questions, which \[2\] lacks. The effectiveness of generating additional data can be seen from not only the unsupervised results, but also our results from the semi-supervised setting where ScPO outperforms the supervised IRPO (gold) baseline by up to 2.35% on GSM8K. - Next, we also incorporate self-consistency in our weighted loss that adjusts the weight of a training instance based on the relative confidence, i.e. 
difference in vote share of chosen and rejected responses (L141-149). We demonstrate the importance of this weighted training loss in Table 4 of Sec 5\. Thus, we argue that \[2\] is equivalent to the w(x)=1 baseline (only trained on seed data) which ScPO outperforms. That being said, we thank you for pointing out this concurrent work and we will cite it in future versions of our paper.
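The vote-share-based weighting discussed in this exchange can be sketched in a few lines. The following is a simplified illustration with our own function names; a plain DPO-style logistic term stands in for the exact objective given in L141-149 of the paper:

```python
import math
from collections import Counter

def vote_shares(final_answers):
    # Fraction of the K sampled responses that produced each final answer.
    counts = Counter(final_answers)
    k = len(final_answers)
    return {ans: c / k for ans, c in counts.items()}

def weighted_pref_loss(logratio_chosen, logratio_rejected,
                       share_chosen, share_rejected, beta=0.1):
    # DPO-style logistic loss scaled by the vote-share margin w(x);
    # setting w(x) = 1 recovers the unweighted baseline discussed above.
    w = share_chosen - share_rejected
    margin = beta * (logratio_chosen - logratio_rejected)
    return -w * math.log(1.0 / (1.0 + math.exp(-margin)))
```

Under this scheme, a pair whose chosen answer received 6 of 8 votes and whose rejected answer received 1 of 8 is weighted by w(x) = 5/8, while near-ties contribute almost nothing to the loss.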
Summary: The paper introduces Self-Consistency Preference Optimization (SCPO), a novel approach to self-training large language models (LLMs) for complex reasoning tasks without requiring gold labels/solutions. SCPO extends the concept of self-consistency (typically used only at inference time) to create preference pairs during training by sampling multiple responses for each problem and identifying the most consistent vs. least consistent answers. The key innovation is a weighted preference optimization objective where weights depend on the vote margin between chosen and rejected responses. The paper also presents a semi-supervised variant that incorporates gold labels when available, further improving performance. The authors validate the method on reasoning (GSM8K, MATH) and logical reasoning (ZebraLogic) benchmarks, showing that unsupervised SCPO nearly matches supervised preference optimization with gold labels, while semi-supervised SCPO outperforms fully supervised baselines. Most notably, on ZebraLogic, SCPO helps Llama-3 8B outperform significantly larger models like Llama-3 70B and Claude-3 Haiku. Claims And Evidence: The claims in this paper are supported by empirical evidence. The primary claim that SCPO improves reasoning abilities without access to gold solutions is demonstrated through significant performance gains on GSM8K, MATH, and ZebraLogic. The claim that unsupervised SCPO approaches the performance of supervised training is supported by results showing <1% gap in performance. The authors also provide evidence for the superiority of their weighted loss function through ablation studies that show 1-2.5% improvements over unweighted alternatives. The paper includes analyses showing the correlation between vote share and accuracy, which validates the main assumption that consistency is a good proxy for correctness although this result was relegated to Appendix A. 
Methods And Evaluation Criteria: The use of GSM8K and MATH as benchmarks for mathematical reasoning and ZebraLogic for logical reasoning is a sensible choice. The authors evaluate using both greedy decoding and self-consistency inference, showing improvements in both settings. The comparison against multiple baselines (zero-shot CoT, IRPO_RM, LMSI, IRPO_Gold) is broad and provides enough evidence for the effectiveness of SCPO. It would be good to use benchmarks that go beyond math and logical reasoning to show the versatility of SCPO, but those are notoriously difficult to come by. The focus on GSM8K, MATH, and ZebraLogic is understandable. Theoretical Claims: The paper doesn't present formal theoretical claims or proofs. It focuses on the presentation and empirical validation of their new method. Experimental Designs Or Analyses: I examined the core experiments in the paper and found no issues: 1. Baseline comparisons on reasoning datasets (Tables 1-3): sound evaluation of the core points of the paper. 2. Weighted loss ablation (Table 4): Direct comparison between weighted and unweighted versions. 3. Consistency analysis (Figure 2): Measurement of vote share increases across iterations. 4. Threshold filtering experiment (Table 5): Testing different consistency thresholds. 5. Preference accuracy analysis (Figure 3): Comparison of self-consistency vs. reward models in correctly ordering preferences. The hyperparameters are well-documented (learning rate, epochs, temperature settings). The only limitation is the lack of ablation on the number of samples (k) used for voting, which would help understand computational efficiency tradeoffs. Otherwise, the experiments provide strong evidence for the paper's claims. Supplementary Material: I reviewed all the supplementary material, including Appendices A-D. 
I found Appendix A most interesting as it provides analysis of the correlation between self-consistency and accuracy using Somers' D, which strongly supports the paper's approach. Relation To Broader Scientific Literature: SCPO extends Wang et al.'s (2023) self-consistency concept from inference to training, presenting a clean and practical strategy for self-improvement in LLMs. While methods like LMSI (Huang et al., 2023) also use self-consistency for unsupervised training, SCPO builds on them improving the end performance. The approach relates to self-alignment work like Yuan et al. (2024), but addresses their limitation in reasoning tasks identified by Huang et al. (2024). Essential References Not Discussed: The paper covers the most relevant literature well. I don't see any critical omissions that would significantly impact the understanding of the work's context or contributions. The authors cite key papers on self-consistency, preference optimization, and self-training approaches. 
Other Strengths And Weaknesses: Strengths: - Clear presentation of the work makes understanding the paper very easy - Creative application of self-consistency to training rather than just inference - Impressive results on ZebraLogic, showing a smaller model can outperform much larger ones - Strong empirical validation across multiple datasets and model scales - Thoughtful ablation studies that justify design choices Weaknesses: - Limited discussion of computational overhead from generating multiple responses per query during training - Current evaluation limited to math and logic tasks with definitive answers; unclear generalizability to more open-ended reasoning tasks - While the paper shows that consistency correlates with correctness, deeper analysis of when/why this relationship might break down would strengthen the work Other Comments Or Suggestions: - It would be valuable to analyze how SCPO affects the diversity of reasoning paths across iterations - A more detailed analysis of computational requirements compared to baselines would help readers understand practical tradeoffs Questions For Authors: 1. Have you investigated the trade-off between computation costs and performance gains in SCPO? Specifically, how does the computational overhead of generating multiple responses during training compare to other methods, and how might this scale with larger models? 2. The results show diminishing returns after the second iteration of SCPO. Do you have insights into whether this is due to fundamental limitations of self-consistency as a training signal, or might there be ways to continue improving with more iterations? 3. For real-world applications, have you considered how SCPO might perform on problems where there isn't a single definitive correct answer, or where answer extraction is more challenging than in the studied tasks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your extensive review and comments and are glad to see you appreciate the “creative application of self-consistency”, “impressive results”, and “thoughtful ablations”. Please find our detailed response to your comments below and let us know if you have any follow up questions.

> Impact of number of samples (K)

**We measure the impact of the number of samples used to measure self-consistency on its Somers' D correlation with correctness (as done in Table 6\) for K=2,4,8,16.**

| Dataset | K=2 | K=4 | K=8 | K=16 |
| :---- | :---- | :---- | :---- | :---- |
| GSM-8K | 0.39 | 0.65 | 0.80 | 0.89 |
| ZebraLogic | 0.66 | 0.82 | 0.92 | 0.93 |

The results indicate that (i) lower values of K (e.g. K=2/4) have lower correlation with correctness or accuracy, which we find is because of fewer instances where any answer gets multiple votes; (ii) while larger values of K=16 yield slightly higher correlations, we prioritize computational efficiency in the data generation phase (L100-122), and use a sufficiently large value of K=8 in addition to filtering (L118-121) and a weighted loss (L141-149) for ScPO training.

> Computational Overhead of ScPO:

We reiterate that **all our baselines including IRPO, LMSI, as well as ScPO have the same computational overhead by design** as we use similar size training datasets, the same number of samples (K), and the same training hyperparameters. Specifically, to your point on generating training data, we note that this process is done once at the start of each iteration when using the LLM at inference-time, and *not during training*. Therefore, we can make use of popular strategies for speeding up LLM inference such as the [vLLM library](https://github.com/vllm-project/vllm), increasing batch-size, and utilizing multiple GPUs in parallel, making the data-generation process far less computationally demanding than the training itself. We will include this discussion in the paper. 
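To make the K ablation concrete, here is a minimal sketch (our own names; a simplified stand-in for the data-generation procedure the rebuttal cites at L100-122) of how a preference pair is formed from K sampled responses by majority vote:

```python
from collections import Counter

def build_preference_pair(samples):
    # samples: K (reasoning_chain, final_answer) pairs for one question.
    counts = Counter(ans for _, ans in samples)
    if len(counts) < 2:
        return None  # all K samples agree: no rejected candidate
    ranked = counts.most_common()
    top_ans, low_ans = ranked[0][0], ranked[-1][0]
    chosen = next(s for s in samples if s[1] == top_ans)
    rejected = next(s for s in samples if s[1] == low_ans)
    k = len(samples)
    # Return the pair plus the vote shares used for weighting/filtering.
    return chosen, rejected, counts[top_ans] / k, counts[low_ans] / k
```

With small K (e.g. 2 or 4), most answers receive a single vote, so the chosen and rejected responses are barely distinguished; this matches the lower correlations reported for K=2/4.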
> Impact of ScPO on model diversity: **In Fig 2 of Sec 5, we visualize the vote share of the model's responses across iterations, and find that it increases across iterations for all datasets**, indicating a decrease in the number of unique answers (diversity). We suspect this is a consequence of RLHF training that has been well-documented in prior work ([Kirk et al. 2024](https://arxiv.org/abs/2310.06452), [Murthy et al. 2024](https://arxiv.org/abs/2411.04427)) and is outside the scope of our study. At the same time, in Sec 4 (Tables 1-3 and 8-9), we report test accuracy after 8-way self-consistency and find that models trained with ScPO continue to benefit from diversity in generations via SC at test-time. > Number of Iterations: We refer you to Table 7 in Appendix B where we find that performance on math reasoning largely plateaus after the second iteration, with \<1 point gain from a third iteration of training. However, in the same table we find that sampling questions from a different distribution, i.e., the distributions of questions (*without using the answers*) from the test set and using it to sample additional related problems (L81-98) yields additional improvements in the third iteration on top of the M\_2 models (L270-274). Therefore, we believe that increasing the diversity of the problems, either in the seed set or after each iteration, can be an effective way to delay performance saturation. Also, developing an RLHF method that does not diminish diversity can be a solution. > Open Ended Reasoning Tasks: We reiterate that ScPO is designed to improve the model’s reasoning performance with unsupervised or semi-supervised training. Different from general instruction following tasks such as creative writing with subtle human preferences, on reasoning tasks we desire to prefer correct solutions and disprefer incorrect ones. 
Nevertheless, in such scenarios, we believe ScPO can be combined with techniques to measure consistency in more generative settings such as universal self-consistency (L432-436) or using executable programs to measure correctness ([Lambert et al. 2024](https://arxiv.org/abs/2411.15124)). Note that we use a similar programmatic or symbolic approach to measure consistency **for the ZebraLogic benchmark where the answer is in a complex “json” format that cannot be directly compared via exact string match and requires a symbolic function to measure equivalent answers and multiple votes**. Measuring consistency in more open-ended tasks (e.g. creative writing) is indeed underexplored, and likely to be first worked out in inference-time uses, which we leave for future work. We will expand on this in the final paper. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your responses. I think that those answers reinforce my score and I would like to see the paper accepted. Best
Summary: The paper introduces Self-Consistency Preference Optimization (ScPO), an unsupervised method for training LLMs to improve reasoning tasks. ScPO leverages the concept of self-consistency—traditionally used at inference—to iteratively optimize models by preferring answers with high consensus over inconsistent ones. - A weighted loss function that prioritizes high-confidence preference pairs based on vote margins. - Semi-supervised extensions combining labeled and unlabeled data. - Experiments on GSM8K, MATH, and ZebraLogic showing ScPO outperforms supervised baselines (e.g., +22.74% on GSM8K) and larger models (e.g., Llama-3 8B trained with ScPO surpasses Llama-3 70B on ZebraLogic). Claims And Evidence: Most claims are supported by empirical results. ScPO outperforms IRPO and LMSI baselines (Tables 1–3), but the improvement is not significant. Moreover, the evaluation considers only zero-shot accuracy, and I'm not sure whether the baseline models use their best configurations, or whether they would perform better with inference-time weighted voting or a reward model. Methods And Evaluation Criteria: Makes sense. The benchmark datasets GSM8K and MATH are adequate. Theoretical Claims: No theoretical proofs are provided. The loss function is empirically justified but lacks formal analysis (e.g., convergence guarantees or why vote margins correlate with correctness). Experimental Designs Or Analyses: Strengths: Ablation studies (Tables 4–5) and correlation analysis (Appendix A) strengthen validity. Weaknesses: - The choice of 2 iterations is under-explained (mentions some related work but lacks systematic analysis). - Threshold τ is tuned on dev sets, but sensitivity to this hyperparameter is not thoroughly tested. Supplementary Material: N/A Relation To Broader Scientific Literature: ScPO builds on self-consistency and preference optimization, and may contribute to improving LLMs' self-consistency for math reasoning. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: see above Other Comments Or Suggestions: N/A Questions For Authors: 1. How does ScPO generalize to tasks with ambiguous final answers? 2. Could higher vote shares reflect over-confidence rather than accuracy? How is this risk mitigated? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your detailed review and questions. Please find our response below your comments: > Significance of Results Our test sets include \>= 1K samples for GSM8K and ZebraLogic, and 5K problems for MATH. In our primary unsupervised paradigm with greedy decoding, ScPO **consistently** outperforms the IRPO (RM) and LMSI baselines across **three datasets and two base models** *(Tables 1-3, 8-9)*. The unsupervised baselines exhibit high variance across different datasets and models. For example, while LMSI is the second-best unsupervised method behind ScPO (by 7.2%) with Llama-3 8B in Table 1, it performs significantly worse with Llama-3.1 8B on the same GSM8K dataset in Table 8, trailing ScPO by 11.83% and IRPO RM by 7.6%. Additionally, IRPO RM is the least effective on GSM8K and ZebraLogic with Llama-3 8B. Given that all methods have similar computational and data budgets (see Sec 4), this strongly supports the effectiveness of ScPO for reasoning tasks. > Configuration of Baselines We conduct zero-shot evaluation to assess the improvement in the model's reasoning abilities from zero-shot instruction training. As shown in Tables 1-3, 8-9, using 8-way self-consistency (SC) on the test set, SC improves the performance of all baselines. However, models trained with ScPO still achieve the highest performance (after SC) in both supervised and semi-supervised settings across two datasets and two model families. Therefore, we expect similar results for other inference-time variants, such as weighted SC or Best-of-N sampling. > Vote-margin and Correctness We reiterate that the intuition behind self-consistency is that model errors are generally random, making it unlikely to repeat the same incorrect answer across different samples (L 28-36). 
This concept, effective in various domains and predating its use in LLMs (e.g., RANSAC by Fischler & Bolles, 1981), is supported by popular LLMs showing improved accuracy with majority voting in reasoning tasks (e.g., DeepSeekMath, Gemini 1.5). Empirically, our results (Appendix A, Table 6\) demonstrate a strong correlation between consistency and correctness across datasets, indicating LLMs are not widely over-confident in math and logical reasoning tasks. This aligns with prior findings that LLMs are well-calibrated ([Kadavath et al. 2022](https://arxiv.org/abs/2207.05221)) and that self-consistency reliably estimates model confidence ([Xiong et al. 2023](https://arxiv.org/abs/2306.13063), [Kabra et al. 2023](https://arxiv.org/pdf/2311.09553)). In domains with prevalent overconfidence, self-consistency could be combined with inference-time calibration techniques ([Zhao et al. 2021](https://arxiv.org/abs/2102.09690)). > Number of Iterations Refer to Table 7 in Appendix B, where we observe that math reasoning performance largely plateaus after the second iteration, with less than a 1-point gain in the third iteration. However, the same table shows that sampling questions from a different distribution—using the distributions of questions (*without answers*) from the test set to sample additional related problems (L81-98)—yields further improvements in the third iteration on top of the M2 models (L270-274). > Sensitivity to Threshold τ As noted in L352-378 and Table 5, the initial threshold is based on the training data quality (Margin: Acc(preferred) \- Acc(rejected)) and the number of instances meeting the cutoff, i.e., Vote(preferred) \>= τ. Even at lower thresholds, Table 5 shows we can improve the base model's performance. 
While the training data quality and sample size depend on the LLM's inherent consistency for a specific domain/dataset, we applied the same method to train the Llama-3.1 8B model in Appendix C and achieved significant gains without tuning hyperparameters for the new base model. > Dealing with Ambiguous Answers We reiterate that ScPO is designed to improve model’s reasoning performance with unsupervised or semi-supervised training. Different from general instruction following tasks such as creative writing with subtle human preferences, **on reasoning tasks we desire to prefer correct solutions and disprefer incorrect ones, making the domain relatively less ambiguous** (please let us know if you have any specific dataset in mind). Nevertheless, ScPO can be combined with techniques to measure consistency in generative settings, such as universal self-consistency (L432-436) or executable programs for correctness ([Lambert et al. 2024](https://arxiv.org/abs/2411.15124)). For example, in the ZebraLogic benchmark, we use a programmatic approach to measure consistency, where answers in complex "json" format require a symbolic function for comparison and multiple votes. We hope our response has addressed all of your questions and will allow you to revisit your score. We are happy to answer any followup questions and requests you may have.
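The ZebraLogic-style symbolic matching of JSON answers mentioned above can be illustrated with a small sketch (our own simplification; the paper's actual equivalence function may be more involved): votes are counted after canonicalizing each JSON answer so that formatting differences do not split votes.

```python
import json
from collections import Counter

def canonical(answer_json):
    # Parse and re-serialize with sorted keys so that semantically
    # identical JSON answers (different key order / whitespace)
    # are counted as the same vote.
    return json.dumps(json.loads(answer_json), sort_keys=True,
                      separators=(",", ":"))

def count_votes(json_answers):
    # Tally votes over canonicalized answers.
    return Counter(canonical(a) for a in json_answers)
```

For example, `{"a": 1, "b": 2}` and `{"b": 2, "a": 1}` would collapse to a single key and contribute two votes to the same answer.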
Mechanistic Unlearning: Robust Knowledge Unlearning and Editing via Mechanistic Localization
Accept (spotlight poster)
Summary: The authors investigate how mechanistic interpretability improves the precision and robustness of knowledge editing and unlearning in LLMs. They distinguish between methods that preserve outputs and those that target high-level mechanisms with predictable states. The findings show that localizing edits to lookup-table mechanisms for factual recall enhances robustness across formats, resists relearning attacks, and reduces unintended side effects, outperforming baselines on the sports facts and CounterFact datasets. Additionally, certain localized edits disrupt latent knowledge more effectively, making unlearning more resilient to adversarial attacks. Claims And Evidence: Not clear. Please see Questions For Authors. Methods And Evaluation Criteria: I think this paper's experimental setup has issues. Please refer to Questions for Authors for details. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: I think this paper's experimental setup has issues. Please refer to Questions for Authors for details. Supplementary Material: All of them. Relation To Broader Scientific Literature: Please refer to Questions for Authors for details. Essential References Not Discussed: None Other Strengths And Weaknesses: Please refer to Questions for Authors for details. Other Comments Or Suggestions: Please refer to Questions for Authors for details. Questions For Authors: 1. The authors repeatedly mention machine unlearning but do not adopt any well-known machine unlearning methods in their paper, such as RMU [1], GradDiff [2], or NPO [3]. Based on the presented results, it is unclear whether their proposed approach truly outperforms existing machine unlearning methods. > [1] Li, Nathaniel, et al. "The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning." arXiv preprint arXiv:2403.03218 (2024). > [2] Yao, Yuanshun, Xiaojun Xu, and Yang Liu. "Large Language Model Unlearning." 
Advances in Neural Information Processing Systems 37 (2025): 105425-105475. > [3] Zhang, Ruiqi, et al. "Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning." arXiv preprint arXiv:2404.05868 (2024). 2. Additionally, the authors do not conduct experiments on any established unlearning benchmarks, such as WMDP [1], TOFU [2], or MUSE [3]. > [1] Li, Nathaniel, et al. "The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning." arXiv preprint arXiv:2403.03218 (2024). > [2] Maini, Pratyush, et al. "TOFU: A Task of Fictitious Unlearning for LLMs." arXiv preprint arXiv:2401.06121 (2024). > [3] Shi, Weijia, et al. "MUSE: Machine Unlearning Six-Way Evaluation for Language Models." arXiv preprint arXiv:2407.06460 (2024). 3. The proposed method merely applies existing model editing techniques to the field of machine unlearning without introducing any novel contributions. 4. The authors claim that their approach provides robust unlearning, yet they only evaluate it against a single technique—relearning attacks. What about other methods? For instance, adversarial prompts, logit lens, etc. [1]. > [1] Łucki, Jakub, et al. "An Adversarial Perspective on Machine Unlearning for AI Safety." arXiv preprint arXiv:2409.18025 (2024). ## Update after rebuttal This paper does not compare its method against well-known machine unlearning benchmarks such as TOFU, MUSE, or WMDP, nor does it evaluate against established unlearning methods like NPO or RMU. As a result, it is difficult to assess the effectiveness of the proposed approach in the context of machine unlearning. The authors' rebuttal does not fully address my concerns. But I appreciate their efforts and am willing to raise my score to 2. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for reading our work and providing feedback. We appreciate the opportunity to clarify our contributions and address your concerns. We believe that the assessment and low score were based on certain misunderstandings of our work: we focus on model editing of factual relations; our editing method is in fact novel and clears up misconceptions about mechanistic interpretability for model editing and unlearning, making it an important contribution to the literature; we provide an extensive set of evaluations of our method, and even stronger attacks than suggested by the reviewer. We hope our detailed responses below will convince the reviewer to re-evaluate our work and raise the score. *On the Lack of Unlearning benchmarks* Our primary contribution lies in investigating how mechanistic interpretability can enhance the precision and robustness of knowledge editing of factual associations: i.e., replacement of certain facts with new facts. This goal underscores our choices of baselines and datasets. We use the SportsFacts dataset following work from Nanda et al., who use it to mechanistically understand factual recall, to create a localization technique for robustly modifying factual associations. We then translate these findings to the CounterFact dataset, a benchmark widely used in the editing literature. Most of our work focuses on editing rather than unlearning, and the baselines of RMU, GradDiff, and NPO couldn’t be used for the editing objectives without a significant reformulation of the tasks. Our inclusion of an unlearning result in A.1 of the paper was primarily to show the potential for our method to generalize to unlearning, but not a claim that our method led to state-of-the-art unlearning performance on every benchmark. 
However, the current literature on unlearning is relatively unanimous in that no current method is robust against the partial relearning attack (Deeb 2024), including RMU (Li 2024) and TAR (Tamirisa 2024), the latter of which is defeated by parameter-efficient fine-tuning: we believe our method can yield progress here. In this work, we hoped to lay the groundwork for applying interpretability for unlearning. We are excited about future work that applies more sophisticated interpretability on complex datasets like WMDP: a positive outcome of the results of this paper would be to inspire further research into interpretability to achieve more robust editing and unlearning. We will revise the paper to more accurately reflect that the majority of our current empirical results focus on fact editing. An easy fix! *On Novelty* One main novel contribution is the identification and utilization of Fact Lookup (FLU) mechanisms for robust knowledge editing. We contrast this approach with Output Tracing (OT) methods. We show that editing localized to these FLU mechanisms leads to more robust edits as measured using a number of evaluations (listed below). We demonstrate that the relationship between localization and fact editing/unlearning is more nuanced than suggested in Hase et al. (2023), and that not all localization techniques are equally effective. This is an important contribution to the literature, showing a promising use of mechanistic interpretability. *On evaluations* Regarding the evaluation against different attacks, we would like to highlight that we do evaluate the robustness of our method against: * Rephrasing prompts (Paraphrase Evaluation); * Multiple-choice question extraction (MCQ Evaluation); * Adversarial relearning attacks; * Soft prompt attacks, which are a more challenging form of adversarial prompting since they operate in continuous space. 
* Latent knowledge analysis, which trains a probe to extract the correct answer from the latent representation of the model, instead of logit lens, which is a biased representation of the model’s best guess at a layer. We are confident that these evaluations provide a comprehensive assessment of the robustness of our proposed editing method. Note that the current version of the paper is even stronger as we included additional experiments suggested by other reviewers: proposed and tested an automated version of our localization method which outperforms strong baselines; included additional tests on which components are important for localization; and demonstrated that our method can successfully edit a large number of facts (up to 1000). References: Hase, P. et al. Does localization inform editing? Surprising differences in causality-based localization vs. knowledge editing in language models, 2023. Meng, K. et al. Locating and editing factual associations in GPT, 2023. Nanda, N. Attribution patching: Activation patching at industrial scale, 2023. Li, N. et al. The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning, 2024. Deeb, A. et al. Do Unlearning Methods Remove Information from Language Model Weights?, 2024. Tamirisa, R. et al. Tamper-Resistant Safeguards for Open-Weight LLMs, 2024.
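For readers unfamiliar with probe-based latent-knowledge evaluations, a minimal sketch conveys the idea (this is our own illustration, not the probes used in the paper): fit a simple classifier on hidden-state vectors and check whether its accuracy on the forget set stays near chance after editing, which would suggest the knowledge was removed rather than merely suppressed.

```python
from collections import defaultdict

def centroid_probe_accuracy(vectors, labels):
    # Nearest-class-centroid probe: compute per-class mean vectors,
    # then classify each vector by its closest centroid
    # (squared Euclidean distance).
    by_class = defaultdict(list)
    for v, y in zip(vectors, labels):
        by_class[y].append(v)
    centroids = {
        y: [sum(col) / len(vs) for col in zip(*vs)]
        for y, vs in by_class.items()
    }

    def dist2(a, b):
        return sum((x - z) ** 2 for x, z in zip(a, b))

    correct = sum(
        min(centroids, key=lambda c: dist2(v, centroids[c])) == y
        for v, y in zip(vectors, labels)
    )
    return correct / len(labels)
```

A trained linear probe (as described in the rebuttal) is strictly stronger than this centroid rule, but the evaluation logic is the same: compare probe accuracy on edited facts against random chance.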
Summary: The authors investigate the effectiveness of adopting techniques from mechanistic interpretability to improve editing and unlearning in large language models. In particular, the work focuses on analyzing the benefits of unlearning and editing brought by localization techniques based on factual lookup (FLU) instead of the typical strategies using causal tracing methods. The authors focus their experiments on a sports fact dataset and a counterfactual dataset, showing that, in these contexts, their methodology leads to robust unlearning/editing while mitigating the risks of relearning attacks. Claims And Evidence: The idea of studying editing and unlearning from a mechanistic interpretability perspective is interesting and timely, as well as the approach of focusing on the role of the factual recall mechanism in editing and unlearning of LLMs knowledge. The authors consider diverse models to show that their findings are robust to medium-level LLM scale and to different pre-training strategies (e.g., dataset, hyperparameters, etc.). Some points could make the analysis more convincing (see Section "Other Strengths And Weaknesses"). Methods And Evaluation Criteria: The evaluation of the unlearning and editing performances on the sports fact dataset and CounterFact dataset is sound and clearly explained. Some extensions of the experimental setup would strengthen the evidence for the claims (see Section "Other Strengths And Weaknesses"). Theoretical Claims: NA Experimental Designs Or Analyses: The experimental design is well constructed and focused on supporting the main claims. Some extensions might strengthen the work (see Section "Other Strengths And Weaknesses") Supplementary Material: I read the appendix for the parts that seemed crucial for understanding the main body, but I did not examine all the results in the Appendix accurately. 
Relation To Broader Scientific Literature: The authors discuss at a good level of detail the relevant literature for the current submission. Essential References Not Discussed: NA Other Strengths And Weaknesses: 1) It would be beneficial to justify better the choice of focusing only on interventions on MLP layers. Even if, as discussed by the author, they play a crucial role in the factual recall process and thus are a natural candidate for editing and unlearning, showing that interventions that involve the attention mechanism do not allow to achieve better performance would give stronger evidence for the author's choices. 2) It would be beneficial to at least outline a clear semi-automatic procedure for the selection of model components, even if the current work is more focused on giving proof of concept. In particular, is it possible to deduce a strategy from the "manual analysis for both datasets outlined in Appendix A.2.1"? 3) It would be beneficial to extend the experiments to at least one additional more challenging scenario. For instance a subset of subjects in MMLU could provide an example. 4) In the reviewer's opinion, even if averaging over models allows for simplifying the presentation, it would be important to show error bars to show that the trends are shared among models or to find other strategies to support this (e.g., separate results in the appendix). Other Comments Or Suggestions: I have no further comments or suggestions. Questions For Authors: I have no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable proposals. In response, we propose a way to automate our method, which allowed us to test it at scale in terms of the number of facts to be edited. We also added experiments demonstrating that adding attention heads does not improve performance. Addressing points 1, 2: We originally ignored the attention heads as candidates for localization as a result of work done by Nanda et al., who identified attention heads as playing a "fact extraction" role in the recall mechanism. We don't claim that we found the precisely optimal localization for editing, but rather that this somewhat coarse localization was sufficient to yield significant robustness improvements: future work with more sophisticated interpretability techniques could further strengthen these results. However, we agree that empirical evidence to prove this would be valuable. To address this, along with your second point asking for a more automated strategy for localization and your third point asking for a more challenging editing scenario, we run an experiment scaling up the number of CounterFact facts edited from 64 to 1000, and try a new localization. We stick to CounterFact as the dataset contains factual questions in varying formats. Our difficulty increase thus comes from scaling the number of facts edited: our technique now has to maintain robustness for an order of magnitude more facts. We do this experiment for the Gemma-7B model, and plan to replicate this setup across Gemma-2-9B and Llama-3-8B by the camera-ready deadline. This scaling of facts also means we have to use a more automated technique for localization. We localize per fact, employing the same technique as described in A.2.2. We pick the components by utilizing a heuristic, selecting all important MLPs that affect the final logit difference by > 2 stds. 
This differs from our original manual localization, where we analyzed a group of facts, took the average contribution for each MLP, and assigned a group of MLPs to be the localization. Based on your suggestion, we also test a variant of our technique that includes relevant attention heads. We compare this against a strong baseline of picking all the MLPs. We measure the robustness of these techniques to prompt changes (using an MCQ prompt format as in Section 3.1) and to latent attacks by measuring probe accuracies, as in Section 3.3. We see that our fact lookup localization technique (localizing MLP layers only) maintains its MCQ forget error as we scale the facts edited, outperforming the baselines (fig: https://imgur.com/a/60azzEo). Our localization also outperforms other methods on its MCQ edit accuracy (fig: https://imgur.com/a/L7TnBwu), being the only localization to generalize to an alternative prompt setting. Finally, we see that latent knowledge attacks continue to fail when using our localization, as the probe accuracies are no better than random chance (fig: https://imgur.com/a/Fn6xWp1). All figures are anonymized and do not contain author information. Addressing point 3: In this work, we hoped to lay the groundwork for applying interpretability to unlearning. We chose to work with the CounterFact and Sports Facts datasets because the mechanisms of factual recall in these datasets have been well studied in the literature, and because CounterFact is a benchmark widely used in the editing literature, including in the seminal paper by Meng et al. We are excited about future work that applies more sophisticated interpretability analyses on complex datasets like MMLU: a positive outcome of the results of this paper would be to inspire further research into interpretability to achieve more robust editing and unlearning. Addressing point 4: We will add error bars as well as results for each model across the various evaluations in the camera-ready version. 
For latent knowledge we do present results for each model in Appendix A.7.4 since each model has a different number of layers. For the latent knowledge analysis, Fact Lookup localization is most robust in Gemma-7b and Llama-3-8b, while some other methods are competitive with Fact Lookup in Gemma-2-9b. Thank you again for your feedback. We hope that these strong additional empirical results, automation of our method, and our clarifications addressed all of your concerns. References: [1] Nanda, N., Rajamanoharan, S., Kramár, J., and Shah, R. Fact finding: Attempting to reverse engineer factual recall on the neuron level, Dec 2023. URL https://www.alignmentforum.org/posts/iGuwZTHWb6DFY3sKB/fact-finding-attempting-to-reverse-engineer-factual-recall.
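As a concrete illustration of the probe-based latent-knowledge check discussed in this thread (probe accuracy near chance indicating that no residual signal survives editing), here is a minimal numpy sketch on synthetic activations. The data, dimensions, and the least-squares linear probe are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_classes = 1000, 32, 3

# Hypothetical post-edit activations: if the fact was truly removed,
# they carry no signal about the original label (e.g., the old sport).
acts = rng.normal(size=(n, d))
labels = rng.integers(0, n_classes, size=n)

# One-hot targets; fit a least-squares linear probe on a train split.
split = n // 2
Y = np.eye(n_classes)[labels]
W, *_ = np.linalg.lstsq(acts[:split], Y[:split], rcond=None)

# Held-out probe accuracy; robust editing should leave it near chance.
preds = (acts[split:] @ W).argmax(axis=1)
acc = (preds == labels[split:]).mean()
chance = 1.0 / n_classes
print(f"probe accuracy: {acc:.2f} (chance {chance:.2f})")
```

Since the synthetic activations are independent of the labels, the held-out accuracy lands near 1/3, which is the behavior the rebuttal reports for its real probes after Fact Lookup editing.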
Summary: The paper studies mechanistic localizations for knowledge unlearning and editing. There are two main categories of mechanistic localizations in the literature: Output Tracing and Fact Lookup. Through a designed experiment, the paper finds that Fact Lookup localizations make knowledge unlearning/editing more robust with respect to prompt rephrasing, multiple-choice question extraction, and adversarial relearning. Claims And Evidence: The main conclusion is that localizing edits/unlearning to components associated with the lookup-table mechanism is more robust, and this conclusion is well supported by the empirical results. Methods And Evaluation Criteria: Overall, the evaluation set-up makes sense. However, the description of one of the methods remains unclear to me. Lines 171-196 describe the Fact Lookup localization for CounterFact, but the steps of the method are not clear after reading this paragraph. It might be necessary to introduce the evaluated methods more formally. Theoretical Claims: N/A Experimental Designs Or Analyses: The metrics and baselines are sufficient. The experiment is well aligned with the main question this paper aims to check. However, I have some concerns about the dataset set-up. The dataset for the evaluation is quite small -- the two datasets have only 16 or 64 facts for editing, potentially making the empirical results sensitive to this specific test set. The behavior when editing larger batches of facts is also unknown. Supplementary Material: I have not read the supplementary. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well written. I mostly enjoyed reading the paper and the presentation of the results. Other Comments Or Suggestions: Figure 2 and Figure 4 are not mentioned/described in the main text. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments; we will make sure to improve the description of the FLU localization technique for CounterFact in Sec 2.2, and we report new positive experimental results editing significantly more facts. The methodology is briefly summarized here: An important prerequisite is "path patching", described in Section 3.1 of Wang et al. This allows us to measure the importance of the direct edge between two components in a model. We first measure the direct effect of each attention head on the final output, using the logit difference between the original and edit answer as our measure. Nanda et al. show that these attention heads "extract" relevant associations from the residual stream. We call these the fact extraction heads. We then measure the effect of each MLP on the final logit difference as mediated only through these fact extraction heads. That is, we iterate through the MLPs and path patch the MLP -> {all fact extraction head} edges, and measure the logit difference. MLPs that cause a large change in this measure do so by introducing the factual association into the residual stream via a lookup mechanism, enabling the fact extraction heads to parse the association. We call these our fact lookup MLPs, which are the localization. Below we report new experimental results in response to your concern about the number of facts we edit. We acknowledge your point about the relatively small size of the dataset for editing. Our reasoning for maintaining a smaller set of facts was primarily to facilitate the creation of precise manual mechanistic localizations. However, we recognize the importance of understanding how our findings might generalize to a larger set of edited facts. To address this, we slightly modify our technique to localize per fact rather than averaging over a set of facts. 
Then, similar to our original technique in A.2.2, we pick the relevant components by identifying the MLPs that have a >2 std impact on the logit difference for the fact. Using this, we can precisely localize each fact at scale. This allowed us to run an additional evaluation on the CounterFact dataset scaling up the number of facts to be edited all the way to 1000. We do this evaluation only on our Gemma-7b model due to time constraints, but we can expand this to Gemma-2-9B and Llama-3-8B as well by the camera-ready deadline. We compare our localization technique to using all the MLPs as a localization (a strong baseline) and a localization that uses our technique but also includes attention heads (as suggested by reviewer MshQ). We measure the robustness of these techniques to prompt changes (using an MCQ prompt format as in Section 3.1) and to latent attacks by measuring probe accuracies as in Section 3.3. We see that our fact lookup localization technique maintains its MCQ forget error as we scale the facts edited, outperforming the baselines (fig: https://imgur.com/a/60azzEo). Our localization also outperforms the baselines on MCQ edit accuracy (fig: https://imgur.com/a/L7TnBwu), being the only localization to generalize to an alternative prompt setting. Finally, we see that latent knowledge attacks continue to fail when using our localization, as the probe accuracies are no better than random chance (fig: https://imgur.com/a/Fn6xWp1). All figures are anonymized and do not contain author information. We hope our additional experiments have convinced you of the robustness of our method at scale. References: Nanda, N., Rajamanoharan, S., Kramár, J., and Shah, R. Fact finding: Attempting to reverse engineer factual recall on the neuron level, Dec 2023. URL https://www.alignmentforum.org/posts/iGuwZTHWb6DFY3sKB/fact-finding-attempting-to-reverse-engineer-factual-recall. Wang, Kevin, et al. 
"Interpretability in the wild: a circuit for indirect object identification in gpt-2 small." arXiv preprint arXiv:2211.00593 (2022).
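One plausible reading of the per-fact ">2 std" selection heuristic mentioned in the rebuttals above can be sketched as follows. The effect scores, layer count, and exact thresholding rule here are illustrative assumptions; the authors' precise patching procedure is described in their Appendix A.2.2.

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers = 28  # e.g., Gemma-7B has 28 transformer blocks

# Hypothetical per-MLP effects on the logit difference for one fact,
# as would be measured by path patching MLP -> fact-extraction-head edges.
effects = rng.normal(0.0, 0.05, size=n_layers)
effects[[3, 5, 12]] += 1.0  # a few MLPs carry the factual lookup

# Heuristic: keep MLPs whose effect exceeds the mean by > 2 standard
# deviations; these form the per-fact Fact Lookup localization.
threshold = effects.mean() + 2 * effects.std()
localized = np.flatnonzero(effects > threshold)
print(localized)
```

With these synthetic scores, only the three MLPs carrying the injected lookup signal clear the threshold, mirroring the sparse localizations the rebuttals describe.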
Summary: This paper focuses on machine unlearning, i.e., when the model needs to be prevented from outputting certain information, such as the profession of a person or any mention of a given sport. Specifically, they show that many known methods might be preventing access to specific facts, but not overwriting the facts themselves. They show that their method, on the other hand, does this better. Claims And Evidence: Claim 1: Output tracing does not unlearn a fact; it unlearns access to the fact, which can be regained by different forms of relearning (both evidence from the literature and empirical examples are provided). Claim 2: Fact lookup localisation is more likely to delete the fact itself. It is also more parameter efficient (this is verified through many ablations and multi-dimensional evaluation; they also use probes to verify data presence). Methods And Evaluation Criteria: They test their claims on 3 state-of-the-art models of around 8B parameters, from two different companies. They use different datasets, evaluate different state-of-the-art methods, and perform extensive ablations. Theoretical Claims: This paper is more on the empirical side. Experiments are nonetheless well defined and situated within the relevant literature. Experimental Designs Or Analyses: Experimental design is sound. As previously stated, the relevance of the method is checked through different datasets, different levels of unlearning, and different parameters, giving a good overview that supports the generality of the claims. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: Relevant methods are discussed and explained, and the main drawbacks of said methods are carefully explained. Essential References Not Discussed: To my knowledge, no key papers are missing. Other Strengths And Weaknesses: Strong and well-explained experimental process, relying on many tools which are effectively used and described. 
The benchmark is quite extensive, covering multiple models, datasets, and tasks. The variations and relevance of the tasks are discussed. I particularly appreciate the attention to the different forgetting mechanisms and the different possible components to consider. Other Comments Or Suggestions: No additional comments. Questions For Authors: In 3.2 you mention that models should be able to generalize from relearning half the basketball athletes to all basketball athletes. Could you further comment on how this informs the unlearning procedure? Are all athletes still known to perform in the same sport, and is basketball simply missing? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for a thorough read of our submission. Below we answer your question regarding the generalization of relearning in the Sports-Athlete-Editing and Full-Sports-Editing tasks, and explain our rationale behind the task choice for relearning attacks. Our relearning experiments in Section 3.2 are performed on the Sports-Athlete-Editing task. In this task, for a randomly selected set of athletes, we edit their associated sport by assigning them a new sport chosen uniformly at random from the existing set of sports. In the Sports-Athlete-Editing task, we do not expect relearning to "generalize": there is no inherent correlation between an athlete, their original sport, and the newly assigned sport. Therefore, relearning a subset of edited facts should not provide the model with a basis to correctly infer the previously unlearned facts for other athletes, even if they were originally associated with the same sport, like basketball (unless this old information is still stored in the model after editing). On the other hand, the relearning attack is uninformative in some other setups, where the forget set accuracy could go up by the model simply learning a fixed mapping (e.g., always respond with "Golf" instead of "Basketball"). For example, this concern would apply to relearning in the Full-Sports-Editing task: here we reassign one sport to another for all athletes playing that sport. For example, we might change all associations of "basketball" to "golf." If we then retrain the model on half of these edited facts, it's highly probable that the model would simply learn the global reassignment (e.g., all "golf" becomes "basketball") rather than truly relearning the specific original associations. This makes it difficult to distinguish between a newly learned association and the relearning of previously stored knowledge. 
It is important to note that the relearning paper we base our evaluations on (Deeb 2024) also makes efforts to ensure that the retraining set carries information independent of the rest of the forget set: when the information used for retraining is independent of the rest of the forget set, we don't expect any information recovery given perfect unlearning/editing. However, when the retrained information is not independent of the rest of the forgotten facts, it is unclear what baseline amount of recovery is expected. We hope this clarifies our approach and the rationale behind our experimental design. We would also like to highlight that we ran additional experiments requested by other reviewers to further strengthen the paper: we showed that the localization part of our method can be automated, and that our method can be scaled to successfully edit a large number of facts.
Unsupervised Transfer Learning via Adversarial Contrastive Training
Reject
Summary: This paper presents a novel unbiased self-supervised approach, Adversarial Contrastive Training (ACT), aimed at mitigating biased sample risk. The proposed ACT method is both simple and effective, utilizing matrix G to enhance self-supervised transformation learning. Notably, the k-NN evaluation demonstrates state-of-the-art performance on mini-sized images using ResNet18. Furthermore, the authors provide theoretical insights suggesting that the ACT loss function facilitates clustering in the representation space, thereby improving the learned feature distribution. Claims And Evidence: There appears to be a discrepancy between the claims and the presented results. In the analysis of the loss function, certain issues highlighted in the introduction seem to be addressed through assumptions and conditions rather than direct evidence. It is unclear which specific aspect the authors aim to guarantee. Strengthening the connection between the key claims and the supporting evidence would enhance the clarity of the proposal. Furthermore, the theoretical link between the proposed guarantee and the core component, matrix G, remains ambiguous, making it challenging to fully assess its impact. Methods And Evaluation Criteria: The use of matrix G is a simple yet effective approach. However, the lack of experiments exploring its contributions and limitations makes it difficult to fully assess its impact. It is initially unclear why the study focuses solely on ReLU networks, as similar boundary conditions might also apply to GeLU, particularly in transformer-based architectures. Additionally, while matrix G appears effective for small-sized inputs, it incurs higher computational costs for larger inputs, which is not thoroughly analyzed. Finally, ACT includes a hyperparameter, $\gamma$, but no experiments evaluate its impact across different values. Exploring this aspect would strengthen the empirical support for the method. 
Theoretical Claims: The equations are challenging to interpret due to the lack of specificity in the notations. Providing clearer definitions and more consistent notation would enhance readability and make the theoretical analysis more accessible. The conclusions drawn from the equations do not appear to contradict previously established findings. However, due to the complexity of the equations and the lack of clarity in notation, I cannot confidently assess their correctness. A more detailed explanation would help in evaluating the theoretical claims more rigorously. Experimental Designs Or Analyses: The connection between the problem, proposed solution, and experimental design is not entirely clear, which makes it challenging to fully assess the validity of the approach. In particular, the link between the theoretical guarantees and the actual method remains ambiguous. Furthermore, the experiments appear insufficient to convincingly demonstrate the effectiveness of the proposed approach, and the experimental design does not fully support the claims made. To strengthen the paper, the authors could either clarify how the theoretical guarantees directly relate to the proposed method or provide additional experiments to further substantiate its validity. Supplementary Material: The anonymous GitHub repository provided in the paper contains the same content as described in the main text. However, the appendix lacks specificity in notation, making it even more difficult to follow than the main paper. Additionally, the equations in the appendix appear more complex, further complicating their interpretation. While these issues make understanding the supplementary material challenging, they do not critically impact the overall evaluation of the main paper. Relation To Broader Scientific Literature: The ACT method shares conceptual similarities with Barlow Twins (BT) in its approach. 
If the authors are confident in their method, they could consider aligning their experiments more closely with those in the BT paper to facilitate a more direct and meaningful comparison. Additionally, while the authors adopt notations from cited works, which helps maintain consistency with prior research, some notations remain unclear, making certain equations difficult to interpret. Providing explicit definitions and clarifications would significantly improve readability and accessibility for a broader audience. Essential References Not Discussed: The authors emphasize theoretical confidence in population risk, yet ACT exhibits lower performance than BYOL in linear evaluation. Providing a detailed analysis or justification for this performance gap would strengthen the discussion and help contextualize the effectiveness of the proposed method. Additionally, the update mechanism of matrix G bears similarities to an exponential moving average, which is known to enhance performance. However, its specific contribution to the success is not explicitly analyzed. Furthermore, the impact of mini-batch handling does not appear to be critical to the effectiveness of the model, as BYOL has been shown to function without batch statistics [Richemond et al., 2020]. Addressing this aspect could provide a clearer understanding of how ACT compares to existing approaches.
 - BYOL works even without batch statistics [Richemond et al., 2020] Other Strengths And Weaknesses: ### Strength: ACT demonstrates notable performance improvements through a simple yet effective approach leveraging matrix G.
 ### Weakness: The clarity of the presentation of the paper could be improved. Certain notations and theoretical explanations lack specificity, making it challenging to fully grasp key concepts and their implications. Other Comments Or Suggestions: Even if a method is not entirely novel, a theoretical perspective alone can still provide a meaningful contribution. However, for such a contribution to be effective, it must be clearly communicated to the research community. While the authors appear to have adopted notation from previous works, this has made it difficult to discern which aspects they aim to emphasize mathematically. Due to this lack of clarity, it is challenging to fully appreciate the significance of their contribution. Improving the presentation and explanation of key theoretical insights would enhance the paper’s impact. Questions For Authors: What is the precise relationship between the theoretical guarantees and matrix G in the proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your thorough review of our manuscript and for your constructive suggestions. Our point-by-point responses to your comments are given below. > **C1** Improve the presentation. Thank you for your constructive suggestion. Please see the response to **C4** of Reviewer f4DL. > **C2** The role of the matrix $G$ in the proposed method and its theoretical analysis. Thank you for your valuable feedback. The matrix $G$ plays a pivotal role in both the theoretical analysis and the algorithmic design. Specifically: * **Theoretical Perspective**: The matrix $G$ ensures that the sample-level regularization term in line 180 remains **unbiased**, whereas the sample-level regularization term in eq. (2) is biased. As discussed in Section 3.2, the biased sample-level loss introduces challenges for the error analysis. In contrast, the unbiased sample-level loss as eq. (8) simplifies the analysis, facilitating a more tractable approach to proving the theoretical guarantees. * **Practical Perspective**: Experimental results in Table 1 highlight that ACT significantly improves downstream classification accuracy compared to two biased self-supervised learning methods: Barlow Twins and Hao Chen et al. (2022). This demonstrates the **practical benefits** of incorporating matrix $G$, especially in terms of enhancing the performance of the model. > **C3** The computational costs of updating $G$. As outlined in step 8 of Algorithm 1, the update of matrix $G$ involves only two encoder evaluations and a summation over the samples in a mini-batch. The computational cost of this step is relatively **minimal** compared to the cost of training the encoder, which not only requires encoder evaluations but also involves backpropagation and gradient descent updates. > **C4** The activation function and architectures of the networks. To maintain a fair comparison, we use the ReLU activation function and ResNet18 as the backbone, as done in BT and Hao Chen (2022). 
However, we appreciate your suggestion and will consider exploring the use of GeLU and transformer-based architectures in future work involving more complex tasks. > **C5** Hyperparameter $\gamma$ of ACT. We would like to clarify that **ACT as eq. (4) does not include a hyperparameter $\gamma$**. We presume you mean $\lambda$. If so, we conduct ablation studies across a range of values for the regularization parameter $\lambda$. The experimental results, as shown in https://anonymous.4open.science/r/RE1-FFCE, demonstrate that ACT is **robust** to the selection of $\lambda$. > **C6** The experiments appear insufficient. We appreciate your feedback. * **Comparison with BT**: Experimental results in Table 1 demonstrate that ACT significantly outperforms Barlow Twins in downstream classification accuracy. Additionally, the results in Table 2 show that ACT achieves SOTA performance when compared to mainstream baselines. * **Transfer Learning**: We include experiments for transfer learning from CIFAR100 to CIFAR10, which are presented at https://anonymous.4open.science/r/RE4-8B12. * **Ablation Studies**: We conduct ablation studies across a variety of augmentation methods and a range of values of the regularization parameter $\lambda$. The results are provided at https://anonymous.4open.science/r/RE1-FFCE and https://anonymous.4open.science/r/RE5-98C8. > **C7** ACT exhibits lower performance than BYOL in linear evaluation. Thank you for your insightful comment. (1) While ACT demonstrates lower performance than BYOL on the Tiny ImageNet dataset in linear evaluation, it still outperforms other methods. Additionally, as shown in Table 2, **ACT consistently outperforms BYOL on CIFAR-10 and CIFAR-100**, which highlights its effectiveness. (2) The observed performance gap on Tiny ImageNet could be influenced by the unique **characteristics of the dataset**. 
Our theoretical analysis indicates that ACT's performance depends on properties of the dataset, and this variability may explain the differing performance on Tiny ImageNet. > **C8** ACT bears similarities to an exponential moving average. Thank you for your comment. However, to the best of our knowledge, **the update of $G$ has limited relevance to the EMA** used in BYOL. Specifically, the EMA in BYOL involves online and target neural networks. In contrast, ACT employs a single network. > **C9** BYOL works even without batch statistics. Thank you for your comment. We would like to clarify that the paper you referenced, *BYOL works even without batch statistics*, has limited relevance to ACT. In BYOL, batch normalization (BN) plays a critical role in preventing collapse. The paper you mentioned shows that using group normalization can achieve competitive performance compared to BYOL with BN. However, rather than relying on BN, ACT prevents collapse through an explicit regularization term. This distinction means that **the phenomenon studied in the paper you referenced is not as critical for ACT as it is for BYOL**. --- Rebuttal Comment 1.1: Comment: Thank you. I’ve raised my score. --- Reply to Comment 1.1.1: Comment: Thank you for your thorough review of our manuscript and for raising your score following our rebuttal. We greatly appreciate the constructive suggestions you provided, which have significantly improved the quality and clarity of our work. Your insights have been invaluable in helping us strengthen our paper, and we have carefully incorporated your feedback into the revised version.
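A quick numeric illustration of the bias discussed in this thread: because the Frobenius norm does not commute with expectation (a Jensen gap), averaging per-mini-batch norms of a noisy matrix estimate overestimates the norm of the averaged estimate. This toy sketch uses synthetic Gaussian batches and a generic covariance-matching penalty; it is not ACT's actual regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, batch_size, n_batches = 4, 8, 200

# Per-batch covariance estimates of standard Gaussian data (true cov = I).
batches = [rng.normal(size=(batch_size, d)) for _ in range(n_batches)]
covs = [b.T @ b / batch_size for b in batches]

# Sample-level quantity: average of per-batch Frobenius-norm penalties.
mean_of_norms = np.mean([np.linalg.norm(C - np.eye(d), "fro") for C in covs])

# Population-level quantity: penalty of the averaged covariance estimate.
norm_of_mean = np.linalg.norm(np.mean(covs, axis=0) - np.eye(d), "fro")

# The per-batch (sample-level) estimator is biased upward relative to
# the population-level target, which shrinks toward zero as data grows.
print(mean_of_norms, ">", norm_of_mean)
```

This is the gap that motivates reformulating the regularizer so the risk is linear in the matrix quantity (via $G$), letting the expectation pass inside before the norm is applied.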
Summary: This work studies the theoretical aspects of contrastive learning. Specifically, the authors focus on the regularization framework for model collapse in contrastive learning, where the main issue in current works is that the population-level bias and sample-level bias cannot be simultaneously mitigated. To deal with this, the authors reformulate the risk function, which admits an unbiasedness property w.r.t. the expectation; based on this reformulation, rigorous theoretical analysis is provided, e.g., upper bounds on the error and consistency/convergence. --- **Rebuttal Update**: I would like to thank the authors for providing detailed responses, which generally addressed my concerns. I think this work provides an effective estimator with intuitive reasoning and theoretical guarantees for a popular self-supervised transfer learning objective. Thus, I keep my original positive recommendation. Claims And Evidence: The claims are generally consistent with the theoretical results, while the empirical evidence from numerical experiments, i.e., performance improvement over SOTA baselines, needs further improvement and justification (details are provided in *Experimental Designs Or Analyses*). Methods And Evaluation Criteria: The methodology and evaluation criteria are appropriate. Theoretical Claims: The theoretical results and proofs look correct. Experimental Designs Or Analyses: The experiment and analyses are generally valid, while there are some minor concerns: C1. In lines 255-257, the representation dimensions of baseline methods are 512, while the proposed method employs different dimensions on different datasets. I know that the dimensions of the proposed method are much smaller, while such differences in network architectures could have impacts on the fairness of comparison. Some justifications are highly appreciated. C2. Following C1, are there results of implementing the proposed ACT with a dimension of 512? 
These additional results could provide a more comprehensive understanding of the effectiveness of ACT. C3. The different dimension settings naturally raise a question, i.e., how to choose a proper dimension for ACT? Will the ACT model be sensitive to the choices? And, how to choose the dimension empirically? Supplementary Material: The proof for the main theoretical results is roughly checked. Relation To Broader Scientific Literature: The key contribution, i.e., unbiased estimation of the empirical regularizer and model risk, seems to be novel. Essential References Not Discussed: The references are appropriate. Other Strengths And Weaknesses: **Pros:** 1. The key idea, i.e., correcting the sample-level bias and population-level bias, is novel. 2. The proposed method with the rewritten risk function is reasonable. 3. The theoretical results rigorously support the proposed method. **Cons:** 1. The organization should be improved, where the technical parts are quite dense and the notations are complex. Other Comments Or Suggestions: 1. The $\kappa (\theta)$ in line 138 is used before it is defined. 2. Lines 189-191, repeated sentence ‘the collection of used data augmentations’. 3. Line 199, repeated notation $A_{i,1}$. 4. What is ${\hat{f}}_{n_s}$ (line 255)? Specifically, the meanings of the subscripts of $f$ should be clarified. Questions For Authors: Q1. The notation $\mathbb{P}_s(k)(\cdot)$ is confusing. Specifically, what is the domain of this function? (probably $\mathcal{X}$?) Q2. How to obtain the last term of the inequality in line 305? i.e., how to replace the $\hat{f}$ with $\bar{f}$ in the empirical loss $\hat{\mathcal{L}}$? Q3. An interesting point claimed in this work is that the bias is induced by non-commutativity between the sum operator (expectation) and the Frobenius norm. Therefore, the modified risk in Eq. (3) ensures that the risk is linear w.r.t. 
the covariance difference term (also $G$), which raises two questions: (3.1) Assume that the true risk (on all possible data) can be split into multiple batch-wise estimated risks, then true $G^*$ should be the same on each batch due to the linearity of $G$ w.r.t. $\mathcal{R}(f,G)$. Thus, the on-the-fly estimation provided in step 4/8 of Alg. 1 seems to be inconsistent with this conclusion. If this conclusion is incorrect, some explanations are highly appreciated. (3.2) Another intuitive idea for learning $G$ is parameterizing it, e.g., a single-layer network (single weight matrix). The essential intuition for such a design is the linearity, which seems to suggest that $G^*$ (and also the approximation $G$) could or even should be unchanged in each batch. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thorough review of our manuscript and for your constructive suggestions. Our point-by-point responses to your comments are given below. Thank you for pointing out these typos. We have corrected them in the revised manuscript. > **C1** The dimension of ACT. * We appreciate the suggestion and agree that evaluating ACT with a 512-dimensional representation would ensure fairness in comparison. We implemented the proposed ACT method using **the same 512-dimensional representation as the baseline methods**, which outperforms other methods. Further, we conducted **ablation studies** across a range of dimensions of ACT. Our experiments suggest that ACT is **robust** to dimension choices within a reasonable range. The experimental results are shown at https://anonymous.4open.science/r/RE3-1437. * The appropriate dimension for representation learning may vary depending on the dataset, task complexity, and computational and memory constraints. We recommend performing cross-validation (CV) over a range of dimensions. Additionally, dimension selection may be influenced by resource constraints, as lower-dimensional representations can offer faster inference and reduced memory usage. > **C2** Improve the presentation of the technical parts. Thank you for your constructive suggestion. Please see the response to **C4** of Reviewer f4DL. > **C3** Notations $\mathbb{P}\_{s}(\cdot)$ and $\hat{f}\_{n_{s}}$. * $\mathbb{P}\_{s}(\cdot)$ denotes the probability distribution of the source data, while $\mathbb{P}\_{s}(k)(\cdot)$ denotes **the probability distribution of the source data that is categorized into the $k$-th latent class $C\_{s}(k)$**. Both of them are defined on the domain $\mathcal{X}$. * $\hat{f}\_{n_{s}}$ denotes the ACT estimator defined in eq. (4). The subscript $n_{s}$ is the number of unlabeled training samples in the source domain.
We use this subscript to emphasize that **this estimator depends on the training dataset**, and we show that the error of this estimator converges as $n_{s}$ increases. > **C4** The derivation of line 305. Thank you for your suggestion. The last term of inequality in line 305 used the fact that $\hat{f}$ is the minimizer of the empirical risk over the hypothesis class. For the sake of clarity of the presentation, we provide the complete derivation: We define the minimizer of the biased sample-level loss as $$ \hat{f}\_{n_{s}}^{\mathrm{bias}}\in\mathop{\arg\min}\_{f\in\mathcal{F}}\widehat{\mathcal{L}}(f):=\widehat{\mathcal{L}}\_{\mathrm{align}}(f)+\lambda\widehat{\mathcal{R}}(f), $$ where $\widehat{\mathcal{R}}(\cdot)$ is defined as (2). We then consider the **expected population risk** of this estimator. For each $\bar{f}\in\mathcal{F}$, it holds that \begin{align*} \mathcal{L}(\hat{f}\_{n_{s}}^{\mathrm{bias}}) &=\\{\mathcal{L}(\hat{f}\_{n_{s}}^{\mathrm{bias}})-\widehat{\mathcal{L}}(\hat{f}\_{n_{s}}^{\mathrm{bias}})\\}+\\{\widehat{\mathcal{L}}(\hat{f}\_{n_{s}}^{\mathrm{bias}}) -\mathcal{L}(\bar{f})\\}+\\{\mathcal{L}(\bar{f})-\mathcal{L}(f^{\*})\\}+\mathcal{L}(f^{\*}) \\\\ &\leq\\{\mathcal{L}(\hat{f}\_{n_{s}}^{\mathrm{bias}})-\widehat{\mathcal{L}}(\hat{f}\_{n_{s}}^{\mathrm{bias}})\\}+\\{\widehat{\mathcal{L}}(\bar{f})-\mathcal{L}(\bar{f})\\}+\\{\mathcal{L}(\bar{f})-\mathcal{L}(f^{\*})\\}+\mathcal{L}(f^{\*}) \\\\ &\leq 2\sup\_{f\in\mathcal{F}}|\mathcal{L}(f)-\widehat{\mathcal{L}}(f)|+\\{\mathcal{L}(\bar{f})-\mathcal{L}(f^{\*})\\}+\mathcal{L}(f^{\*}), \end{align*} where the first inequality follows from the fact that **$\hat{f}\_{n_{s}}^{\mathrm{bias}}$ minimizes the empirical risk $\widehat{\mathcal{L}}(\cdot)$ over the hypothesis class $\mathcal{F}$**, and thus $\widehat{\mathcal{L}}(\hat{f}\_{n_{s}}^{\mathrm{bias}})\leq\widehat{\mathcal{L}}(\bar{f})$. 
Standard techniques in empirical process theory can be applied to estimate the first term in **unbiased** situations. However, the **biased** nature of $\widehat{\mathcal{L}}(\cdot)$ complicates the process of bounding the first term. > **C5** (1) The on-the-fly estimation is not the same on each batch. (2) Parameterize $G$ using a network. Thank you for your constructive suggestion. * We agree that the true $G^{\*}$ should be consistent across batches, as it is determined by the population risk rather than by the batch-specific empirical risk. However, as indicated by our theoretical analysis, **the on-the-fly estimation of $G^{\*}$ differs only slightly between batches, provided that the batch size is sufficiently large**. * You raise an interesting point about parameterizing $G$ using a network. Since the inner maximization problem in Eq. (4) has a **closed-form** solution, as shown in step 4/8 of Alg. 1, we prefer to rely on the closed-form solution. However, we appreciate your suggestion and will consider **exploring this idea in more complex tasks in future work**.
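For concreteness, the closed-form inner solution referenced in the reply above can be sketched numerically. This is a minimal numpy sketch under the *assumption* that the inner problem is linear in $G$ over a unit Frobenius ball, i.e. $\max_{\|G\|_F\leq 1}\langle \widehat{C}-I,G\rangle_F$ for a batchwise covariance estimate $\widehat{C}$; the exact constraint set in the paper's Eq. (4) may differ, and all function names are illustrative:

```python
import numpy as np

def batch_covariance_gap(z1, z2):
    """Batchwise estimate (1/n) * sum_i f(x1_i) f(x2_i)^T minus the
    identity, i.e. the matrix paired with G in the regularizer."""
    n, d = z1.shape
    return (z1.T @ z2) / n - np.eye(d)

def closed_form_G(gap):
    """Maximizer of <gap, G>_F over the unit Frobenius ball: the
    normalized gap matrix itself (a Cauchy-Schwarz argument)."""
    norm = np.linalg.norm(gap)
    return gap / norm if norm > 0 else np.zeros_like(gap)

rng = np.random.default_rng(0)
z1 = rng.normal(size=(128, 4))              # representations of one view
z2 = z1 + 0.1 * rng.normal(size=(128, 4))   # representations of the other view
gap = batch_covariance_gap(z1, z2)
G = closed_form_G(gap)
# the attained value equals ||gap||_F; no unit-Frobenius-norm G does better
best = float(np.sum(gap * G))
```

Under this assumed form, per-batch estimates of $G$ differ only through sampling noise in the batchwise covariance, consistent with the reply to (3.1).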
Summary: This paper focuses on the task of transfer learning and proposes a loss function based on adversarial contrastive learning. More concretely, based on this adversarial contrastive learning framework, this paper learns a representation map from source data which can be transferred to the target distribution for downstream classification tasks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: I have gone through the appendix for technical proofs. Relation To Broader Scientific Literature: As distribution shifts are common with real data, this transfer learning technique can be useful for scientific research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength 1. This paper focuses on an important problem in learning theory: the benefit of contrastive representation learning for transfer learning. 2. The introduction of adversarial contrastive training with the notion of debiasing the sample-level spectral loss is interesting. 3. Conditions for theoretical results are clearly stated and well-organized. Weakness 1. The presentation of theoretical results is not clear and concise enough. For example, the derivation of objective functions on page 6 can be deferred to the appendix. 2. The presentation of Section 2 is a little confusing to me. As this section is motivated by the bias in the sample-level spectral contrastive loss, I wonder if the proposed objective function $\widehat{\mathcal{L}}(f,G)$ has some kind of unbiasedness at the sample level. If so, it would be beneficial to state it clearly; otherwise, the comparison is not fair enough. 3. The technical assumptions are presented in a dense way and there are not enough intuitive interpretations for some of the assumptions. For example, Assumption 3.8 is common in the theory of transfer learning, but it would be much better if concrete examples (e.g.
Gaussian family with linear functions) can be presented for more straightforward understanding. It also applies to Assumption 3.7. Other Comments Or Suggestions: N/A Questions For Authors: Please also see the previous "Weakness" section. 1. I wonder what choice of $\lambda$ is suggested by the theory. 2. In Assumption 3.7, I wonder why $\sigma_s$ is related to augmentations. 3. In Theorem 3.9, it would be beneficial to interpret the role of $\sigma_s$ in the performance guarantee, and how it is related to Assumption 3.7. 4. If we use supervised transfer learning as a benchmark (with $n_s$ observations from the source distribution), how does it compare with the rate in that scenario for the Hölder class? What is the fundamental gap between (contrastive) unsupervised and supervised transfer learning, and are there any scenarios in which the gap can be closed? I would be happy to raise my score if the aforementioned questions are addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your thorough review of our manuscript and for your constructive suggestions. > **C1** Examples of Assumption 3.8. We exemplify the source/target distributions as **Gaussian mixtures** with the same component variances. Then $\epsilon_{1}$ is the maximum distance between the means of the source and target distributions for each latent class. $\epsilon_{2}$ is the maximum distance between the mixture weights of the source and target distributions. Thus, Assumption 3.8 not only requires that the source and target distributions for each latent class are close in terms of their means, but also that their mixture weights are similar. > **C2** Explanations of Assumption 3.7. The concept of $(\sigma_{s},\delta_{s})$-augmentation is introduced to quantify the concentration of augmented data. We now provide a step-by-step explanation: * Augmentation distance: for a given augmentation set $\mathcal{A}$, the augmentation distance between two samples $x_{1}$ and $x_{2}$ is defined as: $\\|x_{1}-x_{2}\\|\_{\mathcal{A}}:=\min\_{x_{1}^{\prime}\in\mathcal{A}(x_{1}),x_{2}^{\prime}\in\mathcal{A}(x_{2})}\|x_{1}^{\prime}-x_{2}^{\prime}\|$. Since augmentations can capture semantic meanings of the original sample through various views, this distance reflects the maximal semantic similarity between the two samples. * $\sigma_{s}$-main-part of the latent class: for a latent class $C_{s}(k)$, the $\sigma_{s}$-main-part is defined as $\widetilde{C}\_{s}(k)\subseteq C\_{s}(k)$ satisfying $\mathbb{P}\_{s}\\{x\in\widetilde{C}\_{s}(k)\\}\geq\sigma_{s}\mathbb{P}\_{s}\\{x\in C_{s}(k)\\}$. The parameter $\sigma_{s}$ quantifies the concentration of the distribution $\mathbb{P}\_{s}(k)$ of this latent class. Specifically, for fixed $\widetilde{C}\_{s}(k)$ and $C\_{s}(k)$, a larger value of $\sigma_{s}$ indicates a higher concentration of $\mathbb{P}\_{s}(k)$.
* Augmentation diameter of the $\sigma_{s}$-main-part: the parameter $\delta_{s}$ is defined as the diameter of the $\sigma_{s}$-main-part in augmentation distance, that is, $\sup\_{x_{1},x_{2}\in\widetilde{C}\_{s}(k)}\\|x_{1}-x_{2}\\|\_{\mathcal{A}}$. For a fixed distribution $\mathbb{P}\_{s}$ and a fixed parameter $\sigma_{s}$, a smaller value of the diameter $\delta_{s}$ means a higher concentration of the augmented distribution, as well as greater similarity between augmented data samples. * **Summary**: The concentration of the augmented distribution, as measured by parameters $(\sigma_{s},\delta_{s})$, depends on both $\mathbb{P}\_{s}(k)$ and $\mathcal{A}$. Specifically, for a fixed $\mathcal{A}$, a smaller value of $\sigma_{s}$ and a higher concentration of $\mathbb{P}\_{s}(k)$ result in a smaller $\widetilde{C}\_{s}(k)$, leading to a smaller value of $\delta_{s}$. Additionally, for a fixed distribution $\mathbb{P}\_{s}(k)$, a smaller value of $\sigma_{s}$ and a larger $\mathcal{A}$ lead to smaller $\\|x_{1}-x_{2}\\|\_{\mathcal{A}}$ for each pair $(x_{1},x_{2})$, resulting in a smaller value of $\delta_{s}$. **Example**: Suppose the samples in the $k$-th latent class follow the uniform distribution on $[0,R]$, i.e., $C_{s}(k)=[0,R]$ and $\mathbb{P}\_{s}(k)=\mathsf{unif}(0,R)$. For each $\sigma_{s}\in(0,1]$, we can find a $\sigma_{s}$-main-part of $C\_{s}(k)$ as $\widetilde{C}\_{s}(k)=[0,\sigma_{s}R]$. Further, we define $\mathcal{A}(x)=\{x^{\prime}\in\mathbb{R}:|x-x^{\prime}|\leq r\}$ for each $x\in\mathcal{X}$. Then the augmentation diameter $\delta_{s}$ of the $\sigma_{s}$-main-part is given as $$\sup\_{x_{1},x_{2}\in\widetilde{C}\_{s}(k)}\\|x_{1}-x_{2}\\|\_{\mathcal{A}}=\max\\{\sigma_{s}R-2r,0\\}=:\delta_{s}.$$ **The parameters $\sigma_{s}$, $\delta_{s}$, $r$ and $R$ are interrelated by this equality**. Note that the parameter $R$ reflects the concentration of the distribution $\mathbb{P}\_{s}(k)$ within the latent class.
A smaller value of $R$ indicates a higher concentration of $\mathbb{P}\_{s}(k)$, which in turn leads to a smaller value of the augmentation diameter $\delta_{s}$. Additionally, a larger augmentation set, i.e., a larger value of $r$, results in a smaller value of the augmentation diameter $\delta_{s}$. > **C3** The role of $\sigma_{s}$ in the performance guarantee. Thanks for your suggestion; we have added the interpretation for $\sigma_s$ in the revised manuscript. * The probability of the inequality in Line 343 is directly determined by $\sigma_s$. The closer $\sigma_s$ is to 1, the larger the probability of this event. * The technical effect of $\sigma_s$ in Assumption 3.7 is to convert the condition $\psi > 0$ (Line 1646) into a probabilistic form, ensuring that this condition can be definitively satisfied. **Due to space constraints**, we are unable to address the questions regarding unbiasedness and supervised transfer learning in this rebuttal. We kindly ask that you evaluate our existing responses for now, and **we will be glad to provide the remaining explanations once we receive your feedback.** --- Rebuttal Comment 1.1: Comment: Thanks so much for your clarification. I’ve raised my score. --- Reply to Comment 1.1.1: Comment: We appreciate your thoughtful review and the raised score. We are committed to incorporating the valuable feedback received during this process to further strengthen our work. Our additional responses and explanations are provided below. > **C4** The derivation of objective functions. Thank you for your valuable suggestion. The derivation of the objective functions in Section 3.2 is detailed in the appendix to streamline the main text. > **C5** The unbiasedness of $\widehat{\mathcal{L}}(f,G)$. Thank you for your thoughtful comment. The proposed objective function $\widehat{\mathcal{L}}(f,G)$ is an unbiased estimate of the population risk $\mathcal{L}(f,G)$, as pointed out in Line 185. We now provide a detailed derivation.
It is sufficient to consider the regularization term, since the alignment term $\widehat{\mathcal{L}}\_{\mathrm{align}}(\cdot)$ is obviously unbiased. Specifically, for each fixed $f$ and $G$, one has \begin{align*} \mathbb{E}\_{\widetilde{D}\_{s}}[\widehat{\mathcal{R}}(f,G)] &=\mathbb{E}\_{\widetilde{D}\_{s}}\Big[\Big\langle\frac{1}{n_{s}}\sum\_{i=1}^{n_{s}}f(x_{1}^{(i)})f(x_{2}^{(i)})^{\top}-I_{d^{\*}},G\Big\rangle\_{F}\Big] \\\\ &=\Big\langle\mathbb{E}\_{\widetilde{D}\_{s}}\Big[\frac{1}{n_{s}}\sum\_{i=1}^{n_{s}}f(x_{1}^{(i)})f(x_{2}^{(i)})^{\top}\Big]-I_{d^{\*}},G\Big\rangle\_{F}=\mathcal{R}(f,G), \end{align*} where the second equality invokes the linearity of the inner product. This equality implies the unbiasedness of the proposed regularization function $\widehat{\mathcal{R}}(f,G)$. Combining this with the unbiasedness of the sample-level alignment term $\widehat{\mathcal{L}}\_{\mathrm{align}}(\cdot)$ yields the unbiasedness of the proposed objective function $\widehat{\mathcal{L}}(\cdot)$. We have revised Section 2 to more clearly outline that the proposed sample-level loss in eq. (4) is unbiased. > **C6** The choice of $\lambda$ implied by theory. Thank you for your insightful feedback. The regularization parameter $\lambda$ in ACT balances the alignment term $\mathcal{L}_{\mathrm{align}}(\cdot)$ and the regularization term $\mathcal{R}(\cdot)$. From a **theoretical perspective**, our theoretical analysis suggests that $\lambda=\mathcal{O}(1)$. Specifically, * In Lemma A.4, we demonstrate that the alignment factor $R\_{t}(\varepsilon,f)$ can be bounded by the alignment term $\mathcal{L}\_{\mathrm{align}}(\cdot)$, while the divergence factor $\max_{i\neq j}|\mu_{t}(i)^{\top}\mu_{t}(j)|$ is bounded by the regularization term $\mathcal{R}(\cdot)$.
* Based on the definition of the population risk $\mathcal{L}(f)=\mathcal{L}\_{\mathrm{align}}(f)+\lambda\mathcal{R}(f)$, we find \begin{equation*} \mathcal{L}\_{\mathrm{align}}(f)\leq\mathcal{L}(f) \quad\text{and}\quad \mathcal{R}(f)\leq\lambda^{-1}\mathcal{L}(f)\lesssim\mathcal{L}(f), \end{equation*} where we used $\lambda=\mathcal{O}(1)$. This allows us to bound both the alignment factor $R_{t}(\varepsilon,f)$ and the divergence factor $\max_{i\neq j}|\mu_{t}(i)^{\top}\mu_{t}(j)|$ in terms of the population risk $\mathcal{L}(f)$, which leads to the conclusion in Theorem A.5. From a **practical perspective**, we included ablation studies across a range of regularization parameters. The experimental results are shown at https://anonymous.4open.science/r/RE1-FFCE. The experimental results show that ACT is robust to the selection of $\lambda$. For more complex tasks, we recommend performing cross-validation (CV) over a range of regularization parameters. > **C7** The comparison with supervised transfer learning. * **Supervised transfer learning**: the convergence rate of nonparametric transfer learning derived in [1] is given as $\mathcal{O}(\max\\{n_{s},n_{t}\\}^{-\frac{2\alpha}{2\alpha+d}}+(\epsilon\vee n_{t}^{-\frac{\alpha}{2\alpha+d}})n_{t}^{-\frac{\alpha}{2\alpha+d}})$. The rate given in [2] is $\mathcal{O}(n_{s}^{-\frac{\alpha}{2d+3\alpha}}+n_{t}^{-\frac{\alpha}{2(d^{*}+1+2\alpha)}})$ for $\alpha>2$. * **Similarities**: Both supervised transfer learning and unsupervised transfer learning exhibit convergence as $n_{s}$ and $n_{t}$ increase. * **Differences**: While labels from the source domain are available in supervised transfer learning, unsupervised transfer learning relies on pseudo-labels generated through augmentation. As a result, unsupervised transfer learning depends on the specific parameters of the augmentation methods.
* **Whether the gap can be closed?** If unsupervised learning techniques can be improved via more informative augmentations, the performance gap can potentially be narrowed. We will continue to explore ways to bridge this gap in future work. [1] T. Tony Cai and Hongming Pu. Transfer Learning for Nonparametric Regression: Non-asymptotic Minimax Analysis and Adaptive Procedure. (2024) [2] Yuling Jiao, Huazhen Lin, Yuchen Luo, and Jerry Zhijian Yang. Deep Transfer Learning: Model Framework and Error Analysis. (2024)
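The uniform-interval example in C2 above can also be checked numerically. Below is a minimal sketch of the augmentation distance and the closed-form diameter $\delta_{s}=\max\{\sigma_{s}R-2r,0\}$; the grid resolution `n` and the function names are illustrative choices, not from the paper:

```python
import itertools
import numpy as np

def aug_distance(x1, x2, r):
    """Augmentation distance ||x1 - x2||_A for A(x) = {x': |x - x'| <= r}:
    each point can move by at most r toward the other, so the minimal
    distance between augmented views is |x1 - x2| shrunk by 2r."""
    return max(abs(x1 - x2) - 2.0 * r, 0.0)

def aug_diameter(sigma_s, R, r):
    """Closed-form diameter delta_s = max(sigma_s * R - 2r, 0) of the
    sigma_s-main-part [0, sigma_s * R] from the example in C2."""
    return max(sigma_s * R - 2.0 * r, 0.0)

def aug_diameter_bruteforce(sigma_s, R, r, n=201):
    """Sup of the augmentation distance over a grid of pairs in the
    main part, cross-checking the closed form."""
    grid = np.linspace(0.0, sigma_s * R, n)
    return max(aug_distance(a, b, r) for a, b in itertools.product(grid, grid))
```

For instance, with $R=10$, $r=1$ and $\sigma_{s}=0.5$ the diameter is $3$, while enlarging the augmentation radius to $r=2.5$ drives it to $0$, matching the monotonicity in $r$ discussed in C2.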
Summary: This paper introduces Adversarial Contrastive Training (ACT), a novel approach to unsupervised transfer learning that addresses bias issues in existing contrastive learning methods. The authors provide both theoretical guarantees and empirical evidence demonstrating the effectiveness of their approach. The theoretical guarantees connecting upstream unlabeled data to downstream performance are particularly valuable, offering insights into why these methods work well in few-shot learning scenarios. Claims And Evidence: Yes. Methods And Evaluation Criteria: - The paper identifies a critical bias issue in existing contrastive learning methods and presents a clever solution through adversarial training. The min-max formulation effectively tackles the bias between population-level and sample-level estimators. - The paper presents Algorithm 1 in a general format but lacks a practical implementation. Including this would make the method more accessible to practitioners and facilitate reproduction. - How sensitive is ACT to the choice of augmentation strategy? Is there a principled way to select this hyper-parameter? Theoretical Claims: - The authors develop a comprehensive end-to-end theoretical analysis for their method, showing how ACT can lead to downstream data being clustered in representation space. This theoretical work bridges an important gap in the literature. - The paper provides valuable theoretical insights for few-shot learning, explaining why ACT can achieve good performance even with limited downstream samples. Experimental Designs Or Analyses: - The experiments demonstrate consistent improvements over baseline methods across multiple datasets (CIFAR-10, CIFAR-100, and Tiny ImageNet), validating the practical relevance of addressing the bias issue. - The paper mentions that $\lambda$ is an important hyperparameter but does not provide a comprehensive ablation study showing how different values affect performance across datasets.
This analysis would provide valuable insights into the robustness of the method. Supplementary Material: - While the appendix contains detailed proofs, the main text would benefit from a concise proof sketch that outlines the key steps and intuition behind the theoretical results. This would make the theoretical contributions more accessible. Relation To Broader Scientific Literature: The key contribution of this paper is identifying bias issues in existing contrastive learning methods, which have not been thoroughly discussed in prior work. Additionally, the proposed solution, ACT, is supported by both theoretical guarantees and empirical evidence, demonstrating its effectiveness. Essential References Not Discussed: - Section 4 could be strengthened by including references to existing methods for clarity. Other Strengths And Weaknesses: - The paper lacks a dedicated notation list or table that would help readers track the numerous mathematical symbols used throughout the theoretical sections. This would significantly improve readability, especially for the complex mathematical derivations. Other Comments Or Suggestions: No. Questions For Authors: - In the related work, the authors mention that the Rademacher complexity can be significantly reduced by controlling the scale of the network class, which causes the upper bound to be ineffective if the approximation error is ignored. Could the authors please elaborate on this claim? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thorough review of our manuscript and for your constructive suggestions. Our point-by-point responses to your comments are given below. > **C1** Lack of practical implementation. We have added detailed PyTorch-style pseudo-code in the appendix of the revised version. > **C2** The choice of the augmentation strategy. Thank you for your comment. * We included **ablation studies** across a variety of data augmentation methods. The experimental results are shown at https://anonymous.4open.science/r/RE5-98C8. The experimental results show that ACT is generally **robust** to the choice of augmentation strategy. * The appropriate augmentation strategy for ACT may vary depending on the dataset and the complexity of the task. To empirically choose the best approach, we recommend using cross-validation (CV). However, this method can be computationally expensive. Fortunately, as our ablation studies demonstrate, ACT shows a relatively low sensitivity to the choice of augmentation method. > **C3** The choice of the regularization parameter. The regularization parameter $\lambda$ in ACT balances the alignment term $\mathcal{L}_{\mathrm{align}}(\cdot)$ and the regularization term $\mathcal{R}(\cdot)$. **From a theoretical perspective**, our theoretical analysis suggests that $\lambda=\mathcal{O}(1)$. Specifically, * In Lemma A.4, we demonstrate that the alignment factor $R\_{t}(\varepsilon,f)$ can be bounded by the alignment term $\mathcal{L}\_{\mathrm{align}}(\cdot)$, while the divergence factor $\max_{i\neq j}|\mu_{t}(i)^{\top}\mu_{t}(j)|$ is bounded by the regularization term $\mathcal{R}(\cdot)$.
* Based on the definition of the population risk $\mathcal{L}(f)=\mathcal{L}\_{\mathrm{align}}(f)+\lambda\mathcal{R}(f)$, we find \begin{equation*} \mathcal{L}\_{\mathrm{align}}(f)\leq\mathcal{L}(f) \quad\text{and}\quad \mathcal{R}(f)\leq\lambda^{-1}\mathcal{L}(f)\lesssim\mathcal{L}(f), \end{equation*} where we used $\lambda=\mathcal{O}(1)$. This allows us to bound both the alignment factor $R_{t}(\varepsilon,f)$ and the divergence factor $\max_{i\neq j}|\mu_{t}(i)^{\top}\mu_{t}(j)|$ in terms of the population risk $\mathcal{L}(f)$, which leads to the conclusion in Theorem A.5. **From a practical perspective**, we included **ablation studies** across a range of regularization parameters $\lambda$. The experimental results are shown at https://anonymous.4open.science/r/RE1-FFCE. The experimental results show that ACT is **robust** to the selection of $\lambda$. For more complex tasks, we recommend performing cross-validation (CV) over a range of regularization parameters. > **C4** Presentation suggestions. Thank you for your constructive suggestion. To improve the clarity and readability of the technical sections, we have added a comprehensive **table of notations** in the appendix. Additionally, we have included a **proof sketch** in the appendix to offer an overview of the key steps in the proofs. We also provide more **detailed explanations** and break down the steps more explicitly in the proofs of theoretical results to ensure better understanding. > **C5** The necessity of the approximation error analysis. Thank you for your insightful comment. As shown in eq. (8) of the manuscript, the excess risk can be decomposed into the **approximation error** and the **statistical error**, where the latter is bounded by the **Rademacher complexity**. The approximation error reflects the capacity of the deep neural network class to approximate the target function, and it generally decreases as the scale of the network class increases.
On the other hand, the Rademacher complexity increases with the scale of the network class. This creates a **trade-off** between the approximation error and the statistical error, suggesting that the network class should be chosen with an appropriate scale that depends on both the number of samples and the complexity of the task. This theoretical insight is consistent with the findings in experimental practice. However, if the approximation error is ignored, the excess risk is only bounded by the Rademacher complexity, which would imply that the network class should be as small as possible. Such a theoretical result clearly contradicts practical applications, where smaller and simpler network classes often struggle to capture the underlying patterns in complex tasks, particularly when dealing with large datasets.
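The trade-off described above can be illustrated with a toy experiment (not from the paper): least-squares polynomial regression, where the polynomial degree plays the role of the scale of the function class. Training error only decreases with capacity, while test error reflects the balance between approximation error and statistical error:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x)               # target outside every polynomial class
x_tr = rng.uniform(-1.0, 1.0, 40)
y_tr = f(x_tr) + 0.3 * rng.normal(size=40)  # noisy training labels
x_te = np.linspace(-1.0, 1.0, 400)          # noiseless test grid

train_err, test_err = [], []
for degree in range(13):
    coef = np.polyfit(x_tr, y_tr, degree)   # larger degree = larger class
    train_err.append(np.mean((np.polyval(coef, x_tr) - y_tr) ** 2))
    test_err.append(np.mean((np.polyval(coef, x_te) - f(x_te)) ** 2))
```

The smallest class (degree 0) carries a large approximation error, so choosing the class "as small as possible" is clearly suboptimal; the best test error typically occurs at an intermediate degree, mirroring the decomposition in eq. (8).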
Summary: The paper presents a novel self-supervised learning (SSL) transfer learning technique that is unbiased and comes with provable guarantees. The method is part of the class of SSL decorrelation approaches that align the cross-correlation matrix of learned representations with the identity matrix. The paper first highlights that current approaches rely on biased sample-level covariance/correlation matrix estimators. Then, an unbiased adversarial learning objective is derived, which is formulated as a min-max problem. Subsequently, the authors: - highlight the limitations of biased estimators from a theoretical perspective - and, based on a series of assumptions, derive bounds on (1) the angle between the classes in the target domain and (2) the misclassification error in the source domain (Theorem 3.9). This theorem provides theoretical insights for few-shot learning and demonstrates that abundant unlabeled data benefits transfer learning (i.e., to a different target domain). Empirically, the method improves (linear and kNN) classification accuracy compared to previous methods on three different datasets. ## Update after rebuttal Since the authors addressed all my concerns, I raised my rating from "weak accept" to "accept". Claims And Evidence: ### "We introduce Adversarial Contrastive Training (ACT), a novel self-supervised transfer learning method. This approach learns representations from unlabeled data by solving a min-max optimization problem that corrects the bias inherent in existing methods." - The introduced method is **novel** and well explained. - The paper **explains why current estimators are biased** and **derives an unbiased estimator**. ### "Through extensive experiments, we demonstrate that ACT significantly outperforms traditional biased iterative methods (Table 1).
Our empirical evaluation shows that ACT achieves state-of-the-art classification performance across multiple benchmark datasets using both fine-tuned linear probes and k-nearest neighbor (k-nn) protocols (Table 2)." - The method indeed **outperforms the biased approaches** (Table 1) in both linear probe and kNN evaluation settings. - It **also outperforms other SSL methods** (Table 2) in this same setting. However, it **cannot be determined if the method is state-of-the-art** (SOTA) as it is not compared with SOTA methods, such as e.g. SwAV [1] or DINO [2]. ### "We establish comprehensive end-to-end theoretical guarantees for ACT in transfer learning scenarios under misspecified and overparameterized settings (Theorem 3.9). (...) can lead to the downstream data distribution being clustered by category in the representation space, provided that the upstream unlabeled sample size is sufficient. Hence, even with a few downstream samples, ACT can achieve outstanding classification performance, offering valuable insights for few-shot learning." - The paper **clearly explains the assumptions** and **provides theoretical guarantees** on the angle between the classes in the target domain and the misclassification error in the source domain (Theorem 3.9). The former **provides bounds** for which target class centers would be close to orthogonal and thus separable, **consequently leading to high transfer learning accuracy**. - However, a key limitation of the current manuscript is that the experimental section only evaluates the method on the source domains, and **does not include transfer learning experiments** (i.e., evaluations on a target domain from a different dataset). [1] Caron, Mathilde, et al. "Unsupervised learning of visual features by contrasting cluster assignments." Advances in neural information processing systems 33 (2020): 9912-9924. [2] Caron, Mathilde, et al. "Emerging properties in self-supervised vision transformers." 
Proceedings of the IEEE/CVF international conference on computer vision. 2021. Methods And Evaluation Criteria: - The benchmarks make sense to show that the method is effective for learning representations on the in-domain/source-domain. - However, the evaluations lack transfer learning benchmarks, similar to e.g. "Table 3. Transfer learning: image classification." in Barlow Twins [3]. [3] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." International conference on machine learning. PMLR, 2021. Theoretical Claims: I did not verify the correctness of the proofs provided in the supplementary material. Experimental Designs Or Analyses: - The linear+kNN evaluation protocol, datasets, and hyperparameter choices are **common in the literature**. - I verified that the **results from previous methods reported in Table 2 align** with the ones reported in the W-MSE paper [4]. - The **results for re-implementations** of previous techniques (Barlow Twins and "HaoChen 2022") **are lower than expected**. I, however, did not check all re-implementation details and hyper-parameter choices in the provided code. [4] Ermolov, Aleksandr, et al. "Whitening for self-supervised representation learning." International conference on machine learning. PMLR, 2021. Supplementary Material: The reviewer read the proofs but did not check every detail and, thus, cannot confirm/invalidate their correctness. Relation To Broader Scientific Literature: The paper provides an overall clear positioning of its research questions: - It is part of the family of SSL decorrelation techniques (Barlow Twins, W-MSE, VICReg, ...), and its formulation is most similar to the one from Barlow Twins. - The authors introduce related theoretical studies (on population risk in SSL and the generalization error in SSL) and detail the differences with their contributions. 
- *(minor)* The paper could benefit from a short introduction and mention of related work in adversarial learning, as it is a component of the introduced method and algorithm. Essential References Not Discussed: No essential references are missing to the best of the reviewer's knowledge. Other Strengths And Weaknesses: - In its training objective, the paper **introduces an additional *alignment* loss term to be invariant to data augmentations**. However, **the other (unbiased) objective already includes an alignment term** in the diagonal part of the cross-correlation matrix. Therefore, and given that Barlow Twins does not require it, is this additional alignment term necessary for the performance of this method? Other Comments Or Suggestions: - typo line 196, left: "Base on $\mathcal{A}$" -> "Based on $\mathcal{A}$" - typo line 326, left: "(...) we can systematically analysis" -> "(...) we can systematically analyze" - Are Assumption 3.5 on the Lipschitz constant, and the other assumptions, realistic for the class of augmentations used in the experiments (described in the paragraph "Image transformations details")? Questions For Authors: 1. As both the text (e.g., in the abstract: "Adversarial Contrastive Learning (ACT), a novel unbiased self-supervised transfer learning approach") and the theoretical claims suggest the method is effective at transfer learning, could the authors **evaluate the method on an SSL transfer learning benchmark**? 2. Since Barlow Twins' formulation is very close and does not require an additional alignment objective, **is the additional alignment term of ACT necessary** for this method? Could the authors run a small ablation study on this question and/or discuss it? The technical and theoretical contributions of the paper are valuable to the community, but the paper lacks an essential evaluation (see question 1). Therefore, the reviewer suggests "weak accept", and is willing to raise the rating if the questions are addressed.
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thorough review of our manuscript and for your constructive suggestions. Our point-by-point responses to your comments are given below. Thank you for pointing out these typos. We have corrected them in the revised manuscript. > **C1** Additional experiments: (1) comparisons with SOTA methods, and (2) transfer learning. Thank you for your constructive suggestion. * We have re-implemented SwAV, and the corresponding experimental results are presented in https://anonymous.4open.science/r/RE6-DDAF. Due to time constraints, we were unable to include additional comparisons with DINO. Alternatively, we kindly ask you to refer to the benchmark results on CIFAR10 available at https://docs.lightly.ai/self-supervised-learning/getting_started/benchmarks.html. In summary, **ACT outperforms both SwAV and DINO**, demonstrating its strong competitive performance. * The transferability of ACT: Please see the response to **C6** of reviewer L3h2. > **C2** The results for re-implementations are lower than expected. Thank you for your comment. * Our re-implementation of Barlow Twins is based on the **official implementation** available at https://github.com/facebookresearch/barlowtwins. Unfortunately, the authors of ``HaoChen 2022'' did not provide their implementation, so we implemented it according to their paper. * To ensure a fair comparison, the **hyperparameter choices** for ACT and all other methods, including Barlow Twins and ``HaoChen 2022'', are closely aligned with the settings used in [1]. * The README of the official Barlow Twins implementation links to a Barlow Twins experiment on CIFAR-10 at https://github.com/IgorSusmelj/barlowtwins. **The experimental results reported in this linked implementation closely align with ours**, further supporting the fairness of our re-implementation. [1] Aleksandr Ermolov, Aliaksandr Siarohin, Enver Sangineto, and Nicu Sebe. Whitening for Self-Supervised Representation Learning. 
(2021) > **C3** Additional alignment objective. We agree that the diagonal part of the cross-correlation matrix serves as an alignment term **under certain conditions**. As shown by Lemma 4.1 of [1], the alignment risk is then related to the diagonal part of the cross-correlation matrix as: $$ \mathcal{L}\_{\mathrm{align}}^{2}(f)\leq 4d\sum_{i=1}^{d}\big\\{\mathbb{E}\_{x}\mathbb{E}\_{x_{1}\in\mathcal{A}(x)}[f_{i}(x_{1})^{2}]-\mathbb{E}\_{x}\mathbb{E}\_{x_{1},x_{2}\in\mathcal{A}(x)}[f_{i}(x_{1})f_{i}(x_{2})]\big\\}^{2}. $$ Crucially, the right-hand side of the inequality is consistent with the alignment term in the loss function of Barlow Twins, provided that $\mathbb{E}\_{x}\mathbb{E}\_{x_{1}\in\mathcal{A}(x)}[f_{i}(x_{1})^{2}]=1$ for each $i\in\\{1,\ldots,d\\}$. **However, this condition does not hold in general.** * From a theoretical perspective, the cross-correlation loss alone, as discussed above, is insufficient for learning representations invariant to augmentations. To address this, we introduce an additional explicit alignment term in the loss, as also suggested by [2,3]. * From a practical perspective, we fully agree with the necessity of distinguishing the effect of alignment from the de-biased operation. We conduct an ablation study comparing ACT with and without the explicit alignment term at https://anonymous.4open.science/r/RE2-7B34. Our results indicate that **the explicit alignment term slightly improves ACT's performance**. [1] Weiran Huang, Mingyang Yi, Xuyang Zhao, and Zihao Jiang. Towards the Generalization of Contrastive Self-Supervised Learning. (2023) [2] Jeff Z. HaoChen, Colin Wei, Ananya Kumar, and Tengyu Ma. Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations. (2022) [3] Jeff Z. HaoChen, and Tengyu Ma. A Theoretical Study of Inductive Biases in Contrastive Learning. (2023) > **C4** Assumption 3.5. We appreciate your question. 
As outlined in the section ``Image transformations details'', the data augmentations used in our experiments -- crops, horizontal mirroring, brightness adjustment, grayscaling, and Gaussian blurring -- are linear operations, and thus Lipschitz continuous. Specifically, the Lipschitz constants of crops, horizontal mirroring, and grayscaling are at most 1, and the Lipschitz constant of brightness adjustment depends on the adjustment factor. The Lipschitz constant of Gaussian blurring depends on the kernel size and the variance of the Gaussian kernel. **Consequently, all Lipschitz constants can be explicitly calculated**, but due to character limitations, we had to place these details in the additional appendix of the revised manuscript. > **C5** Assumption 3.7. See the response to **C4** of Reviewer etBC for a detailed discussion on Assumption 3.7. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers and for running the requested additional experiments in this short time frame. Since the authors addressed all my concerns, I am raising my rating to "accept" as stated. --- Reply to Comment 1.1.1: Comment: We would like to thank you once again for your valuable contributions and insightful suggestions. Your specialized reviews have undoubtedly helped improve the quality of our paper, particularly regarding the ablation experiments on the alignment term, transfer learning, and the justification for the Lipschitz property of the used augmentations.
Summary: The paper proposes a method for unsupervised transfer learning called Adversarial Contrastive Training (ACT). The key idea is to address the bias present in sample-level estimators of self-supervised contrastive learning by reformulating the regularization term into a minimax (adversarial) framework. In this formulation, a matrix variable G is introduced and alternated with the encoder, with the intent of minimizing the inherent bias from mini-batch estimation. The paper provides an end-to-end theoretical analysis on how the method benefits downstream tasks by proving convergence properties under several assumptions. Claims And Evidence: Claims: The paper claims that reformulating the self-supervised objective into a minimax problem involving an auxiliary variable G can obtain an unbiased estimator with better representations than existing biased methods. Evidence: The theoretical part builds an error decomposition that formally connects the adversarial formulation to downstream clustering, and the experiments report modest improvements in accuracy. However, the experimental evidence is not sufficient for the limited datasets and marginal performance improvement. Methods And Evaluation Criteria: Methods: The paper’s main methodological innovation is the transformation of a regularization term into a maximization over an auxiliary matrix G. This leads to a minimax optimization framework where the encoder is updated to counteract the worst-case bias estimated by G. The alternating optimization algorithm (Algorithm 1) is designed to iteratively update the encoder and the adversarial variable. Evaluation Criteria: The method is evaluated on common transfer learning benchmarks using linear probe and k-nearest neighbors classifiers. 
However, the evaluation is limited to small-scale datasets (CIFAR series and Tiny ImageNet) Theoretical Claims: The authors derive convergence results and error bounds that relate the downstream classification error to the minimax optimization formulation. They provide detailed proofs in the supplementary materials regarding error decomposition and convergence under several assumptions. Experimental Designs Or Analyses: Design: The experiments are designed to compare ACT against existing methods on standard benchmarks. Issues: The experimental section suffers from several shortcomings: - There is a repetition of experimental results (as similar comparisons appear in both Sections 2.2 and 4) while seemingly addressing the same task. - The datasets chosen for evaluation are small and do not cover more challenging datasets that would be more representative of real-world applications. - The performance gains reported are marginal, and no ablation study specifically analyzes the effectiveness of introducing G. Supplementary Material: The supplementary material provides extended proofs for the theoretical results and additional discussions on the assumptions. Relation To Broader Scientific Literature: The method is positioned within the rapidly growing area of self-supervised contrastive learning, where many recent works (e.g., Barlow Twins, SimCLR, BYOL) address the challenges associated with negative sampling and model collapse. By reformulating the self-supervised loss into an adversarial (minimax) problem, this work attempts to remove biases inherent in mini-batch estimators—a concern also touched upon in prior studies by HaoChen et al. and Zbontar et al. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Recasting the regularization term into a minimax framework is conceptually interesting and may open up new research directions in self-supervised learning. 
- The paper provides a rigorous theoretical analysis that links the adversarial training process to concrete guarantees on downstream performance. Weaknesses: - Writing Clarity: - The explanation of the role of the auxiliary variable G is insufficient. The paper does not convey its meaning or the motivation behind its introduction. No dedicated section or ablation study specifically validates the contribution of G. - The submission suffers from structural issues: experimental results are repeated in different sections, making the narrative confusing, and the overall organization of the paper is somewhat chaotic. - The heavy reliance on dense mathematical derivations with minimal intuitive explanations makes it less accessible to readers who are not experts in theoretical machine learning. - Experimental Insufficiency: - Evaluation is based on a small number of datasets with limited representativeness. The performance gains are marginal, raising questions about the proposed method's practical impact. Other Comments Or Suggestions: See listed above. Questions For Authors: See listed above. Code Of Conduct: Affirmed. Overall Recommendation: 2
X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
Accept (poster)
Summary: With widespread deployment of CLIP image and text encoders in downstream tasks, CLIP has emerged as a useful attack surface for subverting inference in the downstream model. A natural question is how to craft, once and via ensemble methods, a single effective perturbation that subverts many models on any data point (i.e., a universal adversarial perturbation (UAP)). The authors extend this idea to the realm of subverting many models, tasks, datasets, and domains, and describe such transferability as so-called super transferability, taking the form of super universal adversarial perturbations (SUAPs). Previous UAP methods require a fixed set of models, heuristics, and constraints to optimize a UAP with the desired effect. The authors argue that this becomes intractable when tackling the problem of super-transferability, and instead propose to use heuristics agnostic to the surrogate model. In practice, the authors use a heuristic which simply minimizes (or maximizes) the similarity in CLIP embedding space for the untargeted (or targeted) evasion attack. This heuristic works since all downstream tasks will exploit the contrastive nature of CLIP embeddings in some way. To scale up to many surrogate models, the authors finally propose to adopt upper confidence bounds (UCB) from the well-studied multi-armed bandits framework to avoid optimizing with respect to all surrogate models at once, instead letting the UCB-based algorithm (X-Transfer) guide the sampling of surrogates. Specifically, the authors cast the UAP optimization over surrogate models as a non-stationary multi-armed bandits problem, since the reward distribution for different surrogate models may shift as optimization progresses. ## Post-rebuttal update Thanks to the authors for their response. They will incorporate some of the changes mentioned in the original review, and have provided some interpretations of the submission's pain points. I will keep my original score. 
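The UCB-guided surrogate sampling described in the summary can be illustrated with a minimal bandit loop. This is a hypothetical sketch, not the paper's implementation: the arm count, the `c` exploration constant, and the stand-in random reward (a proxy for the per-surrogate adversarial loss) are all illustrative assumptions.

```python
import math
import random

def ucb_select(counts, rewards, t, k, c=1.0):
    """Pick the k surrogate indices with the highest upper confidence bound."""
    scores = []
    for i in range(len(counts)):
        if counts[i] == 0:
            scores.append(float("inf"))  # force each arm to be tried at least once
        else:
            mean = rewards[i] / counts[i]
            bonus = c * math.sqrt(math.log(t + 1) / counts[i])
            scores.append(mean + bonus)
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

def run(num_surrogates=8, k=2, steps=50, seed=0):
    rng = random.Random(seed)
    counts = [0] * num_surrogates
    rewards = [0.0] * num_surrogates
    for t in range(steps):
        for i in ucb_select(counts, rewards, t, k):
            # Stand-in reward: in the attack described above this would be the
            # adversarial loss of the current UAP on surrogate model i.
            reward = rng.random() * (1.0 if i % 2 == 0 else 0.5)
            counts[i] += 1
            rewards[i] += reward
    return counts

counts = run()
print(counts)  # even-indexed (higher-reward) arms tend to accumulate more pulls
```

The non-stationarity mentioned in the summary enters through the reward itself: because the UAP being optimized changes every step, the loss observed for a given surrogate drifts over time, which is why a reward-aware sampler is used rather than a one-shot ranking of surrogates.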
Claims And Evidence: - The authors suggest that by efficient selection of surrogate models during ensemble optimization, the adversary may learn a super UAP which transfers to many tasks, models, datasets, and modalities. The main claim is checked through empirical studies starting in Section 4 for zero-shot classification (Table 1) and captioning/VQA (Table 2). In these tables it can be observed that X-Transfer performance improves as a larger set of surrogates is selected (e.g., going from Vanilla to Large offers reliable improvement of ASR). The effect of UCB selection is ablated in Figure 2a showing that (1) more surrogates leads to higher ASR, and (2) selection guided by UCB offers a compute reduction while matching the results of higher $k$. - The evaluation suggests that UCB-based X-Transfer may reduce the computational cost of SUAP. However, it is also shown that both random sampling and $\epsilon$-greedy sampling strategies have comparable performance, hence it might be difficult to motivate the extra complexity of using UCB over simple random sampling. The main takeaway for this result is that sub-sampling of surrogate models is important to achieve SUAP. As a further takeaway, it could be argued that since UCB does not have a large gap compared to random sampling, the reward distribution of the surrogate models may be more trivial (i.e., stationary) than expected. It could be that in the case of dynamic surrogates (i.e., updated in real time against the adversary) the reward distribution becomes more suitable for UCB. Methods And Evaluation Criteria: - The methods for evaluating each task (e.g., CIDEr for image captioning) generally match the metric used to score the task in the respective literature. Attack success rate is used for all tables which is the standard for adversarial ML studies. - The benchmark datasets and associated models are reasonable since they are fairly recent approaches. 
Theoretical Claims: N/A Experimental Designs Or Analyses: - The experiments use a variety of established baselines such as C-PGC, ETU, and AdvCLIP for CLIP-specific attacks, and also attacks meant as general purpose UAP, such as GD-UAP, TRM-UAP, and Meta-UAP. An ablation is performed which is the same attack setup of X-Transfer but without the UCB-guided surrogate selection. These baselines are reasonable due to being relatively recent but also relevant to the goal (attacking CLIP models). - Cross-data experiments operate on standard benchmark datasets which vary in scale, fidelity, and purpose. Cross-model experiments use a mix of both ResNet- and transformer-backed CLIP models which isolates causes due to model architecture. Other more recent methods are checked, such as MetaCLIP, meanwhile some VLMs are selected to study transferability on multi-modal LLM inference. In this latter case the authors do not launch X-Transfer on the respective VLM finetune of CLIP, which seems reasonable. Supplementary Material: My review is based on the main text and some of the appendices, I did not check the supplemental. Relation To Broader Scientific Literature: - The authors plan to release a large collection of UAPs and TUAPs to the broader community, which may be useful for checking future defenses on a static target. - The authors show that subsampling of the surrogate ensemble is a simple but effective way of improving UAPs. A variety of baseline methods and surrogate models are checked which will serve as useful reported baselines in future studies. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - Since the ablation shows that UCB is marginally better than random sampling for the fixed CLIP models, it might be interesting to check the reward distribution of CLIP models which update dynamically against the adversary (hence changing the reward distribution over course of adversary's training loop). 
It could be that UCB becomes more useful on an adaptive defender threat model, where the CLIP model is updated on-the-fly during adversary's optimization loop (to mimic a defender's inner update loop). - The writing is generally high quality and easy to follow for someone familiar with the prior work. - The approach could be considered an iterative improvement over prior work since it mainly combines UCB with known objectives and optimization algorithm. I would consider the primary contribution to be the ablations and empirical results studying ensemble sampling behavior. Other Comments Or Suggestions: * Equation 1 - the text should connect $k$ and $j$ to the contrasting of image with texts and the reverse for clarity. * Equation 4 - $t_{adv}$ should be clarified to be a sample of interest. * L179-180 - encodes -> encoders Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your careful review of our paper. Please find our detailed responses to your questions below. --- **Q1:** Choice of sampling strategies and improvement over random sampling **A1:** We would like to clarify and emphasize the comparison of sampling strategies presented in Section 4.2. The results are reported as macro-averages across 9 encoders, each evaluated on 8 datasets, yielding a total of 72 evaluation points. We argue that the observed 2.3% performance gap is statistically significant, particularly in the context of large-scale benchmarking. Our analysis demonstrates that reward-aware sampling strategies (UCB and $\epsilon$-greedy) consistently outperform reward-agnostic random sampling. This empirical evidence strongly suggests that the reward metric itself - rather than the sampling strategy choice - is the primary driver of super-transferability improvements. While we acknowledge that alternative sampling approaches may be worth exploring, our core contributions remain: (1) the efficient surrogate scaling framework and (2) the proposed reward metric. To summarise: - Super-transferability stems fundamentally from surrogate scaling, as evidenced by experiments with varying search space sizes ($N$). - While sub-sampling improves computational efficiency (Fig 2a), it is not the primary driver of super transferability. - Reward-aware sampling consistently outperforms random sampling while maintaining efficiency. --- **Q2:** The dynamic reward distribution **A2:** We would like to clarify that our current approach already incorporates a dynamic reward distribution that evolves throughout the adversary’s training loop. As the perturbation improves over time, it influences the loss values used as the reward signal, leading to a naturally shifting reward landscape. 
We also agree with the reviewer that exploring an adaptive threat model, in which CLIP itself is updated during the adversarial process, presents an interesting direction for future research. We appreciate this insightful and constructive suggestion and will certainly pursue it in future work. --- **Q3:** Iterative improvement and contribution **A3:** While the adversarial objective we employ may appear simple, its design is both intentional and necessary, and its combination with UCB may seem like an incremental improvement. However, we emphasize that this formulation has led to insightful and novel findings regarding the vulnerabilities of CLIP and VLMs. The simplicity and generality of the adversarial objective are deliberate design choices, as they ensure compatibility across diverse CLIP encoders and are crucial for achieving super transferability, as demonstrated in Appendix C.4. This deliberate simplicity underscores the elegance and effectiveness of our approach. Beyond performance, our work reveals a new and exciting phenomenon: the untargeted UAPs generated by X-Transfer are semantically interpretable, yet they do not align with human perception. Prior work has suggested that semantically interpretable perturbations that align with human understanding typically arise only from targeted objectives, whereas untargeted perturbations appear patternless. In contrast, our findings show that untargeted UAPs can also produce visually coherent patterns that are semantically rich but misaligned with human interpretation, suggesting that these perturbations explore a space distinct from all previously observed perturbation behaviors. We believe that a simple and effective method, paired with a novel and well-supported finding, and validated through rigorous experimentation, constitutes a meaningful and impactful contribution to the field. --- **Q4:** Other suggestions **A4:** We will incorporate these suggestions in the revision.
Summary: This paper reveals a universal adversarial vulnerability in CLIP models, where a single perturbation achieves super-transferability across datasets, domains, models, and tasks. The authors find that proxy encoder selection, rather than the dataset, is the key factor. They propose X-Transfer, a novel attack that surpasses prior UAP methods in transferability, establishing a new benchmark. Additionally, they introduce X-TransferBench, an open-source evaluation framework for CLIP and VLM robustness. The core innovation lies in an efficient proxy scaling strategy, leveraging UCB sampling to optimize encoder selection, enhance perturbation generalization, and achieve super-transferability. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: (1) The study primarily relies on OpenCLIP, which, while widely used, may not fully capture the diversity of CLIP models in real-world applications. Differences in architecture, training, and optimizations across implementations could impact X-Transfer’s effectiveness, limiting the generalizability of the findings. (2) Although the evaluation covers various tasks and datasets, it lacks real-world complexity. Practical deployment factors—such as data formats, model updates, distribution shifts, and hardware constraints—may affect X-Transfer’s performance. Experimental Designs Or Analyses: (1) Key hyperparameters, such as the proxy encoder search space size and the number of selected encoders per iteration, are empirically set without rigorous theoretical justification or systematic validation. (2) The study relies solely on OpenCLIP, which may not fully capture the diversity of real-world CLIP deployments. Supplementary Material: Yes, I review all the materials. Relation To Broader Scientific Literature: (1) CLIP’s contrastive learning enables strong zero-shot generalization, driving research into its applications and vulnerabilities. 
(2) While UAPs have been explored for CLIP, prior methods struggle with super-transferability across datasets, domains, models, and tasks. X-Transfer fills this gap, setting a new benchmark for UAP effectiveness in adversarial attacks on CLIP. Essential References Not Discussed: No. Other Strengths And Weaknesses: (1) X-Transfer is empirically validated, but key theoretical explanations remain lacking. The dominance of ViT-based encoders in proxy selection is observed but not deeply analyzed, and the emergence of semantic patterns in UAPs is attributed to CLIP’s concept fusion ability without solid theoretical grounding. (2) Despite extensive experiments, real-world deployment factors—such as data formats and dynamic model environments—are not fully considered. (3) Key hyperparameters, including search space size and proxy encoder selection, are determined heuristically without rigorous analysis. Relying solely on empirical tuning limits the method’s generalizability and scalability. Other Comments Or Suggestions: Developing defenses against X-Transfer is essential. It should explore robust training, architectural modifications, and specialized defenses while balancing security and model performance. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your time and careful review of our work. Below, we provide detailed responses to address each of your concerns. --- **Q1:** Using OpenCLIP lacks generalisability and real-world models. **A1:** (1) OpenCLIP is an open-source framework, protocol, and API for CLIP and its variants, not a model itself. It currently hosts the largest collection of open-source CLIP models, which we adopt in our experiments primarily for its convenient and unified interface. Many real-world CLIP models are already included in the OpenCLIP collection, such as MetaCLIP from Meta and SigLIP from Google, both of which are evaluated in our experiments. (2) To address your concern regarding real-world deployment, we conducted evaluations on GPT-4o and OpenAI o1. Using the same targeted adversarial example generated via TUAP (as shown in Figure 3), we prompted the models with *“Briefly describe the image.”* Below are the models’ responses. **GPT-4o**: *The image is a digital collage blending a volcanic eruption with cyber elements. Lava flows into the ocean under a blue-toned overlay of circuit patterns and glowing text related to computer viruses, such as “Remove Files” and “Virus Inject.” It visually represents the destructive power of cyber threats.* **OpenAI o1**: *It’s a composite image showing lava pouring into an ocean, overlaid with circuit-board patterns and words referencing computer viruses and file removal, evoking the idea of a powerful, destructive cyber threat.* We also conducted a quantitative evaluation by randomly sampling 1,000 images from the ImageNet test set and applying our TUAP to generate adversarial examples. We queried the model using the prompt “Briefly describe the image.” For this evaluation, we used OpenAI’s GPT-4o-mini model. Each request included both the textual prompt and the adversarial image (encoded in base64), and the maximum response length was set to 128 tokens. 
If the response contained any of the keywords in target text descriptions (e.g. *“remove”, “files”, “computer”, “inject”, “computer virus”*), we considered it a successful attack. The ASR based on this criterion is reported in the table below. | Base | Mid | Large | |---|---|---| | 53.6 | 65.8 | 70.0 | --- **Q2:** Ablation on hyperparameters **A2:** The search space size, denoted as $N$, is evaluated throughout the paper, and the number of selected encoders per iteration, $k$, is specifically analysed in Section 4.2. Results for different search space sizes, Vanilla ($N$ = 1), Base ($N$ = 16), Mid ($N$ = 32), and Large ($N$ = 64), are presented in Tables 1, 2, 13, 15, and 16. These results demonstrate that a larger search space consistently improves the ASR. The ablation study on $k$ is shown in Figure 2(a), and the efficiency analysis is provided in Table 7. The results indicate that while varying $k$ has minimal impact on ASR, it primarily affects computational efficiency. Our choice of $k$ is 25% of $N$ is motivated by the trade-off observed in Figure 2(a) and the efficiency analysis in Table 7. --- **Q3:** Theoretical explanations **A3:** We believe that super transferability and CLIP’s concept fusion ability are novel topics of growing importance. As such, there are currently no established theoretical frameworks to formally characterise these phenomena. Nevertheless, our empirical results are comprehensive, providing strong support for our claims regarding super transferability. The novel observation that UAPs on CLIP exhibit semantically interpretable patterns offers valuable insight into the underlying vulnerabilities of CLIP models. We plan to investigate theoretical explanations in our future work. --- **Q4:** Defences against X-Transfer **A4:** Please refer to Appendix C.8, where we have presented evaluations using adversarially trained CLIP encoders.
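The keyword-matching success criterion described in A1 above can be sketched in a few lines. The example responses below are illustrative, and only the keyword list mirrors the examples quoted in the rebuttal; none of this is the authors' actual evaluation code.

```python
# Keywords drawn from the target text descriptions quoted in the rebuttal above.
KEYWORDS = ["remove", "files", "computer", "inject", "computer virus"]

def is_success(response: str) -> bool:
    """A response counts as a successful attack if any target keyword appears."""
    text = response.lower()
    return any(kw in text for kw in KEYWORDS)

def attack_success_rate(responses) -> float:
    """ASR in percent over a batch of model responses."""
    hits = sum(is_success(r) for r in responses)
    return 100.0 * hits / len(responses)

# Hypothetical model responses for illustration only.
responses = [
    "Lava flows into the ocean under circuit patterns and 'Virus Inject' text.",
    "A photo of a golden retriever playing in the park.",
    "Glowing words referencing computer threats overlay the volcanic scene.",
    "A bowl of fresh fruit on a wooden table.",
]
print(attack_success_rate(responses))  # 50.0
```

Note that substring matching of this kind is deliberately permissive (e.g., "inject" inside "Virus Inject" counts), which is consistent with the rebuttal's description of treating any keyword occurrence as a success.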
Summary: This paper introduces X-Transfer, a novel adversarial attack method that generates universal adversarial perturbations (UAPs) with "super transferability" across data, domains, models, and tasks for CLIP-based vision-language models. The core innovation is an efficient surrogate scaling strategy that dynamically selects a subset of surrogate CLIP encoders from a large search space using a multi-armed bandit (MAB) framework with Upper Confidence Bound (UCB) sampling. Extensive experiments demonstrate that X-Transfer outperforms state-of-the-art UAP methods, achieving higher attack success rates (ASR) on zero-shot classification, image-text retrieval, image captioning, and VQA tasks. The authors also release X-TransferBench, a comprehensive benchmark of UAPs for evaluating super transferability. Claims And Evidence: - Claim 1: X-Transfer achieves "super transferability" (simultaneous cross-data/domain/model/task transferability). - Evidence: Supported by experiments across 12 datasets, 9 CLIP encoders, and 4 VLMs (Table 1-2). However, cross-task transferability (e.g., attacking VLMs trained with autoregressive objectives) lacks mechanistic explanation. - Claim 2: The dynamic surrogate selection strategy reduces computational costs while improving transferability. - Evidence: Figure 2(a) shows X-Transfer with $k=1$ matches standard scaling ($k=N$), but theoretical guarantees for UCB-based selection (e.g., regret bounds) are missing. - Claim 3: X-TransferBench provides a practical resource for adversarial robustness evaluation. - Evidence: The benchmark is described in Appendix D, but no user studies or community adoption examples are provided. Methods And Evaluation Criteria: - Methods: - The MAB-based dynamic selection is novel and addresses scalability limitations of fixed ensembles. However, the choice of UCB over other bandit algorithms (e.g., Thompson sampling) is not justified. - The adversarial objective (Eq. 
3-5) is generic but lacks architectural/task-specific adaptations (e.g., for VLM autoregressive losses). - Evaluation Criteria: - ASR is task-specific (e.g., accuracy drop for classification, CIDEr for captioning), which is reasonable. However, targeted attacks (TUAPs) are only evaluated on a limited set of 10 manually designed text prompts (Appendix C.7), raising concerns about generalizability. Theoretical Claims: - The paper lacks theoretical analysis. For example: - No proof of convergence for the UCB-based selection strategy. - No formal analysis of why ViT-based surrogates dominate selection (Figure 2(b)) or how embedding space geometry enables cross-task transfer. Experimental Designs Or Analyses: - Strengths: - Broad evaluation across CLIP variants (ViT, ResNet, SigLIP) and VLMs (LLaVA, MiniGPT-4). - Ablation studies on surrogate scaling (Figure 2(a)) and perturbation types ($L_2$, patch). - Weaknesses: - "Following Fang et al. (2024b); Zhang et al. (2024), we employ L∞-norm bounded perturbations with ϵ = 12/255, and the step size η of 0.5/255." I did find the description of ϵ = 12/255 in the two papers, but I did not find the description of step size η of 0.5/255. Please provide more detailed sources for the setting of step size. - The search space (Tables 8-10) includes only CLIP variants, excluding non-CLIP vision-language models (e.g., ALIGN, Florence). - Results on adversarial training (Appendix C.8) are superficial; no defense strategies (e.g., randomized smoothing) are discussed. Supplementary Material: - Reviewed Appendices A-D. Key points: - Appendix A: Comparison with prior work (Table 3) clarifies the distinction between "super transferability" and prior UAP methods. - Appendix C.9: Qualitative analysis of UAP patterns (Figure 7) is intriguing but lacks quantitative correlation with ASR. - Appendix D: X-TransferBench is under-documented (e.g., no code/license details). 
Relation To Broader Scientific Literature: - Extends prior work on UAPs (Moosavi-Dezfooli et al., 2017) and CLIP adversarial attacks (Zhou et al., 2023; Zhang et al., 2024) by addressing multi-dimensional transferability. - Connects to bandit algorithms (Auer, 2002) but does not leverage recent advances in contextual bandits for non-stationary environments. Essential References Not Discussed: - Multi-modal adversarial attacks: Concurrent work on GPT-4V/DALL-E 3 adversarial attacks (e.g., "ImgTrojan: Jailbreaking Vision-Language Models with ONE Image") is not cited. - Theoretical analysis of UAPs: The paper does not discuss recent theoretical frameworks for UAP transferability (e.g., "On the Universal Adversarial Perturbations for Efficient Data-Free Robustness Evaluation" (ACL 2023)). Other Strengths And Weaknesses: - Strengths: - Originality: First to formalize "super transferability" and propose a scalable solution. - Practical Impact: X-TransferBench fills a gap in standardized UAP evaluation. - Weaknesses: - Clarity: Some equation symbols are inconsistently defined. - Significance: While results on existing CLIP/VLMs are strong, applicability to newer models is unclear. Other Comments Or Suggestions: Please provide more examples similar to Figure 3. Questions For Authors: 1. Theoretical Justification: Can you provide regret bounds or convergence guarantees for the UCB-based surrogate selection strategy? *A theoretical analysis would strengthen the method’s credibility.* 2. Search Space Generalization: How does X-Transfer perform if the search space contains only ViT or ResNet architectures? *This would test the robustness of dynamic selection to model homogeneity.* 3. Defense Evaluation: Have you evaluated X-Transfer against state-of-the-art defenses (e.g., randomized smoothing or feature denoising)? *This would clarify the practical threat model.* Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and insightful comments. Please find our responses to your questions below. **Q1:** Mechanism behind the cross-task transferability on VLMs **A1:** X-Transfer exploits a common weakness in CLIP image encoders, even when they are trained on different datasets and model architectures. Our adversarial objective is intentionally generic as it induces **meaningless embeddings**. Since many VLMs adopt CLIP or its variants as their image encoders, the UAPs generated by X-Transfer can therefore induce similar **nonsensical embeddings** in these models. This highlights a shared vulnerability across a wide range of CLIP encoders. **Q2:** Theoretical analysis **A2:** The focus of this work is super transferability, not MAB or UCB regret analysis. We believe that super transferability is a novel topic, and as such, there is currently no established theoretical framework. However, our empirical results are solid and comprehensive, providing strong support for our claims on super transferability. The geometry-based theoretical analysis of transferability between classifiers (Tramèr et al., 2017), as well as the ACL23 work mentioned by the reviewer, could serve as valuable starting points for future theoretical investigations into super transferability. **Q3:** X-TransferBench and supplementary material **A3:** The sample code is included in the accompanying ZIP file. We will adopt the MIT License in the published version. Please refer to Figure 2(a) for the correlation with ASR. **Q4:** Choice of sampling method and generic adversarial objective **A4:** Our main contribution lies in the MAB-based dynamic selection framework and the design of a reward metric. The sampling method itself is not the primary factor in achieving super transferability. It is the reward metric that plays a critical role (see Section 4.2).
In Section 3.3, we stated that UCB is our default choice due to its simplicity, and emphasised that other sampling strategies (including Thompson sampling) are plausible and fully compatible with our framework. The generic objective (Eq. 3-5) is essential to ensure compatibility across different architectures, embedding sizes, and pre-training objectives of surrogate models. While the objectives may appear simple, they are crucial for achieving super transferability; see Appendix C.4 and Table 11. **Q5:** New experiments (TUAP, step size, other and newer models, randomised smoothing, and search space) **A5:** (1) We tested 5 additional TUAPs (average ASR 77.6%) with targets sampled from AdvBench and compared their performance with the 10 TUAPs (average ASR 75.8%). The results are consistent. (2) We follow existing works and set $\epsilon$ to 12/255. The attribution of the step size is a typo: it is our own hyperparameter choice, not one taken from the baselines. This does not affect our analyses, since $\epsilon$ is the key factor in ensuring a fair comparison for $L_\infty$ perturbations. (3) Please find the results below for ALIGN, Florence and newer models. Note MetaCLIP-v1.2 ViT-H/14 was released in Dec 2024, and SigLIP-v2 ViT-B-16 was released in Feb 2025.
| Model | Task | ETU | C-GPC | Base | Mid | Large |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| ALIGN | ZS | 57.1 | 52.9 | 68.7 | 65.7 | **70.6** |
| | IR | 45.3 | 38.7 | 56.7 | 59.8 | **63.8** |
| | TR | 52.9 | 49.6 | 66.5 | 66.5 | **71.0** |
| MetaCLIP-v1.2 | ZS | 36.0 | 33.9 | 61.0 | 64.1 | **70.5** |
| | IR | 15.2 | 20.3 | 40.5 | 43.4 | **48.4** |
| | TR | 27.7 | 29.6 | 48.8 | 55.0 | **61.9** |
| SigLIP-v2 | ZS | 55.3 | 49.0 | 66.7 | 69.3 | **72.1** |
| | IR | 27.8 | 34.2 | 51.4 | 56.2 | **59.1** |
| | TR | 49.4 | 46.3 | 59.7 | 65.3 | **67.4** |
| Florence-v2 | COCO-IC | 24.4 | 15.2 | 29.4 | 29.7 | **33.0** |
| | Flicker-30k-IC | 25.4 | 20.5 | 28.5 | 28.5 | **31.2** |

While we cannot guarantee complete effectiveness against newer models, we believe our work offers novel insights into CLIP/VLM vulnerabilities and makes valuable contributions to the community. These findings provide important foundations for developing safer, more robust models in the future. (4) We tested our UAP against the smoothed ImageNet classifier provided by Cohen et al., 2019. As expected, the classifier demonstrates robustness to $L_\infty$ and $L_2$ perturbations, consistent with its certified guarantees. However, it is not robust to our patch-based UAP, as these perturbations lie outside the certified radius. This finding highlights an important gap and, we believe, serves as a strong motivation for future works. (5) We evaluated the generalisation capability of our method by testing with a ViT-only base search space (average ASR of 69.3%) and comparing it to the original mixed-architecture base search space (average ASR of 69.2%). The results show no significant difference. **Q6:** Others **A6:** We will fix the typo, add more examples of Fig 3, and add references to related works mentioned by the reviewer.
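For readers unfamiliar with the UCB-based surrogate selection discussed in this review thread, a minimal sketch of picking k of N arms (surrogate encoders) by Upper Confidence Bound follows. All function names, the reward model, and constants here are illustrative; the paper's actual reward metric and framework are not reproduced.

```python
import math
import random

def ucb_select(counts, rewards, t, k=1, c=1.0):
    """Pick k arms (e.g., surrogate indices) by Upper Confidence Bound.

    counts[i]: times arm i was played; rewards[i]: running mean reward.
    Unplayed arms get an infinite score so each arm is tried at least once.
    """
    def score(i):
        if counts[i] == 0:
            return float("inf")
        return rewards[i] + c * math.sqrt(math.log(t) / counts[i])
    return sorted(range(len(counts)), key=score, reverse=True)[:k]

# toy bandit loop: arm 2 has the highest true mean and should be played most
true_means = [0.2, 0.5, 0.8]
counts, means = [0, 0, 0], [0.0, 0.0, 0.0]
random.seed(0)
for t in range(1, 2001):
    (i,) = ucb_select(counts, means, t, k=1)
    r = true_means[i] + random.gauss(0, 0.1)   # noisy reward for the chosen arm
    counts[i] += 1
    means[i] += (r - means[i]) / counts[i]     # incremental mean update
assert counts[2] == max(counts)
```

Other index policies (e.g., Thompson sampling) drop in by replacing `score`, which is consistent with the rebuttal's point that the sampling rule is interchangeable while the reward signal does the heavy lifting.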
Summary: This paper introduces an algorithm to find universal adversarial perturbations for CLIP-like image encoders. The vulnerability works across domains, tasks, and samples. The main algorithm follows standard methods for finding adversarial perturbations: finding a perturbation whose \( L_{\infty} \) norm is smaller than a given \( \epsilon \) while also optimizing the attack objective so that the CLIP encoding becomes similar to a text encoding for a certain malicious input text. To make this perturbation universal, the authors use multiple encoders. The main novelty of the attack lies in the efficient use of a large zoo of \( N \) CLIP-like image encoders for finding a universal perturbation. To this end, the authors use a reward-based strategy to select \( k \ll N \) encoders at each optimization step of the attack. The experiments are conducted on several tasks, including zero-shot classification, image-text retrieval, image captioning, and VQA, demonstrating the versatility of the attack. The most interesting part of the paper is the finding that the vanilla version, without an ensemble of models (often a requirement for finding a universal perturbation), works quite well for classification and image-text retrieval tasks and actually outperforms several existing methods by large margins. Another interesting insight from the paper is the appearance of text-like artifacts on perturbed images. Overall, this is an excellent paper (the kind of paper I would love to write). It is easy to understand, makes claims backed by ample empirical evidence, demonstrates the power of simpler methods, and has an excellent evaluation setup and experiment/analysis. Claims And Evidence: The claims in this paper are clear and supported by ample empirical evidence. The main claim is the introduction of an efficient "super" UAP that works across tasks, models, and domains. This is backed by comprehensive experiments. 
Methods And Evaluation Criteria: The method and evaluation criteria presented in the paper align well with the problem of finding UAPs. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: I did not evaluate the experimental design by running any code. However, I compared the design with existing UAP methods and found it to be sound. Moreover, the results make sense, and the appearance of buildings and text on the input images indicates the validity of the experiments. Supplementary Material: I have skimmed through the Supplementary Material and looked more closely at C{5, 7, 9}. Relation To Broader Scientific Literature: The main contribution for me is its finding that non-ensemble-based UAPs with standard methods work quite well. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: The paper is clear and presents a novel insight into the vulnerability of VLMs. Other Comments Or Suggestions: No. Questions For Authors: The finding that the "vanilla version without an ensemble works well" has important implications. It suggests that a single-model attack can be surprisingly effective, challenging the common assumption that an ensemble is necessary for universal perturbations. Does this raise questions about whether previous attacks on CLIP were executed optimally or if they relied on unnecessarily complex setups? It may also indicate that CLIP-like models share inherent vulnerabilities that can be exploited more easily than previously thought. The appearance of text on perturbed images is an interesting phenomenon. CLIP has been shown to be deceived by directly imposing text on input images, so this could suggest that the perturbations exploit similar weaknesses in the model’s reliance on textual features within visual inputs. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely appreciate your review, valuable feedback, and kind recognition of our work. Below are our responses to your questions. --- **Q1:** The common assumption that an ensemble is necessary. **A1:** We agree that it is indeed surprising that the vanilla version of X-Transfer, without an ensemble, performs comparably to several strong baselines. However, we believe that the ensemble mechanism is crucial for achieving super transferability, as it allows the method to exploit shared vulnerabilities across diverse surrogate models. Regarding the complex setups used in prior work, we believe that they may be well-justified for the specific task those studies focused on. However, for the broader goal of super transferability, including cross-task scenarios, we find that a simple and generic adversarial objective is not only sufficient but also necessary. Notably, our experiments (Appendix C.4) demonstrate that surrogate scaling does not improve transferability when applied to the baseline (ETU), further reinforcing the motivation behind our design choices. --- **Q2:** Appearance of text features in perturbation **A2:** We would like to clarify that we did not use any explicit objective to introduce textual features or characters into the perturbation. Rather, these textual-like patterns **emerged naturally** as a byproduct of the optimisation process. The goal of X-Transfer is to exploit common vulnerabilities in CLIP encoders, and we believe our findings indeed demonstrate that these textual features reflect a shared vulnerability across CLIP variants. Understanding why such patterns emerge and how they interact with the multimodal representations of CLIP is, in our view, an interesting direction for future research.
Evaluating Neuron Explanations: A Unified Framework with Sanity Checks
Accept (poster)
Summary: This paper introduces a systematic framework designed for evaluating neuron explanations. The core of this framework lies in its ability to quantitatively measure how well a given explanation aligns with neuron behavior across different samples. The framework operates on a few key components. First, for any explanation *t* (which could be textual, visual, or any other format), it defines $[c_t]_i$ as a measure that indicates the presence of concept *t* in sample *i*. This presence is potentially determined through methods like crowdsourcing. Second, for a neuron *k*, $[a_k]_i$ represents its activation value on sample *i*. The alignment between $[a_k]_i$ and $[c_t]_i$ is then framed as a binary classification task. In this task, perfect matching suggests that explanation *t* accurately describes neuron *k*'s behavior. The framework also formalizes various evaluation metrics from previous research, such as measuring the **Recall** or **IoU (Intersection over Union)** between $c_t$ and $a_k$. Furthermore, the paper introduces two meta-evaluation approaches aimed at assessing the reliability of these evaluation metrics. The first is the **Label Manipulation Test**. This test involves artificially setting $[c_t]_i$ to 0 for samples that originally contained the concept. Reliable metrics should then show decreased performance compared to the initial perfect score. Conversely, when $[c_t]_i$ is set to 1 for samples that lacked the concept, scores should similarly decrease from their initial perfect score. The paper conducts these tests both theoretically and empirically for various metrics. The second approach is the **Neurons with Known Concepts Test**, which uses the final layer neurons of classifiers with known explanations as ground truth. The framework computes scores for all neuron-concept pairs, including both correct pairs and random pairings.
The expectation is that reliable evaluation metrics should consistently yield higher scores for correct neuron-concept pairs when compared to incorrect ones. The paper concludes by suggesting some metrics as the most reliable options in its framework. ## update after rebuttal I kept my score as the authors addressed the issues I raised. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the theoretical analysis in Section D of the appendix, which is related to the **Label Manipulation Test** meta-evaluation. No issue found. Experimental Designs Or Analyses: The experiment on the **Label Manipulation Test** meta-evaluation is valid. However, its contribution to the paper's claims is very minimal: the theoretical analysis can fully support those claims without the need for the experiments. Supplementary Material: I checked sections D, E, F. Relation To Broader Scientific Literature: This paper formalizes previous evaluation metrics in a unified framework. Essential References Not Discussed: No Other Strengths And Weaknesses: # Strengths 1. The paper addresses a very crucial problem. Evaluation in machine learning interpretability is currently inconsistent, with each work often employing unique, new evaluation methods. As a result, comparing different methods is very hard. 2. The paper's formalization is simple yet comprehensive, effectively encompassing the evaluation setups used in many other works. 3. The proposed meta-evaluation metrics are both logical and a valuable contribution to the field. 4. The paper is very well written and easy to follow. # Weaknesses 1. The paper neglects to discuss the labeling costs associated with each metric, a significant consideration. For example, while the paper criticizes the use of **recall**, it's important to recognize that **recall** can reduce labeling costs because it only requires concept labels ($[c_t]_i$) for samples that highly activate a neuron.
Given that calculating $c_t$ typically involves crowdsourcing or LLMs (the most expensive part of the evaluation process), **recall** offers a practical advantage. Furthermore, crowdsourcing labels for $c_t$ across *all* samples, where the label is predominantly 0 (due to concepts appearing in only a small percentage of samples), can lead to lower-quality labels due to extreme data imbalance (labelers may become inattentive and default to labeling most samples as 0). 2. The paper's meta-evaluation using **Neurons with Known Concepts** lacks proper citation. Soltani Moakhar et al. [1] present a very similar method for evaluating interpretability, employing the final layer of a ResNet (where neuron/concept pairs are known) for evaluation purposes. While this paper uses it for meta-evaluation (distinguishing it somewhat), it is still relevant prior art to acknowledge. 3. The paper's novelty is limited. While not necessarily a weakness, a completely novel and unintuitive evaluation metric might face resistance to adoption within the field. [1]: Soltani Moakhar, A., Iofinova, E., Frantar, E., & Alistarh, D. SPADE: Sparsity-Guided Debugging for Deep Neural Networks. ICML 2024. Other Comments Or Suggestions: 1. The addition of examples in Appendix F (e.g., a sample dataset input, a concept present in that input, corresponding $c_t$ and $a_k$ values for a neuron) would enhance understanding. 2. Also, there appears to be a writing issue on Line 1636 in the appendix that needs to be addressed. Questions For Authors: . Code Of Conduct: Affirmed. Overall Recommendation: 4
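The binary-classification framing summarized in this review can be made concrete with a small sketch computing alignment metrics between a binary concept vector $c_t$ and a binarized activation vector $a_k$. The orientation of recall/precision below (recall conditions on the neuron firing, so it only needs concept labels on highly activating samples) is one plausible convention matching the review's labeling-cost point, not necessarily the paper's exact definitions.

```python
import numpy as np

def neuron_metrics(c_t, a_k):
    """Alignment metrics between a binary concept vector c_t and a
    binarized activation vector a_k, treating the match as a binary
    classification problem. Orientation of recall/precision is an
    illustrative convention, not the paper's exact definition.
    """
    c_t, a_k = np.asarray(c_t, dtype=bool), np.asarray(a_k, dtype=bool)
    tp = np.sum(c_t & a_k)
    recall = tp / max(np.sum(a_k), 1)       # concept present given neuron fires
    precision = tp / max(np.sum(c_t), 1)    # neuron fires given concept present
    iou = tp / max(np.sum(c_t | a_k), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return {"recall": recall, "precision": precision, "iou": iou, "f1": f1}

m = neuron_metrics([1, 1, 0, 0, 1], [1, 1, 1, 0, 0])
# tp = 2, |a_k| = 3, |c_t| = 3, |union| = 4
assert abs(m["recall"] - 2/3) < 1e-9 and abs(m["iou"] - 0.5) < 1e-9
```

Note that recall alone stays at its maximum whenever every activating sample contains the concept, even if the concept is far broader than the neuron's behavior, which is exactly the kind of failure the proposed sanity checks are designed to expose.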
Rebuttal 1: Rebuttal: Thank you for the review and positive feedback! Your summary is very well written and highlights a strong understanding of our work. We would like to address your concerns below: **Weakness 1 - Labeling Cost** This is a great point. It is true that a significant reason for the use of recall is that it is cheaper to evaluate, as we only need to evaluate $c_{ti}$ on highly activating inputs, i.e. where $a_{ki} = 1$. In contrast, evaluating most other metrics requires knowledge of the full $c_t$. One way to address this issue is to use a combination of metrics, for example evaluating F1-score by combining a crowdsourced evaluation of Recall with a generative model based evaluation of Precision as we suggest in Line 897 of the Appendix. As you mentioned, crowdsourced evaluation of the entire $c_t$ will likely give noisy results as most inputs do not contain the concept. To avoid this, another approach is to oversample highly activating inputs similarly to Top-and-random sampling. This can be done without failing the extra labels test if the proper importance sampling correction is applied to correct for the bias introduced by the sampling. However, the effective sampling/user study design for non-recall metrics is a rather large and complex topic that does not fit within the scope of the current paper, but it is something we are actively looking into and we are confident that there are ways to evaluate the other metrics with a cost not much higher than recall evaluation. We will include discussion of this under limitations/future work. **Weakness 2 - Missing citation** Thank you for pointing this out. We will cite [1] as well as other papers [2, 3] conducting similar studies based on neurons with known concepts as suggested by Reviewer 7Dju. [1] Soltani Moakhar, A., Iofinova, E., Frantar, E., & Alistarh, D. SPADE: Sparsity-Guided Debugging for Deep Neural Networks. ICML 2024.
[2] Schwettmann et al, "FIND: A Function Description Benchmark for Evaluating Interpretability Methods", 2023. [3] Shaham et al, "A Multimodal Automated Interpretability Agent", 2025. **Weakness 3 - Limited novelty i.e. not introducing new metrics** Similar to how you acknowledge this as “not necessarily a weakness”, we do not think this is a weakness but a conscious decision, as we thought it would be a more valuable contribution to the field to rigorously analyze existing and standard statistical metrics instead of focusing on creating new ad-hoc metrics. This way our contributions are more clearly defined, and designing metrics particularly to do well on our tests risks overfitting them to these specific tasks, or worse, tuning the tests themselves in a way that our new metrics would do well. **Other comments or suggestions 1 - Examples** We have included an example with dataset inputs and concept values in Appendix Figure B.1 to improve clarity, similar to your suggestion. Please let us know if you would find additional examples useful or have any additional suggestions. **Other comments or suggestions 2 - Typo** Thank you for pointing this out; we have corrected the typo in Line 1636 to: “… we defined the “correct” concept $t_k$ as the …” **Additional Experiments:** In response to other reviewers' comments, we have conducted additional experiments such as evaluating novel combinations of metrics, showing the robustness of our results to the choice of $\epsilon$, and showcasing that our results using a random subset as $c_t^-$ are similar to using a real semantic subset. We have also included a new figure showcasing the Missing and Extra Labels tests. These results are available at https://drive.google.com/file/d/1OHMxyMW1KVIzxd2Rd_Hx34qIjVecUVmo/view. **Summary** We are happy to hear that you find our work addresses a crucial problem and produces logical and valuable contributions to help solve it.
We believe we have addressed your main remaining concerns regarding labeling cost and missing citations and will include a discussion of these in the revised manuscript. Please let us know if you have any remaining questions and we would be happy to discuss them further. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. All of my concerns have been addressed. As a suggestion, I think it would be helpful to briefly mention the labeling cost issue of non-recall metrics in the conclusion when recommending those metrics. I think the paper is a valuable addition to the interpretability community and I hence keep my score. --- Reply to Comment 1.1.1: Comment: Thanks again for the response and your thoughtful and positive comments! We will include the labeling cost in the conclusion of the revision.
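The importance-sampling correction alluded to in the rebuttal above (oversampling highly activating inputs while keeping estimates unbiased) can be illustrated with a Horvitz-Thompson-style estimator. All names, inclusion probabilities, and numbers below are illustrative, not from the paper.

```python
import random

def importance_estimate(labels, probs, sampled_idx):
    """Unbiased estimate of the mean concept label when sample i was
    included with probability probs[i]: each observed label is weighted
    by 1/probs[i] (Horvitz-Thompson estimator).
    """
    n = len(labels)
    return sum(labels[i] / probs[i] for i in sampled_idx) / n

# toy setup: oversample the 'highly activating' first half (p=0.9),
# rarely sample the rest (p=0.1); true mean label is 0.3
random.seed(1)
labels = [1] * 30 + [0] * 70
probs = [0.9] * 50 + [0.1] * 50
estimates = []
for _ in range(2000):
    idx = [i for i in range(100) if random.random() < probs[i]]
    estimates.append(importance_estimate(labels, probs, idx))
mean_est = sum(estimates) / len(estimates)
assert abs(mean_est - 0.3) < 0.02   # weighted estimate recovers the true mean
```

Without the 1/p weights, the same biased sample would overestimate the concept frequency, which is why the naive oversampling scheme would fail the extra-labels test while the corrected one need not.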
Summary: This paper proposes NeuronEval for the meta-evaluation of input-based explanation metrics. Given a textual explanation of an input (and resultant activation), a variety of metrics have been proposed to evaluate how faithfully the description describes the neuron (or “any scalar function of network inputs”). NeuronEval unifies 19 existing explanation evaluation methods (and 18 different metrics) under 1 framework, and characterises differences between them by varying the metric, the source of the concept vector, granularity of activation vectors, and probing dataset. Motivated by the intuition that a reliable metric should be “minimal” (specific) and “exact” (not too general), they propose 2 necessary (but not sufficient) sanity checks: the missing labels test and extra labels test. They find that most metrics fail these tests (lack in sensitivity and fidelity); point out which metrics reliably pass the tests; provide guidelines on reliability desiderata for future metrics in explanation evaluation. Claims And Evidence: The claims in this work are that meta-evaluation (of XAI evaluation metrics) is necessary to compare and contrast their reliability, to guide the development and usage of more reliable metrics. Such claims are supported by the introduction and application of NeuronEval, a framework which assesses the reliability, sensitivity and robustness of 18 metrics (across 19 methods) through sanity checks of missing and extra labels. The authors present evidence that the majority of metrics fail or perform suboptimally on one/both tests, which speaks to the importance of this investigation. Methods And Evaluation Criteria: This paper proposes NeuronEval, a framework which unifies major XAI evaluation metrics via plug-and-play, as well as 2 associated sanity checks for the fidelity and sensitivity of various metrics. 
The framework is sufficiently general and expressive; the sanity checks are well-motivated and consistent with existing intuition that optimal explanations should be minimal and descriptive of the input-activation pair, and that faithful evaluation metrics (of these explanations) should be sensitive to changes in model response. The proposed framework and sanity checks are applicable for the task of meta-evaluating the faithfulness of XAI metrics. Theoretical Claims: N/A: there are no theoretical claims in this work. Experimental Designs Or Analyses: The experimental details (e.g. specifics of metrics, concept vector, activation vector, probing dataset) for meta-evaluation are expounded upon throughout the paper and supplement. The authors elaborate on how NeuronEval admits different modalities (image and text), network units (e.g. neuron, channel, scalar function of inputs, SAE features, CBM neurons, linear probes), different explanation methods and metrics – I find their descriptions clear and sufficient. They further elaborate on specific setups for the missing/extra labels sanity checks, and discuss 5 possible outcomes and provide hypotheses for failure cases (e.g. concept imbalance, biased sampling, using generative models in the evaluation pipeline). I find their experimental design reasonable and sound. Supplementary Material: I have reviewed all sections of the appendix. This includes section A, which summarises the definitions and details of various evaluation metrics; section B, which discusses limitations / failure modes of existing evaluation frameworks to motivate the 2 proposed sanity checks; section C, examining the effects (on accuracy) of missing/extra labels tests under a toy setting of matching concept imbalance; section D, analytical solutions for the population statistics of metrics under the sanity tests; section E, additional hyperparameter ablations and checks; section F, detailed results supplement.
Relation To Broader Scientific Literature: This work addresses an important issue in interpretability research: the lack of a structured framework / unified basis to evaluate and compare existing methods. It engages well both with established work in explainable AI (XAI) and newer work in mechanistic interpretability. The framework is quite general and contributes to a unified understanding of XAI: it is compatible with 19 different input-based explanation techniques, 18 metrics; it evaluates explanations of network “units”, from a single neuron, single channel to scalar functions of network inputs, e.g. sparse autoencoder (SAE) features, concept bottleneck model (CBM) neurons, linear probes. Essential References Not Discussed: This paper could engage more deeply with previous work in XAI robustness analysis – RISE [1], Sobol [2], ROAR [3], MuFidelity [4] – which have exhibited similar intuitions regarding the desired sensitivity of XAI methods to perturbations. [1] Vitali Petsiuk, Abir Das, and Kate Saenko. RISE: randomized input sampling for explanation of black-box models. In BMVC, pp. 151. BMVA Press, 2018. URL http://bmvc2018.org/contents/papers/1064.pdf. [2] Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, and Thomas Serre. Look at the variance! efficient black-box explanations with sobol-based sensitivity analysis. Advances in neural information processing systems, 34:26005–26014, 2021. [3] Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. A benchmark for interpretability methods in deep neural networks. NeurIPS, 32, 2019. [4] Umang Bhatt, Adrian Weller, and José MF Moura. Evaluating and aggregating feature-based model explanations. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pp. 3016–3022, 2021. Other Strengths And Weaknesses: In other sections, I have elaborated at length on the positives of this work.
Regarding negatives, I find the originality of meta-evaluating XAI methods and metrics based on their sensitivity and faithfulness interesting but not completely novel (see “essential references”). That said, this work is comprehensive, clarifying/unifying and a worthwhile contribution to interpretability research. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the detailed review and positive feedback! **Re: Additional Related Work:** Thank you for pointing out these references. While these references focus on a very different type of XAI, in particular local input-importance evaluations, and as such cannot be applied to our setting, they contain important ideas about reliable evaluation of explanations and we will cite and discuss them in the related work (Appendix B.4) as follows: The field of input-importance explanations has seen an evolution in the metrics used, with initial focus on finding the features that humans think are important. Later metrics such as deletion and insertion proposed by [1] allow for more principled evaluation of the explanation fidelity, i.e. whether it actually matches what the model does in vision models, and [2] extends the deletion metric to natural language settings. [3] proposes the Remove And Retrain (ROAR) framework as an alternative method for evaluating the quality of input-importance explanations by evaluating whether a model retrained on data without the most important pixels can still solve the task. [4] propose additional checks, such as measuring whether an explanation method has high sensitivity, with the intuition that similar inputs should have similar explanations. Overall, we think that this line of research [1]-[4] does not reduce the novelty of our contribution, as they are focused on a different problem and produce no actionable insight for our setting of global neuron-level explanations. In addition, these papers are mostly focused on introducing new evaluation metrics, instead of conducting meta-evaluation between existing metrics, which is the focus of our work.
**Additional Experiments:** In response to other reviewers' comments, we have conducted additional experiments such as evaluating novel combinations of metrics, showing the robustness of our results to the choice of $\epsilon$, and showcasing that our results using a random subset as $c_t^-$ are similar to using a real semantic subset. We have also included a new figure showcasing the Missing and Extra Labels tests. These results are available at https://drive.google.com/file/d/1OHMxyMW1KVIzxd2Rd_Hx34qIjVecUVmo/view. Please let us know if you have additional questions and we would be happy to discuss them further! If not, we hope you consider updating your score, as you have noted our work is comprehensive, clarifying/unifying and a worthwhile contribution to interpretability research with few weaknesses. **References:** [1] Vitali Petsiuk, Abir Das, and Kate Saenko. RISE: randomized input sampling for explanation of black-box models. In BMVC, pp. 151. BMVA Press, 2018. URL http://bmvc2018.org/contents/papers/1064.pdf. [2] Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, and Thomas Serre. Look at the variance! efficient black-box explanations with sobol-based sensitivity analysis. Advances in neural information processing systems, 34:26005–26014, 2021. [3] Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. A benchmark for interpretability methods in deep neural networks. NeurIPS, 32, 2019. [4] Umang Bhatt, Adrian Weller, and José MF Moura. Evaluating and aggregating feature-based model explanations. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pp. 3016–3022, 2021.
They also presented new results to confirm the functional robustness of the 2 proposed sanity checks (missing and extra label tests); they further performed a granular ablation study of whether/how using different (combinations of) metrics, (types of) subsets, epsilon values, and concept activations might influence performance on the tests. I find that this work has been strengthened by the review/rebuttal process and raise my score from a 3 -> 4. --- Reply to Comment 1.1.1: Comment: Thanks again for the insightful review and participation in the discussion! We are happy to hear you find the submission stronger after the rebuttal.
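As a concrete illustration for readers following this thread, the two sanity checks discussed above can be sketched in a few lines. The toy metric, label sets, and subset/superset sizes below are our own illustrative assumptions, not the authors' implementation:

```python
import random

def recall(pred, truth):
    # fraction of truly activating inputs that the explanation also labels
    return len(pred & truth) / len(truth) if truth else 0.0

def missing_labels_test(metric, truth, universe, eps=0.001, drop=0.5):
    """Pass if the metric penalizes a too-specific explanation,
    i.e. a strict subset of the true concept labels."""
    subset = set(random.sample(sorted(truth), int(len(truth) * (1 - drop))))
    return metric(subset, truth) < metric(truth, truth) - eps

def extra_labels_test(metric, truth, universe, eps=0.001, add=0.5):
    """Pass if the metric penalizes a too-generic explanation,
    i.e. a strict superset of the true concept labels."""
    extras = set(random.sample(sorted(universe - truth), int(len(truth) * add)))
    return metric(truth | extras, truth) < metric(truth, truth) - eps

universe = set(range(1000))
truth = set(range(100))  # concept activates on 10% of inputs
print(missing_labels_test(recall, truth, universe))  # True: recall drops to 0.5
print(extra_labels_test(recall, truth, universe))    # False: a superset still scores 1.0
```

This matches the thread's conclusion that a metric like recall detects missing labels but cannot detect extra labels.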
Summary: The paper proposes NeuronEval, a unified meta-evaluation formalism for assessing neuron explanation evaluation metrics. It reformulates 19 commonly used metrics under a shared mathematical notation. The authors assess the reliability of these metrics using two diagnostic tests—missing labels and extra labels—and analyze which metrics pass these tests theoretically, empirically, and on neurons with known concepts across different models and modalities. The analysis identifies a subset of metrics that consistently align with neuron-concept correspondence. Claims And Evidence: Overall, the paper and supplementary materials present a comprehensive theoretical framework, supported by empirical and theoretical evaluations. The unification of evaluation metrics is well-supported. However, claims about metric reliability are weakened by concerns about the design of the sanity tests and fixed parameter choices, which limit the generality of the conclusions. Please see more details below. Methods And Evaluation Criteria: The unification of evaluation metrics under NeuronEval is a valuable and timely contribution. However, the Missing and Extra Labels sanity tests rely on simplistic manipulations of concept labels (random removal or addition) that do not reflect realistic explanation failures, such as semantic drift, polysemanticity, or context-specific activations. Moreover, the use of a fixed epsilon threshold (0.001) ignores their differing scales and sensitivities, potentially skewing the results. Theoretical Claims: - Eq. 9 lacks an expectation over the random subset c^{-}_{t}, which would be statistically more rigorous, as the sanity test result would otherwise depend on a single random choice - The effect of the epsilon threshold (0.001) is not analyzed, even though it directly affects whether a metric passes or fails the tests. Metrics with small score ranges may unfairly fail. 
- Failure modes are limited to Missing and Extra Labels; other important failures like contextual specificity, polysemanticity, and compositions of concepts are not explored. Thus, while the breadth of experiments is good, the design omits key real-world neuron-concept dynamics, limiting the validity of conclusions. Experimental Designs Or Analyses: Overall, the experimental section is comprehensive, covering both theoretical and empirical analyses across neurons from different modalities. However, some key information is missing: - The total number of neurons analyzed has not been reported. - The source of concept labels c_t is unspecified—whether human-annotated, model-generated, or otherwise. - Both omissions limit the ability to assess the validity and reproducibility of the findings. Supplementary Material: The supplementary material contains useful additional formalism and ablations. Relation To Broader Scientific Literature: The paper contributes to the growing literature on mechanistic interpretability and neuron explanation evaluation. Essential References Not Discussed: In Section 5, it would be useful to refer to existing analyses of neurons with ground-truth descriptions, such as [1, 2]. [1] Schwettmann et al, "FIND: A Function Description Benchmark for Evaluating Interpretability Methods", 2023. [2] Shaham et al, "A Multimodal Automated Interpretability Agent", 2025. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: - How many neurons were used in total in each of the experiments? - How were the labels c_t generated in the experiments? - How sensitive are the results to the choice of epsilon? Have you tested different epsilon values? - Why model incorrect concepts as random subsets/supersets? Did you consider more semantically plausible alternatives (e.g., polysemanticity, partial concepts)?
- How would your tests handle neurons that correctly activate only for specific aspects of a concept (e.g., "flying bird" vs. "bird")? - Did you explore cases where non-activating inputs may still contain the concept? - Did you analyze empirically whether some metrics perform better in certain modalities (e.g., vision vs. language), and if so, how do you plan to address these differences? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the detailed review! We have conducted extensive additional experiments available at: https://drive.google.com/file/d/1OHMxyMW1KVIzxd2Rd_Hx34qIjVecUVmo/view, see in particular Tab G5-G7 as these experiments were conducted to address your questions. **1. Realistic failure modes** We argue that most real-world explanation failures are closely connected to one of two failure modes, and that these failure modes are directly captured by our tests: - FM-A) Explanation **too generic**: This means the explanation concept is a superset of the “true” neuron concept, e.g. describing a neuron as “animals” when it only activates on dogs. Our Extra Labels Test captures whether a metric can detect this failure mode. - FM-B) Explanation **too specific**: Explanation concept is a subset of real neuron activations, i.e. describing a neuron as “black cat” when it activates on all cats. Our Missing Labels Test captures whether a metric can detect this failure mode. Also, our idea of a “concept” is very general and includes any text-based description. This means a single concept could be highly specific (e.g. “flying bird”), or a composition of simpler concepts (e.g. “water OR river”). The realistic failure modes you discussed then fit into the above two failure modes: - **Polysemanticity:** A popular model of polysemanticity is to model neuron activations as an OR of different concepts. If the explanation only captures one of these concepts, this means the explanation is too specific (FM-B). - **Context specific activations:** A context specific neuron activation means the neuron’s “true” concept is a subset of the non-context specific concept. If the explanation is not context-specific, the explanation is too generic (FM-A). - **Non activating inputs still contain the concept:** This means that the explanation concept is too generic, i.e. a superset of the “true” concept (FM-A). **2. Random vs semantic sub/supersets** Thanks for the suggestion!
Our missing/extra labels tests are a mathematical model of semantic sub/superclasses that can be run without knowledge of the semantics or relationships between concepts. To test this model, we conducted a version of the extra labels test on the ImageNet dataset where instead of randomly sampling $c^+$ we used the smallest superclass of the concept according to the WordNet hierarchy, and a version of the missing labels test where we used the largest subclass of the concept as $c^-$. As shown in the new Table G.5, the results are essentially identical to using random sub/supersets, showcasing that our random sub/superset is a good model of real semantic relationships for our purposes. **3. Importance of Epsilon Choice:** To clarify, as mentioned on lines 307-309, we normalize each metric so that their values lie in [0,1] before running the tests to ensure fair comparison. To measure the sensitivity of our results to the choice of $\epsilon$, we ran our tests with different $\epsilon$ values as shown in Tab G.7. We can see that changing $\epsilon$ by an order of magnitude does not significantly change which metrics pass. Our theoretical results also suggest our tests are robust to the choice of $\epsilon$. As we show in Tab. C.4 & D.4, the score difference of metrics that fail our tests approaches 0 as the concept/neuron activation frequency $\gamma$ approaches 0, while passing metrics retain a nonzero score difference regardless of $\gamma$. This means that for any $\epsilon > 0$ there exists $\gamma > 0$ s.t. metrics like accuracy will fail the tests. **4. Lack of expectation over $c^{-}_{t}$:** Great point. We will add an expectation over $c_t^{-}$ in Eq. 10 as this is a more principled definition. While our experimental results only used one sample of $c_t^-$, we conducted a study on how the results change with additional samples as shown in the new Tab G.6. We can see that averaging over multiple samples has little impact on the results, likely because we already average over a large number of neurons and settings.
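For concreteness, the Monte Carlo averaging described in point 4 can be sketched as follows. The F1 metric, the toy concept/activation sets, and the half-size subsets are illustrative assumptions only, not the paper's experimental setup:

```python
import random
import statistics

def f1(pred, acts):
    # F1 of predicting "activating" on label set `pred`, against activations `acts`
    tp = len(pred & acts)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(pred), tp / len(acts)
    return 2 * prec * rec / (prec + rec)

def expected_missing_labels_gap(metric, concept, acts, n_samples=200, drop=0.5, seed=0):
    """Monte Carlo estimate of E_{c_t^-}[ s(c_t^-) - s(c_t) ] over random subsets c_t^-."""
    rng = random.Random(seed)
    k = int(len(concept) * (1 - drop))
    base = metric(concept, acts)
    gaps = [metric(set(rng.sample(sorted(concept), k)), acts) - base
            for _ in range(n_samples)]
    return statistics.mean(gaps), statistics.stdev(gaps)

concept = set(range(100))    # inputs labeled with the concept
acts = set(range(10, 110))   # inputs where the neuron actually fires
mean_gap, spread = expected_missing_labels_gap(f1, concept, acts)
# the spread across samples is small, so a single sampled c_t^- is already
# close to the expectation, consistent with the rebuttal's observation
```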
**5. Number of neurons** Thanks for pointing this out. For the theoretical missing/extra labels test, we simulate 1k neurons as mentioned in line 1129. For empirical Missing/Extra labels test, we evaluated a total of 5549 neurons, and for meta-AUPRC we evaluated a total of 2989 unique neurons. **6. Source of $c_t$** We would like to clarify we have specified the source of $c_t$ in the manuscript. For missing/extra labels tests, we use ground truth $c_t$, i.e. from a labeled dataset(line 301-302). For meta-AUPRC, we test with both ground truth $c_t$ from a labeled dataset as well as pseudo-labels from SigLIP(App. F.2, lines 1764-1782). **7. Additional references** Thanks, we’ll cite and discuss these references. **8. Performance on different modalities** As our main contributions are mathematical features of the metric itself and not tied to any particular model or modality, we did not observe significant differences between the modalities. Please let us know if you have any remaining questions! --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and for providing the additional results. I encourage the authors to add the following points to their paper: - Discussions about the accordance of missing/extra label cases in real scenarios. - Results of semantic subsets/supersets vs. random ones. - According to table G.6, the results clearly depend on \epsilon. This is an important point that needs to be further addressed in the paper. The additional results addressed my concerns, and I therefore raise my score to 3: weak accept --- Reply to Comment 1.1.1: Comment: Thank you for the response and valuable feedback! We will include the suggested parts in the updated manuscript as they are important and make the submission stronger. 
Regarding $\epsilon$, while the experimental results (G.6) show some change in response to $\epsilon$, we believe this effect is small, as for the most part it does not affect which metrics pass or do not pass the test despite a two-order-of-magnitude change in $\epsilon$. More importantly, theoretically we do not think the $\epsilon$ choice is important. An alternative and perhaps more fundamental definition for our theoretical tests could be as follows: A metric passes the test if $\exists \epsilon > 0$ s.t. $\forall \gamma > 0, \Delta s < -\epsilon$, where $\Delta s$ is calculated with the theoretical missing labels test defined in Appendix C and $\gamma$ is the concept activation frequency. This definition is purely a mathematical property of the metric without any hyperparameters or ties to any setting, and as far as we know matches our theoretical test results of Table C.1 and C.2 that use a fixed epsilon. However, we can only prove a metric passes this version of the test if we have an analytical solution (which we have for binary classification metrics in Table D.2), making this definition harder to use in practice. We can see for example that accuracy fails this test as $\lim_{\gamma \to 0} \Delta s = 0$, so no matter what $\epsilon$ we choose there exists some $\gamma$ s.t. accuracy will fail the tests. On the other hand, F1-score will pass the theoretical tests for any $\epsilon < 1/3$ regardless of $\gamma$. From this we can see that if we decrease $\epsilon$ in the current theoretical test, more metrics would pass, but the same metrics would fail again if we expand Table C.1/C.2 to the right by including smaller $\gamma$ values, while current passing metrics would not fail regardless of how many additional $\gamma$ values we test. We will include this additional discussion as well as our results about $\epsilon$ in the updated manuscript. Thank you again for your comments and rebuttal response!
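To make the limiting argument above concrete, here is a small numerical sketch. The half-size-subset failure model and the closed-form confusion-matrix entries are our illustrative assumptions:

```python
def score_deltas(gamma, keep=0.5):
    """Score difference (too-specific explanation minus correct explanation)
    for accuracy and F1, where the concept activates with frequency `gamma`
    and the too-specific explanation covers a `keep` fraction of true positives."""
    tp = gamma * keep      # positives the subset explanation still covers
    tn = 1.0 - gamma       # negatives, correctly predicted by both explanations
    delta_acc = (tp + tn) - 1.0      # correct explanation has accuracy 1.0
    f1_sub = 2 * keep / (1 + keep)   # precision is 1.0, recall is `keep`
    delta_f1 = f1_sub - 1.0          # correct explanation has F1 of 1.0
    return delta_acc, delta_f1

# as gamma -> 0, delta_acc -> 0 (accuracy's score drop vanishes, so it fails
# the test for any fixed eps), while delta_f1 stays at -1/3 independent of gamma
for gamma in (0.1, 0.01, 0.001):
    print(score_deltas(gamma))
```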
Summary: This paper focuses on evaluating neuron-level explanations in deep learning models, particularly in the context of mechanistic interpretability. While many existing methods generate textual explanations for individual neurons, a critical challenge remains: how to assess the quality and reliability of these explanations. To address this issue, the authors introduce NeuronEval, a unified mathematical framework that systematically organizes and compares 18 different evaluation metrics used in previous studies. ## update after rebuttal Claims And Evidence: "Input-based explanations" are vague. Some attribution-based methods also explain the input, but they are not related to neuron explanations. Methods And Evaluation Criteria: The assessment methods used are reasonable. Theoretical Claims: The theoretical part has some shortcomings. Experimental Designs Or Analyses: The experiments designed under the proposed framework are reasonable. Supplementary Material: The organization of the supplementary materials is adequate. Relation To Broader Scientific Literature: This framework has certain implications for the evaluation of existing black-box explanation work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** 1. The idea of using a unified theoretical framework to evaluate metrics is novel and meaningful, and this idea will have a profound impact on the design of future benchmarks. 2. The selected evaluation metrics are sufficient and can reflect the applicability of the framework. **Weaknesses:** 1. In terms of experimental analysis, the paper lacks analysis of some evaluation metrics that have not passed the corresponding tests. Such an analysis would help assess the soundness of the evaluation experiments. 2. The paper discusses the value of a single metric, but existing work generally considers the performance of a combination of several metrics.
Therefore, I think the paper needs to add some experiments to study how the combined metric fits into the proposed theoretical framework. 3. From the authors' description in Section 3, one can see that the evaluation metrics can be classified, but the experimental results only briefly mention this without further analysis. 4. The neuron explanation evaluation seems lacking if it only considers explanations of neurons in the last layer and does not consider evaluating explanations of neurons in the intermediate layers. 5. "Input-based explanations" are vague. Some attribution-based methods also explain the input, but they are not related to neuron explanations. Other Comments Or Suggestions: Please see weaknesses; I am willing to raise my score if the author can address my concerns convincingly. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the review! Please see https://drive.google.com/file/d/1OHMxyMW1KVIzxd2Rd_Hx34qIjVecUVmo/view for our new experimental results, in particular Tables G1-G4 as those experiments were conducted to address your questions. Below we address your concerns and questions in detail. > **Weakness 1 - Lacks analysis of some metrics that failed the tests** We would like to clarify that in Appendix C of the manuscript, we have conducted extensive theoretical analysis on why metrics fail the tests; and in our motivating example (Sec 4.1, table 2, Figure B.1) we showed why recall and precision fail the tests. In particular, failing the extra labels test is caused by the metric not being able to distinguish between the correct concept (pet) and a superclass (animal), while failing the missing labels test is caused by the metric not being able to differentiate the correct concept (pet) from a subclass (dog/cat). Overall, our tests are a mathematically formalized measurement of whether a certain metric can differentiate between a correct concept vs sub/superclass of it. For many metrics, this only happens on imbalanced neurons/concepts that do not activate on most inputs as we show in tables C1-C4, which indicates failure is tied to poor metric performance on imbalanced data. Please let us know if you had a specific kind of analysis in mind that you would like to see. > **Weakness 2 - Combinations of Metrics** Thank you for the suggestion! We would like to point out that our current result (Table 3, 4) already contains a combination of metrics, since F1-score is the harmonic mean of Recall and Precision: $F1 = 2/(recall^{-1} + precision^{-1})$. In the original manuscript, we also briefly discuss the idea of combining different metrics on lines 900-901 in the Appendix. Following your comment, we have conducted additional experiments evaluating more extensively whether combinations of metrics can work as a good evaluation metric. 
Specifically, inspired by F1-score, we used the harmonic mean of other existing metrics to see how they perform. Full results are shown in Tables G1-G4 in [our new results](https://drive.google.com/file/d/1OHMxyMW1KVIzxd2Rd_Hx34qIjVecUVmo/view). We can see that many combinations of metrics achieve quite a good performance, and now pass the theoretical missing/extra labels tests. For example the harmonic mean of Balanced Acc and Inverse Balanced Acc performed well. In general, our initial results indicate that combining a metric that passes the extra labels test with a metric that passes the missing labels test will pass both tests. Conversely, combining two metrics that fail the same test, such as Recall and AUC will still fail that test. > **Weakness 3 - Section 3: Evaluation Metrics can be classified** We are not sure which specific part the reviewer is referring to. Do you refer to framing neuron explanation as a binary classification problem in line 114? If so, we have discussed the details in Appendix A.2. We would be happy to discuss further if you have specific questions. > **Weakness 4 - Intermediate layer explanations** We believe there might be some misunderstanding, our results and contributions are not limited to final layer neurons. Our proposed NeuronEval framework in Sec 3 and the theoretical missing/extra labels test (Tab. 3, Fig. 2, App. C) cover explanations of *all* neurons regardless of where they are, including hidden layers neurons, final layer neurons and even non-neuron units like directions in activation space. In addition, we did show intermediate or non final-layer results in our experiments, including: - intermediate layer neurons (settings 2 & 4, Tab F1-F2), - concept neurons in an intermediate layer of a concept bottleneck model (setting 5 in Tab. F3 & setting 7 in Tab. F.6) - linear probes trained on hidden layer representations (setting 6 in Tab. F3 and setting 8 in Tab. F6). 
Our main text results (Table 3, Table 4) are averaged across settings and contain these results. > **Weakness 5 - “Input-based explanations” are vague** Thank you for pointing this out. We will change the term to *Input-based neuron explanations* to avoid confusion. > **Theoretical claims - the theoretical part has some shortcomings.** Could the reviewer expand on what specific shortcomings in the theoretical part are? We would be happy to discuss and clarify further if you have any specific comments on shortcomings. > **Summary** We believe that we have addressed your major concerns by clarifying our theoretical analysis and experiment results in the intermediate layer neurons (Weakness 1, 4), running additional experiments on combined metrics (Weakness 2) and clarifying our terminology (Weakness 5). We had a hard time understanding a few concerns (Weakness 3, Theoretical Claims), and we hope the reviewer can clarify these if concerns still remain. Otherwise, we hope you consider adjusting the rating if our response has addressed your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the author's detailed reply. I think most of the concerns have been clarified, so I decided to increase my score. --- Reply to Comment 1.1.1: Comment: Thanks again for the review and response! We are happy to hear we have addressed your concerns.
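As a small addendum to the metric-combination discussion in the rebuttal above: the harmonic-mean combination it describes is a one-liner. This is a generic sketch, not the authors' evaluation code:

```python
def harmonic_mean(*scores, eps=1e-12):
    """Combine metric scores in [0, 1]; the result is dominated by the weakest
    component, which is why pairing a metric that passes the missing labels
    test with one that passes the extra labels test can pass both."""
    return len(scores) / sum(1.0 / max(s, eps) for s in scores)

# F1 is exactly the harmonic mean of precision and recall:
f1 = harmonic_mean(1.0, 0.5)  # precision 1.0, recall 0.5 -> 2/3
```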
Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective
Accept (poster)
Summary: This paper investigates reward transfer in the context of online active RLHF. Existing investigations (based on active preference collection using on-policy sampling) have regret bounds proportional to instance-dependent properties, such as the cardinality of the action space. This investigation assumes access to imperfect reward models, and makes a novel connection between the coverage of the optimal policy due to the policies induced by these reward models, and their sub-optimality gaps with respect to the optimal policy. Leveraging this insight, the authors propose a policy selection routine to speed up convergence in the early stages. This subroutine works by forming estimates of the policy value induced by the imperfect reward models, and prescribing the policy which has the largest value. In terms of performance, the proposed algorithm is shown to exhibit sublinear regret which *does not depend on structural complexity measures*. Claims And Evidence: The claims are adequately substantiated. Methods And Evaluation Criteria: I am not entirely convinced by the choice of baselines for comparing the proposed algorithm in empirical evaluations. - Since the paper shows improved regret compared against on-policy algorithms, why not compare against one of them (e.g. XPO)? - Why only choose summarization task as a baseline? What about other baselines (e.g., the ones considered in [1])? [1] Ji, K., He, J., & Gu, Q. (2024). Reinforcement learning from human feedback with active queries. arXiv preprint arXiv:2402.09401. Theoretical Claims: I did not go over the appendices, but the claims in the paper look correct. Experimental Designs Or Analyses: The experiment section is fairly limited. I have discussed my concerns about the baselines under "Methods And Evaluation Criteria". Supplementary Material: No. Relation To Broader Scientific Literature: The contributions of this paper are very relevant to the RLHF community. 
To the best of my knowledge, reward transfer has not been investigated in the RLHF context. Furthermore, this paper makes some important theoretical observations, for example, relating the coverage of the optimal policy with respect to any policy to the sub-optimality gap of that policy. Furthermore, the proposed algorithms improve upon existing regret bounds by getting rid of instance-dependence, which is also an important contribution. These strengths contribute towards my decision to lean towards acceptance of this paper, even though the simulations can be improved. Essential References Not Discussed: N/A Other Strengths And Weaknesses: These have been discussed. Other Comments Or Suggestions: N/A Questions For Authors: I have the following questions for the authors: 1. Since we do not have access to the optimal reward $r^\star$, we may not obtain the value of the imperfect reward models with respect to the optimal policy. To circumvent this, the authors propose to estimate the value gap $J_{\beta}(\pi^\star_{r_w}) - J_{\beta}(\pi_{\rm ref})$. Instead, what if we use an estimate of the value function $J_{\beta}(\pi^\star_{r_w})$ for policy evaluation? More specifically, form an optimistic estimate of $\widehat r$, compute its value under $\pi^\star_{r_w}$, and select the source model with the largest estimated value. Why do we need to introduce $\pi_{\rm ref}$? 2. In (8), the second term is contributed by the imperfection of the source reward models. However, if the gaps are small (near-perfect source models), I wonder why it hurts the regret. Imagine having $\Delta_{\min}$ non-zero, but vanishingly small. In that case, the second term in (8) explodes. Is this a limitation of the analysis? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
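The optimistic selection the reviewer asks about in Question 1 can be written generically. The following is a hypothetical UCB-style selector over value-gap estimates, not the paper's TPS routine; all names and constants are our illustrative assumptions:

```python
import math

def select_source(estimated_gaps, counts, t, c=1.0):
    """UCB-style selection over W source reward models: pick the one with the
    largest optimistic estimate of J(pi*_{r^w}) - J(pi_ref).
    estimated_gaps[w] is the empirical value-gap estimate, counts[w] the number
    of times source w has been evaluated so far."""
    def ucb(w):
        bonus = c * math.sqrt(math.log(max(t, 2)) / max(counts[w], 1))
        return estimated_gaps[w] + bonus
    return max(range(len(estimated_gaps)), key=ucb)

# three hypothetical source models; the under-explored one gets a larger bonus
best = select_source([0.30, 0.28, 0.10], counts=[50, 2, 50], t=100)
```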
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and insightful suggestions! We address your comments as follows. ## 1. Methods And Evaluation Criteria & Experimental Designs Or Analyses ### 1.1 Comparison with other online algorithms Regarding online RLHF methods, we first point out that other existing methods (XPO, IPO [1], etc.) cannot handle transfer settings, hence they are not directly comparable. Besides, the core technique in empirical TPO is the "win rate-based source policy selection via UCB", which allows us to adapt to the best source RMs and switch back to normal online learning if there is no benefit in transferring, *without prior knowledge of task quality*. Therefore, **we view other online methods (e.g. XPO) not as competing baselines, but rather as complementary approaches that can be enhanced with our transfer learning techniques**. To make it clear, in our next revision, we will replace "DPO" (line 11, Alg. 3) with $Alg_{PO}$, which serves as a placeholder for any **P**olicy **O**ptimization oracle (e.g. DPO, XPO, IPO, etc.). This change aligns with the usage of the generic placeholder $Alg_{OL}$ in TPO (Alg. 1). To support this point, similar to Table 1 in the paper, below we report win rate comparisons when we instantiate $Alg_{PO}$ by optimizing the XPO and IPO loss, respectively, keeping other settings the same as Sec. 6 (except using learning rate 1e-5 in IPO). These results demonstrate that our transfer learning techniques can be effectively combined with different policy learners, leading to consistent performance improvements. We believe this highlights the modularity and generality of our framework, and we thank the reviewer again for prompting this valuable point. * **Empirical TPO (Alg.
3) by replacing DPO (line 11) with XPO**

||Without Transfer|Purely Exploit ROUGE|Purely Exploit T5-Large|
|:-:|:-:|:-:|:-:|
|Iter 1|52.3$\pm$1.1|53.4$\pm$0.8|50.2$\pm$0.3|
|Iter 2|51.6$\pm$1.3|54.7$\pm$1.6|49.1$\pm$1.3|
|Iter 3|52.2$\pm$1.6|53.8$\pm$2.9|49.2$\pm$1.1|

* **Empirical TPO (Alg. 3) by replacing DPO (line 11) with IPO**

||Without Transfer|Purely Exploit ROUGE|Purely Exploit T5-Large|
|:-:|:-:|:-:|:-:|
|Iter 1|52.3$\pm$1.0|50.4$\pm$1.6|49.9$\pm$0.4|
|Iter 2|55.2$\pm$1.4|52.3$\pm$0.3|50.1$\pm$0.3|
|Iter 3|55.3$\pm$1.1|51.8$\pm$0.5|50.3$\pm$0.5|

### 1.2 Additional benchmarks We consider the summarization task for the following reasons: 1. The summarization task is important and it is also widely used in RLHF [2, 3]. 2. The summarization task is well suited to our reward model transfer setup. There are various choices for additional reward models with different qualities, such as similarity scores (ROUGE, BERTScore) with human expert summaries, advanced LLMs, etc. We believe it is an interesting direction to evaluate our algorithms with other LLMs and benchmarks (e.g. those in [4] as suggested). Given that our experiments already effectively demonstrate the advantage of our proposed approach and due to limited computational resources, we leave further evaluation to future work. ## 2. Questions For Authors ### 2.1 Why not estimate policy value directly The main reason is that under the Bradley-Terry assumption, the preference distribution $\mathbb{P}_{r^*}(\cdot|s,a,\tilde{a}) = \sigma(r^*(s,a)-r^*(s,\tilde{a}))$ is invariant under the transformation $r^*(s,a)\rightarrow r^*(s,a)+b(s)$, where $b$ is an arbitrary state-dependent function. As a result, we can at best identify the true reward $r^*(s,a)$ up to a state-dependent shift, which can largely bias the policy evaluation and make it unreliable. In contrast, introducing $J_\beta(\pi_{\text{ref}})$ as the baseline can cancel out the bias term.
This enables consistent estimation for the value difference $J_\beta(\pi^*_{r^w})-J_\beta(\pi_{\text{ref}})$. ### 2.2 Clarification on the second term in Eq. (8) Firstly, notice that the second term of Eq. (8) takes the minimum over $\sum_w 1/\Delta(w)$ and $\sqrt{Wt}$. When $\Delta(w)$ is very small, $\sqrt{Wt}$ avoids the upper bound being extremely large. Secondly, as in most regret bounds in online learning, our result characterizes the worst-case behavior. Analogous to multi-armed bandit (MAB) settings, for fixed $T$, in the worst case we may have $\Delta(w)=\sqrt{T/W}$, resulting in regret that matches our bound. Moreover, a simple refinement can be made: additionally take the minimum over the current bound and $\sum_w\Delta(w)N(w,T)=O(\sum_w\Delta(w) T)$, where $N(w,T)$ denotes the number of times we transfer from model $r^w$ in $T$ iterations. This may align more closely with the reviewer’s intuition. In fact, this basic regret bound was actually the starting point of deriving Thm. 4.3. [1] A General Theoretical Paradigm to Understand Learning from Human Preferences. [2] Conditional Language Policy: A General Framework for Steerable Multi-Objective Finetuning. [3] BOND: Aligning LLMs with Best-of-N Distillation. [4] Reinforcement Learning from Human Feedback with Active Queries. --- Rebuttal Comment 1.1: Comment: I would like to thank the author(s) for the detailed response to my queries. My concerns have been addressed. I was optimistic in my score, and with all my queries addressed, I would like to maintain my evaluation of the paper. Good luck! --- Reply to Comment 1.1.1: Comment: Thank you for recognizing the contributions of our work and the constructive feedback!
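As a numerical footnote to point 2.1 above, the shift-invariance of the Bradley-Terry preference probability is easy to verify; the reward values here are made up for illustration:

```python
import math

def bt_pref(r_chosen, r_rejected):
    # Bradley-Terry preference probability: sigma(r(s,a) - r(s,a'))
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

r_a, r_b, b_s = 1.2, 0.3, 5.0  # b_s plays the role of a state-dependent shift b(s)
p_original = bt_pref(r_a, r_b)
p_shifted = bt_pref(r_a + b_s, r_b + b_s)
# identical up to float rounding: preference data cannot identify b(s),
# so absolute policy values are biased while value differences are not
```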
Summary: This paper studies the provable benefits of transferring knowledge from imperfect reward models (RMs) in online reinforcement learning from human feedback (RLHF). First, this paper identifies an important property specific to KL-regularized RLHF: the coverability for the optimal policy can be upper bounded by the policy value gap. This implies that, in order to obtain a dataset with good coverage, it is sufficient to roll out a policy with high value. Guided by this principle, this paper proposes a “self-transfer learning” procedure, which first runs an online no-regret method to generate a dataset, followed by employing an offline method to output the final policy. This paper proves that this procedure enjoys a sub-optimality bound of $\mathcal{O}(T^{-1/2})$, which improves previous results by removing the dependence on certain complexity measures in the dominating term. Again guided by the above principle, this paper further proposes a transfer learning method TPO. TPO first runs a certain number of iterations of an online algorithm and then switches to a transfer policy selection (TPS) procedure, which selects the policy with the highest optimistic estimate of the value. This paper provides a regret bound for TPO, demonstrating that 1) in the early stage, the regret of TPO is reduced by leveraging the imperfect RMs; 2) after finite iterations, the regret bound almost reduces to that of self-transfer learning. Finally, this paper proposes an empirical TPO method, which selects the policy based on the winning rate. The effectiveness of TPO is validated on a simple summarization task. Claims And Evidence: I feel the main claims in this paper are well supported by clear evidence. I only have some minor questions. 1. In the discussion below Theorem 3.2, the paper emphasizes that the suboptimality does not depend on |S|, |A| or other complexity measures.
But if I understand correctly, the suboptimality is of $\mathcal{O}(T^{-1/2} + Cov(\Pi)T^{-1})$, which still depends on the complexity measure. 2. In the first paragraph in Section 2.2, the paper states that $\Delta_{\min}=0$ implies the realizability of the reward class. This is not very rigorous because a reward with a constant shift also ensures the same optimal policy. The same issue also happens in “there is a one-to-one correspondence XXX”. Methods And Evaluation Criteria: The methodology of this paper builds on the policy coverage perspective commonly used in existing offline RLHF theory, along with a novel structural property induced by KL regularization. I find this methodological approach well-reasoned. Besides, the experiments are conducted on a standard summarization task, which looks solid to me. Theoretical Claims: The results appear to be mathematically sound, though I have a question about Theorem 4.3, which contains a seemingly counterintuitive element. Specifically, the second term in Eq. (8) contains a $\sum_{w} 1/\Delta(w)$ quantity that increases as the error of source RMs decreases. This seems to contradict the intuition that higher-quality RMs should lead to lower regret. Experimental Designs Or Analyses: The experimental designs are adequate. However, the empirical results have a weak connection to the theory. This is primarily because empirical TPO uses the winning rate to select policies. Unlike the policy value, the winning rate cannot provide an upper bound for the coverage coefficient—a crucial element in the theoretical framework established in this paper. Supplementary Material: I had a rough look at the appendix. Relation To Broader Scientific Literature: This paper studies online RLHF for LLM alignment and may inspire advancements towards sample-efficient methods for training LLMs.
Essential References Not Discussed: To the best of my knowledge, this paper provides a sufficiently thorough discussion of all closely related works. Other Strengths And Weaknesses: Please see my comments in the above part. Other Comments Or Suggestions: The proposed self-transfer learning process employs two existing RLHF methods, which may not be computationally efficient since it involves two separate policy optimization procedures. Could we leverage the structural property to design a new online RLHF method with a single policy optimization procedure, while achieving an improved regret bound? Questions For Authors: Please see my questions in the above part. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive suggestions! We address your specific comments in the following. ## 1. Claims And Evidence ### 1.1 About Theorem 3.2 As correctly pointed out, the sub-optimality gap is $\tilde{O}(T^{-1/2} + Cov(\Pi) T^{-1})$, and that’s why we claim the suboptimality/regret does not depend on the complexity measure **after finite time**. As long as $T \geq \Omega(Cov(\Pi)^2)$, we have $\tilde{O}(T^{-1/2} + Cov(\Pi) T^{-1}) = \tilde{O}(T^{-1/2})$, and $T$ becomes the only dominating term. We will make this point clear in our revision. ### 1.2 A Few Rigorousness Issues Thanks for pointing them out! We will revise our statements to improve their rigor as recommended. ## 2. Theoretical Claims Firstly, notice that the second term of Eq.(8) takes the minimum over $\sum_w 1/\Delta(w)$ and $\sqrt{Wt}$. When $\Delta(w)$ is very small, $\sqrt{Wt}$ avoids the upper bound being extremely large. Secondly, as in most regret bounds in online learning, our result characterizes the worst-case behavior. Analogous to multi-armed bandit (MAB) settings, for fixed $T$, in the worst case we may have $\Delta(w) = \sqrt{W/T}$, resulting in regret that matches our bound. Moreover, a simple refinement can be done: additionally taking the minimum over the current bound and $\sum_w \Delta(w) N(w,T) = O(\sum_w \Delta(w) T)$, where $N(w,T)$ denotes the number of times we transfer from model $\pi^*_{r^w}$ in $T$ iterations. This may align more closely with the reviewer’s intuition. In fact, this basic regret bound was the starting point of deriving Theorem 4.3. ## 3. Experimental Designs Or Analyses Thanks for raising this valid point. Despite differences in design, both TPO and its empirical version are grounded in the same core theoretical insight—**transfer from the policy with better coverage for $\pi^{\*}\_{r^{\*}}$ (i.e. low $Cov^{\pi^{\*}\_{r^{\*}}|.}$)**. 
The value estimation is preferable from a theoretical standpoint, but it is often computationally expensive in practice. To address this, our empirical TPO uses win rate as a proxy, because it is more scalable and characterizes a lower bound for $Cov^{\pi^{\*}\_{r^{\*}}|.}$ (Lem. 5.1). We view this as a reasonable trade-off between theoretical rigor and empirical scalability. ## 4. Other Comments Or Suggestions > Could we leverage...while achieving an improved regret bound? This is a very interesting question. We conjecture it is possible to design more elegant algorithms with better regret bounds, and we believe it is a highly valuable direction. We hope our work can serve as an initial step and inspire future developments, and we leave this for future work.
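The phase transition discussed in point 1.1 above can be made concrete with a small numerical sketch. The value of $Cov(\Pi)$ below is hypothetical, chosen only to show where the $T^{-1/2}$ term starts to dominate (around $T \approx Cov(\Pi)^2$):

```python
def suboptimality(T, cov):
    # Illustrative two-term bound from Thm. 3.2, all constants dropped:
    # O(T^{-1/2} + Cov(Pi) * T^{-1})
    return T ** -0.5 + cov / T

cov = 100.0  # hypothetical value for Cov(Pi)

# Early phase (T << Cov(Pi)^2): the coverage term dominates the bound.
assert cov / 100 > 100 ** -0.5
# Late phase (T >> Cov(Pi)^2): the T^{-1/2} term dominates.
assert cov / 100_000 < 100_000 ** -0.5
# Past the crossover, the full bound is within a constant of T^{-1/2}.
assert suboptimality(100_000, cov) < 2 * 100_000 ** -0.5
```

With these illustrative numbers the crossover happens at $T = 10{,}000$, after which the bound behaves like $\tilde{O}(T^{-1/2})$, matching the rebuttal's claim.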
Summary: The paper proposes a transfer learning algorithm that utilizes offline and online preference-based policy learning methods for RLHF. They provide a policy selection algorithm in each step where a new policy is selected based on a set of imperfect reward models and is used to further augment the training dataset. They provide theoretical evidence, a regret bound on the learned policy, for their proposed method and an empirical, computationally efficient algorithm as a practical alternative for their method. The motivation and flow of the paper are well written, and the principles explained about the concept of coverage and its use in the construction of the algorithm give a very clear picture of the whole idea of the paper. They provide a regret bound that is independent of the size of the state and action spaces and the complexity of the policy space. Claims And Evidence: The main claims of the paper include: 1. The design of an algorithm for RLHF (TPO) with a proven regret bound. 2. Proposition of the computationally efficient version of TPO 3. The importance of policy value as a criterion for selecting the policy to generate training data. 4. The proposition of a policy learned from offline data, which is proven to have a regret bound of $O(1/\sqrt{T})$ with no dependence on the sizes of the action and state spaces or any complexity measures on the policy space. The first three claims are well justified, but the fourth claim, which I assume is the main contribution of the paper, is not correctly justified. Theorem 3.2 is proposed to provide a regret bound on the offline policy and is claimed to be independent of **any complexity measure on policy class**. But actually, when looking at the complete term of the bound in the Appendix, it **depends** on $|\Pi|$, which violates the authors' claim. 
Another problem with this claim that affects the whole idea of the paper is that we use the term **Offline** when we don't have access to the environment anymore and have to teach the model with the prepared, fixed training dataset. This is one of the core applications of offline methods that work even when we can't interact with the environment anymore. Here, to train the offline model, we require a no-regret online method to generate the dataset, so we have to access the environment continuously for the value difference term in the bound to vanish; hence the so-called offline policy is no longer offline, as it requires access to the environment to generate a continuously improving dataset to learn from. Altogether, the contribution of the paper compared to many algorithms in this field like RPO and XPO is not clear. The method requires access to the environment (so it's not offline) and it uses the same data sample many times during the training (so it's not online), and also the proven bound doesn't seem to have any advantage over RPO or XPO in terms of $T$ or other factors. Methods And Evaluation Criteria: The theoretical evaluation is the conventional regret bound, which is standard and reasonable. The experimental evaluation is not very convincing, as it is the win rate of the policies compared to each other. An absolute measure of performance, like accuracy in the preference dataset, could be a better metric, as it can be compared with any baseline or SOTA method. Theoretical Claims: As mentioned in the claims section, Theorem 3.2 provides a regret bound on the offline policy that is claimed to be independent of policy class complexity, but it's not. There is another serious problem with the theoretical claims, and it is the removal of the best policy coverage term for the offline policy. For offline learning, coverage is necessary because we don't have control over the offline dataset. 
If we can control the offline dataset, we can trivially generate customized datasets that don't have a high coverage issue. Removal of the coverage term in the offline learning literature is not a contribution, as the coverage term makes the bound instance-dependent and tighter based on the quality of the dataset; algorithms that don't have the coverage term often have looser bounds, because the bound should work on **any dataset**. Also, the assumption of a bounded policy ratio is very restrictive and not realistic in most of the applications. Experimental Designs Or Analyses: Experiments are not complete. The complete algorithm that automatically selects the best source model is missing from the main paper's results; only fixed source selection methods are reported and tested. Also, the only compared baseline is iterative-DPO, and algorithms like RPO, XPO, IPO, and SimPO are missing as baselines. The evaluation criterion is not justifiable, because it is only the win rate of different policies compared to each other. The experiments section seems more like a small ablation study section rather than the main experiment section. Even the appendix doesn't cover the lack of experiments and insufficient experimental evidence. It should at least contain the absolute performance of the complete method on different preference datasets and a comparison with online and offline RLHF methods like RPO and XPO. 
However, the focus of the paper is not generating data from human feedback, but from a set of already available reward models. Essential References Not Discussed: I didn't recognize any missed essential references. Other Strengths And Weaknesses: The whole paper fails to make a convincing case for its contribution to the large RLHF literature. Moreover, the provided empirical algorithm differs significantly from the main algorithm in the very important part of source selection. This will make the validity of the theoretical evidence questionable for the empirical algorithm. It is common to have an empirical algorithm that is different from the theoretically justified method because of some estimations required in practice, but in this work, the difference is not just because of an estimation; the source selection algorithm is totally changed. Other Comments Or Suggestions: I have no other comments. Questions For Authors: I don't have any questions. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. It seems there may be some misunderstandings regarding our setting and several of our key claims. To clarify, we start with a general remark, followed by detailed point-by-point responses. We hope our replies improve the clarity of our submission and help the reviewer evaluate our paper. ## General Remark for Clarification As stated in Sec. 1&2 and Fig. 1, **we aim at improving sample efficiency in online RLHF by transferring from imperfect source RMs**, i.e. learning $\pi^\*_{r^\*}$ from online human feedback associated with $r^\*$, while leveraging auxiliary RMs $r^1, \dots, r^W$. * Our theoretical method TPO (Alg 1&2) **is not offline but instead an online reward transfer algorithm**, and we just use an offline subroutine (RPO) to compute a transfer candidate (Lines 6–7 in Alg. 2). Besides, **we did not claim Thm. 3.2 is a contribution to the offline literature**. Instead, we claim it improves the existing convergence rate of online RLHF methods, which motivates "self-transfer learning". **Our core theoretical contribution lies in analyzing the benefits of transfer learning in online RLHF**. As stated in Thm. 4.3 and Sec. 4.2, TPO improves the online RLHF results given good source RMs and self-transfer learning. * Empirical TPO (Alg. 3) is closely aligned with the theoretical insights behind TPO—**transfer from the policy with better coverage for $\pi^{\*}\_{r^{\*}}$ (i.e., low $Cov^{\pi^{\*}\_{r^{\*}}|.}$)**. TPO follows it by selecting the policy via its value gap (an upper bound of $Cov^{\pi^*_{r^*}|.}$ by Lem. 3.1), while empirical TPO utilizes win rates (a lower bound of $Cov^{\pi^*_{r^*}|\cdot}$ by Lem. 5.1, but more scalable). ## 1. Claims and Evidence >Theorem 3.2...violates the authors' claim. In Thm 3.2, we only omit $\log|\Pi|$—the log covering number. We apologize that our wording is not precise enough, and will revise to "no dependence…up to log-covering number factors". 
However, such log terms are standard even in supervised learning, and removing any policy coverage terms from previous online RLHF bounds is a significant improvement. We also clarify that Thm. 3.2 is about the convergence rate, not a regret bound, and it is only part of the contributions. >The method...not offline…not online… We apologize for the confusion. We study reward transfer in the **online setting**, and the term "offline" appears in the paper just because we use an offline method (RPO) to compute a policy. We will reword "offline" to "distilled" to avoid confusion. >the contribution...to…RPO and XPO is not clear We reply to it in point 2 below. ## 2. Methods And Evaluation Criteria & Experimental Designs Or Analyses Given that we study the online setting, offline methods (e.g. RPO) operate under different assumptions and objectives and are naturally not comparable. Regarding online RLHF methods, note that other existing methods (XPO, IPO, etc.) cannot handle transfer settings, hence they are not directly comparable. Besides, the core technique in empirical TPO is the "win rate-based source policy selection via UCB", which adapts to the best source RMs and switches back to normal online learning if transfer yields no benefit, *without prior knowledge on task quality*. Therefore, **we view other online methods (e.g. XPO, IPO) not as competing baselines, but rather as complementary approaches that can be enhanced with our transfer learning techniques**. Due to limited space, we refer to "1.1 Comparison with other online algorithms" in our response to Reviewer yCvD for additional discussion and experimental support. Our main goal in experiments is to show the effectiveness of empirical TPO, and our experimental results clearly demonstrate this. Besides, win rate is a standard metric in evaluating RLHF methods. We note that accuracy is not typically used as a standard metric for summarization tasks. ## 3. Theoretical Claims >Removal of...in the offline learning literature is not a contribution… There is a misinterpretation of our contribution. We study the online setting and our main contribution is to **improve sample efficiency in online RLHF by transfer learning**. >...bounded policy ratio is very restrictive… Such an assumption is standard in the online RLHF literature [1,2]. Besides, as stated in Footnote 1 (Page 3), it is not essentially an assumption, but an additional preprocessing step given a realizable policy class, because $r^* \in [0, R]$ implies $||\log \pi^*_{r^*}/\pi_{ref}||_\infty \leq R/\beta$. ## 4. Other Strengths And Weaknesses >...empirical algorithm, differs significantly from the main algorithm…the source selection algorithm is totally changed We respectfully disagree. Both our TPO and empirical TPO share the same insight (see the second bullet in the general remark). [1] Exploratory preference optimization: Harnessing implicit q*-approximation for sample-efficient rlhf [2] Self-Exploring Language Models: Active Preference Elicitation for Online Alignment --- Rebuttal Comment 1.1: Comment: There may be a misunderstanding about the contribution of the paper. I understand that the method is a transfer learning approach to utilize a set of imperfect reward models. Now let's compare the proposed method with an existing online method like XPO. Both methods are trying to solve the RLHF task, with access to the environment (online). I understand that the approaches are different, and the proposed method uses transfer learning, yet both are finally solving the same task, as an end-to-end system. Now, we have the theoretical and practical contributions. I may be wrong and have misunderstood the contributions, so I ask the authors to clarify. Theoretical contribution: XPO's bound depends on $\sqrt{Cov_\infty(\Pi)}$, but TPO (asymptotically) depends on $\log(|\Pi|)$. 
Is there any theorem to compare the two values and state that the first term is significantly larger (in order or in practice)? Are there any other theoretical contributions compared with XPO? I seek a precise mathematical statement that, theoretically, the proposed method beats already available RLHF methods, e.g., XPO. Practical contribution: Do we have any experiment that computes the win rate of a TPO-trained policy over an XPO-trained policy? --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the further questions and appreciate the chance to clarify! ## 1. Theoretical Contribution Briefly speaking, both XPO and our results depend on $\log|\Pi|$, while our results are **strictly better in that we eliminate the coverage term $Cov_\infty(\Pi)$** (after finite time). For clarity, in the following big-O notations, we only omit constant terms and $\log T$. ### 1.1 Comparison in Regret Bounds * **XPO**: As stated in Thm. 3.1 and its proof in [1], w.p. $1-\delta$, running XPO for $T$ steps yields the regret bound $\tilde{O}(\sqrt{Cov_\infty(\Pi) T \log\frac{|\Pi|}{\delta}})$. * **TPO**: As discussed in our Sec. 4.2, w.p. $1-\delta$, with an appropriate choice of $\alpha$ (e.g. $\alpha = e^{-\frac{R}{\beta}}$), the regret of TPO is * $\tilde{O}(W\sqrt{T \log\frac{|\Pi|}{\delta}})$ when $T\leq \frac{W^2}{\Delta_{\min}^2}$; improving from $\sqrt{Cov_\infty(\Pi)}$ to $W$—the number of source tasks, which is usually small. * $\tilde{O}(\sqrt{T \log\frac{|\Pi|}{\delta}})$ when $T > \frac{W^2}{\Delta_{\min}^2}$ and large enough; improving from $\sqrt{Cov_\infty(\Pi)}$ to $O(1)$. The improvements in the above two stages come from the existence of good source tasks and from self-transfer learning, respectively. 
### 1.2 Comparison in Convergence Rates * **XPO**: [1] reports a convergence rate by outputting the uniform mixture policy $$ \tilde{O}(\sqrt{\frac{Cov_\infty(\Pi)}{T} \log\frac{|\Pi|}{\delta}}), $$ * **TPO**: Similarly, the regret bound of TPO implies the following convergence rate (after finite time): $$ \tilde{O}(\sqrt{\frac{1}{T}\log\frac{|\Pi|}{\delta}}). $$ ## 2. Practical Contribution As we mentioned in our rebuttal, **we view other online methods (e.g. XPO) not as competing baselines, but rather as complementary approaches that can be enhanced with our transfer learning techniques**. We decided to replace “DPO” (line 11, Alg. 3) with $Alg_{PO}$, which serves as a placeholder for any policy optimization oracle (e.g. DPO, XPO, IPO, etc.). To support this claim, in our response to Reviewer yCvD, we consider instantiating $Alg_{PO}$ by optimizing the XPO or IPO loss, and report the win rates (similar to Table 1 in the paper) between the policies produced by TPO and other baselines. For convenience, we re-report those results below. All experimental settings remain the same as in Sec. 6, except for using a smaller learning rate of 1e-5 in the IPO experiments.

**Empirical TPO (Alg. 3) by replacing DPO (line 11) with XPO**

||Without Transfer|Purely Exploit ROUGE|Purely Exploit T5-Large|
|:-:|:-:|:-:|:-:|
|Iter 1|52.3$\pm$1.1|53.4$\pm$0.8|50.2$\pm$0.3|
|Iter 2|51.6$\pm$1.3|54.7$\pm$1.6|49.1$\pm$1.3|
|Iter 3|52.2$\pm$1.6|53.8$\pm$2.9|49.2$\pm$1.1|

Here the “without transfer” baseline (i.e., when $W = 0$) **is exactly the empirical XPO** in [1] (see “Implementation details” in Appx. E in [1]). The advantage in win rates demonstrates that our transfer learning techniques can further enhance the performance of XPO when good source tasks exist. This highlights not only the effectiveness but also the modularity of our approach.

**Empirical TPO (Alg. 3) by replacing DPO (line 11) with IPO**

||Without Transfer|Purely Exploit ROUGE|Purely Exploit T5-Large|
|:-:|:-:|:-:|:-:|
|Iter 1|52.3$\pm$1.0|50.4$\pm$1.6|49.9$\pm$0.4|
|Iter 2|55.2$\pm$1.4|52.3$\pm$0.3|50.1$\pm$0.3|
|Iter 3|55.3$\pm$1.1|51.8$\pm$0.5|50.3$\pm$0.5|

[1] Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
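The regret comparison in the reply above can be sanity-checked numerically. The constants below are hypothetical, chosen only to reflect the regime the discussion assumes ($W \ll \sqrt{Cov_\infty(\Pi)}$):

```python
import math

# Hypothetical constants, for illustration only (not measured quantities):
cov_inf = 400.0   # Cov_infty(Pi), which can be large
W = 3             # number of source reward models, typically small
log_pi = 10.0     # log(|Pi| / delta)

def xpo_regret(T):
    # XPO: O(sqrt(Cov_infty(Pi) * T * log|Pi|))
    return math.sqrt(cov_inf * T * log_pi)

def tpo_regret_early(T):
    # TPO, early phase (T <= W^2 / Delta_min^2): O(W * sqrt(T * log|Pi|))
    return W * math.sqrt(T * log_pi)

def tpo_regret_late(T):
    # TPO, after finite time: O(sqrt(T * log|Pi|))
    return math.sqrt(T * log_pi)

T = 10_000
assert tpo_regret_early(T) < xpo_regret(T)   # since W < sqrt(Cov_infty(Pi))
assert tpo_regret_late(T) < tpo_regret_early(T)
```

With these illustrative numbers, the early-phase TPO bound is already several times smaller than the XPO bound, and the late-phase bound shaves off the remaining factor of $W$.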
Summary: This paper studies RLHF under the contextual bandit setting with KL regularization. In usual bandit problems, there is a need to balance exploration and exploitation. However, they show that there is a "blessing of regularization" in which a policy that has a low policy value gap will also be a good exploration policy. In particular, this means that due to the presence of regularization, the two goals of exploration and exploitation are aligned, and do not require additional assumptions that are usually made about the problem to avoid negative transfer. Using this idea, the authors design the Transfer Policy Optimization (TPO) algorithm for this problem, prove an offline policy value gap, prove a regret bound, propose an alternative algorithm that is more computationally efficient, and perform some empirical validation. Claims And Evidence: 1. While I understand the claim that Theorem 3.2 shows that the offline policy converges at an asymptotic rate that does not depend on any structural complexity measure (that is, you are treating everything except $T$ as a constant), I think it may be clearer to say more explicitly that there are two phases to the convergence rate. In the first phase, the suboptimality decreases at a rate of $O(e^{1/\beta}\mathcal{C}(\Pi)(\beta T)^{-1})$, while in the second, it decreases at a rate of $O(T^{-1/2})$. In particular, if the complexity measure $\mathcal{C}(\Pi)$ used in Corollary E.6 is infinite, then the second phase that is independent of $\mathcal{C}(\Pi)$ never happens (or, if it is allowed to grow with $T$). Preserving $\beta$ and $\mathcal{C}(\Pi)$ in the bound makes it clear that they do matter in convergence, but only in the first phase. This is a pretty interesting phenomenon, and maybe it would be nice to expand further on this discussion. For example, do you have any intuition/justification for why the second phase is also independent of $\beta$? 2. I thought Section 4 was a bit confusing to read. 
I think the last two paragraphs of Section 3 are quite important to understand Section 4. But somehow, it is not obvious until one understands the TPO algorithm why they are important. Is there a way to write Sections 3 and 4 so that it is easier to grasp what the algorithm does? Part of the problem is that the way that Theorem 3.2 relates to the whole paper and TPO is also not initially clear. I like the line near the end of Section 5 "learn from an expert until surpassing it". Perhaps that idea could be more explicit earlier in the paper and could be a guiding thought throughout the paper. Methods And Evaluation Criteria: 1. The experiments were a bit confusing to me. A lot of details were missing from the main paper and are in the appendix. It is also not clear to me whether Table 1 is a 'good' empirical result. These numbers do not really seem to say that the proposed empirical method beats the baselines (everything seems quite close to 50% win rate). Or, is the point that only three iterations were needed? Would the trend in improvement continue with additional iterations? Theoretical Claims: I did not have time to check proofs and for the most part only read the main paper. Experimental Designs Or Analyses: As I mentioned before, the experiments were somewhat confusing. It would be nice if there were more justification for why these experiments were performed. Supplementary Material: I read Appendix B, C, and parts of D and E. Relation To Broader Scientific Literature: I found the high-level ideas of this paper quite interesting, especially the idea of self-transfer. I almost wish there were more of a focus on this and transfer for KL-regularized bandits, and less on RLHF. In other words, would the same ideas be present even in the very basic bandit setting? In terms of exposition, I think I would've really appreciated learning the basic ideas in a much simpler setting. 
Although, I understand that it is also good to connect to the current RLHF/LLM research interests across the community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. In Section 2, Additional Notation, you write that "$\mathcal{R}^\Pi$ denotes the reward class converted from $\Pi$." I don’t believe "converted" is standard terminology in RL. If what’s meant is the reward class induced by $\Pi$ via the one-to-one mapping in Eq. (2), it may be helpful to say that directly to avoid confusion. 2. I think overall the writing is not bad, but the overall structure could do with a bit more thought. It was not trivial to understand the structure of the paper and required very non-linear reading. Perhaps this is due to my lack of familiarity with the area, however. Other Comments Or Suggestions: The review above is written by me. Below, I asked chatgpt to help summarize it. I think it captured my overall impression quite well: ```[ChatGPT summary of my review]: This paper presents a theoretically motivated and timely contribution to RLHF by showing that KL regularization aligns exploration and exploitation, enabling safe and effective transfer from both external and self-generated policies. I found the core idea of "self-transfer" particularly compelling, and the theoretical results are thoughtfully developed. However, I believe the presentation could be improved—the connection between Sections 3 and 4 is not immediately clear, and the empirical results, while suggestive, are limited in scope and detail. Additionally, I think the claim of complexity-independent convergence could be more carefully nuanced to reflect the two-phase nature of the bound. 
Despite these issues, the paper makes a valuable contribution to understanding transfer in regularized bandit-style RLHF, and I recommend it for acceptance with revisions focused on clarity and empirical depth.``` Finally, though I do believe I understood the paper, I don't work directly in RL or RLHF, so my confidence score is somewhat low because I am not very familiar with how this paper is situated within the field. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive suggestions! We address your specific comments in the following. ## 1. Claims And Evidence ### 1.1 About the Claim in Thm. 3.2 Thank you for the suggestion. We will follow it and clarify our claim in our next revision. > ...why the second phase is also independent of $\beta$? As predicted by offline learning theory, the offline policy converges to $\pi^*_{r^*}$ at a rate of $\tilde{O}(Cov^{\pi^*_{r^*}|\pi_{mix}^T} T^{-1/2})$, where $\pi_{mix}^T := \frac{1}{T}\sum_{t=1}^T \pi^t$ is the uniform mixture of the policies collecting data. Technically, in the second phase, $\pi_{mix}^T$ is very close to $\pi^*_{r^*}$, and as implied by Lem. 3.1, $Cov^{\pi^*_{r^*}|\pi_{mix}^T}$ is at a constant level, which results in the $\tilde{O}(T^{-1/2})$ rate. From this view, $\beta$ only matters in how fast $Cov^{\pi^*_{r^*}|\pi_{mix}^T}$ reduces to a constant as $T$ grows. ### 1.2 About the Structure of Sections 3 and 4 Thank you for pointing it out, and we will try our best to make it more understandable. The core insight behind our discussion in Sec. 3 and 4 (and also empirical TPO in Sec. 5) is to **"select and transfer from the policy with the best coverage of $\pi^{\*}_{r^{\*}}$"**. It may help the reader to grasp the main algorithm if we highlight this statement earlier (e.g. Sec. 3.2). ## 2. Methods And Evaluation Criteria & Experimental Designs Or Analyses > It is also not clear to me whether Table 1 is a 'good' empirical result… Win rate is a common metric in evaluating LLM performance. A "(50+$p$)%" win rate against the no-transfer baseline roughly means that, per 100 prompts, the LLM fine-tuned by empirical TPO is preferable on $50+p$ out of 100, which is $2p$ more than the no-transfer baseline ($50 - p$ out of 100). 
Besides, the performance gap may be further enlarged by increasing training batch size and training epochs, incorporating better source reward models, adjusting test-time decoding temperature, etc. Notably, the quality of the initial policy also affects the magnitude of the performance gap---**the more near-optimal the initial policy is, the smaller the maximal possible performance gap can be**, although empirical TPO can still outperform the no-transfer baseline. Therefore, our primary goal in experiments is to demonstrate the potential of our transfer learning techniques in improving sample efficiency, especially since reward transfer remains an underexplored area in online RLHF. > ...is the point that only three iterations were needed? Here we follow the related literature [1,2], where running for three iterations is a common choice in algorithm evaluation (due to limited computational resources). > Would the trend in improvement continue with additional iterations? As reported in Fig. 2 in Appx. I, in the 3rd iteration, empirical TPO already switches back to online learning without transfer, as the online policy can already outperform the best source policy in win rates. If we continue training for more iterations, the comparison with the no-transfer baseline effectively reduces to running the same online algorithm but with a better initial policy. Overall, we can expect the advantage of empirical TPO to persist, although the performance gap may decrease as both of them converge to the optimal policy. ## 3. Relation To Broader Scientific Literature To our knowledge, most existing transfer learning literature, especially the theoretical work, focuses on the pure reward-maximization setup. We are the first to identify the special structure (Lem. 3.1) induced by KL regularization, and propose novel transfer learning principles specialized in this setting. ## 4. Other Strengths And Weaknesses Thank you for your suggestions! 
We will take them into consideration in our next revision. [1] Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint [2] Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
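The coverage argument in part 1.1 of the rebuttal above can be illustrated with a toy discrete example. The policies and the single-state density-ratio form of the coverage coefficient below are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

# Toy single-state example of a coverage coefficient,
# Cov^{pi*|mu} = max_a pi*(a) / mu(a) (density-ratio version, for intuition).
pi_star = np.array([0.7, 0.2, 0.1])   # hypothetical optimal policy
pi_ref = np.array([1/3, 1/3, 1/3])    # uniform reference policy

def coverage(target, behavior):
    # Worst-case density ratio of the target policy over the behavior policy.
    return float(np.max(target / behavior))

# As the data-collecting mixture shifts toward the (near-)optimal policy,
# the coverage coefficient shrinks toward 1.
mix_early = pi_ref
mix_late = 0.1 * pi_ref + 0.9 * pi_star
assert coverage(pi_star, mix_late) < coverage(pi_star, mix_early)
assert abs(coverage(pi_star, pi_star) - 1.0) < 1e-12
```

This mirrors the claim that once $\pi_{mix}^T$ is close to $\pi^*_{r^*}$, the coverage term is at a constant level and the $\tilde{O}(T^{-1/2})$ rate follows.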
DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation
Accept (poster)
Summary: The paper presents DiTAR (Diffusion Transformer Autoregressive Modeling), a novel approach that combines an autoregressive language model (LM) with a diffusion transformer (LocDiT) to improve continuous speech generation. The key idea is a patch-based modeling strategy, where the LM predicts the sequence at a high level, and LocDiT refines the details within each patch using bidirectional attention. A temperature-based sampling method is also introduced to control the trade-off between determinism and diversity in the generation process. Evaluations on zero-shot TTS benchmarks demonstrate that DiTAR achieves state-of-the-art performance in robustness, speaker similarity, and naturalness, while requiring significantly less computation than competing methods like Voicebox and NaturalSpeech. Claims And Evidence: The paper makes several claims, most of which are supported by strong empirical evidence. The claim that DiTAR outperforms existing zero-shot TTS models is well-supported by both objective and subjective evaluations. Word Error Rate (WER), speaker similarity, and UTMOS scores confirm that DiTAR produces more robust and natural speech compared to previous baselines. The claim that DiTAR reduces computational costs is backed by FLOPS measurements and throughput comparisons, showing that it achieves similar or better performance with up to 43× lower compute requirements than non-autoregressive diffusion models. Another claim, that temperature-based sampling is essential for balancing diversity and determinism in continuous-valued LMs, is supported by a PCA analysis of generated speaker embeddings, showing that different temperatures influence the diversity of generated voices. The modified Classifier-Free Guidance (CFG) method is another strong contribution, making it more suitable for patch-based diffusion models. However, certain aspects of the methodology could be explained more clearly. 
Some theoretical transitions feel abrupt, such as the statement that “operating in the velocity space with a conditional flow-matching target is also equivalent”, which lacks context or formal justification. The role of historical patches, denoted as h_{i-2}, h_{i-1} in Figure 1, is also not fully formalized in Section 3.1. Readers can infer how they are encoded and used, but a clearer explanation in the main text would improve clarity. Methods And Evaluation Criteria: The evaluation of DiTAR is comprehensive and includes multiple zero-shot TTS benchmarks, comparing its performance against strong baselines like Voicebox and NaturalSpeech3. The results demonstrate that DiTAR achieves state-of-the-art performance in robustness, speaker similarity, and naturalness while maintaining significantly lower computational costs. The analysis of patch size versus historical context is particularly insightful, providing valuable guidance on how to balance computational efficiency with generation quality. Theoretical Claims: N/A (there is no theoretical proof; the paper is mostly empirical). I checked the derivation of the proposed temperature-based sampling process and it looks good to me. Experimental Designs Or Analyses: The experimental setup is thorough and provides compelling evidence for the effectiveness of DiTAR. Evaluations are conducted across multiple zero-shot TTS benchmarks, ensuring a fair comparison with strong baselines. The ablation studies on patch size, historical context, and LM guidance are insightful and highlight key trade-offs in model design. The scaling analysis further strengthens the paper’s claims, showing that WER and speaker similarity improve consistently as the model size and training data increase. A discussion on whether performance would continue improving with larger models (e.g., 10B+ parameters) would be valuable. Supplementary Material: Yes, I checked the derivation of the temperature-based ODE solver. 
Relation To Broader Scientific Literature: This work is highly relevant to recent advances in diffusion-based generative models and speech synthesis. It builds upon prior work like ARDiT, NaturalSpeech3, and Transfusion, but improves efficiency by combining diffusion with autoregressive patch modeling. The use of patchification is inspired by techniques in image and video generation, such as those found in latent diffusion models, and applies them effectively to speech synthesis. The paper also contributes to research on temperature-based sampling in diffusion models. While the idea of controlling randomness through temperature is well-known in SDE-based diffusion models, the specific approach of defining temperature as a noise injection point in ODE solvers is a novel adaptation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper's architecture could also be used for speech language models to speed up inference and improve speech synthesis quality. Other Comments Or Suggestions: I am not an expert in ODE/SDE solvers for diffusion models (though I am fairly familiar with DDPM, DDIM, and the major flow-matching objectives). I am not sure if the temperature-based sampling introduced in this paper is already present somewhere in the literature. Questions For Authors: - Do you have ablation experiments on making the aggregation encoder causal? One claim made by the authors is that causal attention degrades performance in diffusion-based autoregressive modeling, so it would be nice to have some comparisons here. - I am curious what happens if the model (and its training data) is scaled up further (since currently it is only scaled up to 1B). Code Of Conduct: Affirmed. Overall Recommendation: 4
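The divide-and-conquer decoding the reviews describe (a language model for inter-patch prediction, LocDiT for intra-patch generation) can be sketched roughly as below. This is only an illustration; `lm_step`, `loc_dit_sample`, and `aggregate` are hypothetical stand-ins for the paper's neural components, not its actual API.

```python
# A minimal sketch of patch-based autoregressive decoding (illustrative only).
# `lm_step`, `loc_dit_sample`, and `aggregate` are hypothetical stand-ins for
# DiTAR's language model, LocDiT, and aggregation encoder.

def generate(num_patches, dim, lm_step, loc_dit_sample, aggregate):
    patches = []   # each patch is a list of continuous token vectors
    state = None   # running LM state (e.g., a KV cache)
    for _ in range(num_patches):
        # 1) Aggregate the previous patch into one embedding for the LM.
        prev = aggregate(patches[-1]) if patches else [0.0] * dim
        # 2) The LM predicts a high-level condition h_i for the next patch.
        h_i, state = lm_step(prev, state)
        # 3) LocDiT samples the whole patch jointly (bidirectional attention),
        #    conditioned on h_i and a short window of history patches.
        patches.append(loc_dit_sample(h_i, patches[-2:]))
    return patches

# Toy stand-ins so the loop runs end to end (real models are neural networks):
aggregate = lambda patch: [sum(v) / len(v) for v in zip(*patch)]  # mean-pool
lm_step = lambda prev, state: (prev, state)                       # identity "LM"
loc_dit_sample = lambda h, hist: [list(h) for _ in range(2)]      # patch size 2

speech = generate(num_patches=4, dim=3, lm_step=lm_step,
                  loc_dit_sample=loc_dit_sample, aggregate=aggregate)
```

The point of the structure is that step 2 stays causal across patches (so KV caching applies), while step 3 is free to use bidirectional attention inside a patch.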
Rebuttal 1: Rebuttal: We sincerely appreciate your positive review and insightful comments. Most of your points are aligned with the contributions we aim to convey in our paper. Next, we address your questions organized according to the review sections. We have attached audio samples of our method at this link: https://spicyresearch.github.io/ditar/#hard-cases. Feel free to listen. **Questions in "Claims And Evidence"** Operating on velocity in classifier-free guidance (CFG): In the paper, we derive the CFG process based on the score to align with the original work [1] for easier understanding. The score can be easily converted to velocity, allowing us to straightforwardly apply the CFG method to flow-matching or v-prediction models. Below, we provide the derivation process, which will be added to the paper later. Begin with the definition of velocity $v$: $v = \dot{\alpha}_t x_0 + \dot{\sigma}_t \epsilon = \dot{\alpha}_t \frac{x_t - \sigma_t \epsilon}{\alpha_t} + \dot{\sigma}_t \epsilon$. Rearrange to solve for $\epsilon$: $\epsilon = \frac{\alpha_t v - \dot{\alpha}_t x_t}{\alpha_t \dot{\sigma}_t - \dot{\alpha}_t \sigma_t}$. Perform CFG in score space and substitute the above equation: $\tilde{\epsilon}(x_t, c) = (1+w)\,\epsilon(x_t, c) - w\,\epsilon(x_t) = \frac{\alpha_t \tilde{v}(x_t, c) - \dot{\alpha}_t x_t}{\alpha_t \dot{\sigma}_t - \dot{\alpha}_t \sigma_t}$, where $\tilde{v}(x_t, c) = (1+w)\,v(x_t, c) - w\,v(x_t)$. Therefore, performing CFG operations in the velocity space is equivalent to doing so in the score space. **Questions in "Questions For Authors"** - Validating the claim that causal attention degrades the performance of continuous-valued AR models: - We experimentally found that the aggregation encoder has a minimal impact on the receptive field. Given that LocDiT uses historical patches and non-causal attention, even if the aggregation encoder is a causal transformer, the impact is minor. 
- The aggregation encoder is not a primary innovation of this work, and as shown in Table 3, the benefits from scaling the encoder are small. - We have validated this claim from another perspective. In Table 4 of the paper, as the patch size decreases and the number of historical patches reduces, the entire model becomes more causal. When the patch size is set to 1 and the number of historical patches is set to 0, the model turns into a vanilla causal language model. It can be observed that the more causal the model, the worse the performance. - Further scaling: - In the zero-shot TTS task, the amount of training data is limited and the task is relatively well-defined. Further scaling of the model provides marginal benefits, considering inference performance. Therefore, we did not pursue scaling beyond 1B parameters. - Our framework is a general generative model, not limited to speech generation. We aim to apply it to more complex tasks, such as speech LLM and video generation. As a continuous-valued LM, we hope it will achieve scaling performance comparable to discrete-valued LMs. **References** [1] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." _arXiv preprint arXiv:2207.12598_ (2022). *We sincerely hope that our reply could address your concerns and that you might consider raising the rating. Please let us know if you have any further questions or require additional results.*
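As a sanity check of the CFG derivation above: the mapping between $\epsilon$ and $v$ is affine in $\epsilon$ (the $x_t$ term is unchanged), and the guidance weights $(1+w)$ and $-w$ sum to one, so guidance commutes with the change of variables. A minimal scalar sketch with toy values (not from the paper) verifies this numerically:

```python
# Numerical check that classifier-free guidance (CFG) applied in velocity
# space matches CFG applied in epsilon (score) space. Scalar toy values;
# alpha_t, sigma_t and their time derivatives are arbitrary choices with
# alpha_t * dsigma_t - dalpha_t * sigma_t != 0.

alpha, sigma = 0.8, 0.6          # interpolation coefficients at some time t
dalpha, dsigma = -1.0, 1.0       # their time derivatives
x_t, w = 0.3, 2.0                # current sample and guidance weight
eps_cond, eps_uncond = 0.7, -0.2  # toy conditional/unconditional eps-predictions

den = alpha * dsigma - dalpha * sigma

def eps_to_v(eps):
    # invert eps = (alpha * v - dalpha * x_t) / den
    return (eps * den + dalpha * x_t) / alpha

def v_to_eps(v):
    return (alpha * v - dalpha * x_t) / den

# CFG in epsilon space
eps_cfg = (1 + w) * eps_cond - w * eps_uncond
# CFG in velocity space, then map back to epsilon
v_cfg = (1 + w) * eps_to_v(eps_cond) - w * eps_to_v(eps_uncond)

assert abs(v_to_eps(v_cfg) - eps_cfg) < 1e-9
```

The same argument goes through in the vector-valued case, since the conversion acts coordinate-wise with the same coefficients.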
Summary: This paper proposes DiTAR (Diffusion Transformer Autoregressive Modeling), a patch-based autoregressive framework for zero-shot text-to-speech synthesis that combines language models with diffusion transformers. The method uses a divide-and-conquer strategy where continuous speech tokens are partitioned into patches. A language model handles inter-patch prediction, while a localized diffusion transformer (LocDiT) with bidirectional attention generates each patch. The authors introduce a temperature-based sampling approach for the continuous-valued autoregressive model and demonstrate superior scaling properties. According to the authors' evaluation, DiTAR achieves state-of-the-art performance in zero-shot speech generation for robustness (WER), speaker similarity (SIM), and naturalness with reduced computational demands compared to existing models. ## update after rebuttal The authors' responses have addressed most of my concerns. I have raised my score from 2 to 3. Claims And Evidence: Several key claims in the paper lack sufficient supporting evidence or comparative analysis: 1. The claim of novelty in the patch-based approach is weakened by insufficient comparison to similar prior work, particularly VALL-E 2's Grouped Code Modeling (Chen et al., 2024b), which implements a comparable approach. As shown in the VALL-E 2 paper: "We partition the codec code sequence into groups with the group size G, and C0:G stands for the group [c0, c1, ..., c(G-1)]." This approach appears functionally equivalent to DiTAR's next-patch-prediction method, yet this similarity is not acknowledged. 2. The computational efficiency claims are supported by FLOPS calculations but are problematic, as they only materialize at unrealistically large batch sizes (>100) for the optimal patch sizes (2 or 4), which would exceed typical GPU memory constraints in production environments (unless using GPUs such as H200 or B200). 
Figure 5 clearly shows that for batch sizes below 100, NAR models maintain superior or very close throughput. 3. There are unexplained inconsistencies in the reported results between tables (e.g., the 0.4B model shows WER of 1.876 in Table 3 but 1.685 in Table 6, and SIM of 0.716 vs. 0.735, which I assume are both on the SeedEval dataset), undermining confidence in the reliability of the findings. 4. The subjective evaluation claims (naturalness, quality) cannot be independently verified due to the absence of audio samples (demo page), which is a significant limitation for a text-to-speech paper. Methods And Evaluation Criteria: The methods are generally sound, although some evaluation aspects are questionable: 1. The benchmark datasets and metrics (WER, SIM, UTMOS) are appropriate for TTS evaluation. 2. However, the throughput/efficiency metric (FLOPS) is implemented in a way that favors the proposed approach under unrealistic conditions (very large batch sizes). Specifically: - As shown in Figure 5, DiTAR only surpasses NAR models in throughput at high batch sizes (around 100) for optimal patch size ranges (2 or 4), which is impractical for most deployment scenarios. - Most production-grade GPUs (like the A100 with 80GB memory) cannot accommodate such large batch sizes for these models, especially for long-form speech generation where the KV cache assumption in the FLOPS calculation is useful. - When accounting for model parameters, optimizer states, and gradient accumulation, batch sizes of 100+ would require multiple high-end GPUs operating in parallel, introducing communication overhead that negates the theoretical throughput advantages. - In real-world deployment scenarios, lower latency with smaller batch sizes is often preferred over higher throughput with large batches, since smaller batches can be distributed across multiple low-VRAM GPUs for inference, making NAR models more practical despite their theoretical inefficiency with large batch sizes. 3. 
The lack of detailed ablation studies comparing the full architecture (encoding → AR → diffusion) to simpler alternatives (like direct AR → diffusion as in ARDiT) prevents a clear understanding of whether the added complexity is necessary, especially given that LocDiT needs historical patches as conditions for diffusion, making the entire framework more similar to ARDiT (Liu et al., 2024b) than to Li et al., 2024a. Theoretical Claims: The paper does not contain theoretical claims. Experimental Designs Or Analyses: Several issues affect the experimental validity: 1. The comparison with competing methods is incomplete, with notable omissions such as VALL-E 2, which reports similar performance metrics (WER of 1.5 and SIM of 0.64) with slightly better WER, trained on the same LibriLight dataset that DiTAR uses. 2. The computational efficiency analysis is conducted under conditions that favor the method but are impractical for real-world deployment. Specifically, the throughput comparison in Figure 5 demonstrates that DiTAR with patch sizes of 2 or 4 only becomes more efficient than NAR models at batch sizes exceeding 100. This requirement is unrealistic for several reasons: - Memory constraints: Most GPUs have at most approximately 80GB RAM per card, which is insufficient for batch sizes of 100+ when accounting for model parameters, activations, and KV cache, especially for long-form speech generation. - Distributed inference overhead: Linking multiple GPUs for distributed inference introduces significant communication overhead, which is not factored into the throughput calculations. - Practical deployment considerations: In production environments, it is typically more efficient to distribute smaller batches across multiple independent GPUs than to process large batches with linked GPUs, due to reduced latency and better resource utilization. 
- The efficiency claims would only be realized on specialized high-end hardware like NVIDIA B200 or H200 GPUs, which represents an impractical deployment target for most applications. 3. The inconsistency in reported performance metrics between different tables (Tables 1, 3, and 6 all report different WER and SIM) raises questions about the reliability of the results. Supplementary Material: I reviewed Table 6 and the calculation for FLOPS. Relation To Broader Scientific Literature: The paper builds upon two major approaches in speech synthesis: autoregressive language models and diffusion models. While it cites many relevant papers, it insufficiently contextualizes its contribution relative to recent advances that use similar techniques: 1. The patch-based AR approach bears strong similarity to VALL-E 2's Grouped Code Modeling, which similarly divides codec codes into grouped patches processed sequentially. 2. The use of diffusion for patch prediction resembles existing approaches like ARDiT and Transfusion, but the paper does not sufficiently explore whether their three-stage approach (encoding → AR → diffusion) provides meaningful advantages over the simpler AR → diffusion methodology in ARDiT. Essential References Not Discussed: The paper inadequately discusses or compares to several highly relevant references: 1. VALL-E 2 (Chen et al., 2024) introduces Grouped Code Modeling, which is remarkably similar to DiTAR's patch-based approach. Despite being trained on the same LibriLight dataset and achieving comparable performance (WER of 1.5 and SIM of 0.64), this paper is not sufficiently compared against it. 2. ARDiT's approach of diffusion-based autoregressive generation deserved more direct comparison, particularly regarding whether the encoding step in DiTAR provides meaningful benefits over ARDiT's more straightforward approach. Other Strengths And Weaknesses: Strengths: 1. 
The paper presents a coherent framework integrating language models and diffusion models. 2. The temperature-based sampling approach for continuous-valued autoregressive models is an interesting contribution. However, it seems to be contrived for the diversity purpose since $\tau = 1$ corresponds to the original DDIM sampling algorithm (such as Cosyvoice or Seed-TTS), so the baseline it compares to ($\tau = 0$) is artificially limited in diversity. 3. The scaling analysis is comprehensive and demonstrates good scaling properties. Weaknesses: 1. Novelty is not well justified, with insufficient acknowledgment of similar prior approaches such as Vall-E 2 and ARDiT. 2. The computational efficiency claims are presented in a way that overstates practical benefits. The throughput advantage only materializes at batch sizes exceeding 100, which is impractical for most deployment scenarios due to memory constraints (80GB per GPU) and the inefficiency of distributed inference for such tasks. 3. Inconsistencies in reported metrics undermine confidence in the results. The significant variations between Table 3 and Table 6 for the same 0.4B model (WER: 1.876 vs 1.685; SIM: 0.716 vs 0.735) cannot be explained by random variation, and Table 6 is the only one that compares to more AR systems other than outdated models such as Vall-E. 4. The absence of audio samples (demo page) prevents verification of subjective quality claims, which is particularly problematic for a text-to-speech paper where perceptual quality is paramount. 5. The necessity of the three-step architecture versus simpler alternatives (ARDiT) is inadequately justified. The paper does not explore whether the encoding → AR → diffusion approach provides meaningful advantages over simpler approaches like direct AR → diffusion used in ARDiT. 
Other Comments Or Suggestions: - Since LocDiT also relies on historical patches, the statement "while $\theta_b$ denotes a bidirectional-attention diffusion transformer executing next patch prediction via $p_{\theta_b}(\mathbf{x}_{i+1}, \ldots, \mathbf{x}_{i+P} \mid \mathbf{h}_i)$" should be replaced with $p_{\theta_b}(\mathbf{x}_{i+1}, \ldots, \mathbf{x}_{i+P} \mid \mathbf{h}_i, \mathbf{x}_{i}, \ldots, \mathbf{x}_{i-K})$, where $K$ is your historical patch size. - Eq. 7 does not seem to be particularly "flow matching (Lipman et al., 2022)" but more like velocity prediction (Salimans et al., 2022). Questions For Authors: 1. What is the use of the encoded patch information if we still need historical patches (non-encoded patches) when we call the diffusion model? How does it differ from ARDiT and how is it better than ARDiT? 2. Could you explain the discrepancies in reported metrics between tables (e.g., the 0.4B model's WER and SIM metrics in Tables 3 vs. 6)? 3. Can you make audio samples available to verify the subjective quality claims? 4. What are $\dot{\alpha}_t$ and $\dot{\sigma}_t$ at lines 269-270, page 5, right column? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We provide detailed responses to your concerns as summarized below: **Q1. Subjective evaluation** Audio samples can be found at this link: https://spicyresearch.github.io/ditar/#hard-cases **Q2. Connection with VALL-E 2** They are different in many aspects. - VALL-E 2 is a two-stage (AR+NAR) method for discrete tokens, whereas DiTAR is a single-stage (AR) method for continuous tokens. - Patchification serves different purposes in the two methods. For VALL-E 2, it reduces computational load, whereas for DiTAR, it enables bidirectional modeling for next-patch prediction and overcomes the limitations of causal LMs. **Q3. Connection with ARDiT** Although both DiTAR and ARDiT have autoregressive and diffusion elements, they have completely different design philosophies. - The core difference lies in _which part of the model acts as the diffusion component_. The figure in the link better illustrates their differences: https://spicyresearch.github.io/ditar/#comparison - ARDiT is a diffusion model throughout its entire architecture. - Differently, DiTAR is essentially a language model with a diffusion head. The computational load of multi-step sampling in diffusion has been shifted to the diffusion head. **Q4. Discussion of computational efficiency** Thank you for your detailed response. Some of your points are insightful but contradict our actual experimental results; we elaborate in detail below: - Memory constraints: All our tests were conducted on a standard A100 GPU (80G memory), with batch sizes ranging from 0 to 500, and no out-of-CUDA-memory incidents occurred. The model we used is a 400M-parameter transformer, which is a common size in the zero-shot TTS task [2][3]. Theoretically, a large batch size is practically reasonable for a model of this size. - Practical deployment considerations: There is no need for specialized high-end hardware like the H200. 
All our tests were conducted on a standard A100 GPU with 80G. - Distributed inference overhead: For commonly used TTS models like a 400M-parameter transformer, this size typically doesn't require distributed inference. - Latency considerations: Different from NAR, DiTAR can maintain very low latency even with large batch sizes (please see Q4 in the response to review oFFs). - We do not intend to prove that DiTAR is superior to NAR diffusion under all levels of concurrency. The insight we want to convey is that DiTAR is a model positioned between NAR and AR: it has low latency and high throughput like AR, while increasing parallelism by enlarging the patch size. **Q5. About the misunderstanding of inconsistency in different tables** Thank you for noticing the details. To clarify, different tables serve different purposes, which is why we have used different setups for each. - Table 1: - Purpose: maximize fairness and align with other systems. - Setup: 0.6B; trained on LibriLight/Emilia; evaluated on LibriSpeech test-clean subsets A/B. - Table 3: - Purpose: assess the parameter scaling effects of different modules in DiTAR, so we start with a relatively smaller model and use a more difficult test set for evaluation. - Setup: 0.4B; trained on 280k hours of data; evaluated on Seed. - Table 6: - Purpose: assess the upper-bound performance of DiTAR by comparing DiTAR against various commercial proprietary models trained on various internal data. - Setup: 1B; trained on 280k hours of data; evaluated on Seed. **Q6. Objective comparison with VALL-E 2 and ARDiT** - VALL-E 2 has not released the checkpoints and the subset of the test set, so the scores reported in their paper cannot be directly used for comparison. - ARDiT is tested on a released subset of LibriTTS test-clean. We reevaluated the samples using the same tool. |Method|WER↓| SIM↑| |--|--|--| |ARDiT|4.036|0.613| |DiTAR(Ours)|**3.401**|**0.717**| **Q7. Response to other questions** - v-prediction vs. 
flow matching: Under the same diffusion formulation defined by $\alpha_t$ and $\sigma_t$, the v-prediction and flow-matching losses are mathematically equivalent [4]. We will provide the corresponding derivations in the paper. - $\dot{\alpha}_t$ and $\dot{\sigma}_t$ are the first derivatives of $\alpha_t$ and $\sigma_t$ with respect to $t$, respectively. **References** [1] Chen, Sanyuan, et al. "VALL-E 2: Neural codec language models are human parity zero-shot text to speech synthesizers." _arXiv preprint arXiv:2406.05370_ (2024). [2] Eskimez, Sefik Emre, et al. "E2 TTS: Embarrassingly easy fully non-autoregressive zero-shot TTS." _2024 IEEE SLT_. IEEE, 2024. [3] Chen, Yushen, et al. "F5-TTS: A fairytaler that fakes fluent and faithful speech with flow matching." _arXiv preprint arXiv:2410.06885_ (2024). [4] Fu-Yun Wang, et al. "Rectified Diffusion: Straightness Is Not Your Need." ICLR 2025. *We sincerely hope that our reply could address your concerns and that you might consider raising the rating. Please let us know if you have any further questions or require additional results.* --- Rebuttal Comment 1.1: Comment: I appreciate the authors' careful responses and thank them for their efforts in addressing my concerns. Here are my responses to the authors' rebuttal: 1. I appreciate the new demo page, which has partially addressed my concerns over the subjective evaluations. However, I noticed that the F5 and E2 samples have the same total duration while DiTAR's do not. Since both F5 and E2 require a total duration input and the current models do not support a duration predictor (while DiTAR has an internal "total duration predictor" since it can always sample an <EOS> token to get a total duration), is the comparison a little unfair? I think in the F5-TTS paper, the authors used the total duration of the ground truth. Could you please also generate some samples using the ground truth duration (or the same total duration as your samples)? 2. 
I believe the outcomes are quite similar since they both propose patched generation, even though the motivation and architecture are different. 3. I understand that DiTAR and ARDiT are different in ways the authors explain in the figure, but the authors did not address my main concern regarding the similarity between DiTAR and ARDiT. That is: > The use of diffusion for patch prediction resembles existing approaches like ARDiT and Transfusion, but the paper does not sufficiently explore whether their three-stage approach (encoding → AR → diffusion) provides meaningful advantages over the simpler AR → diffusion methodology in ARDiT. > The necessity of the three-step architecture versus simpler alternatives (ARDiT) is inadequately justified. The paper does not explore whether the encoding → AR → diffusion approach provides meaningful advantages over simpler approaches like direct AR → diffusion used in ARDiT. That is, I'm not concerned over its similarity to ARDiT but rather whether the newly proposed DiTAR is necessary compared to ARDiT. 4. I appreciate the authors' response regarding the throughputs, and it has addressed my concerns. I believe the authors should revise the paper to make this point clearer, especially by adding the discussion regarding the batch size and ARDiT's advantages/disadvantages over NAR, since the current version sounds more like an overstatement of the ARDiT's efficiency by ignoring its inefficiency in small batch sizes (for inferences in on-device situations, for example). 5. Thank you for your clarification. Could you please make the experimental setup clearer in your revised manuscript? 6. > VALL-E 2 have not released the checkpoints and the subset of the test set, so the scores reported in their papers cannot be directly used for comparison. 
I believe the Vall-E 2 paper mentioned the evaluation models and test subset: > SIM is used to evaluate the speaker similarity between the original prompt and synthesized speech, leveraging the SOTA speaker verification model, WavLM-TDNN^3 [Chen et al., 2022]. The similarity score predicted by WavLM-TDNN is in the range of [−1, 1], with a larger value indicating higher speaker similarity. > WER (Word Error Rate) is used to evaluate the robustness of synthesized speech. Neural TTS systems sometimes experience deletion, insertion, and replacement errors due to incorrect attention alignments, which can affect their robustness. We perform ASR on the generated audio and calculate the WER with respect to the original transcriptions. In this experiment, we employ the open-sourced Conformer-Transducer model^4 [Gulati et al., 2020] as the ASR model. > Following Borsos et al. [2022] and Wang et al. [2023a], we use samples from LibriSpeech test-clean with lengths between 4 and 10 seconds, resulting in a 2.2 hours subset and 40 unique speakers. We evaluate each sample synthesis under two settings: 3s Prefix as Prompt and Ref Utterance as Prompt. For the first setting, we perform speech continuation and utilize the 3-second prefix of the speech as the prompt. In the second setting, we use a reference utterance from the same speaker as the prompt. Specifically, we begin by filtering the official speech list of LibriSpeech test-clean based on length. For the ordered speech list of each speaker, in the first setting, we synthesize the i-th speech sample using the first 3 seconds of the ground-truth i-th speech sample as the prompt. In the second setting, we synthesize the i-th speech sample using the (i − 1)-th sample as the prompt and synthesize the first speech sample using the last sample as the prompt. 
> 3: We use the best speaker verification model released at https://github.com/microsoft/UniSpeech/tree/main/downstreams/speaker_verification#pre-trained-models > 4: https://huggingface.co/nvidia/stt_en_conformer_transducer_xlarge Please check page 10 of https://arxiv.org/pdf/2406.05370 for more details. > ARDiT is tested on a released subset of LibriTTS test-clean. We reevaluated the samples using the same tool. ARDiT was trained on LibriTTS, while DiTAR was trained on LibriLight, making this comparison unfair. --- Reply to Comment 1.1.1: Comment: We appreciate your detailed response. Regarding your questions, our replies are as follows: 1. Subjective comparison with E2TTS and F5TTS: - To clarify, we are comparing the end-to-end text-to-speech performance between systems, and duration modeling is a part of the system. AR models can naturally simulate duration, while NAR models require an additional duration prediction module. If ground-truth (GT) duration is used for NAR systems, then for texts without GT audio, the duration cannot be obtained. - The duration of E2TTS and F5TTS: They share the same duration prediction method, based on F5TTS's released code and checkpoint. To clarify, the F5TTS paper uses rule-based predicted duration instead of GT duration, as mentioned on page 4: > The sequence length N, or duration, has now become a pivotal factor that necessitates informing the model of the desired length for sample generation. One could train a separate model to predict and deliver the duration based on x_ref, y_ref and y_gen. Here we simply estimate the duration based on the ratio of the number of characters in y_gen and y_ref. - We synthesized F5TTS and E2TTS samples using GT duration and have provided them at the following link: https://spicyresearch.github.io/temp_samples/ 2. The impact of patchification is different. - VALL-E 2 achieves the best results with patch=1 (Table 1 in the VALL-E 2 paper). 
The purpose of patchification is only to reduce computational load. - For DiTAR, the best results are achieved with patch>1. This demonstrates that DiTAR's patchification, which introduces bidirectional attention modeling within patches, improves performance. - Table 2 below also demonstrates this conclusion. 3. The advantage over ARDiT: Thank you for your perspective; this is a topic worth discussing. - We mentioned in the paper as follows. We will make the point clearer. >Another approach, such as ARDiT or Transfusion, repurposes the language model’s parameters for diffusion, leading to substantial computational demands. - The biggest advantage is that when combined with an LLM or scaled up, DiTAR's three-stage approach (encode->LM->diffusion) can save a significant amount of computational load compared to ARDiT's single-stage approach (LM=diffusion). - Diffusion requires multiple sampling steps during inference. In models like ARDiT, where the LM and diffusion share parameters, each token prediction requires multiple computations on the LLM's parameters. In contrast, DiTAR only needs to perform multiple sampling steps on the diffusion head. \<Table 1 The TFLOPs of generating a 10-second audio clip with NFE=10 and CFG> |Method|Parameters|TFLOPs↓| |--|--|--| |ARDiT|600M|8.70| |ARDiT|7B|112.78| |DiTAR(P=4)(Ours)|600M|2.75| |DiTAR(P=4)(Ours)|7B|5.40| 4. We are pleased to have addressed your concerns. We will further revise the paper, adding more inference metrics and discussing more about the advantages and disadvantages of each system. 5. We are pleased to have addressed your concerns. We will make the experimental setup clearer in the revised manuscript. 6. Comparison with VALL-E 2: Thank you for the reminder. We followed the data processing methods mentioned in the VALL-E 2 paper. All samples were evaluated using the same tool for WER/SIM. We have compiled all the results below for comparison. DiTAR matches VALL-E 2 in WER and significantly outperforms it in SIM. 
\<Table 2 LibriSpeech test-clean> |Method|Patch size|WER↓| SIM↑| |--|--|--|--| |GT |-|1.6|0.779| |VALL-E 2|1|**1.5**|0.643| |VALL-E 2|2|**1.5**|0.635| |VALL-E 2|4|2.2|0.615| |DiTAR (Ours)|1|4.65|0.694| |DiTAR (Ours)|2|1.55|**0.705**| |DiTAR (Ours)|4|**1.53**|0.678| 7. Comparison with ARDIT: Thank you for the reminder. We retrained our model using LibriTTS for 100k steps. All samples were evaluated using the same tool for WER/SIM. The results are shown below: \<Table 3 LibriTTS test-clean> |Method|WER↓| SIM↑| |--|--|--| |ARDiT|4.036|0.613| |DiTAR(Ours)|**3.536**|**0.615**| *We sincerely hope our reply addresses your concerns and that you might consider raising the rating.*
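The compute argument in point 3 of the reply above can be made concrete with a rough back-of-envelope sketch, using the common approximation that a transformer forward pass costs about 2 × parameters FLOPs per token. All numbers below are illustrative toys, not the TFLOPs measured in the rebuttal's Table 1:

```python
# Back-of-envelope: with NFE diffusion steps, an ARDiT-style model (LM and
# diffusion share all parameters) pays NFE full-model forward passes per
# generated token, while a DiTAR-style model pays one LM pass plus NFE passes
# through a small diffusion head. Rough rule: FLOPs ~= 2 * params * tokens.
# All numbers are illustrative toys, not the measurements reported above.

def forward_flops(params, tokens):
    return 2 * params * tokens

def ardit_style_flops(lm_params, tokens, nfe):
    # every sampling step runs the full model
    return forward_flops(lm_params, tokens) * nfe

def ditar_style_flops(lm_params, head_params, tokens, nfe):
    # one LM pass, plus multi-step sampling on the head only
    return forward_flops(lm_params, tokens) + forward_flops(head_params, tokens) * nfe

lm, head, tokens, nfe = 7_000_000_000, 300_000_000, 400, 10
ratio = ardit_style_flops(lm, tokens, nfe) / ditar_style_flops(lm, head, tokens, nfe)
# ratio == 7.0 with these toy numbers; the gap widens as the LM grows.
```

Under this crude model the multi-step sampling cost scales with the small head rather than the full LM, which is the rebuttal's point about why the advantage grows at 7B.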
Summary: This paper introduces Diffusion Transformer Autoregressive Modeling (DiTAR), a patch-based framework that combines a language model with a diffusion process to generate continuous speech tokens. By employing a “divide-and-conquer” strategy, the language model processes aggregated patch embeddings, and the diffusion transformer then generates the next patch, reducing computational overhead and error compounding. Various experiments highlight the importance of each component and ultimately demonstrate that the methodology is both high-quality and efficient. ## update after rebuttal The authors' rebuttal satisfactorily addressed most of my concerns through additional experiments and clarifications. The provided demos also demonstrated high quality. However, some ambiguity remains—specifically regarding how "low dim" is applied, as Figure 1 is still unclear on this point. I expect these issues to be clarified in the final manuscript. I am inclined to raise my score based on the improvements, but if the final version fails to resolve these outstanding issues, my score should remain unchanged. Claims And Evidence: Each claim is supported by the necessary results, and the experiments were conducted comprehensively and clearly. Methods And Evaluation Criteria: Multiple models were evaluated using a unified benchmark, metric, and evaluation model, and the effort to maximize fairness was remarkable. Theoretical Claims: The formulation was simple and convincing. However, one drawback is that the relationship between the learned v in Eq. 7’s flow-matching loss and the v-prediction in DDPM was not clearly explained. It would be beneficial to clarify this with a relevant citation. Experimental Designs Or Analyses: The necessary experiments are well-designed and the result analysis is convincing. The details of the aspect I wish to clarify through further queries are described in more detail below. 
Supplementary Material: I could not locate any demo or sample, which I believe is essential for an accurate evaluation. Other than that, I have reviewed all parts of the manuscript, and the questions are summarized below. Relation To Broader Scientific Literature: As mentioned in the paper, the idea of combining the strengths of language models and diffusion for high-quality, efficient modeling in speech synthesis has been proposed recently. The significance here lies in demonstrating that this hybrid approach can outperform either modeling technique on its own. This finding is both meaningful and promising for future developments in the field. Essential References Not Discussed: The paper that originally introduced the definition of v-prediction should be mentioned [1]. [1] Salimans, T., & Ho, J. (2022). Progressive distillation for fast sampling of diffusion models. *arXiv preprint arXiv:2202.00512*. Other Strengths And Weaknesses: Although this paper is not the first attempt to harness the strengths of LMs and diffusion models for creating efficient and high-performing generative models, the proposed approach is both valid and convincing, making its effective resolution of this challenge significant. Each claim is well-supported by appropriate experiments. I believe that this approach could serve as a general framework for generative modeling not only in speech synthesis but also in the image domain. However, the following aspects require precise clarification. 1. The demo page and audio samples were not found; including them would further substantiate the evaluation results presented in the paper. 2. In the right-hand LocDiT diagram of Figure 1, is h_i included solely because of CFG? Must it be placed exactly there, and does the presence of h_i improve performance even without CFG—essentially, what is the quantitative contribution of h_i? 3. In Section 3.2 (line 171), does the term “lower-dimensional feature space” refer to h_i in Figure 1? 
Given that h_i is fed into LocDiT and should share the same hidden dimensions, its meaning in this context remains unclear. 4. In Section 3.3 (line 191), the notion of reduced “generality” is ambiguous—does this imply that the model can condition on its own self-supervised features rather than on externally labeled classes? 5. In Section 3.4 (line 183), v_θ is used without a formal definition; if it represents the v-prediction, please include a citation to its original introduction. 6. In Algorithm 1: 1. On line 674, should “argmax” be replaced with “argmin”? 2. On line 687, is “v” intended to be “v-hat”? 3. The notation for tₙ within the for-loop requires clarification—for instance, on lines 685 and 687, does the resulting x correspond to x_t(n–1)? 7. In Section 3.5.1: 1. The statement “the 24000Hz waveform is compressed into 40Hz latent with a dimension of 64” raises the question of whether training a simple VAE (as opposed to approaches like FSQ [2]) to compress raw audio to a 40Hz rate and 64-dimensional latent was challenging, and if any issues were encountered. 2. Furthermore, since the codec’s reconstruction quality and code rate are critical hyperparameters for overall performance, it is important to know if ablation studies were conducted on these factors or if a performance comparison with other codecs was performed to clearly distinguish the contributions of the codec versus the language model. 8. In Section 3.5.4 regarding the prefix input to DiTAR’s LM: 1. Is the ordering of inputs (prompting audio, text, target text) correct? 2. If so, should prompting audio also be explicitly considered during training, as Figure 1 appears to suggest that inputs are fed in the order of (prompt) text, target text, and then prompting audio without separate training for prompting audio—this point requires clarification. 9. In Section 4.1.1 on Evaluation Metrics: 1. 
For the Librispeech test-clean subset B, Faster-whisper-large-v3 was used; does this model guarantee performance comparable to OpenAI’s whisper-large-v3? If not, using a different evaluation measure for the F5 TTS subset B could compromise the fairness of the comparison. 2. When measuring speaker similarity, was the ground truth raw waveform used instead of the codec-reconstructed audio? 10. In Section 4.2: 1. Is the 20k to 280k hours of data used for scaling sourced from a different internal dataset than the training dataset mentioned in Section 4.1.1? 2. Does the “Encoder” in Table 3 refer specifically to the aggregation encoder illustrated in Figure 1? 11. How long does it take for the evaluation model to converge during training? 12. In Section 4.3: 1. Why is a patch size of 4 used instead of 2, given that Figure 3 indicates that a patch size of 2 achieves lower WER and higher similarity—could this decision be related to latency or throughput concerns? 2. Does “historical context” refer to h_i or to historical patches, and in Table 4, does the value for Historical contexts represent the number of h_i’s or the number of patches? 3. Compared to previous literature, DiTAR appears to perform well even with an extremely low NFE (e.g., NFE = 2). Could the authors provide insights into the fundamental reason behind this phenomenon? 13. In Section 4.4.2, additional explanation regarding the NAR model would be beneficial—could the authors provide further details about its design and performance characteristics? [2] Mentzer, F., Minnen, D., Agustsson, E., & Tschannen, M. (2023). Finite scalar quantization: Vq-vae made simple. *arXiv preprint arXiv:2309.15505*. Other Comments Or Suggestions: 1. In Section 3.4, line 191, please confirm if “Eqn” should be replaced with “Eq”. 2. In Table 6, the System column shows “Seed-EN” twice; should the second occurrence be “Seed-ZH” instead? 3.
For zero-shot TTS evaluation, it would be beneficial to add comparisons with the following recently proposed models: 1. Multi-stage: E1 TTS [3] 2. Single-stage: DiTTo-TTS [4] 3. Open-sourced (production) models from TTS-Arena [5]—if possible, include comparisons with Kokoro, Fish Speech, XTTSv2, and StyleTTS2. 4. In Section 3.1.1, line 139, the statement “Noting the high similarity among adjacent continuous tokens, it is evident that a bidirectional dependency exists within local regions” would be strengthened by citing related research that supports this observation. 5. In Section 4.1.1, line 299, please consider citing the related works mentioned in “Many studies” to provide a more robust context for the discussion. [3] Liu, Z., Wang, S., Zhu, P., Bi, M., & Li, H. (2025, April). E1 tts: Simple and fast non-autoregressive tts. In *ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)* (pp. 1-5). IEEE. [4] Lee, K., Kim, D. W., Kim, J., Chung, S., & Cho, J. DiTTo-TTS: Diffusion Transformers for Scalable Text-to-Speech without Domain-Specific Factors. In *The Thirteenth International Conference on Learning Representations*. [5] https://huggingface.co/spaces/TTS-AGI/TTS-Arena Questions For Authors: 1. Have you conducted any experiments in the image domain? Since the proposed method appears capable of addressing MAR’s challenges, it would be interesting to know how quality and latency improvements translate to image generation. 2. How does the proposed patchification differ from the approach in the MAR paper that generates multiple tokens simultaneously? Although including historical context sets them apart, it seems that fundamentally similar modeling is being employed. 3. Final feature matching is notoriously challenging, which is why many previous approaches adopt a coarse-to-fine strategy. However, DiTAR appears to achieve strong performance by aligning the final feature in one pass. 
Could the authors clarify whether this success is primarily attributable to the patchification strategy, or if there are other insights that explain this phenomenon? 4. In Section 3.5.4, the model is conditioned on phonemes; have you evaluated a character-based approach as well, and is there clear evidence that the phoneme-based conditioning offers distinct advantages? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your careful reading of our paper and your insightful comments. Due to the word limit, we reply point by point in a concise manner: **Questions in "Other Strengths And Weaknesses":** 1. Audio samples are presented here: https://spicyresearch.github.io/ditar/#hard-cases 2. No, $h_i$ is not solely for CFG. $h_i$ serves as the output of the LM and the conditioning for LocDiT, connecting the two. 3. Yes. Compared with the total dim of a patch of tokens, the dim of $h$ is low. 4. Yes. We will replace the term "generality" with "generalization" to make the meaning clearer. 5. $v_t$ here represents the vector field or velocity. We will add the description to reduce ambiguity. 6. Thank you. We will correct the typos: 1. Line 674: argmax -> argmin 2. Line 687: $v$ -> $\hat{v}$ 3. Line 685: $t_n$ -> $t_{n+1}$ 7. The implementation of VAE: 1. Due to the lack of an open-source speech VAE suitable for diffusion use, we trained the VAE following the approach used in LDM[1]. In the process, we aimed for adequate reconstruction quality without pursuing excessive compression, so the overall task was not challenging. 2. Thank you for the suggestion. Extensive research on VAE is part of our future work. 8. Prefix input to LM: 1. Thanks for pointing out the typo. We will fix it to (text, target text, prompting audio). 2. No. Following prior LM-based work [2], the loss is calculated over the entire audio. 9. Evaluation Metrics: 1. To clarify, we've communicated with the authors of F5TTS, and they actually used 'faster-whisper-large-v3' instead of 'whisper-large-v3'. They admitted it was a typo in their paper. To maintain consistency, we continue to use "faster" in our paper. 2. Yes, following prior works [2][3]. 10. Section 4.2: 1. 20k: part of Librilight, 60k: Librilight, 100k: Emilia, 280k: Librilight+Emilia+inhouse 2. Yes. 11. Convergence time: 500k training steps (Appendix A.1). 12. Section 4.3: 1. Yes.
A patch size of 2 is slightly better than 4 in performance, but 4 offers better throughput. This section focuses on the ablation of other components, so any reasonable patch size, either 2 or 4, is OK. 2. “Historical contexts” means the number of historical patches. We will make the expression clearer later on. 3. Thank you for noticing this detail. We think the proposed historical patches and LM guidance enhance the accuracy of LocDiT's predictions. 13. NAR in efficiency evaluation: It is a transformer identical to E2TTS[3]. It consists of 36 layers, each with a hidden dimension of 1024 and 16 heads. The performance is similar to E2TTS. **Questions in "Other Comments Or Suggestions":** 1&2. Thanks for pointing out the typos. 3. Other comparisons: - Kokoro: does not support zero-shot mode. - StyleTTS 2 & E1TTS: tested on the released subset of LibriTTS. - DiTToTTS: tested on Librispeech, but the subset used is not released. - XTTS v2 & Fish Speech: we evaluated the released checkpoints on Seed-EN. Test set| Method|WER↓| SIM↑| --|--|--|--| LibriTTS|StyleTTS 2|4.065|0.409| -|E1TTS|**3.246**|0.616| -|DiTAR(Ours)|3.401|**0.717**| LibriSpeech|DiTToTTS|2.56|0.627| -|DiTAR(Ours)|**1.78**|**0.64**| Seed-EN|XTTS v2|3.248|0.463| -|FishSpeech|2.372|0.55| -|DiTAR(Ours)|**1.685**|**0.735**| 4. Thank you. We will cite the corresponding reference[4]. 5. Thank you for the suggestion, we will. **Questions in "Questions For Authors"** 1. Application on image generation is part of our future work. 2. The connection with MAR in patchification: - The purposes are different. MAR employs fully bidirectional attention and uses patchification to reduce computational demand. DiTAR is essentially a causal LM and uses patchification to perform bidirectional modeling locally within patches, which enhances performance. - The figure in the link better illustrates the differences: https://spicyresearch.github.io/ditar/#comparison 3.
DiTAR's strong performance in one pass: Patchification enables bidirectional modeling within patches, along with LM guidance, making the prediction of fine features more accurate. The LM -> $h$ -> DiT -> $x$ pipeline can be considered an implicit coarse-to-fine process. 4. Phoneme vs. text: The use of phonemes is to align with other TTS systems for a fair comparison, not because phonemes are superior to text. **References** [1] Rombach, Robin, et al. "High-Resolution Image Synthesis with Latent Diffusion Models." CVPR 2022. [2] Wang, Chengyi, et al. "Neural codec language models are zero-shot text to speech synthesizers." _arXiv preprint_ (2023). [3] Eskimez, Sefik Emre, et al. "E2 tts: Embarrassingly easy fully non-autoregressive zero-shot tts." 2024 IEEE SLT. IEEE, 2024. [4] Tian, Keyu, et al. "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction." NeurIPS 2024. *We sincerely hope that our reply could address your concerns and that you might consider raising the rating. Please let us know if you have any further questions or require additional results.* --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ rebuttal; their experiments and explanations have addressed most of my concerns. I also randomly listened to some of the demos provided and found the quality to be high. However, some ambiguity remains. For instance, it is still unclear whether "low dim" is concatenated with the patches or applied as a separate condition to the input, as its placement in Figure 1 suggests. I expect these issues, along with the promised revisions, to be clarified in the manuscript. Based on the improvements, I am willing to raise my score; however, if the final version does not resolve these issues satisfactorily, my score should remain at 2. --- Reply to Comment 1.1.1: Comment: We are pleased to have addressed most of your concerns and sincerely thank you for your constructive feedback.
We promise to incorporate our discussions in the revised manuscript to reduce ambiguity.
Summary: This paper introduces DiTAR, a speech generation method that integrates a causal language model (LM) with a shallow diffusion module using a bidirectional diffusion transformer (LocDiT). The approach incorporates several key techniques, including patchifying continuous audio tokens, directly feeding historical patches into the diffusion module, classifier-free guidance for the diffusion model, and temperature-controlled ODE sampling to balance diversity and stability. Experimental results demonstrate state-of-the-art performance in zero-shot TTS, achieving strong robustness, speaker similarity, and high naturalness, while maintaining lower computational overhead compared to baseline models. ## update after rebuttal I appreciate the authors' efforts in responding to my previous review, and I find that most of my concerns have now been effectively addressed. The work positions itself between fully autoregressive (AR) and non-autoregressive (NAR) methods. Its primary contribution appears to lie not in architectural novelty, but rather in its practical benefits and its good performance. I encourage the authors to incorporate the points discussed in the rebuttal, including relevant references, into the revised manuscript. Reflecting these improvements, I have increased my score from 2 to 3 and am now leaning towards acceptance. Claims And Evidence: The proposed methods and evaluation criteria are relevant. Methods And Evaluation Criteria: Proposed methods and/or evaluation criteria are reasonable and aligned with the problem/application. Theoretical Claims: I reviewed proposed methods and found them sound. Experimental Designs Or Analyses: I reviewed the experimental design and analyses and found them sound. 
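The temperature-controlled ODE sampling mentioned in the summary above builds on flow-matching generation, which integrates a learned velocity field from noise at $t=0$ to data at $t=1$. The sketch below shows a generic Euler sampler with a closed-form toy velocity (the conditional linear-interpolation path toward a single fixed target); it is purely illustrative — the real velocity network would be the paper's LocDiT, and the paper's temperature mechanism is not reproduced here.

```python
import numpy as np

# Generic Euler sampler for a flow-matching ODE dx/dt = v(x, t), t: 0 -> 1.
# Toy velocity: for the linear path toward a single target point `target`,
# the conditional velocity is (target - x) / (1 - t), which transports any
# starting point to `target` at t = 1. Illustrative only; not the paper's model.

def euler_sample(v, x0, n_steps=100):
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt            # t stays strictly below 1, so (1 - t) > 0
        x = x + dt * v(x, t)
    return x

target = np.array([1.0, -2.0, 0.5])
v = lambda x, t: (target - x) / (1.0 - t)
x1 = euler_sample(v, np.random.randn(3))
print(np.allclose(x1, target))
```

With a learned velocity field, fewer Euler steps trade accuracy for speed — the NFE knob discussed in the reviews.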
Supplementary Material: I reviewed "The impact of temperature during inference" and "Experimental Result Supplements". Relation To Broader Scientific Literature: The key contributions to improve generation performance for TTS are applicable to broader audio generative modeling, including applications such as speech-language modeling and audio/music generation. Essential References Not Discussed: Although this work should be considered a follow-up work to the prior work [1], the authors make only minimal references to it. The prior work explores the combination of a transformer and a shallow MLP decoder in both autoregressive and non-autoregressive settings, as well as the patchification of four tokens in continuous token experiments. Consequently, the contribution of this work is diminished unless the authors can demonstrate that the choice of autoregressive modeling is crucial rather than an arbitrary ordering. [1] Fan, L., Li, T., Qin, S., Li, Y., Sun, C., Rubinstein, M., Sun, D., He, K., and Tian, Y. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. arXiv preprint arXiv:2410.13863, 2024. Other Strengths And Weaknesses: Strengths: * The strong performance in the main results demonstrates the effectiveness of the proposed modeling approach. * The method is straightforward, and each component is systematically evaluated through ablation studies. Weaknesses: * The proposed method is closely related to Fluid [1], as both employ a language model backbone and a shallow diffusion module. The prior work has also shown autoregressive/random ordering in generation and patchification of tokens. However, I do not see sufficient novelty, despite differences such as the use of a transformer in the shallow diffusion module, historical patching, diffusion module-only guidance, and temperature-based sampling for improving inference performance. This work should provide a comprehensive comparison with Fluid and clarify how it critically improves upon it.
* In the following sense, although the choice of autoregressive ordering is more natural in TTS than the non-autoregressive one with additional duration modeling, the authors should also justify the choice, as prior work has shown that random ordering can outperform autoregressive ordering. Additionally, since the model adopts autoregressive modeling, the paper should report inference speed metrics, including latency (time-to-first frame), full inference time, and the feasibility of real-time streaming. * A more extensive ablation study would better highlight the importance of each module, including: * Comparing LocDiT to prior methods, such as the MLP-based diffusion module, and discussing whether its significant performance degradation in the absence of historical context stems from its transformer-based architecture rather than the MLP-based approach, given that prior works have demonstrated strong performance with MLP-based models. * Evaluating LM guidance, or diffusion module-only guidance, against prior methods, particularly those that apply guidance to the entire model using blank conditioning, while considering both performance and efficiency. * The temperature-based sampling appears to have a marginal impact on performance. [1] Fan, L., Li, T., Qin, S., Li, Y., Sun, C., Rubinstein, M., Sun, D., He, K., and Tian, Y. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. arXiv preprint arXiv:2410.13863, 2024. Other Comments Or Suggestions: I don't have any other comments or suggestions. Questions For Authors: I don't have any other questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We provide detailed responses to your concerns as summarized below: Demo page: https://spicyresearch.github.io/ditar/#hard-cases **Q1. Fluid[1] as an essential reference not discussed** To clarify, we have indeed been discussing MAR[2], the forerunner of AR with diffusion loss. Fluid and MAR are identical in methodology, with Fluid simply being a scaled version of MAR. Our discussion of MAR[2] essentially amounts to discussing Fluid. We will subsequently add references to Fluid for completeness. **Q2. Our contributions and connection with MAR/Fluid** Causal-attention AR (GPT-style) with a diffusion head performed poorly in predicting continuous tokens[1][2]: * MAR/Fluid abandoned the GPT style and proposed a bidirectional-attention method using random order. * In contrast, we continue to delve deeper into GPT-style AR for continuous tokens, analyze why it does not perform well, and propose DiTAR as a solution. The figure in the link better illustrates the differences: https://spicyresearch.github.io/ditar/#comparison **Q3. Different purposes of patchification** - The patchification technique is widely applied in various fields, but the main purpose is to reduce computational load, and the best result is achieved when the patch size is set to 1[1][2][3][4]. - In contrast, in our work, the main purpose of patchification is to enable bidirectional modeling for next-patch prediction and overcome the limitations of causal AR. We achieve the best results when the patch size is greater than 1. **Q4. More inference metrics** Thank you for your suggestion. We have added more inference metrics. All metrics are obtained on an A100(80GB) GPU by generating 10-second audio.
Batch size:500/1 | Systems| Latency(s)↓| RTF↓| Throughputs(s)↑ | |---|---|--|--| | NAR|50.03/0.37|5.03/**0.037** |99.4/**27**| | DiTAR(P=4)|0.139/0.066|**1.39**/0.66|**360**/1.5| | DiTAR(P=2)|**0.1085**/**0.064** |2.17/1.28|230/0.78| As shown in the table, DiTAR's inference characteristics are similar to those of the causal language model. DiTAR has low latency and can use KV cache to save computation and increase concurrency, whereas NAR has high parallelism and can achieve fast speed with a small batch size. **Q5. The choice of Causal-AR over NAR on speech generation** To clarify, the purpose of our paper is not to prove that AR models are more suitable for speech generation than NAR or other multi-stage methods, as each offers specific advantages and disadvantages depending on the scenario. Instead, our aim is to propose a *general GPT-style generative model based on continuous representations*, and to demonstrate its ability to achieve SOTA results when applied to speech generation. We kept the design minimalist and avoided domain-specific features like duration or prosody modeling, making it easier to scale and adapt to other generative fields, such as video and music. **Q6.LocDiT v.s. MLP, as the diffusion head** Actually, we have conducted the comparison in Table 4 of the paper. When the patch size is 1 and the number of historical patches is 0, LocDiT degrades to an MLP module. It is evident that LocDiT is significantly superior to the MLP. We will clarify this fact more clearly in the paper. [1][2] also demonstrated that causal AR with a MLP diffusion head performs poorly. | Method|WER↓| SIM↑| |----|-----|---| |MLP|53| 0.340| |LocDiT| **1.736** | **0.720** | **Q7.LM-guidance v.s. prior guidance methods for LM** Thank you for your suggestion. We further compare the proposed guidance method with the CFG method for language model[5]. 
|Method|WER↓| SIM↑| Computational load↓| |----|:----|---|---| | Without any guidance| 2.858| 0.654| LM + diffusion | | CFG for language model [5]|2.323|0.680| LM x 2 + diffusion| | LM-guidance for LocDiT (ours)| **1.736**| **0.720** | LM + diffusion x 2| **Q8. Temperature for continuous-valued LM** The proposed temperature aims to balance diversity and certainty for continuous-valued LMs, not to improve WER/SIM. As demonstrated in Figure 6 of our paper, the higher the temperature, the greater the diversity in the generated results. **References:** [1] Lijie Fan, et al. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. _arXiv preprint_ (2024). [2] Tianhong Li, et al. Autoregressive Image Generation without Vector Quantization. _arXiv preprint_ (2024). [3] Liu, Zhijun, et al. "Autoregressive diffusion transformer for text-to-speech synthesis." _arXiv preprint_ (2024). [4] Chen, Sanyuan, et al. "Vall-e 2: Neural codec language models are human parity zero-shot text to speech synthesizers." _arXiv preprint_ (2024). [5] Sanchez, Guillaume, et al. "Stay on topic with classifier-free guidance." _arXiv preprint_ (2023). *We sincerely hope our rebuttal addresses your concerns and that you might consider raising the rating. Please let us know if you have any further questions or require additional results.* --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in responding to my previous review, and I find that most of my concerns have now been effectively addressed. The work positions itself between fully autoregressive (AR) and non-autoregressive (NAR) methods. Its primary contribution appears to lie not in architectural novelty, but rather in its practical benefits and its good performance. I encourage the authors to incorporate the points discussed in the rebuttal, including relevant references, into the revised manuscript.
Reflecting these improvements, I have increased my score from 2 to 3 and am now leaning towards acceptance. --- Reply to Comment 1.1.1: Comment: We are pleased to have addressed your concern. We sincerely appreciate your constructive feedback and raising the rating. We will incorporate your suggestions in the revised manuscript.
All-atom inverse protein folding through discrete flow matching
Accept (poster)
Summary: All-atom Discrete Flow Matching Inverse Protein Folding (ADFLIP) is a generative model for designing protein sequences conditioned on full atomic structures. Unlike existing inverse folding methods, ADFLIP progressively incorporates predicted side chains during sequence generation. Additionally, ADFLIP employs training-free classifier guidance to optimize sequences using pre-trained models. Evaluations on protein-ligand, nucleotide, and metal ion complexes show that ADFLIP achieves state-of-the-art performance in both single-structure and multi-structure inverse folding tasks, highlighting its potential for all-atom protein design. Claims And Evidence: As a primary comparison to LigandMPNN, ADFLIP demonstrates comprehensive evidence of its claims. Methods And Evaluation Criteria: Inverse folding benchmarks and evaluations are clear but could use error bars. For example, in Table 1, for each complex structure, 10 sequences were sampled. Given that the test set is not large, it would be interesting to see the sequence recovery distribution, or the per-sample average and std of the delta of improvement over LigandMPNN, rather than a single global metric. Given that ADFLIP and LigandMPNN are trained on the same data here, the benchmarks are fair. It would also be important to add the results for the public LigandMPNN weights, which could point to the importance of the training set filters. The DSMBind-based guidance seems odd as it is a structure-based method. As a result, you generate non-diverse sequences that ideally fold to the same structure while improving the binding affinity. - Since ADFLIP cannot update the structure, can it improve the binding affinity only when the model is incorrect in its sequence prediction? - The benchmarks here also don't make sense: why use TM-score rather than comparing the raw structure RMSD with Chai-1, for example, as done in prior evaluations?
- Furthermore, the affinity gain, which is defined as whether the generated sequence’s predicted binding affinity exceeds that of the wild-type sequence, shows that 42% of the time the generated sequences are worse than the reference's. Overall, the metrics are quite soft, and while this is an interesting approach, it leaves more questions. Also, in general, a 10% improvement over the wild-type binding affinities may be difficult to achieve and not physically useful due to the exponential relationship between concentration and affinity. Unclear how this 10% factors into the problem as currently written though. Theoretical Claims: No theoretical claims. The hyperparameters used to sample as well as the foundation for guidance and specifics are not provided. Experimental Designs Or Analyses: Yes, non-guided experiments are sound, but could benefit from deeper analysis given the small test set. Supplementary Material: Yes, only purity sampling was given. Relation To Broader Scientific Literature: ADFLIP enables all-atom inverse folding with SOTA sequence recovery rates for protein complexes with ligands, nucleotides or metal ions. This is an important step for improving AI-assisted protein design. Essential References Not Discussed: Concurrent work, so a comparison is not expected, but FAMPNN [1] is worthwhile to discuss. [1] https://www.biorxiv.org/content/10.1101/2025.02.13.637498v1.full.pdf Other Strengths And Weaknesses: The paper is missing key numerical details with regard to how the model was trained and specific hyperparameters for training and generation. Key ablations, like why purity sampling is used and what happens when using standard DFM sampling, are needed to bolster the case for the technical novelty. Other Comments Or Suggestions: nit: FLow in line 24 Questions For Authors: 1. What is more impactful, the architecture introduced or the discrete flow matching?
A lot of attention to the architecture is provided, and it is clearly beneficial, but deeper ablations into the degree to which the underlying generative framework and the architecture each play a role would strengthen the contribution. 2. What are the mean/std or distributions of the sequence recoveries? 3. What happens when you do not use purity sampling wrt the benchmarks? 4. How is classifier guidance implemented for the discrete flow models? An algorithm would be useful here. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **1. What is more impactful, the architecture introduced or the discrete flow matching? A lot of attention to the architecture is provided and it clearly is beneficial but deeper ablations as to what degree does the underlying generative framework and architecture play a role would strengthen the contribution.** Thank you for your comment. Both the multi-scale GNN architecture and the discrete flow matching framework play important and complementary roles in ADFLIP. The flow matching framework provides a flexible generative backbone that allows for integrating diverse sources of information—such as multiple structural states and external guidance signals. As shown in Table 4, this enables improved generation performance under dynamic structural contexts. On the other hand, the multi-scale architecture dynamically captures both atom-level and residue-level information, enabling the model to reason over partial side-chain context during sampling. To assess its effect, we conducted an ablation study: | **Model** | **Ligand (%)** | **Nucleotide (%)** | **Metal Ion (%)** | |------------------------|--------------------|---------------------|---------------------| | ADFLIP w/ sidechain | 62.19 ± 13.60 | 50.21 ± 13.52 | 75.79 ± 18.18 | | ADFLIP w/o sidechain | 61.43 ± 16.20 | 49.74 ± 13.32 | 75.92 ± 16.52 | While the numerical gains in recovery rate are moderate, we believe the use of side-chain information has significant implications for downstream applications of inverse folding, such as enzyme design or protein–ligand interaction modeling—because the chemical interactions with small-molecule ligands or substrates are typically mediated by the protein side chains, where even minor torsional differences can result in substantial functional changes. We will point out this advantage of our all-atom approach more explicitly in the revised version.
**What are the mean/std or distributions of the sequence recoveries?** We have computed the mean and standard deviation of the sequence recovery rates for LigandMPNN and ADFLIP as shown below. We will also include the distribution in the revised manuscript. | Method | Ligand (%) | Nucleotide (%) | Metal Ion (%) | |------------|-------------------|-------------------|-------------------| | LigandMPNN | 57.96 ± 11.77 | 46.14 ± 12.13 | 69.31 ± 17.46 | | ADFLIP | 62.19 ± 13.60 | 50.21 ± 13.52 | 75.79 ± 18.18 | **What happens when you do not use purity sampling wrt the benchmarks?** We performed an ablation study to evaluate the impact of purity sampling on performance across benchmarks. Specifically, we varied the purity threshold and compared it to a fixed-step denoising strategy. Purity Sampling (variable threshold): | Threshold | Average Non-Protein RR | Std Dev | |-----------|------------------------|---------| | 0.3 | 0.602 | 0.1399 | | 0.5 | 0.603 | 0.1411 | | 0.7 | 0.588 | 0.1457 | | 0.9 | 0.568 | 0.1459 | Fixed-Step Sampling: | Denoise Steps | Average Non-Protein RR | Std Dev | |---------------|------------------------|---------| | 2 | 0.565 | 0.1313 | | 5 | 0.592 | 0.1350 | | 10 | 0.593 | 0.1361 | We will add these new results as supplementary data to the revised manuscript. **How is classifier guidance implemented for the discrete flow models? An algorithm would be useful here.** Thank you for this suggestion. We have included the training-free classifier guidance sampling algorithm below, which we will incorporate into the revised manuscript for clarity.
### Algorithm: Training-Free Classifier Guidance Sampling

**Input:**
- $N$ protein and non-protein structures $\{x_1, ..., x_N\}$
- Initial sequence $s_0 = (\texttt{[MASK]}, ..., \texttt{[MASK]})$
- Initial sidechains $\chi_0 = \emptyset$
- Time step $\Delta t$
- Denoising network $f_\theta$, sidechain packing network $g_\eta$, regressor network $h_\phi$
- Target property value $y$

**Procedure:**
1. Initialize $t = 0$
2. **While** $t < 1$:
   - **For** $n = 1$ to $N$:
     - Compute $p^n(\hat{s}_1) = f_\theta(s_t, x_n, \chi_t)$
   - Average over structures: $p(\hat{s}_1) = \frac{1}{N} \sum_n p^n(\hat{s}_1)$
   - Compute predicted property: $\hat{y} = h_\phi(p(\hat{s}_1))$
   - Compute guidance with respect to $\hat{s}_1$:
     $$\nabla \log p(y \mid \hat{s}_1) \approx -\nabla\,||y - \hat{y}||^2$$
   - Sample $s_1 \sim p(\hat{s}_1) \cdot p(y \mid \hat{s}_1)$
   - **For** $n = 1$ to $N$:
     - Compute sidechains: $\chi^n_1 = g_\eta(s_1, x_n)$
   - Compute reward: $R_t(s_t, j) = E_{p_{1|t}(s_1|s_t)}[R_t(s_t, j|s_1)]$
   - Sample $s_{t+\Delta t}$ using Eq. 1
   - Update time: $t \leftarrow t + \Delta t$
3. **Return**: Final sequence $s_1$

---

Rebuttal Comment 1.1: Comment: Thank you for answering my questions. I maintain my score.
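As a concrete illustration of the guidance sampling loop in the rebuttal thread above, here is a minimal NumPy sketch. Everything in it is a hypothetical stand-in: the vocabulary size, the toy `denoiser` (in place of the learned $f_\theta$), and the per-token property table `token_prop` (in place of the regressor $h_\phi$). It only demonstrates the reweighting $p(\hat{s}_1) \cdot p(y \mid \hat{s}_1)$; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

V, L, N = 20, 8, 3   # hypothetical: vocab size, sequence length, #structures
MASK = -1

# Hypothetical per-token property values standing in for the regressor h_phi;
# the real model would map full sequences to a property.
token_prop = rng.uniform(0.0, 1.0, size=V)

def denoiser(s_t, n):
    """Toy stand-in for f_theta: per-position distribution over clean tokens."""
    logits = rng.normal(size=(L, V))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sample_step(s_t, y_target):
    # 1. Average denoiser predictions over the N input structures.
    p_s1 = np.mean([denoiser(s_t, n) for n in range(N)], axis=0)
    # 2. Training-free guidance: reweight each candidate token by
    #    exp(-||y - y_hat||^2), a stand-in for p(y | s_1).
    w = np.exp(-(y_target - token_prop) ** 2)            # shape (V,)
    p_guided = p_s1 * w
    p_guided /= p_guided.sum(axis=1, keepdims=True)
    # 3. Sample the clean-sequence estimate s_1.
    return np.array([rng.choice(V, p=p_guided[i]) for i in range(L)])

s = np.full(L, MASK)
t, dt = 0.0, 0.25
while t < 1.0:
    s = sample_step(s, y_target=0.9)
    t += dt

assert s.shape == (L,) and (s >= 0).all() and (s < V).all()
```

Each pass tilts the averaged denoiser distribution toward tokens whose (toy) property lies near the target before sampling, mirroring the `Sample s_1 ~ p(s_1) * p(y|s_1)` step of the algorithm.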
Summary: A method named ADFLIP is proposed for inverse folding in all-atom structural contexts, e.g., containing ligands, nucleotides, and metal ions. The method is based on conditional discrete flow matching and a hierarchical GNN architecture. Additionally, it incorporates amino acid sidechains predicted by an external model as context and is able to perform training-free classifier guidance. Experimental results show improved performance over a re-trained version of LigandMPNN for diverse recovery metrics. The authors also show increased affinity gain measured in silico when an external model, DSMBind, is used for guidance. ## update after rebuttal After the rebuttal, the authors addressed some of my concerns and I have raised my score. Claims And Evidence: The claims made in the manuscript are supported by clear evidence. Methods And Evaluation Criteria: The evaluation criteria make sense for the problem investigated. Theoretical Claims: The theoretical claims seem correct. Additional information about the conditional discrete flow matching formulation in (Campbell et al., 2024) would help to verify the claims in the manuscript. Experimental Designs Or Analyses: The experimental designs and analyses seem valid. Re-training LigandMPNN on the proposed clusters might affect the fairness of the evaluation, depending on the rigor of the checkpoint chosen for evaluation. Supplementary Material: The reviewer read the entire supplementary material. Relation To Broader Scientific Literature: The key contributions of the manuscript are related to: 1. Proposing a conditional flow matching framework for all-atom protein sequence design. 2. Incorporating partial side-chains using a pre-trained model during the sequence reconstruction process. Essential References Not Discussed: References for Inverse Folding such as [REF1] using discrete diffusion are missing. [REF1] Yi, Kai, et al. "Graph denoising diffusion for inverse protein folding." 
Advances in Neural Information Processing Systems 36 (2023): 10238-10257. Other Strengths And Weaknesses: Strengths: 1. A discrete flow matching framework is proposed for all-atom protein sequence design. As the denoiser network, the authors combine residue-wise and atom-wise features obtained by a GNN, and a transformer-based architecture is used for decoding. 2. The authors propose the use of an external network to sample sidechains during the sequence decoding process to provide additional information in all-atom contexts when decoding. 3. The proposed method is able to handle multiple conformations, and the adaptive sampling proposed by the authors might be useful for other inverse folding algorithms. Weaknesses: 1. The manuscript would improve with additional information and clarification about the methodology. 2. Additional ablation studies regarding sequences generated by the model and how different components affect the overall performance seem needed. 3. The methodology for the guidance by binding affinity using the fixed input structure and DSMBind is arguable, even though it shows the ability of the proposed method for guided generation. Other Comments Or Suggestions: 1. (Line 97) Typo: “Saport” 2. (Line 77-78) The word “here” appears twice. 3. The equation numbering appears to be wrong throughout the manuscript. 4. Algorithm 1 appears before Fig. 1 in the text. Might need to re-arrange the order. 5. (Line 413) Typo: “DSMBIND” Questions For Authors: 1. (Related Work) The reviewer suggests re-writing the Related Work section. Additionally, references for Inverse Folding like [REF1] using discrete diffusion are missing. [REF1] Yi, Kai, et al. "Graph denoising diffusion for inverse protein folding." Advances in Neural Information Processing Systems 36 (2023): 10238-10257. 2. 
(Methodology Explanation) Figures 1/2 and the methodology section writing do not give enough information for the reader to understand the steps of the conditional flow matching methodology. Specifically, from my understanding, the flow matching is generating the distribution s_t at each timestep; more information or illustrations regarding this process seem needed. 3. (Classifier Guidance) Does the training-free classifier guidance methodology work in a similar fashion to the potentials in other generative protein models like Chroma and RFDiffusion? What are the differences between these approaches and your flow matching-based approach? 4. (Methodology Clarification) Removing the side chains for masked positions might still influence the decoding toward sequences that were sampled at the beginning of the denoising process. Do the authors discuss or have an experiment to test this factor or how it influences the generation? An ablation study would help assess the effect of using the sidechain context. 5. (Need for Ablation Studies) What is the influence on the results of adding the sidechains? What is the influence on the results of using only the GNN architecture with a sequence decoder? In this case, is your architecture similar to the current LigandMPNN formulation? 6. (Conditional Flow Matching Formulation) Additional information about the conditioning and the formulation from Campbell et al. would be helpful for readers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1.Related Work** Thank you for pointing this out. We mentioned the paper by Yi et al. [1] in the introduction, but agree it should also be included in the related work. We will rewrite the related work section and discuss the discrete diffusion method for inverse protein folding from Yi et al and also another diffusion model [2][3]. This will help clarify how our work connects to previous studies. [1]Yi, Kai, et al. "Graph denoising diffusion for inverse protein folding." Advances in Neural Information Processing Systems 36 (2023): 10238-10257. [2]Wang, Xinyou, et al. "Diffusion language models are versatile protein learners." arXiv preprint arXiv:2402.18567 (2024). [3]Wang, Xinyou, et al. "Dplm-2: A multimodal diffusion protein language model." arXiv preprint arXiv:2410.13782 (2024). **2.Methodology Explanation for Flow Matching** Thank you for this suggestion. It is correct that, in our framework, discrete flow matching generates the distribution $p(s_t​)$ at each timestep. However, unlike continuous flow models, where the denoiser predicts noise, the denoiser of discrete flow matching directly predicts data (e.g., sequence tokens). In our implementation, a trained denoiser first estimates $p(s_1)$, and then $p(s_t)$ is computed by taking an expectation over this estimate, as shown in Equation (line 137) and Algorithm 1. In the revised manuscript, we will add clearer descriptions, expanded mathematical formulations, and improved illustrations to explain the conditional discrete flow matching process more intuitively. **3.Classifier Guidance** Our classifier-guidance approach shares a similar objective with Chroma and RFdiffusion in that we aim to condition generation on an external property $y$, estimated via $p(y∣s_t)$. However, there is a key difference in how this is achieved. 
Chroma and RFdiffusion use continuous diffusion models and typically require training a separate classifier or regressor ($y = f(s_t)$) directly on noisy intermediate states $s_t$. In contrast, our method uses a training-free guidance strategy: we leverage a pre-trained classifier or regressor that operates on clean data $s_1$, such as AlphaFold. We estimate $p(y \mid s_1)$ from this clean output and then derive $p(y \mid s_t)$ by taking the expectation over $s_1 \sim p(s_1 \mid s_t)$, as described in Equation (line 307). This approach avoids the need to retrain an external model and enables seamless integration of existing structure-based predictors. **4. Methodology Clarification: Does removing sidechains at masked positions bias decoding toward early samples, and is there an ablation study on this effect?** Thank you for this insightful comment. We conducted an ablation study on the parameter $\tau$ for purity sampling, which controls how much side-chain information is preserved for masked positions during the denoising process. A smaller $\tau$ value retains more side-chain atoms, providing richer structural context to the denoiser. Conversely, a higher $\tau$ results in fewer sampled residues and less side-chain information.

| $\tau$ | Interaction Recovery Rate | Std. Dev. |
|--------|---------------------------|-----------|
| 0.3 | 0.602 | 0.1399 |
| 0.5 | 0.603 | 0.1411 |
| 0.7 | 0.588 | 0.1457 |
| 0.9 | 0.568 | 0.1459 |

**5. Need for Ablation Studies for sidechain** We conducted an ablation study to assess the impact of incorporating predicted side-chain atoms into the denoiser's input. 
The results are summarized below:

| Model | Ligand (%) | Nucleotide (%) | Metal Ion (%) |
|---------------------|-------------------|-------------------|-------------------|
| ADFLIP w/ sidechain | 62.19 ± 13.60 | 50.21 ± 13.52 | 75.79 ± 18.18 |
| ADFLIP w/o sidechain| 61.43 ± 16.20 | 49.74 ± 13.32 | 75.92 ± 16.52 |

While the inclusion of side-chain information leads to moderate improvements in sequence recovery, we believe its primary value lies in enhancing biological fidelity rather than optimizing this metric alone. In many downstream tasks, such as enzyme design or protein–ligand interaction modeling, minor torsional differences in side chains can lead to substantial functional changes (e.g., in binding affinity or catalytic activity). Therefore, even small gains in sequence recovery reflect a more meaningful structural signal that allows the model to better capture fine-grained biophysical nuances. We will discuss this in more detail in the revised manuscript. **6. (Conditional Flow Matching Formulation) Additional information about the conditioning and the formulation from Campbell et al. would be helpful for readers.** We agree, and will add more detail about Conditional Flow Matching in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. I have raised my score. I have a couple additional comments/questions: 1. I think these ablation studies for sidechains and the effect of removing sidechains should be incorporated as part of the manuscript and discussed further. 2. The quality of the denoiser directly affects the training-free classifier guidance. Additionally, the classifier might have a different objective (improve functionality) compared to inverse folding. In addition to the foldability score, it would be important to check the RMSD of the predictions in these cases, as the classifier might also be leading to mutations that change the structure.
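For reference, the expectation-based, training-free guidance described in the rebuttal thread above can be written out explicitly. This is a paraphrase of the rebuttal's description into a Monte Carlo form, not a formula quoted from the manuscript:

$$
p(y \mid s_t) \;=\; \mathbb{E}_{s_1 \sim p(s_1 \mid s_t)}\big[\, p(y \mid s_1) \,\big] \;\approx\; \frac{1}{K} \sum_{k=1}^{K} p\big(y \mid s_1^{(k)}\big), \qquad s_1^{(k)} \sim p(s_1 \mid s_t),
$$

where each $p(y \mid s_1^{(k)})$ is supplied by a pre-trained predictor on clean data (e.g., AlphaFold), so no retraining on noisy inputs $s_t$ is required.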
Summary: This paper proposes a new method, namely ADFLIP, a generative model for inverse protein folding that designs sequences based on all-atom structural contexts. It is designed to handle complexes with non-protein components and dynamic structures using ensemble sampling. ADFLIP progressively incorporates side-chain context and leverages classifier guidance sampling for sequence optimization. The authors show experimental results on a dataset curated from the PDB. Claims And Evidence: - The main claim of this paper is supported by evidence of experimental results. - The results are shown for only one inverse folding dataset. I understand the dataset used here is more aligned with the claim; however, the question naturally arises how this method would perform compared to other methods on the more widely used inverse folding datasets such as CATH 4.2 and CATH 4.3. - The comparison is shown against PiFold, ProteinMPNN, and LigandMPNN. However, there are other more recent methods that significantly outperform PiFold and ProteinMPNN on other datasets such as CATH 4.2, CATH 4.3, TS50, TS500, etc. Some examples include LM-Design (Zheng et al. 2023), DPLM (Wang et al. 2024), AIDO.Protein (Ning et al. 2024), and so on. Although none of these methods use all-atom structural context or the ligands, I wonder how ADFLIP would perform against those methods on their evaluation datasets as well as the most traditional ones. Methods And Evaluation Criteria: - The method is sound and properly described. - The evaluation criteria are also sound. However, there is some gap in the evaluation on more widely used datasets and against more recent methods. Theoretical Claims: - The authors provided clear mathematical derivation of their proposed approach. - The demonstration algorithms are also sound and properly explained. Experimental Designs Or Analyses: - The experimental design and analyses are valid. 
- I appreciate how the authors not only showed the recovery rate and perplexity, but also foldability as well as related metrics such as TM-score, pLDDT, and RMSD. Supplementary Material: The appendix contains only one algorithm (Algorithm 2), which is demonstrated properly. Relation To Broader Scientific Literature: The key contribution of this paper is related to proteomics research as well as machine learning research with such biological data. The provided method has good use cases in applications such as drug design and discovering therapeutics. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **How does ADFlip perform on a protein-only dataset such as CATH?** Thank you for the suggestion. We retrained ADFlip on the CATH 4.2 dataset and evaluated its performance based on sequence recovery rate. The results are summarized below:

| Method | Sequence RR (%) |
|---------------|-----------------|
| ProteinMPNN | 45.96 |
| PiFold | 51.66 |
| **ADFlip** | **52.13** |

**How do other inverse folding models perform on the all-atom dataset?** Thank you for the suggestion. Due to time constraints during the first review period, we are still working on additional evaluations. We aim to include results for other inverse folding models on the all-atom dataset in the second-round discussion and the revised manuscript.
Summary: This paper introduces ADFLIP, a model for inverse protein folding designed for complex biomolecular systems. It does so by incorporating all-atom structural context (including the protein backbone, non-protein components like ligands and metal ions, and progressively predicted side chains) and by handling dynamic protein complexes with multiple structural states. Claims And Evidence: The paper heavily emphasizes "all-atom" in the title and abstract, implying it's a novel and crucial advantage. However, the meaning of "all-atom" in the context of ADFLIP and its distinctiveness from other methods are not clearly and convincingly established. PiFold and other GNN-based methods do use geometric features constructed from all atoms (backbone atoms) to represent protein structure. The geometric features (distances, angles, etc.) inherently rely on the coordinates of all atoms. Therefore, simply stating "all-atom" as an advantage is misleading because it's not a feature unique to ADFLIP compared to modern GNN-based methods. The input is more expressive; it should be distinguished from the inverse folding problem based on backbone atoms. Methods And Evaluation Criteria: The proposed methods in ADFLIP, particularly discrete flow matching, all-atom context awareness, GNN architecture, and ensemble handling, make sense for the problem of inverse protein folding in complex biomolecular systems. However, I fail to recognize the key technical innovations. Theoretical Claims: N/A Experimental Designs Or Analyses: The central claim regarding "all-atom" as a unique and clearly defined advantage is weakly supported and potentially misleading due to the lack of clear definition, lack of isolation of the "all-atom" effect, and potentially inaccurate characterization of baselines. Supplementary Material: N/A Relation To Broader Scientific Literature: It is related to the inverse folding problem, which is an important area in molecular biology. 
Essential References Not Discussed: The problem setting can be solved by the method proposed by UniIF [1] published in NeurIPS 2024. [1] Gao, Zhangyang, et al. "Uniif: Unified molecule inverse folding." Advances in Neural Information Processing Systems 37 (2024): 135843-135860. Other Strengths And Weaknesses: This paper is well-organized and clearly-written. But the problem setting is quite simple. It seems to extend the traditional inverse folding into protein complex scenarios. Other Comments Or Suggestions: N/A Questions For Authors: Could you please clearly define what "all-atom" specifically means in the context of ADFLIP and how your approach to "all-atom" context is fundamentally different and uniquely advantageous compared to how baseline methods utilize all-atom geometric information? Simply considering non-protein components and side-chain prediction doesn't seem to be a fundamentally different "all-atom" concept. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: ## Clarification on "All-Atom" Structure in ADFLIP Perhaps we had not explained clearly enough what we mean by "all-atom". This interpretation of "all-atom" was introduced previously with RoseTTAFold All-Atom [1]. In this context, all-atom refers not only to including the full set of protein atoms (backbone and side chains) but also to incorporating non-protein biological components such as ligands, ions, and nucleotides. Most inverse folding methods, including PiFold, ProteinMPNN, and others, limit their input representation to backbone atoms (typically N, Cα, C, O). By contrast, our method, ADFLIP, is designed to operate on full biological assemblies that may include a broader set of atoms from diverse molecules beyond proteins. This enables modeling protein-ligand, protein-RNA, or protein-ion interactions in a generalizable manner. Therefore, our all-atom terminology emphasizes the inclusion of all relevant atomic context in biomolecular complexes, not just the geometric features derived from protein atoms, enabling applications in more complex and realistic biological environments. We will revise the introduction and related work to more clearly explain this distinction. ## Novelty and Technical Contributions of ADFLIP While previous works such as UniIF have explored inverse folding in all-atom settings, the objectives differ. UniIF is designed as a general framework for modeling all biological structures, including proteins, RNAs, and small molecules, rather than focusing specifically on protein design. In contrast, ADFLIP is tailored to protein design tasks, particularly for proteins that interact with ligands, nucleotides, and other biomolecules. Our work introduces several novel contributions: 1. Generative Modeling with Discrete Flow Matching: ADFLIP is, to our knowledge, the first inverse folding framework that applies *discrete flow matching* to protein sequence design, taking into account protein complex context. 
This is particularly advantageous when handling input structures with high uncertainty, which are common when flexible ligands or RNA components are present, where traditional autoregressive models struggle. Our experiments demonstrate that ADFLIP outperforms autoregressive baselines on both single-structure and multi-structure inputs. 2. Training-Free Classifier Guidance for Conditional Generation: We propose a novel, training-free classifier-guidance approach for conditional generation with flow matching. Existing guidance techniques often require retraining regressors or classifiers to accept noisy intermediate inputs $ s_t $. Instead, we use a pre-trained classifier/regressor (e.g., AlphaFold) that operates on clean inputs $ s_1$, and approximate $ p(y | s_t)$ by taking the expectation over $ p(s_1 | s_t) $, as described in Eq. (line 308). This allows us to plug in any pretrained predictor *without* retraining, enabling flexible and efficient conditional design. 3. Support for Multiple Structural States: ADFLIP supports input ensembles with multiple structural conformations, capturing dynamic aspects of protein complexes. This is crucial for modeling biological systems where a single static structure may be insufficient. In summary, ADFLIP introduces a general-purpose, generative approach to inverse folding in full biological assemblies by (1) leveraging a broader all-atom context, (2) introducing discrete flow matching for improved sample quality and uncertainty modeling, and (3) enabling flexible, training-free guidance using pretrained models. We hope this response clarifies the distinctiveness of ADFLIP and addresses your concerns. We thank you for your valuable feedback, which will help in improving our manuscript. [1]Krishna, Rohith, et al. "Generalized biomolecular modeling and design with RoseTTAFold All-Atom." Science 384.6693 (2024): eadl2528.
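As background for the discrete flow matching contribution discussed above, the mask-based formulation of Campbell et al. (2024) is commonly written with a per-position interpolation between the clean token and a mask state. The following is a sketch of that standard form (our paraphrase, not a formula quoted from the manuscript):

$$
p_{t\mid 1}(s_t \mid s_1) \;=\; t\,\delta(s_t, s_1) \;+\; (1 - t)\,\delta(s_t, \texttt{[MASK]}),
\qquad
p_t(s_t) \;=\; \mathbb{E}_{s_1 \sim p(s_1)}\big[\, p_{t\mid 1}(s_t \mid s_1) \,\big],
$$

so that at $t = 0$ every position is masked and at $t = 1$ the clean sequence is recovered; the denoiser learns $p(s_1 \mid s_t)$, which drives the unmasking process during sampling.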
From Spectrum-free towards Baseline-view-free: Double-track Proximity Driven Multi-view Clustering
Accept (poster)
Summary: This work proposes a spectrum-free and baseline-view-free multi-view clustering method with double-track proximity, DTP-SF-BVF. It aims to improve clustering stability and alignment flexibility, as well as the exploration of the anchors' own characteristics. Unlike current methods that usually overlook the proximity relationship between anchors, this work utilizes self-expression learning and point-point topology learning to capture it. To remove the reliance on a baseline view, this work designs a learnable permutation strategy to jointly reorder anchors according to the respective views' characteristics. To alleviate the variance impact, this work avoids formulating the spectrum. Experiments on multiple benchmark datasets demonstrate the effectiveness of the presented DTP-SF-BVF. Claims And Evidence: The claims are clear. Methods And Evaluation Criteria: The proposed DTP-SF-BVF method can effectively handle the multi-view clustering problem and be applied to large-scale scenarios. Theoretical Claims: I have checked the derivation procedure. Experimental Designs Or Analyses: I have checked the soundness of experimental designs. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: The work proposes an interesting multi-view clustering method, which has spectrum-free and baseline-view-free as well as double-track proximity properties. Besides these, it has linear time complexity and linear space complexity. Essential References Not Discussed: None Other Strengths And Weaknesses: Its strengths are as follows, - The writing is easy to follow, and the organization is clear. - The solving procedure presented in the paper is detailed. - Experiments conducted are sufficient and reveal the effectiveness of the proposed method from multiple perspectives. 
Its weaknesses are as follows, - Clearly outlining the final steps for deriving clustering results and specifying the initialization settings of relevant variables are crucial for ensuring a thorough understanding of the algorithm's workflow. - Offering theoretical explanations for the sub-optimal results generated would significantly deepen the understanding of the algorithm's underlying mechanisms and limitations. Other Comments Or Suggestions: None Questions For Authors: - The current methodology involves constructing individual anchor graphs for each view; why not directly learn a unified anchor graph matrix? After aligning, could this potentially provide superior clustering accuracy? - While the experimental results in the paper are impressive, the interpretability of the clustering outputs and the practical significance could benefit from additional exploration. The authors could provide valuable insights by discussing the interpretation of clustering results in real-world applications, highlighting the method's potential for broader impact. - When clustering different kinds of datasets, which factors are more prone to restrict the proposed algorithm: dataset size, feature dimension, or hyper-parameters? - While this approach demonstrates potential advantages, it is noteworthy that several recent MVC techniques have effectively circumvented the need for alignment by leveraging shared features. Are there particular scenarios where the necessity for such alignment could be reduced? The authors are encouraged to provide a more detailed exposition on the criticality of alignment. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** Final steps for deriving clustering and variable initialization. **A1:** After obtaining $\mathbf{C}$, we derive the clustering by sequentially identifying the row numbers where the element 1 is located. We initialize $\mathbf{A}_p$, $\mathbf{T}_p$, $\mathbf{B}_p$ and $\boldsymbol{\alpha}$ with a random matrix, an identity matrix, an orthogonal matrix and $1/v$, respectively. For $\mathbf{C}$, we create a zero matrix and then randomly assign a single 1 to each column. For $\mathbf{S}_p$, we assign elements ranging from 0 to 1 column by column and ensure that the diagonal elements remain 0 and the sum of each column equals 1. **Q2:** Explanations for sub-optimal results. **A2:** Thanks! MSCIAS achieves a marginally superior Fscore (0.72\%) on Cora, possibly because of the introduction of HSIC and the employment of local connectivity. FPMVS achieves a 0.29\% accuracy gain on CIF10Tra4 owing to the adoption of orthogonal projection ensembles. SFMC integrates structural coherence and self-supervised attention. MFLVC introduces hierarchical feature learning and consensus semantics. **Q3:** Why not learn a unified anchor graph? Performance. **A3:** Anchor graphs on the respective views may be more conducive to expressing the characteristics of their own views. A single graph structure may not adequately capture the intrinsic characteristics across all views. To confirm this, we conducted the relevant experiments. IAG and UAG denote the results based on individual and unified anchor graphs, respectively. 
|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|UAG|83.58|74.38|49.43|25.44|51.81|23.27|53.31|
|IAG|**85.47**|**80.66**|**52.44**|**26.22**|**54.26**|**26.83**|**57.36**|
|NMI||||||||
|UAG|85.37|43.92|42.07|6.17|29.74|13.93|54.37|
|IAG|**89.97**|**45.25**|**43.70**|**6.25**|**31.87**|**15.64**|**59.21**|
|Fscore||||||||
|UAG|83.72|70.43|**42.63**|25.79|**46.83**|19.18|**52.93**|
|IAG|**87.92**|**78.12**|41.12|**28.55**|44.84|**20.64**|51.37|

**Q4:** Interpretation of the clustering outputs and the practical significance. **A4:** Our model introduces an alignment mechanism without the necessity of selecting a baseline view. It synchronizes with the anchor generation and rearranges anchors within their original space, well preserving the data diversity. Our model incorporates the geometric properties among anchors into the anchor-sample similarity, more thoroughly uncovering the manifold structure inherent in the samples. Besides, our model engages in the direct learning of consensus cluster indicators, consolidating multi-view information at the cluster-label level. Further, our model has linear complexity. Owing to these, our model yields stable results and is adept at handling large-scale scenarios. **Q5:** Restriction factors for the proposed method. **A5:** The computing cost is $\mathcal{O}(m^2nv+dnm+m!v+m^3kv)$. In general, $m$, $v$ and $k$ are much smaller than $n$. $d$ is a constant and irrelevant to $n$. The computing cost is therefore linear with respect to $n$. Compared to the square and cube terms, the factorial in $m$ needs more cost; an overly large $m$ will induce an expensive cost. In addition, the space overhead is $\mathcal{O}(nk)$, so the proposed method is not limited by its space overhead. Further, combined with the sensitivity study, the performance is relatively robust to hyper-parameters. So, the method is mainly influenced by the anchor number. 
**Q6:** Exposition on the alignment criticality. **A6:** The works based on shared features (SF) typically derive consensus anchors rather than view-tailored anchors to establish similarity. This hampers the exploitation of complementary information. The following experiments further show this.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|SF|81.23|71.33|49.73|24.97|48.46|22.36|53.38|
|NMI||||||||
|SF|82.76|42.26|39.87|6.03|29.89|13.43|51.97|
|Fscore||||||||
|SF|80.64|69.72|**42.28**|25.21|**46.13**|19.58|**52.21**|

In our model, the alignment mechanism builds pure self-expression affinities. Without alignment, the structure of the anchor-anchor affinity would be chaotic, which would in turn impair the anchor-sample proximity and hinder the clustering performance. The following experiments validate this point. WOA denotes the results without our alignment.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|WOA|80.73|76.59|31.65|16.67|45.29|25.91|53.68|
|NMI||||||||
|WOA|82.53|39.55|35.41|3.32|24.77|15.30|56.47|
|Fscore||||||||
|WOA|79.47|72.23|30.69|21.14|42.59|17.90|47.41|

In cases where the consistent information predominates over the complementary, it may be feasible to establish unified anchors to formulate the similarity and thus potentially circumvent or mitigate the need for alignment. Nonetheless, given the inherent complexity of multi-view data, accurately quantifying the complementary and consistent information is usually a challenging task. --- Rebuttal Comment 1.1: Comment: The author's response has addressed my concerns. I have therefore decided to raise my score. --- Reply to Comment 1.1.1: Comment: Thanks!!
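The variable initialization and label extraction described in **A1** of this rebuttal thread can be sketched with NumPy. All sizes below are hypothetical, and this is an illustration of the described recipe, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, v = 12, 5, 3, 2   # hypothetical sizes: samples, anchors, clusters, views
d_p = 4                    # hypothetical feature dimension of one view

# A_p: random matrix; B_p: matrix with orthonormal columns via QR.
A = rng.normal(size=(d_p, m))
B, _ = np.linalg.qr(rng.normal(size=(m, k)))

# T_p: identity (permutation) matrix; alpha: 1/v per view.
T = np.eye(m)
alpha = np.full(v, 1.0 / v)

# C: zero matrix with a single randomly placed 1 in each column.
C = np.zeros((k, n))
C[rng.integers(k, size=n), np.arange(n)] = 1.0

# S_p: entries in (0, 1), zero diagonal, each column summing to 1.
S = rng.uniform(size=(m, m))
np.fill_diagonal(S, 0.0)
S /= S.sum(axis=0, keepdims=True)

# Clustering: the row index of the 1 in each column of C.
labels = C.argmax(axis=0)

assert np.allclose(B.T @ B, np.eye(k), atol=1e-8)
assert np.allclose(C.sum(axis=0), 1.0)
assert np.allclose(np.diag(S), 0.0) and np.allclose(S.sum(axis=0), 1.0)
```

Scaling each column of `S` after zeroing the diagonal preserves both constraints at once, since column normalization does not touch the (already zero) diagonal entries.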
Summary: In this paper, the authors concentrate on three key issues in the multi-view clustering field: the neglect of anchor-anchor geometric proximity, the reliance on the baseline view for anchor alignment, and the instability caused by the spectrum. Firstly, the authors adopt a self-expression subspace technique to explicitly exploit anchor characteristics and feed them into the similarity graph via topology learning to explore the manifold structure inside samples. Then, they introduce a joint permutation mechanism, eliminating the requirement of a baseline view and concurrently working with the generation of anchors. Furthermore, they design a consensus discrete structure, which skips the spectrum, to directly produce cluster labels and meanwhile provide a common link to facilitate anchor transformation. Claims And Evidence: Yes. The claims in this paper are verified by experiments and discussions. Methods And Evaluation Criteria: Yes, it makes sense. Theoretical Claims: The Appendix provides the update rules and derivations for variables. Experimental Designs Or Analyses: Yes. The experimental designs follow the commonly-used settings and the results are also reliable. Supplementary Material: Yes, I did. I reviewed all supplementary material. Relation To Broader Scientific Literature: This paper integrates the proximity between anchors into the anchor graph to more adequately characterize the manifold structure inside samples, and designs a spectrum-free multi-view clustering paradigm without requiring a baseline view. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: (1) The analysis of the experimental results is in-depth. (2) The linear time and space complexities of the designed model guarantee its practicality. (3) The review of existing literature is comprehensive. Weaknesses: (1) During the optimization of the permutation variable on each view, the introduction of its computational cost seems a bit concise, especially the search over one-hot vectors. 
(2) In Algorithm 1, the stopping condition is missing. It would be helpful for the authors to describe it explicitly. (3) To improve the accessibility and visual appeal of the dataset introduction, a table format would be highly advantageous.

Other Comments Or Suggestions: See the following questions.

Questions For Authors: (1) Rather than employing view-related anchors to extract data features, when adopting consensus anchors, misalignment cannot arise since all views share a common group of anchors. In this situation, how is the performance? Is the permutation still necessary? (2) From Eq.(8) to Eq.(9), why does this transformation hold? (3) How is the value of $m$ determined? The parameter $m$ may affect the clustering performance since it leads to graphs of diverse scales. (4) According to the ablation results on the spectrum-free strategy, it takes less time than CS. Why? What are the possible reasons for this phenomenon?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: **Q1:** The introduction of the computational cost of the permutation seems a bit concise.

**A1:** More details are provided here. Since $\mathbf{A}_p\in\mathbb{R}^{d_p\times m}$, $\mathbf{S}_p\in\mathbb{R}^{m\times m}$, $\mathbf{B}_p\in\mathbb{R}^{m\times k}$, $\mathbf{C}\in\mathbb{R}^{k\times n}$ and $\mathbf{X}_p\in\mathbb{R}^{d_p\times n}$, building $\mathbf{G}_p$, $\mathbf{H}_p$, $\mathbf{M}_p$ and $\mathbf{J}_p$ requires $\mathcal{O}(d_pm^2)$, $\mathcal{O}(m^3)$, $\mathcal{O}(mkn+m^2n)$ and $\mathcal{O}(d_pmn+mnk+m^2k)$ cost, respectively. Since $\mathbf{T}_p\in\{0,1\}^{m\times m}$ consists of 0s and 1s, conducting a traversal search over one-hot vectors takes $\mathcal{O}(m!)$. So, updating the permutation takes $\mathcal{O}(d_pm^2+d_pmn+m^3+mkn+m^2n+m!)$.

**Q2:** In Algorithm 1, the stopping condition is missing.

**A2:** Thanks! We run Algorithm 1 until $f(t)-f(t+1)\leq 10^{-3}f(t)$, where $f(t)$ is the objective value at the $t$-th iteration.

**Q3:** Employ a table format to introduce the datasets.

**A3:** Good advice! We will present the datasets in a table format to enhance visual appeal.

**Q4:** Under consensus anchors, how is the performance? Is the permutation necessary?

**A4:** The paradigm based on consensus anchors extracts a set of common anchors rather than multiple sets of view-specific anchors to build similarity. Although alignment is thereby avoided, the lack of view-unique characteristics means that sufficiently diverse features cannot be extracted. To further demonstrate this, we conduct experiments. CA and VA are the results based on consensus and view-related anchors, respectively.
|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|CA|81.23|71.33|49.73|24.97|48.46|22.36|53.38|
|VA|**85.47**|**80.66**|**52.44**|**26.22**|**54.26**|**26.83**|**57.36**|
|NMI||||||||
|CA|82.76|42.26|39.87|6.03|29.89|13.43|51.97|
|VA|**89.97**|**45.25**|**43.70**|**6.25**|**31.87**|**15.64**|**59.21**|
|Fscore||||||||
|CA|80.64|69.72|**42.28**|25.21|**46.13**|19.58|**52.21**|
|VA|**87.92**|**78.12**|41.12|**28.55**|44.84|**20.64**|51.37|

The reason could be that the view-exclusive representation contained in the view-related (aligned) anchors outweighs the view-common information contained in the consensus anchors.

**Q5:** From Eq.(8) to Eq.(9), why does it hold?

**A5:** For the objective, we have $\operatorname{Tr} \left(\mathbf{B} _p^{\top}\mathbf{H}_p\mathbf{B} _p \mathbf{C}\mathbf{C}^{\top} \right) \Leftrightarrow [\mathbf{B} _p^{\top}] _{j,:} [\mathbf{H}_p\mathbf{B} _p \mathbf{C}\mathbf{C}^{\top}] _{:,j} \Leftrightarrow [\mathbf{B} _p^{\top}] _{j,:}\mathbf{H}_p\mathbf{B} _p[\mathbf{C} \mathbf{C}^{\top}] _{:,j}$ and $\operatorname{Tr}\left(\boldsymbol{\alpha} _p^2\mathbf{C}\mathbf{X} _p^{\top}\mathbf{A} _p \mathbf{T} _p \mathbf{B} _p\right) \Leftrightarrow \left[\boldsymbol{\alpha} _p^2\mathbf{C}\mathbf{X} _p^{\top}\mathbf{A} _p\mathbf{T} _p\right] _{j,:}[\mathbf{B} _p] _{:,j}$, where we omit the $\min$ operator and $\mathbf{H}_p=\beta\mathbf{L} _s+\boldsymbol{\alpha} _p^2\mathbf{Q} _p$. Since $\mathbf{C}\mathbf{C}^{\top}$ is diagonal, we have $[\mathbf{B}_p^{\top}] _{j,:}\mathbf{H}_p\mathbf{B} _p[\mathbf{C}\mathbf{C}^{\top}] _{:,j} \Leftrightarrow [\mathbf{B} _p] _{:,j}^{\top}\sum _{i=1}^n \mathbf{C} _{j,i}\mathbf{H}_p[\mathbf{B} _p] _{:,j}$.
For the feasible region, $\mathbf{B} _p^{\top}\mathbf{B}_p=\mathbf{I} _k$ can be divided into $[\mathbf{B} _p] _{:,j}^{\top}[\mathbf{B} _p] _{:,j}=1$ and $[\mathbf{B} _p] _{:,j}^{\top}[\mathbf{B} _p] _{:,i}=0,i=1,2, \cdots,k,i\neq j,j=1,2,\cdots,k$. Further, $[\mathbf{B} _p] _{:,j}^{\top}[\mathbf{B} _p] _{:,j}=1$ can be written as $[\mathbf{B} _p] _{:,j}^{\top}\mathbf{I} _{m\times m}[\mathbf{B} _p] _{:,j}-1=0$. The condition $[\mathbf{B} _p] _{:,j}^{\top}[\mathbf{B} _p] _{:,i}=0,i=1,2,\cdots,k,i\neq j$ can be written as $\left[[\mathbf{B} _p] _{:,1},[\mathbf{B} _p] _{:,2},\cdots,[\mathbf{B} _p] _{:,j-1},[\mathbf{B} _p] _{:,j+1},\cdots,[\mathbf{B} _p] _{:,k}\right]^{\top}[\mathbf{B} _p] _{:,j}=\mathbf{0} _{(k-1)\times 1}$.

**Q6:** How to set $m$?

**A6:** We set it equal to the number of clusters. During the update of $\mathbf{T}_p$, the objective takes the form $\operatorname{Tr}\left(\mathbf{T} _p^\top\mathbf{B}\mathbf{T} _p\mathbf{C}+\mathbf{T} _p^{\top}\mathbf{D}\right)$. Besides, the feasible region is discrete. This makes the problem hard to solve. To this end, we adopt a traversal search over one-hot vectors to obtain the optimal solution, which takes $\mathcal{O}(m!)$ computing cost. An overly large $m$ would therefore induce intensive computation time.

**Q7:** Why does it take less time than CS? Possible reasons?

**A7:** CS needs to first form the spectrum and then conduct embedding partitioning. These two procedures induce extra computing cost. In contrast, we directly generate labels via binary learning. The labels can be obtained in closed form; this only compares the diagonal elements of $\mathbf{W}$ and the row elements of $\mathbf{Z}$, whose scale is $k$, which is small.
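The traversal search described above (A1 and A6) amounts to brute-force enumeration of all $m!$ permutation matrices against a trace objective of the form $\operatorname{Tr}(\mathbf{T}^\top\mathbf{G}\mathbf{T}\mathbf{A}+\mathbf{T}^\top\mathbf{B})$. A minimal pure-Python sketch; the matrices `G`, `A`, `B` stand in for the precomputed coefficient matrices and are illustrative, not the paper's exact quantities:

```python
from itertools import permutations

def trace_objective(p, G, A, B):
    """f(T) = Tr(T^T G T A) + Tr(T^T B) for the permutation matrix T
    encoded by the tuple p, where column j of T has its single 1 in row p[j]."""
    m = len(p)
    # (T^T G T)[i][j] = G[p[i]][p[j]], so the trace expands as below.
    quad = sum(G[p[i]][p[j]] * A[j][i] for i in range(m) for j in range(m))
    lin = sum(B[p[i]][i] for i in range(m))
    return quad + lin

def best_permutation(G, A, B):
    """Exhaustive O(m!) search over permutations, feasible only for small m."""
    m = len(G)
    return min(permutations(range(m)), key=lambda p: trace_objective(p, G, A, B))
```

Because the cost grows factorially, such a search is only practical when $m$ is small, which matches the rebuttal's choice of setting $m$ equal to the number of clusters $k$.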
Summary: This paper develops a multi-view clustering algorithm named DTP-SF-BVF to address the following problems: (1) current methods usually focus only on the anchor-sample proximity and fail to take the anchor-anchor relationship into account; (2) they require selecting a baseline view; (3) the existing spectrum paradigm induces clustering variance. DTP-SF-BVF leverages double-track proximity to extract both anchor-anchor and anchor-sample characteristics hidden in the data, and devises a permutation mechanism for each view, eliminating the need to select a baseline view. Further, it directly formulates the label indicators through a consensus structure, and in turn the consensus structure bridges anchors across all views. Extensive experiments validate DTP-SF-BVF's effectiveness.

Claims And Evidence: Yes, the motivation of double-track proximity, the baseline-view-free scheme, and the spectrum-free paradigm is clearly stated.

Methods And Evaluation Criteria: Yes, it is suitable for multi-view clustering tasks.

Theoretical Claims: The solving procedure is presented in detail.

Experimental Designs Or Analyses: I checked the experimental results and discussions.

Supplementary Material: I reviewed the overall supplementary material.

Relation To Broader Scientific Literature: This paper constructs a multi-view clustering method that captures both anchor-sample and anchor-anchor proximity without involving the spectrum or a baseline view. It considers anchor-anchor characteristics and directly outputs clustering results without variance.

Essential References Not Discussed: The references are discussed sufficiently.

Other Strengths And Weaknesses: **Strengths:** 1. The idea is well-motivated. The double-track proximity paradigm effectively enhances the clustering accuracy by utilizing a more comprehensive view of the data representation. 2. The baseline-view-free and spectrum-free schemes highlight the flexibility and reliability of DTP-SF-BVF for clustering data from diverse sources. 3.
The organized experiments are sufficient.

**Weaknesses:** 1. The binary elements inherent in the alignment model introduce complexities. Consequently, the question arises as to whether this model can be further refined. Specifically, orthogonality possesses the capability to rearrange anchors whilst preserving their irrelevance. Under such circumstances, does the model's performance witness an enhancement? 2. The performance appears to be somewhat sensitive to the fine-tuning of hyperparameters, e.g., on CALTE7. It would be valuable if the authors could delve deeper into the potential implications of anchor noise and provide additional insights to mitigate its influence. 3. A central concept in this paper involves leveraging self-expression to derive anchor proximity and thereby boost the clustering outcomes. However, how does this contribute to improving the anchor quality? Moreover, if anchors are constructed through other means, does this remain functional? 4. Although the model incorporates anchor relations, it appears to fall short in capturing deeper cross-view complementarities. When encountering scenarios where anchor quality exhibits substantial variability across different views, how does the model perform? When integrating complementarities at the graph similarity level, how is the performance?

Other Comments Or Suggestions: No other comments or suggestions.

Questions For Authors: Please see the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: **Q1:** Can the binary elements be further refined? How is the performance under orthogonality?

**A1:** Thanks. Orthogonal constraints (OC) could deteriorate the semantic topological continuity and limit the model's expressive ability. Moreover, they change the values and distribution of the anchors. The following experiments further illustrate the distinction. BE denotes the results based on binary elements.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|OC|83.46|78.46|52.37|**26.31**|52.79|27.12|55.42|
|BE|**85.47**|**80.66**|**52.44**|26.22|**54.26**|**26.83**|**57.36**|
|NMI||||||||
|OC|87.13|43.62|41.42|6.14|**31.91**|14.38|55.23|
|BE|**89.97**|**45.25**|**43.70**|**6.25**|31.87|**15.64**|**59.21**|
|Fscore||||||||
|OC|83.26|73.59|40.16|26.97|42.78|18.94|50.67|
|BE|**87.92**|**78.12**|**41.12**|**28.55**|**44.84**|**20.64**|**51.37**|

BE outperforms OC in most cases. The reason is that BE does not change the anchor characteristics; it only rearranges them within the original space.

**Q2:** Somewhat sensitive to hyperparameters. Exploring the potential implications of anchor noise.

**A2:** The parameter $\lambda$ is responsible for striking a balance between the reconstruction loss and the regularization of anchor self-expression, whereas $\beta$ modulates the degree of inter-view consistency. Inaccurate calibration may result in diminished performance or instability, particularly in the presence of noisy anchors.

Anchor noise: (1) It could compel the model to assimilate the noise patterns, precipitating over-fitting; (2) It could render the loss surface irregular, complicating the optimization process; (3) It could predispose the model to gravitate towards spurious correlations.
Several potential strategies could be employed: (1) Implement a pre-filtering mechanism or confidence-based thresholds; (2) Develop a multi-anchor voting mechanism; (3) Integrate prior knowledge to dynamically adjust hyper-parameters; (4) Conduct dataset pre-processing to eliminate instances of anchor noise.

**Q3:** How does self-expression improve the anchor quality? Is it still functional under anchors constructed by other means?

**A3:** Self-expression learning helps extract the geometric characteristics between anchors, and facilitates the learning of anchors owing to the co-optimized mechanism. OSE-L and WSE-L denote the results without/with self-expression when anchor learning is disabled; they validate the effectiveness of self-expression.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|OSE-L|60.42|43.88|27.96|14.33|25.32|19.73|41.27|
|WSE-L|**65.64**|**64.59**|**30.24**|**16.68**|**27.20**|**24.08**|**47.21**|
|NMI||||||||
|OSE-L|64.37|35.68|5.88|1.01|1.38|12.57|44.79|
|WSE-L|**69.84**|**37.95**|**33.54**|**1.06**|**1.43**|**12.98**|**47.07**|
|Fscore||||||||
|OSE-L|63.76|48.43|28.79|23.42|33.87|16.86|37.64|
|WSE-L|**69.33**|**61.54**|**30.40**|**24.43**|**35.25**|**18.03**|**41.43**|

The following OSE+L and WSE+L results show the effectiveness of learning, where '+L' denotes that learning is enabled.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|OSE+L|71.51|49.05|30.35|16.75|47.05|26.69|52.15|
|WSE+L|**85.47**|**80.66**|**52.44**|**26.22**|**54.26**|**26.83**|**57.36**|
|NMI||||||||
|OSE+L|83.97|40.21|6.02|2.53|23.19|15.48|58.13|
|WSE+L|**89.97**|**45.25**|**43.70**|**6.25**|**31.87**|**15.64**|**59.21**|
|Fscore||||||||
|OSE+L|73.79|51.25|30.42|28.54|43.04|17.70|46.77|
|WSE+L|**87.92**|**78.12**|**41.12**|**28.55**|**44.84**|**20.64**|**51.37**|

Besides, OSE-L and WSE-L show that our mechanism is still functional under anchors constructed by sampling.
**Q4:** How does the model perform under varying anchor quality and at the graph level?

**A4:** In the paper, we introduce view-aware weighting to harmonize the anchor significance on each view. OVA denotes the results without view-aware weighting.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|OVA|84.59|76.73|44.94|16.82|47.84|25.26|52.69|
|NMI||||||||
|OVA|89.44|42.88|33.06|3.60|29.54|15.20|56.84|
|Fscore||||||||
|OVA|86.59|71.80|35.36|**28.77**|42.32|18.03|46.90|

Perhaps an instance-aware adaptive weighting paradigm could be more advisable. Implementing anchor-wise importance calibration may yield superior results, since anchors within the same view can also exhibit diverse importance.

Regarding the performance at the graph similarity level, we first generate view-specific graphs and then construct a fusion (FGTC) to integrate complementarities.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|FGTC|80.17|73.98|48.21|22.12|54.13|22.79|53.89|
|NMI||||||||
|FGTC|82.74|38.97|39.26|5.83|27.89|11.79|58.93|
|Fscore||||||||
|FGTC|80.87|70.46|33.72|21.63|42.87|18.84|47.73|

Evidently, our model, which gathers information at the cluster level, achieves better results in most cases.
Summary: The paper builds double-track proximity for multi-view clustering to investigate the manifold structure among samples. In particular, it encodes the anchor-anchor relation into the anchor-sample similarity using self-expression learning and topology learning concurrently. It relieves the restriction of the baseline view by assigning a matrix variable consisting of 0s and 1s to each view, and transforms the anchors generated on each view via joint learning to reshape them. Beyond these, a binary strategy is adopted to shape the cluster indicator, and meanwhile the cluster indicator connects all views and the anchors on them. A six-step optimization scheme with linear complexity effectively minimizes the loss function.

Claims And Evidence: The main claims are that anchor-sample plus anchor-anchor proximity, alignment without a reference, and the spectrum-free design contribute to producing more distinctive clusters. They are supported by the comparative experimental results in Table 1 and the multiple ablation results, respectively. The running time comparison in Figure 2 supports the linear time complexity.

Methods And Evaluation Criteria: The proposed method demonstrates effectiveness in dealing with MVC problems, and shows advantages against 17 clustering methods.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: The designs are valid and provide evidence for verifying the effectiveness of the presented method, together with thorough analyses.

Supplementary Material: Yes, the supplementary material contains derivation details, other ablations, effectiveness illustrations in other scenarios, convergence, sensitivity, etc. Additional analyses reveal its features from other perspectives.

Relation To Broader Scientific Literature: The idea of embedding the anchor-anchor relation into the anchor-sample similarity using point-to-point guidance after reshaping is worthy of recommendation in the multi-view clustering literature.
Essential References Not Discussed: No

Other Strengths And Weaknesses:
- The overall structure of this paper is cohesive, enabling readers to easily trace the progression.
- Extensive comparison experiments, detailed discussions, and well-rounded ablations reinforce the contributions.

**Weaknesses:**
- The need for parameter searching may undermine practical applicability. There are two parameters requiring manual tuning. Moreover, according to the guideline in Section Q, their suggested ranges are also different. This may make the model less adaptable in scenarios with vague clusters.
- The permutation learning procedure necessitates additional methodological clarification, as the determination of $\mathbf{T}_p$ relies on traversal exploration over the feasible region. Hence, the geometric properties of the feasible region should be systematically characterized.
- The baseline-view-free approach achieves enhanced performance metrics compared to conventional approaches. Although empirical evidence currently supports this observation, a fundamental exploration of the underlying mechanisms is warranted to strengthen the validity of this advantage.

Other Comments Or Suggestions: **Grammar:**
- In line 198, 'This spectrum-free model directly output ...' ---> outputs;
- In line 200, '$\boldsymbol{\alpha}$ play...' ---> plays;

Questions For Authors: 1. Regarding the complexity, combined with the theoretical expressions, it seems that computing the cluster indicators takes more time. Does it affect the overall efficiency? Is there any other way to mitigate this factor? 2. What is the reason for the spectrum-free strategy spending less time than the traditional scheme? 3. In conjunction with the statement 'one can project all anchors into a common space to make them have the same dimension', by projecting the raw data, we obtain anchors with the same dimension. Does the spatial misalignment emerge between anchors in this case?
Furthermore, would this yield enhanced performance compared to the proposed DTP-SF-BVF method?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: **Q1:** The parameter searching may undermine the practical capability.

**A1:** Thanks! $\lambda$ governs the trade-off between error reconstruction and anchor self-expression, while $\beta$ adjusts the cross-view consistency guidance. They collaboratively modulate the model's capacity. Despite the different search ranges, our model with the suggested ranges achieves competitive clustering on multiple datasets of diverse scales, as evidenced by the results in Table 1, which reveals that it is suitable for various scenarios. The parameter searching indeed compromises practical applicability to some extent, particularly in resource-constrained scenarios. Developing a dynamic parameter derivation mechanism based on data characteristics will be a compelling research direction, and we will conduct systematic investigations in future work.

**Q2:** The geometric properties of the feasible region of the permutation should be systematically characterized.

**A2:** The feasible region consists of discrete elements, 0 and 1, with exactly one 1-element in every column and row. This guarantees that the permutation only rearranges anchors in their original space and does not alter the values of the learned anchors. Besides, its objective is $\operatorname{Tr}\left(\mathbf{T} _p^\top \mathbf{G} _p \mathbf{T} _p \hat{\mathbf{A}} _p + \mathbf{T} _p^{\top} \hat{\mathbf{B}} _p\right)$, where $\hat{\mathbf{A}} _p=\lambda\mathbf{H} _{p}+\boldsymbol{\alpha} _p^2\mathbf{M} _p-2\lambda\mathbf{S} _{p}^{\top}$ and $\hat{\mathbf{B}} _p=-2\boldsymbol{\alpha} _p^2 \mathbf{J} _{p}$. It exhibits a quadratic dependence on the permutation elements and inherent non-separability, which causes difficulties in optimization. Since the permutation has size $m\times m$, we can utilize one-hot vectors to constitute it.

**Q3:** A fundamental exploration of the baseline-view-free mechanism.
**A3:** This approach rearranges anchors without requiring any baseline view. Given that the misalignment arises from the differing order of anchors, we link each view to a learnable permutation, which allows for the flexible transformation of anchors within the original space. Moreover, it is compatible with the anchors in a unified framework, and collaborates with their learning process. Selecting a baseline view introduces a complicated solving procedure, and an improperly chosen baseline view will also lead to inaccurate graph structure fusion. In contrast, we do not rely on any baseline view and automatically rearrange anchors. Besides, this is proven to have linear complexity, and hence does not harm the overall efficiency.

**Q4:** Do the cluster indicators affect the overall efficiency? How to mitigate these factors?

**A4:** Combined with Eq.(13), it mainly involves $\mathbf{W}$ and $\mathbf{Z}$. So, updating the cluster indicators takes $\mathcal{O}(d _pmk+d _pk^2+km^2+k^2m+nd _pm+nm^2+nmk)$ cost. Generally, $m$ and $k$ are much smaller than $n$, and $d_p$ is a constant independent of $n$. As a result, the cost of the cluster indicators is linear in $n$, so it does not affect the overall efficiency. Besides, the coefficients related to $n$ are $m$, $k$ and $d_p$, while $k$ is an inherent characteristic of the dataset, and $m$ is set equal to $k$ in the experiments. So, we can decrease the feature dimension to further improve the efficiency.

**Q5:** What is the reason for the spectrum-free strategy spending less time than the traditional scheme?

**A5:** The spectrum generation process and the subsequent embedding grouping result in extra computational expense. Instead, we utilize binary learning to directly formulate the cluster labels, which only involves comparing the elements of two small matrices $\mathbf{W}$ and $\mathbf{Z}$.

**Q6:** How is the performance under anchors with the same dimension?

**A6:** Thanks!
Via projection, we can indeed make all anchors have the same dimension (SD). However, even with the same dimension, since the anchors are still generated on each individual view, the anchor order across views can remain inconsistent, so the spatial misalignment still exists. The following table summarizes the comparisons. DD denotes the results under anchors with their original diverse dimensions.

|Dataset|DERMATO|CALTE7|Cora|REU7200|Reuters|CIF10Tra4|FasMNI4V|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ACC||||||||
|SD|74.51|65.89|47.43|24.13|52.84|23.14|54.92|
|DD|**85.47**|**80.66**|**52.44**|**26.22**|**54.26**|**26.83**|**57.36**|
|NMI||||||||
|SD|76.82|37.64|38.62|5.24|28.61|12.73|56.94|
|DD|**89.97**|**45.25**|**43.70**|**6.25**|**31.87**|**15.64**|**59.21**|
|Fscore||||||||
|SD|75.34|66.82|35.12|25.63|41.63|18.83|45.62|
|DD|**87.92**|**78.12**|**41.12**|**28.55**|**44.84**|**20.64**|**51.37**|

The paradigm using anchors with SD produces inferior results. The reason could be that the projection operation causes information loss and degrades the multi-view diversity, weakening the performance.
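The stopping rule quoted earlier in the rebuttals, $f(t)-f(t+1)\leq 10^{-3}f(t)$ with $f(t)$ the objective value at iteration $t$, is a relative-decrease check that is easy to factor out. A minimal sketch with illustrative names (not the authors' code):

```python
def has_converged(f_prev, f_curr, rel_tol=1e-3):
    """Relative-decrease stopping rule: stop once f(t) - f(t+1) <= rel_tol * f(t)."""
    return f_prev - f_curr <= rel_tol * f_prev

def run_until_converged(step, f0, max_iters=1000, rel_tol=1e-3):
    """Iterate `step` (one optimization round returning the new objective
    value) until the relative decrease falls below rel_tol."""
    f_prev = f0
    for _ in range(max_iters):
        f_curr = step()
        if has_converged(f_prev, f_curr, rel_tol):
            return f_curr
        f_prev = f_curr
    return f_prev
```

A relative (rather than absolute) tolerance keeps the criterion scale-free, so the same threshold works across datasets whose objective values differ by orders of magnitude.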
Progressively Label Enhancement for Large Language Model Alignment
Accept (poster)
Summary: The paper proposes PLE (Progressively Label Enhancement) for LLM alignment, which makes use of all model-generated responses. The proposed algorithm learns the contrast between the principle-guided response and the original response using a ranking loss when the reward difference between the two is larger than a threshold, and learns from both responses through a weighted SFT loss when the reward difference is smaller. Experiments are conducted on three tasks: multi-turn dialogue, controlled text generation, and summarization. The results show that PLE can effectively align LLMs with human preferences.

Claims And Evidence: Please see other parts.

Methods And Evaluation Criteria: The proposed method makes sense to me.

Theoretical Claims: Yes, I checked the proofs in the paper.

Experimental Designs Or Analyses: (1) If I understand correctly, the paper seems to assume that $\pi^\star$ is $\pi_{\theta}(y|x, \text{principles})$. If that's the case, then why design such a complicated algorithm instead of just doing context distillation through SFT (basically using $\pi_{\theta}(y|x, \text{principles})$ as the target while providing only $x$)? If you want to claim that PLE has better sample efficiency or better performance, then context distillation should be the SFT baseline, which makes much more sense compared to using offline data for SFT. (2) According to the description of the experimental setup, it is not clear what reward model is being used during model training. Do training and evaluation use the same reward models? (3) Using BLEU and PPL as evaluation metrics for model alignment does not make sense to me either. Those two metrics focus on surface-level similarity to references, which is not the goal of alignment. In addition, the references are already quite outdated; SOTA aligned models are expected to exhibit quite large differences from the references.
(4) The ablation studies show that both the ranking loss and the weighted SFT loss are required to achieve optimal performance, highlighting the importance of making use of all model-generated responses. However, this is probably based on the condition that the other hyperparameters remain the same. We could imagine that, by tuning the threshold hyperparameter, the model performance when using only the ranking loss or only the weighted SFT loss would fluctuate as well; there should be separate sweet spots for using only one loss. It is necessary to check whether the performance at those points is still worse than the PLE-trained model. (5) The case study part is quite cursory, showing only a few examples without using more quantitative methods to demonstrate the usefulness and harmlessness of model-generated responses at a larger scale. (6) It is a bit hard to put the results of this paper in context. The trained models are evaluated using customized evaluation metrics instead of being evaluated on currently popular alignment benchmarks such as AlpacaEval, Arena-Hard, etc., which typically have standard evaluation protocols.

Supplementary Material: Yes, I have read the Appendix.

Relation To Broader Scientific Literature: The paper explores a new algorithm for aligning LLMs that makes full use of model-generated responses. The idea of using all data stands somewhat in contrast to the line of work that explores "less is more for alignment".

Essential References Not Discussed: I think the paper is missing some literature and important comparisons to context distillation methods such as [1].

[1] Learning by Distilling Context

Other Strengths And Weaknesses: Please see other parts.

Other Comments Or Suggestions: Please see other parts.

Questions For Authors: Please see other parts.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
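The threshold mechanism described in the summary (ranking loss for large reward gaps, weighted SFT loss otherwise) can be sketched as a selection function. The softmax weighting of the two responses below is an illustrative assumption, not the paper's exact formula:

```python
import math

def ple_loss_choice(r_guided, r_orig, tau):
    """Select the PLE training signal for one (principle-guided, original)
    response pair. If the reward gap exceeds the threshold tau, contrast the
    two responses with a ranking loss; otherwise learn from both via a
    weighted SFT loss. The softmax weighting here is an assumption made for
    illustration, not necessarily the authors' scheme."""
    if r_guided - r_orig > tau:
        return "ranking", None
    z = math.exp(r_guided) + math.exp(r_orig)
    return "weighted_sft", (math.exp(r_guided) / z, math.exp(r_orig) / z)
```

Under this reading, the reviewer's point (4) amounts to sweeping `tau`: a very small `tau` reduces PLE to ranking-only training, while a very large `tau` reduces it to weighted-SFT-only training.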
Rebuttal 1:

Rebuttal: Thank you for taking the time to review the paper and providing valuable feedback. We appreciate your efforts in ensuring the quality of the research. Regarding your concerns, we would like to provide the following explanations:

> If I understand correctly, the paper seems to assume that $\pi^\star$ is $\pi _\theta(y|x,principles)$. If that's the case, then why bother designing such a complicated algorithm, instead of just doing context distillation through SFT (basically just use $\pi _\theta(y|x,principles)$ as target when just providing $x$)? If you want to claim that PLE has better sample efficiency or has better performance, then context distillation should be the SFT baseline which makes much more sense comparing to using offline data for SFT.

Thank you for your thoughtful question. We would like to clarify that $\pi^\star$ and $\pi _\theta(y \mid x, \text{principles})$ are fundamentally different in our framework. As defined in Equation (2),

$$\pi^\star = \arg\max _\pi \mathbb{E} _{x \sim p(x),\ y \sim \pi(\cdot \mid x)} [R(x, y)],$$

$\pi^\star$ represents the optimal policy that maximizes the expected reward. In contrast, $\pi _\theta(y \mid x, \text{principles})$ is only a *component* used within the PLE algorithm to generate a reference response under principle-guided prompting. Our goal is not to imitate the principle-guided response directly via context distillation, but rather to progressively guide the base model $\pi _\theta(y \mid x)$ toward $\pi^\star$ by comparing its outputs to those generated under the principle prompt, and selectively applying ranking or weighted learning based on reward differences. As we show in Theorem 5.3, this progressive strategy ensures that $\pi _\theta(y \mid x)$ converges toward $\pi^\star$ under certain conditions.

> According to the descriptions of the experiment setup, it is not clear what reward model is being used during model training? Do training and evaluation use the same reward models?
For the HH dataset, we used RM-Gemma-2B as the reward model during both training and evaluation. For the IMDb dataset, we trained a sentiment classifier using the 0/1 labels provided in the dataset; the reward score is defined as the logit of the positive class predicted by this classifier. For the TL;DR dataset, we trained a reward model based on the preference pairs from the tldr-preference-trl-style dataset, which was then used consistently during both training and evaluation. In all cases, the same reward model is used throughout training and evaluation to avoid distribution mismatch.

> Using BLEU and PPL as evaluation metrics for model alignment does not make sense to me either. Those two metrics are focusing on the similarity to references on a surface level which is not the goal of alignment. In addition, the references are already quite outdated, SOTA aligned models are expected to exhibit quite large differences to the references.

We agree that BLEU and perplexity (PPL), while widely used, do not fully capture the goals of alignment. Additionally, our evaluation does not rely solely on BLEU and PPL. We additionally report:

- Reward model scores, to directly reflect alignment with learned human preference signals;
- Human evaluation, based on qualitative assessments of response helpfulness and harmlessness;
- Evaluations using a strong LLM (Claude API), to provide an automated yet high-quality comparative judgment.

These complementary metrics together offer a more comprehensive view of alignment performance. As shown in our results (Tables 1–3 and Figure 2), our method consistently outperforms baselines across these diverse evaluation settings.

Due to space limitations, other responses can be found at the anonymous link https://anonymous.4open.science/r/ICML_rebuttal_PLE-1F6E/responses_2.md

---

Rebuttal Comment 1.1:

Comment: Thanks for your efforts, I have increased my score.
--- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and for reconsidering your evaluation. We appreciate your time and thoughtful suggestions.
Summary: The paper introduces a novel framework named PLE addressing inefficiencies in aligning large language models (LLMs) with human preferences. Current methods like RLHF face stability and scalability challenges, while alternative approaches rely heavily on large high-quality datasets and treat data generation and model training as decoupled processes, leading to suboptimal data utilization. PLE dynamically integrates these phases by generating principle-guided and original query responses, employing a reward-based dynamic threshold to select training strategies: it uses a ranking loss to prioritize high-margin improvements when reward differences exceed the threshold, and a weighted loss to incorporate both responses proportionally when differences are small. Theoretical analysis demonstrates PLE's convergence to an optimal policy with bounded error rates.

Claims And Evidence: The authors' claim that existing methods inefficiently utilize generated data by treating training and data generation as static, separate processes is well-supported. The dynamic threshold mechanism, which adaptively selects training strategies based on reward differences between principle-guided and original responses, directly addresses this inefficiency by incorporating both high- and mid/low-quality data into training, as evidenced by improved reward scores and human evaluations across tasks. Theoretical guarantees (Lemma 5.2, Theorem 5.3) further validate that progressive threshold updates bound approximation errors and ensure convergence, aligning with the core motivation.

Methods And Evaluation Criteria: The experimental design and evaluation methodology in this paper are comprehensive and well-justified.
The authors comprehensively validate PLE across three distinct tasks—multi-turn dialogue (HH dataset), controlled generation (IMDb), and summarization (TL;DR)—using both large-scale (LLaMA-8B, Qwen-7B) and smaller models (GPT-2), and employ both objective metrics (PPL and reward model scores) and subjective assessments (human and model evaluations) for comparison. Theoretical Claims: I checked the theoretical claims in the main text (Lemma 5.2 and Theorem 5.3), which are logically coherent. Lemma 5.2 establishes that the dynamic thresholding strategy progressively expands the "pure level set" by iteratively tightening the threshold, while Theorem 5.3 builds on this to bound the approximation error between the learned policy and the optimal policy, ensuring convergence. Experimental Designs Or Analyses: The experimental design is generally sound: the authors evaluate PLE across multiple tasks (dialogue, text generation, summarization) using diverse models (LLaMA-8B, Qwen-7B, GPT-2) and metrics (PPL, reward scores, BLEU, human/API evaluations), demonstrating broad applicability. Ablation studies (Tables 4–5) confirm the necessity of both ranking and weighted losses, and training curves (Figure 3) empirically validate progressive improvement. Supplementary Material: I reviewed all the supplementary material. Relation To Broader Scientific Literature: The paper’s key contributions build on prior alignment methods like Self-Instruct [1] (using self-generated principles to guide responses) and RLHF [2] (reward models for preference learning) but introduce a novel dynamic threshold mechanism to address inefficiencies in data utilization. [1] Self-instruct: Aligning language models with self-generated instructions [2] Training language models to follow instructions with human feedback. Essential References Not Discussed: I think all related work has already been mentioned. Other Strengths And Weaknesses: **Strengths**: 1. 
The idea of coupling data generation and training via dynamic thresholds to mitigate the problem of low data utilization is interesting. 2. The convergence proofs strengthen the method’s credibility. **Weaknesses**: 1. The proofs lack intuitive explanations for key parameters, leaving their practical impact unclear. 2. Figure 1’s current design fails to clearly illustrate the interplay between principle-guided responses, thresholding, and training phases; a redesigned figure with step-wise visuals would improve understanding. Other Comments Or Suggestions: No other comments or suggestions; see above. Questions For Authors: 1. Equation (2) defines the optimal policy as $ \pi^* = \arg\max _{\pi} \mathbb{E} _{x,y\sim\pi}[R(x,y)] $, omitting the KL divergence regularization term commonly used in RLHF (e.g., $ -\beta \mathbb{D} _{\text{KL}}(\pi \| \pi _{\text{SFT}}) $) to prevent excessive deviation from the initial policy. Could the authors clarify why KL regularization is absent in their theoretical framework? 2. In Table 1 (HH dataset), the PPO-aligned model shows higher perplexity and lower reward scores compared to SFT. What might explain this? A discussion on the failure modes of PPO in specific tasks (e.g., multi-turn dialogue) would strengthen the paper’s critique of existing methods. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper and providing valuable feedback. I appreciate your efforts in ensuring the quality of the research. Regarding your concerns, I would like to provide the following explanations: > The proofs lack intuitive explanations for key parameters, leaving their practical impact unclear. Thank you for your valuable feedback. We would like to clarify that all key parameters and symbols used in the theoretical analysis are formally defined and explained in Sections 4 and 5 of the paper. If there are any specific parameters whose meaning remains unclear, we would be happy to provide further clarification or improve the exposition accordingly. > Figure 1’s current design fails to clearly illustrate the interplay between principle-guided responses, thresholding, and training phases; a redesigned figure with step-wise visuals would improve understanding. Thank you for your helpful suggestion. We will improve the framework diagram in the next version to provide a clearer representation of the overall process. > Equation (2) defines the optimal policy as $ \pi^* = \arg\max _{\pi} \mathbb{E} _{x,y\sim\pi}[R(x,y)] $, omitting the KL divergence regularization term commonly used in RLHF (e.g., $ -\beta \mathbb{D} _{\text{KL}}(\pi \| \pi _{\text{SFT}}) $) to prevent excessive deviation from the initial policy. Could the authors clarify why KL regularization is absent in their theoretical framework? Thank you for pointing this out. Equation (2) presents a general formulation of the alignment objective, which captures the goal of maximizing expected reward. Common approaches such as PPO introduce a KL divergence regularization term to prevent the learned policy from deviating too far from the reference policy. Such regularized objectives can still be encompassed within our formulation by considering the reward function $R(x, y)$ to implicitly include regularization terms. 
This perspective has also been adopted in prior work [1]. [1] Fundamental Limitations of Alignment in Large Language Models, ICML 2024 > In Table 1 (HH dataset), the PPO-aligned model shows higher perplexity and lower reward scores compared to SFT. What might explain this? A discussion on the failure modes of PPO in specific tasks (e.g., multi-turn dialogue) would strengthen the paper’s critique of existing methods. To ensure a fair comparison across methods, we adopted an offline training setup for PPO, using the same set of queries from the HH dataset. This constraint limits PPO’s ability to fully explore and optimize responses beyond the fixed dataset, which may explain the relatively modest performance gains compared to SFT. Similar observations have been reported in prior work [1]. [1] KTO: Model Alignment as Prospect Theoretic Optimization, ICML 2024 We hope that our revisions have addressed all of your concerns, but please let us know if there is anything else we can do to improve the manuscript. We would be happy to answer any additional questions or provide any further information you may need. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. My concerns have been addressed. After considering other reviewers' feedback, I will maintain my positive recommendation. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our responses. We sincerely appreciate your thoughtful feedback and consideration.
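The rebuttal's point that KL regularization can be absorbed into the reward can be made explicit. In the notation used above, one standard way to see it (note that the resulting reward $\tilde{R}$ is policy-dependent, which is the sense in which the regularizer is "implicitly included"):

```latex
\max_{\pi}\; \mathbb{E}_{x,\,y\sim\pi}\!\left[R(x,y)\right]
  - \beta\,\mathbb{D}_{\mathrm{KL}}\!\left(\pi \,\middle\|\, \pi_{\mathrm{SFT}}\right)
\;=\;
\max_{\pi}\; \mathbb{E}_{x,\,y\sim\pi}\!\left[\tilde{R}(x,y)\right],
\qquad
\tilde{R}(x,y) = R(x,y) - \beta \log \frac{\pi(y\mid x)}{\pi_{\mathrm{SFT}}(y\mid x)}.
```

The identity follows directly from $\mathbb{D}_{\mathrm{KL}}(\pi \,\|\, \pi_{\mathrm{SFT}}) = \mathbb{E}_{y\sim\pi}\!\left[\log \frac{\pi(y\mid x)}{\pi_{\mathrm{SFT}}(y\mid x)}\right]$, so the regularized objective is still a pure expected-reward objective of the form in Equation (2).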
Summary: Authors propose a novel framework that couples data generation and model training, addressing the inefficient utilization of generated data that arises when the two are treated as separate processes. Authors provide a theoretical proof that, with the progressively updated threshold strategy, the approach can bound the error rate between the trained model and the optimal model, ensuring convergence within a controlled range. Authors use the LLaMA3-8B base model and the Qwen2.5-7B model for tests on the HH dataset, and the GPT-2 model for tests on the IMDb and TL;DR datasets. Authors compare the proposed approach with SFT, DPO, PPO, and RAFT, showing consistent improvements across tasks. ## update after rebuttal I think this is a solid paper and keep my score at 4 Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I tried to check Sec 5 Theoretical Analysis, although it is possible I missed some details. Experimental Designs Or Analyses: Authors compare proposed methods with various alternative tuning methods on three popular transformer models, and show consistent improvements. Supplementary Material: No Relation To Broader Scientific Literature: Authors propose a novel method to train models with reward. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Paper is clear and well-written. Experiments are performed with a variety of models on different datasets, and sound convincing. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper and providing valuable feedback. I appreciate your efforts in ensuring the quality of the research. We would be happy to answer any additional questions or provide any further information you may need.
Summary: The paper introduces Progressively Label Enhancement for LLM Alignment (PLE), a framework designed to improve the alignment of Large Language Models (LLMs) with human expectations, addressing ethical and legal concerns. PLE tackles these issues by dynamically adjusting the model's training process based on the quality of generated data. Claims And Evidence: Yes. Methods And Evaluation Criteria: 1. The proposed approach PLE shares considerable similarity with prior alignment methods, raising concerns about novelty. In particular, RAFT (Dong et al., 2023) already “expands the SFT dataset by generating additional samples and selecting those with high reward scores”. PLE’s core idea – generate model responses and utilize reward scores to decide how to train on them – is a direct generalization of RAFT. The main difference is that instead of discarding low-scoring samples entirely (as RAFT does), PLE continues to use them with reduced weight or as negative examples. This is a fairly incremental change: it addresses the data utilization inefficiency noted in RAFT but does not introduce a fundamentally new alignment paradigm. In essence, PLE combines RAFT’s data generation with a ranking loss akin to RRHF (Yuan et al., 2023), which “encourages the model to generate preferred responses with higher probability and poor responses with lower probability” by integrating a preference-based regularization. PLE’s ranking loss when $s_{\text{prompt}} - s > \tau$ serves the same role as RRHF’s regularizer, and its weighted fine-tuning when the difference is small is reminiscent of label smoothing techniques. 2. The idea of using a set of principles to guide a model’s own generated responses for further fine-tuning has been explored in recent “self-alignment” or “constitutional AI” approaches (e.g., Sun et al., 2023). The authors do cite Sun et al.
(2023) as motivation for designing the principle prompt, but PLE does not substantially go beyond those ideas – it uses principles in a straightforward way (simply prepending a fixed set of rules to the query during generation). Other works (like Constitutional AI by Bai et al., 2022b) used principles to generate feedback or to iteratively refine outputs, which is arguably a more novel use of model-generated data. PLE’s use of a principle-guided prompt is comparatively basic and could be seen as a minor variation on these existing alignment techniques. Theoretical Claims: While a theoretical convergence analysis is provided, it relies on strong assumptions about the relationship between model probabilities and reward rankings. For instance, Eq. (8) introduces an assumption that bounds the model’s error by a term related to ranking inconsistencies with respect to the reward model. This is a non-trivial assumption – essentially presuming the reward model’s ranking is a reliable guide to the optimal policy. If this assumption does not hold (e.g., the reward model is imperfect or there are multiple optimal responses), the convergence proof may not apply. Moreover, the paper only sketches the proof idea (due to space), so a reader cannot fully verify the claims. In summary, the methodology section leaves some gaps in understanding (reward model setup, training dynamics, hyperparameter choice) that could undermine confidence in the approach’s soundness. Experimental Designs Or Analyses: 1. Missing Baseline Comparisons: While the paper evaluates PLE against several state-of-the-art baselines (SFT, PPO, DPO, RAFT) on multiple datasets, it notably omits some relevant comparisons. For example, RRHF (which was discussed in related work) is not included in the experiments. Since PLE’s loss essentially incorporates an RRHF-like term, it would be important to see a direct comparison to the RRHF method.
Additionally, other “principle-driven” alignment methods (like Constitutional AI or self-critiquing approaches) are not quantitatively compared. The absence of these baselines leaves a gap in demonstrating PLE’s superiority. A reviewer might question whether PLE’s gains are simply due to using more data (since it utilizes all generated samples) rather than the particular strategy, something a comparison with a simpler “use all data” baseline could clarify. 2. A potential concern is that PLE is tuned to these specific setups and might not generalize broadly. For instance, how would PLE perform on aligning a model to code and math reasoning? Supplementary Material: Yes. Relation To Broader Scientific Literature: The topic is important. Essential References Not Discussed: The paper generally does a good job citing relevant literature. It references the key prior works in large-language-model alignment, including RLHF methods, instruction-following tuning, and recent alternatives like DPO, RAFT, LIMA, and RRHF. We did not identify major foundational papers that were omitted. This suggests the authors are well-versed in the literature. Moreover, when introducing concepts like “self-align” principles or “label enhancement,” they cite sources (e.g., Sun et al., 2023 for self-alignment, and Xu et al., 2021 for label enhancement), which helps place their contributions in context. Other Strengths And Weaknesses: Refer to the above points. Other Comments Or Suggestions: Refer to the above points. Questions For Authors: Refer to the above points. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper and providing valuable feedback. I appreciate your efforts in ensuring the quality of the research. Regarding your concerns, I would like to provide the following explanations: > The proposed approach PLE shares considerable similarity with prior alignment methods, raising concerns about novelty. In particular, RAFT (Dong et al., 2023) already “expands the SFT dataset by generating additional samples and selecting those with high reward scores”. PLE’s core idea – generate model responses and utilize reward scores to decide how to train on them – is a direct generalization of RAFT. The main difference is that instead of discarding low-scoring samples entirely (as RAFT does), PLE continues to use them with reduced weight or as negative examples. This is a fairly incremental change: it addresses the data utilization inefficiency noted in RAFT but does not introduce a fundamentally new alignment paradigm. In essence, PLE combines RAFT’s data generation with a ranking loss akin to RRHF (Yuan et al., 2023), which “encourages the model to generate preferred responses with higher probability and poor responses with lower probability” by integrating a preference-based regularization. PLE’s ranking loss when $s_{\text{prompt}} - s > \tau$ serves the same role as RRHF’s regularizer, and its weighted fine-tuning when the difference is small is reminiscent of label smoothing techniques. We would like to emphasize that the core motivation and overall framework of PLE are fundamentally different. As stated in our paper, many existing methods treat data generation and model training as separate, static processes, which leads to suboptimal use of generated data. In contrast, PLE explicitly couples these two stages through a dynamic and adaptive training strategy, where the model’s behavior and reward feedback guide the evolving use of generated responses.
This synergy between data generation and adaptive training is central to PLE’s novelty. Therefore, rather than being an incremental extension of RAFT or a hybrid with RRHF, PLE proposes a new unified framework that systematically addresses the inefficiency in prior alignment approaches. > The idea of using a set of principles to guide a model’s own generated responses for further fine-tuning has been explored in recent “self-alignment” or “constitutional AI” approaches (e.g., Sun et al., 2023). The authors do cite Sun et al. (2023) as motivation for designing the principle prompt, but PLE does not substantially go beyond those ideas – it uses principles in a straightforward way (simply prepending a fixed set of rules to the query during generation). Other works (like Constitutional AI by Bai et al., 2022b) used principles to generate feedback or to iteratively refine outputs, which is arguably a more novel use of model-generated data. PLE’s use of a principle-guided prompt is comparatively basic and could be seen as a minor variation on these existing alignment techniques. As we mentioned above, the core contribution of our work lies not in the specific design of the principle-guided prompt itself, but in the overall framework that couples data generation and model training in a dynamic and synergistic manner. The principle-guided prompt serves as one component within this framework—its role is to produce alternative responses that reflect desirable alignment properties, which are then evaluated and integrated into the training pipeline using our adaptive strategy. While our current implementation adopts a simple prompting mechanism, we do not claim novelty in the prompt design itself. In fact, one of the strengths of our method is that it is modular and compatible with more sophisticated principle-guided strategies, such as feedback-based refinement or multi-turn rewriting used in works like Constitutional AI. 
We chose a straightforward setup to demonstrate the effectiveness of our approach even under minimal settings. We appreciate the suggestion and will explore integrating more advanced principle-guided generation techniques in future work to further enhance the overall performance. Due to space limitations, other responses can be found at the anonymous link https://anonymous.4open.science/r/ICML_rebuttal_PLE-1F6E/responses_1.md --- Rebuttal Comment 1.1: Comment: My concern has been resolved; I will increase my score.
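As a reading aid for the threshold mechanism debated in this thread (ranking loss for large reward gaps, weighted loss for small ones), here is a minimal sketch. This is our own simplification for illustration: the softmax weighting and the returned dictionaries are hypothetical choices, not the paper's exact losses.

```python
import math

def select_training_strategy(s_principle, s_original, tau):
    """Sketch of PLE-style branching: if the reward of the
    principle-guided response exceeds the original response's reward
    by more than the threshold tau, train with a ranking loss on the
    pair; otherwise, weight both responses by their relative reward."""
    gap = s_principle - s_original
    if gap > tau:
        # Large margin: treat the principle-guided response as preferred.
        return ("ranking_loss", {"preferred": "principle", "rejected": "original"})
    # Small margin: incorporate both responses, weighted by a softmax
    # over their reward scores (an illustrative choice of weighting).
    w = math.exp(s_principle) / (math.exp(s_principle) + math.exp(s_original))
    return ("weighted_loss", {"w_principle": w, "w_original": 1.0 - w})
```

Progressively tightening `tau` over training is what the rebuttal's "progressively updated threshold strategy" refers to.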
Multiple-policy Evaluation via Density Estimation
Accept (poster)
Summary: The paper introduces CAESAR, an algorithm for efficiently evaluating multiple policies in finite-horizon MDPs to a desired accuracy and confidence level. CAESAR first obtains coarse estimates of the visitation distributions with low sample complexity, and then refines these estimates by computing importance weights using a step-wise quadratic loss inspired by DualDICE. Additional contributions include the IDES subroutine for improved importance ratio estimation and the MARCH algorithm, which, along with a new β-distance metric, enables efficient estimation even with exponentially many deterministic policies. Claims And Evidence: The theoretical results are backed by comprehensive proofs. Methods And Evaluation Criteria: The result is evaluated in terms of sample complexity. Theoretical Claims: I checked the proofs for Sections 4.1 and 4.2 briefly and they seem to make sense. Experimental Designs Or Analyses: No experiments. Supplementary Material: Yes, part of the proof of main results. Relation To Broader Scientific Literature: see below Essential References Not Discussed: I think this paper is widely related to offline RL (instance-dependent bounds) and offline policy evaluation, and I think the paper should add more discussion relating to this literature. Other Strengths And Weaknesses: - The paper is well-organized and the main idea of the whole algorithm is well divided into several sections with detailed discussions - The idea is quite new and I think the problem formulation itself is practically relevant since it may be related to how to construct a shared dataset for policy evaluation - However, I'm quite confused by the setting since it seems to be a hybrid evaluation process: the coarse estimation is obtained online, and then the policy is evaluated, in an offline pattern, with a dataset collected by the chosen sampling policy based on the estimate obtained online. Such a hybrid process is quite counterintuitive to me and needs more clarification.
- Also, the definition of coarse estimation seems new and I'm not sure whether it's reasonable. Other Comments Or Suggestions: - Add more clarification on the setting and the definition - Since a new concept/measure is proposed, it would be better to use some experiment to show whether it's reasonable or not. Questions For Authors: - Please clarify the weakness part. - Could the authors connect the proposed method more with the offline RL literature and show the similarities and differences, since the high-level idea seems really similar to me - The comparison with existing results is unclear to me as to which is better. Would it be possible to provide a clearer result in a specific setting? - Why is MoM introduced? Why does the bounded expectation in Lemma 4.7 not directly indicate the high-probability bound? - Why does the lower bound of MARCH need the assumption that policies are deterministic? From my interpretation, it's a result for tabular MDPs Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your efforts on reviewing our paper. We are confident that we can address your concerns and questions, and we kindly request you to consider increasing your score if you think we have done so. > The hybrid setting of online and offline is confusing. Sorry for the confusion. The setting of our problem is online. Our problem formulation is: given a set of target policies and an unknown MDP, evaluate the performance of all these policies. No additional dataset is given and interactions with the environment are necessary, so it is definitely online. We do mention in the paper that our method is implemented more in an offline manner, since in offline RL an offline dataset is given and no more interaction is allowed, i.e., we can only sample data from a fixed given distribution. This is exactly the second phase of our algorithm, where we have a fixed sampling distribution/policy derived from the first phase, and use data from this fixed distribution to perform the evaluation. > The connection between our method and offline RL. In offline RL, an offline dataset is given and is usually assumed to have good coverage over the state space. In our scenario, if we already had an offline dataset with good coverage over the space of these target policies, then we could directly use methods from offline RL to do off-policy evaluation of these target policies. However, this is not the case. We do not assume such a good offline dataset, which is often impractical because we have multiple policies; and even if it exists, it may be highly sub-optimal with respect to the policies at hand, e.g. a uniform offline dataset. What our method does is first interact with the environment to obtain some information about the target policies, and then compute an approximately optimal sampling policy in terms of coverage over the target policies.
We successfully show that only low-order samples are needed to construct such a sampling dataset/policy. In the second phase, we can just sample data using this fixed sampling policy to do policy evaluation, just like offline RL. > Doubts on the reasonability of coarse estimation. That is one of our contributions: in our work, we theoretically show the effectiveness of coarse estimation in the multiple-policy evaluation problem, so the coarse estimation has been proven reasonable. We are not sure what exact concerns you have about coarse estimation. Are you concerned about how coarse estimation leads to a final accurate performance evaluation? It is true that one can never achieve estimation up to $\epsilon$-accuracy by just doing coarse estimation with $\tilde{O}(1/\epsilon)$ sample costs; the technique is that coarse estimation is only used to approximate an intermediate quantity in the whole procedure, e.g. the sampling distribution $\hat{\mu}^*$ in our scenario. In our setting, to evaluate the final performance accurately, the second phase is unavoidable. Coarse estimation is powerful since, first, it only needs low-order samples, which is negligible compared to the main order of sample costs, and second, the multiplicative constant error it induces is acceptable. In other words, it enables us to approximate some useful intermediate quantities at an acceptable sample cost and error, which plays an important role in achieving the ultimate goal. > The comparison with existing results is unclear which is better. We already discussed the comparison of our sample complexity with the existing work in Section 5.1. It is unclear whether our sample complexity is universally better; however, our result is meaningful and significant by offering new insights for the problem and sharing many advantages. First, we resolve the unfavorable dependency on $1/d^{max}(s)$ in existing results.
Second, our work tackles the problem from a different perspective which is complementary to the existing method. Third, our sample complexity is cleaner and more interpretable than the existing result. Finally, our method can be parallelized while the existing method cannot (it relies on trajectory stitching), which is a practical advantage. > Why does the lower bound of MARCH need the assumption that policies are deterministic? It is not an assumption. MARCH is our proposed algorithm to coarsely estimate all deterministic policies. Since we consider tabular MDPs, any policy can be expressed as a combination of deterministic policies, so it is enough to estimate all deterministic policies. > Why is MoM introduced? Why does the bounded expectation in Lemma 4.7 not directly indicate the high-probability bound? A result that holds in expectation usually does not imply that the result holds with high probability. In our scenario, we get our importance density estimators by Algorithm 1, which uses stochastic gradient descent. > Other comments Thanks for the comment. We will consider updating our paper to make the setting clearer, include more related works in offline RL, and implement some experiments.
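The role of the Median-of-Means (MoM) step invoked above, converting an in-expectation guarantee into a high-probability one, can be illustrated generically. This is the textbook construction, not the paper's exact estimator:

```python
import random
import statistics

def median_of_means(samples, num_groups):
    """Split the samples into groups, average within each group, and
    return the median of the group means.  A few heavy-tailed outliers
    can corrupt at most a few group means, and the median ignores them,
    which is what upgrades a bound in expectation (on the plain mean)
    to a bound that holds with high probability."""
    k = len(samples) // num_groups
    means = [statistics.fmean(samples[i * k:(i + 1) * k]) for i in range(num_groups)]
    return statistics.median(means)

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(1000)]
data[0] = 1e6  # a single heavy-tailed outlier ruins the plain mean
print("plain mean:", statistics.fmean(data))
print("median of means:", median_of_means(data, num_groups=10))
```

Here the plain mean is dragged to roughly 1000 by the outlier, while the median of means stays near the true value 1.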
Summary: This paper addresses the problem of evaluating the performance of multiple target policies in reinforcement learning (RL) using a sample-efficient algorithm called CAESAR. Existing methods for single-policy evaluation can be inefficient when applied separately to each policy, especially when policies are similar. CAESAR tackles this by first estimating coarse visitation distributions for all policies, then designing an approximately optimal sampling distribution that efficiently covers all target policies simultaneously. The sampled data is used to estimate policy values using importance weighting, with importance ratios estimated through a new technique called IDES, inspired by the DualDICE method. Compared to prior work, CAESAR offers a flexible offline evaluation framework, works for finite-horizon MDPs, and is applicable to both policy evaluation and near-optimal policy identification tasks. Claims And Evidence: The claims made in the paper are well supported by the accompanying theoretical results. Methods And Evaluation Criteria: The proposed method has strong theoretical backing. It is not empirically evaluated, but it is assessed via its sample complexity and compared with other algorithms in terms of order. Theoretical Claims: I was not able to fully check the correctness of the theorems, but the proofs seem to be detailed enough to easily point out issues, and I haven't found any. Experimental Designs Or Analyses: The paper does not perform any experiments. Supplementary Material: Some of the proofs Relation To Broader Scientific Literature: - This paper contributes to the less explored area of what policy should be used to effectively evaluate multiple policies. This paper seems to be one of very few papers that provide theoretical results in this field. This may have some impact on practical algorithms with similar goals. - This paper proposes IDES, a non-trivial extension of DualDICE to finite-horizon MDPs.
Essential References Not Discussed: I am not aware of any essential references that are not discussed in this paper. Other Strengths And Weaknesses: - very detailed, easy-to-follow proofs - the paper is overall easy to follow - deals with an issue that has a number of practical applications - purely theoretical, no empirical analyses Other Comments Or Suggestions: None. Questions For Authors: I am not sure whether framing the task as "offline multiple-policy evaluation" is appropriate, because at first glance it looks like a trivial problem: evaluating multiple policies with a given offline dataset. IMO, it would be easier to understand if the task were framed as computing a policy with the least OPE error on multiple policies. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your efforts on reviewing our paper. We really appreciate your positive feedback on our work. We agree that framing the task as 'offline' multiple-policy evaluation is somewhat confusing and may cause unnecessary misunderstandings. Thanks for pointing it out; we will consider reorganizing the language.
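The importance-weighting step in CAESAR's second phase rests on a standard identity: if $w = d^{\pi}/\mu$ is the density ratio between a target policy's visitation distribution and the sampling distribution, a weighted average of rewards sampled from $\mu$ estimates the value under $d^{\pi}$. A generic toy illustration (the distributions and rewards below are ours, not from the paper):

```python
import numpy as np

def importance_weighted_value(rewards, ratios):
    """Off-policy value estimate: the empirical mean of w * r under the
    sampling distribution mu approximates E_{d^pi}[r] when w = d^pi / mu."""
    return float(np.mean(np.asarray(ratios) * np.asarray(rewards)))

# Toy check on a 2-point space: sampling distribution mu vs. target d_pi.
mu = np.array([0.5, 0.5])
d_pi = np.array([0.8, 0.2])
r = np.array([1.0, 0.0])                         # reward at each point
rng = np.random.default_rng(1)
idx = rng.choice(2, size=100_000, p=mu)          # sample from mu only
est = importance_weighted_value(r[idx], (d_pi / mu)[idx])
print(est)   # should be close to sum(d_pi * r) = 0.8
```

CAESAR's first phase makes $\mu$ cover every target policy's $d^{\pi}$ well, which keeps these ratios, and hence the estimator's variance, small for all policies simultaneously.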
Summary: This paper tackles the problem of multiple-policy evaluation by proposing an offline off-policy approach named “CAESAR”, in contrast to the online on-policy approach by Dann et al. 2023. The proposed approach first performs a coarse estimate of the visitation distributions of the target policies. These estimates are then used to approximate the optimal sampling distribution and to compute the importance density ratio, drawing inspiration from DualDICE [Nachum et al. 2019]. For estimating the visitation distributions of deterministic target policies, they also propose an algorithm MARCH, leveraging a novel notion called $\beta$-distance. Furthermore, the paper provides a detailed analysis of the sample complexity of CAESAR, presenting PAC bounds and showing how they scale with key parameters such as horizon H, number of policies K, number of states S and actions A. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proof of the main results and did not identify obvious issues. Experimental Designs Or Analyses: There are no experiments in this paper. Supplementary Material: There are no supplementary materials. Relation To Broader Scientific Literature: It extends prior work in multiple-policy evaluation from the online, on-policy setting to an offline off-policy setting. And it adapts and extends the distribution ratio estimation technique from DualDICE (Nachum et al., 2019) to the finite-horizon context, with a step-wise objective function. Essential References Not Discussed: No Other Strengths And Weaknesses: 1. The paper introduced the notion of $\beta$-distance for analyzing the sample complexity of MARCH. However, the intuition behind $\beta$-distance does not seem entirely clear. Additional discussion would enhance the clarity. 2. Due to the $H^4$ dependence of the sample complexity, it would strengthen the paper to include experiments that empirically verify that scaling.
Other Comments Or Suggestions: No. Questions For Authors: The sample complexity in Theorem 4.9 scales with $H^4$, compared to the $H^2$ scaling in Dann et al. (2023). This higher-order dependence on $H$ may limit the practical applicability of the proposed method. You mentioned that this gap was induced by the error propagation and conjectured that a different loss function might address it. Can you elaborate on that point and discuss if there are other approaches that might reduce the dependence on $H$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your efforts in reviewing our paper. We appreciate your comments, and we kindly ask you to consider raising your score if our responses resolve your concerns and questions. > The intuition behind $\beta$-distance does not seem entirely clear. The $\beta$-distance is defined as $dist^{\beta}(x,y)=\min_{\alpha\in[1/\beta, \beta]} |\alpha x - y|$. The $\beta$-distance is zero as long as $\frac{x}{\beta} \le y \le \beta x$, which means $x$ approximates $y$ up to a multiplicative factor of $\beta$. The intuition behind the $\beta$-distance is that we want a metric that can measure coarse estimation, and the $\beta$-distance is such a metric. Lemma B.5 shows that if the $\beta$-distance is smaller than $\epsilon$, then $x$ is a coarse estimator for $y$. With this metric, we can focus on bounding the $\beta$-distance $dist^{\beta}(\hat d, d)$ to show that $\hat d$ is a coarse estimator for the visitation distribution $d$. Analyzing the $\beta$-distance is much more convenient than analyzing the formulation in Definition 4.1. Thanks for the comment. The detailed elaboration of the $\beta$-distance and MARCH is in Appendix B. We will consider updating the paper to clarify the $\beta$-distance and MARCH further in the main text. > It would be better to include experiments that can show the $H^4$ dependency of sample complexity. We agree with the reviewer that such an experiment would strengthen the paper, but in practice, demonstrating the $H^4$ dependence experimentally would be extremely hard, since it is only a high-probability upper bound. We also want to reiterate that we consider the scope of our submission to be the advancement of the theoretical understanding of this problem. As such, we hope to obtain a fair evaluation of our submission based on its merits in advancing theoretical research in RL. 
Moreover, as in most theoretical works in RL, we focused on achieving the sharpest dependence on S and K (for this specific problem), which, given the large size of S in applications, is typically considered more consequential than the dependence on H. Our minimax upper bounds achieve the sharpest dependence on S and A. > Elaborate on how a different loss function might address the sub-optimal dependence on $H$. And is there any other approach to address it? The loss function we defined to estimate the importance densities is a step-wise loss function (see L312-315 (right)), and we adopt a step-by-step optimization procedure: we first obtain the estimator at step $h-1$ by minimizing the loss function $l_{h-1}(w)$, and then use it to obtain the estimator at step $h$ by minimizing $l_h(w)$. Since the loss function at step $h$ depends on the estimator from the previous step, there is additive error propagation, which induces the $H^4$ dependence. Our idea is to propose a comprehensive loss function $L(w)$ that incorporates all steps, utilizing some coarse knowledge of the transition functions. By minimizing this loss function, we could obtain the importance density estimators for all layers at once, which avoids the step-to-step error propagation and thus removes the $H^4$ dependence. Whether there are other approaches that solve the multiple-policy evaluation problem with optimal dependence on $H$ is a good question. Within our pipeline, we think it is hard to go beyond the method described above, but we are optimistic that entirely different methods could achieve it. Multiple-policy evaluation is a challenging and under-studied problem that is worth exploring.
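As an illustration, the $\beta$-distance defined earlier in this rebuttal can be sketched in a few lines. This is hypothetical code, not from the paper; the closed-form clipping step is our own simplification of the one-dimensional minimization over $\alpha$:

```python
# Hypothetical sketch of the beta-distance (not the authors' code):
# dist^beta(x, y) = min_{alpha in [1/beta, beta]} |alpha * x - y|.
def beta_distance(x: float, y: float, beta: float) -> float:
    assert beta >= 1.0 and x >= 0.0 and y >= 0.0
    if x == 0.0:
        return y  # alpha * x = 0 for every alpha, so the distance is |0 - y|
    # |alpha * x - y| is minimized over [1/beta, beta] by clipping the
    # unconstrained minimizer alpha* = y / x to that interval.
    alpha = min(max(y / x, 1.0 / beta), beta)
    return abs(alpha * x - y)

# The distance is zero exactly when x/beta <= y <= beta*x, i.e. when x
# approximates y up to a multiplicative factor of beta.
assert beta_distance(1.0, 1.5, beta=2.0) == 0.0   # 0.5 <= 1.5 <= 2.0
assert beta_distance(1.0, 4.0, beta=2.0) == 2.0   # best feasible alpha is 2
```

Bounding this quantity then certifies a coarse (up-to-constant) estimate, which is the role it plays in the analysis of MARCH.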
Summary: The paper proposes an algorithm for off-policy estimation of a set of $K$ policies. To me, the main idea is to try to improve the linear dependence on $K$ in the naive $K / \varepsilon^2$ sample complexity (where one simply estimates the visitations of all $K$ policies via Monte Carlo) by collecting samples from a covering distribution, then doing importance-weighted estimation. The method comprises the following steps (Algorithm 2): - Estimate the visitation distributions of each policy up to constant accuracy -- the sample complexity here has a fast statistical rate $\varepsilon^{-1}$, but I believe it scales as $\text{poly}(K,S,A)$, see below - Compute the "optimal sampling distribution" $\hat\mu_h^*$, which is essentially the distribution that best covers the estimated distributions of the $K$ policies (in the sense of minimizing variance or expected coverage, aka $\ell_1$- or $\ell_2$-coverability), and collect samples - Since the data is now off-policy, importance weights must be computed in order to estimate the policies' values (Algorithm 1) Claims And Evidence: Overall, the analysis seems reasonable. However, some important claims have been misrepresented or are at the very least questionable, which belies the contributions of the paper. I believe that important parts of the analysis of CAESAR have been ignored or abstracted away, which may significantly increase the sample and computational complexity. In particular: - the sample complexity of coarse visitation estimation for all policies. - the cost of solving the optimization problem for the sampling distribution in Eq. (5) - analysis of the weight estimation algorithm (IDES) In addition, I feel that comparisons to the most relevant works are inaccurate, in particular: - claimed differences with Amortila et al. 
(2024), e.g., that CAESAR's objective is easier to solve (L144-154, left) - the claimed novelty of the coarse visitation estimation results (Section 4.1) **Sample complexity of estimating visitation for all policies: dependencies on state space $S$ and policies $K$** The abstract states that $\tilde O(\varepsilon^{-1})$ samples are required to estimate the visitation distributions of all policies, which is required in step 1 of CAESAR (Algorithm 2). Correct me if I'm wrong, but I believe that the sample complexity of step 1 is actually something like $$O\left(\frac{K S A}{\varepsilon}\right).$$ As we often treat $S$ (and seemingly $K$ here) as being very large (potentially exponentially large), this is not a lower-order term to be absorbed into the total sample complexity of CAESAR (Theorem 4.9) without further assumptions. None of this has been clearly stated or discussed in the main body, and I believe that it warrants extended consideration as it does form a trade-off. I am also concerned that a large number of additional samples will be required to compute (5); this computed distribution is the foundation of the paper but has been completely abstracted away, see below. **Optimization problem seems expensive and difficult to solve** Since the sampling distribution computed from the optimization problem in (5) is central to the paper, I think it's necessary to provide evidence that it can be solved, and under what conditions it can be solved. I am concerned about this because - The inner maximization is non-concave as it is over a discrete set, and relaxations to convex hulls might incur larger sample complexities - The entire paper of Amortila et al. (2024) is about methods to solve (5), and I am not sure that any of them apply without relaxations of the objective and model access / online rollouts. Therefore the expense of solving (5) may dominate the total sample complexity of CAESAR, or fall outside the claimed problem setup (only one data collection phase). 
In addition, the authors should probably discuss the robustness of the downstream methods (IDES, etc.) in the case that $\hat \mu^*$ is inaccurate. **Analysis of the weight estimation method (IDES)** The gradient is estimated using samples from $\mu$, but the sample complexity of estimating this gradient is not considered and is assumed away (as far as I can tell in the main body, since Lemma 4.6 simply assumes that we have an accurate $\hat w$). It seems probable that this will incur worse dependencies in the total sample complexity of CAESAR. **Comparisons to Amortila et al. (2024)** In L144-154 (left), the authors claim that the optimization problem in Eq. (5) is easier to solve than the one in Amortila et al. (2024); I think this simply may not be true given that the inner maximization in Eq. (5) is over a non-concave / discrete set, nor do the authors actually discuss how to solve Eq. (5). Also in general, stating that Amortila et al. (2024) is a "concurrent work" (L144) is simply dishonest, as it precedes this submission by 1 year, and deserves much greater credit in Section 4.2. **Novelty of coarse visitation estimation** Zhang and Zanette (2023), which is not cited in the submission, has already demonstrated how to estimate visitations up to constant accuracy, and their Lemma A.22 is essentially the same result as Eq. (1). Although their setting is different, it's related in the sense that they use coarse estimation to get a fast rate and use coarsely estimated occupancies to explore. I think it is not fair to claim coarse estimation as a contribution in this submission, without at least heavily citing Zhang and Zanette (2023). **References** Zhang, Ruiqi, and Andrea Zanette. "Policy finetuning in reinforcement learning via design of experiments using offline data." Advances in Neural Information Processing Systems 36 (2023): 59953-59995. 
Methods And Evaluation Criteria: As this work is theoretical in nature, I would say that the guarantees the authors have desired to prove make sense for the stated problem. Theoretical Claims: I did not check the proofs, but the high-level dependencies in the theoretical claims seem consistent with what one would expect and with those in existing work. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: No Relation To Broader Scientific Literature: The paper combines methods from previous literature on weight and policy visitation estimation, and reward-free exploration with visitations and weight estimation, in the setting of multiple-policy off-policy evaluation. The goal is to reduce the linear dependence on $K$ to a better coverage coefficient. Essential References Not Discussed: Important references are missing and/or inadequately credited - As mentioned previously, Zhang and Zanette (2023) show how to coarsely estimate distributions, and show how to use them for exploration. Another highly relevant work along this vein is Li et al. (2023), who also estimate distributions up to constant accuracy, then use them to compute a covering distribution (for tabular MDPs, but they describe how to optimize the objective efficiently). - The optimization problem in Eq. (5) is identical to the one in Amortila et al. (2024) - Techniques behind gradient-based methods to estimate weights, and handling error propagation of estimated weights and visitations, have been utilized in Huang and Jiang (2024); Kallus and Uehara (2020); Amortila et al. (2024); Huang et al. (2024) **References** Li, G., Zhan, W., Lee, J. D., Chi, Y., & Chen, Y. (2023). Reward-agnostic fine-tuning: Provable statistical benefits of hybrid reinforcement learning. Advances in Neural Information Processing Systems, 36, 55582-55615. Kallus, N., & Uehara, M. (2020, November). Statistically efficient off-policy policy gradients. In International Conference on Machine Learning (pp. 
5089-5100). PMLR. Huang, A., & Jiang, N. (2024). Occupancy-based Policy Gradient: Estimation, Convergence, and Optimality. Advances in Neural Information Processing Systems, 37, 416-468. Amortila, P., Foster, D. J., Jiang, N., Sekhari, A., & Xie, T. (2024). Harnessing density ratios for online reinforcement learning. arXiv preprint arXiv:2401.09681. Huang, A., Chen, J., & Jiang, N. (2023, July). Reinforcement learning in low-rank mdps with density features. In International Conference on Machine Learning (pp. 13710-13752). PMLR. Other Strengths And Weaknesses: I think that the technical contributions and conceptual insights of this paper are limited in novelty, in the sense that they are largely derived from existing papers and combined for the task of multiple policy off-policy evaluation. However, the observations that (a) one only needs to estimate the visitation distributions up to constant accuracy and (b) one can extract useful importance weights from this seem interesting and useful. At the same time, all of this relies on being able to solve Eq. (5), which to me, is the hardest part of the problem. I am recommending rejection because of the obfuscated presentation of results (sample complexity), the lack of consideration towards how to solve Eq. (5), and the improper acknowledgement of existing work from which many technical tools were almost surely derived. Other Comments Or Suggestions: n/a Questions For Authors: 1. What is the sample complexity of step 1 of CAESAR, i.e., estimating the visitations of all policies? 2. How can Eq. (5) be solved, and can you comment on the expected sample complexity? 3. Since the gradient in IDES is estimated, can you comment on the expected complexity of learning $\hat w$ via SGD? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments on our work. We are confident that we can address your concerns, and we kindly ask you to consider raising your score based on the following elaboration. One of your main concerns is about our optimization objective (5): > The problem (5) is hard to solve since the inner maximization is non-concave. The sample complexity of solving (5) is unclear. We want to correct this comment since it is not true. Our optimization objective is a well-formulated convex problem. Let us denote $\sum_{s,a} \frac{(\hat d_h^{\pi^k}(s,a))^2}{\mu_h(s,a)}$ by $f_k(\mu)$. It is straightforward that $f_k(\mu)$ is convex w.r.t. $\mu$, and the maximum of finitely many convex functions is still convex; hence, our objective $\max_{k\in[K]} f_k(\mu)$ is convex. It can be solved by any convex optimization solver, which means no samples are needed to solve (5). > Our opt problem (5) is identical to the one in Amortila et al. (2024). The formulation identical to the one in Amortila et al. (2024) is (4), not (5). (4) is common in many works since it is a standard formulation describing the coverage property of a dataset. The emphasis of the two works is different. We show that a $\mu$ that is optimal up to a constant is enough in our scenario; hence we use coarse estimators to approximate (4), leading us to (5), which is easy to solve. They want an exact optimizer of (4), which is not straightforward to obtain since the true visitation distributions are unknown, and they have to reformulate the problem as an RL problem. > The sample complexity of coarse estimation for all policies depends on poly(K). This is not true. We want to emphasize that our lower-order term for coarse estimation is always bounded by $\tilde{O}(poly(H,S,A)/\epsilon)$ without $poly(K)$, which is a non-trivial contribution. 
We achieve this by proposing the MARCH algorithm along with a novel theoretical analysis (utilizing our proposed $\beta$-distance, which possesses many nice properties). We briefly discussed it in Lines 240-249 (left) in the main text. The detailed content is in Appendix B due to page limits. We will consider updating the paper to emphasize it more in the main text. > The complexity of estimating $\hat w$ in IDES is unclear. The sample complexity of IDES is exactly the number of optimization iterations, since we use stochastic gradients. Specifically, in each iteration of the optimization, one sample is used to estimate the stochastic gradient. The sample complexity of IDES is the same as the one presented in Theorem 4.9, and it is also reflected in Algorithm 1, where the iteration number $n_h$ is specified. > Missing related works. We appreciate that you listed these related works, which we carelessly omitted in the current version due to the vast amount of literature. We will definitely update our paper to include these works as well as discussions of their relations to ours. We also want to clarify that the statement that "Amortila et al. (2024) is a 'concurrent work' (L144)" is not dishonest: our first draft was made public in the same period as Amortila et al. (2024). But we understand that reviewers have no information on our submission history, and we will update the expression to make it more appropriate for this submission. > The novelty of coarse estimation and other derivation techniques is limited. We agree that the low-order sub-procedure is not new in the literature, and thanks for providing these related works; we will definitely include them in the updated version. However, we disagree that the existing results imply our results on coarse estimation. Lemma A.22 in Zhang and Zanette (2023) is based on estimation of transition functions, while we directly estimate visitations. And in our result, the constant (i.e. 
$c$ in Definition 4.1) can be any value (see Appendix A). Eq. (45) in Li et al. (2023) is similar to our result; however, our formulation is much cleaner, relying on simple concentration inequalities, while theirs involves additional complex terms. We formally formulate the technique as coarse estimation in this work and present results that are ready to use in other scenarios. We disagree with the comment that the contribution of our work is simply a combination of existing techniques. We presented our contributions clearly at the end of the introduction. Multiple-policy evaluation is a very challenging and under-studied problem, and we provide strong theoretical results for it, a significant step forward, along with several useful by-products that have many potential uses. Again, we appreciate your efforts in reviewing our paper. Please let us know if the above rebuttal resolves your concerns; we are happy to keep discussing if you have more questions.
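The convexity argument in this rebuttal (each $f_k(\mu)$ is convex in $\mu$, and a pointwise maximum of convex functions is convex) can also be sanity-checked numerically. The following is an illustrative sketch with random toy visitation estimates, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
K, SA = 3, 5                        # number of policies, |S||A| (toy sizes)
d = rng.random((K, SA))
d /= d.sum(axis=1, keepdims=True)   # coarse visitation estimates, rows sum to 1

def objective(mu):
    # max over policies k of sum_{s,a} d_k(s,a)^2 / mu(s,a)
    return float(np.max(np.sum(d**2 / mu, axis=1)))

# Numerical convexity check along a segment between two points in the simplex.
mu1, mu2 = rng.random(SA) + 0.1, rng.random(SA) + 0.1
mu1, mu2 = mu1 / mu1.sum(), mu2 / mu2.sum()
for t in np.linspace(0.0, 1.0, 11):
    lhs = objective(t * mu1 + (1 - t) * mu2)
    rhs = t * objective(mu1) + (1 - t) * objective(mu2)
    assert lhs <= rhs + 1e-12       # Jensen-type inequality holds pointwise
```

Consistent with the rebuttal's claim, such an objective can be handed directly to an off-the-shelf convex solver without collecting any additional samples.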
Label Distribution Propagation-based Label Completion for Crowdsourcing
Accept (poster)
Summary: To complete the missing labels, this paper proposes a novel label completion method for crowdsourcing that utilizes label distribution propagation. Both worker similarity and label correlation are considered to generate the label distributions for missing labels. Based on worker similarity, weighted majority voting is applied to obtain the initial label distribution, and local linear embedding is adopted to carry out the label distribution propagation. Experimental results and related analysis demonstrate the effectiveness of the proposal. ## update after rebuttal The authors' response has addressed my concerns, and after reviewing the other reviewers' comments, I support the acceptance of this paper. Therefore, I maintain my original score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The theoretical claim is about the convergence of the proposed method. Experimental Designs Or Analyses: Yes. This paper designs related experiments to show the effectiveness of the proposal. Supplementary Material: Not provided. Relation To Broader Scientific Literature: This paper discusses the effectiveness of label completion for the crowdsourcing problem, and proposes a label propagation method for label completion. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1 This paper studies the label completion problem in the crowdsourcing area, which is a very important issue, as the missing proportion is usually very high in real-world scenarios. 2 This paper proposes a label completion method based on label distribution propagation. Compared to the existing classical WSLC method, both worker similarity and label correlation are considered. 3 This paper theoretically analyzes the convergence of the proposed method. 4 This paper is well-written and easy to read. Weaknesses: 1 There are several label completion methods mentioned in the related work; why is only WSLC compared in the experiments? 
2 For the methods employed in this paper, it would be better to give a thorough analysis and discussion, such as explaining why Pearson correlation is selected to learn feature vectors rather than other correlation methods, and why cosine similarity is employed to obtain worker similarity. An introduction and discussion on the reasonableness of using these methods should be presented. Other Comments Or Suggestions: Regarding the weaknesses of WSLC, the paper only provides a conclusion without conducting a detailed analysis. It is necessary to further analyze in the introduction the reasons why WSLC has these weaknesses. Questions For Authors: 1 There are several label completion methods mentioned in the related work; why is only WSLC compared in the experiments? 2 For the methods employed in this paper, it would be better to give a thorough analysis and discussion, such as explaining why Pearson correlation is selected to learn feature vectors rather than other correlation methods, and why cosine similarity is employed to obtain worker similarity. Code Of Conduct: Affirmed. Overall Recommendation: 4
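For concreteness, the worker-similarity-weighted majority voting initialization described in the review's summary could look like the following toy sketch; the function name, label encoding (-1 for a missing label), and weights are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical sketch (not the paper's code) of initializing one instance's
# label distribution via worker-similarity-weighted majority voting.
# labels[j] is worker j's label for this instance (-1 = missing),
# sim[j] is worker j's similarity weight.
def init_label_distribution(labels, sim, num_classes):
    dist = np.zeros(num_classes)
    for lab, s in zip(labels, sim):
        if lab >= 0:                 # skip missing labels
            dist[lab] += s           # similarity-weighted vote
    total = dist.sum()
    # Fall back to the uniform distribution if no worker labeled the instance.
    return dist / total if total > 0 else np.full(num_classes, 1.0 / num_classes)

labels = [0, 1, 0, -1, 1]
sim = [0.9, 0.5, 0.8, 0.7, 0.4]
dist = init_label_distribution(labels, sim, num_classes=2)
# class 0 weight: 0.9 + 0.8 = 1.7; class 1 weight: 0.5 + 0.4 = 0.9
assert abs(dist[0] - 1.7 / 2.6) < 1e-12
```

In the paper, distributions initialized this way would then be refined by the propagation step before completing the missing labels.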
Rebuttal 1: Rebuttal: Thanks a lot for your comments. Please find our detailed responses to your concerns as follows. **Author Response to Q1:** We choose WSLC as the primary comparison method because WSLC is the most recent and relevant for our work. Moreover, both WSLC and our work are designed to address multi-class crowdsourcing tasks. In contrast, PMF and PMF-TLC are primarily designed for binary-class crowdsourcing tasks. To address the reviewer’s concerns, we conduct additional comparative experiments on four simulated datasets specifically designed for binary-class crowdsourcing tasks. The results are as follows: |Dataset|MV|GTIC|DEWSMV|MNLDP| |--|--|--|--|--| |biodeg (PMF)|73.18%|75.77%|76.30%|78.39%| |biodeg (PMF-TLC)|74.22%|78.38%|77.73%|**80.56%**| |biodeg (LDPLC)|**79.05%**|**81.36%**|**79.05%**|79.43%| |breast-w (PMF)|54.36%|62.43%|55.94%|51.36%| |breast-w (PMF-TLC)|77.83%|83.03%|79.54%|85.99%| |breast-w (LDPLC)|**83.12%**|**85.97%**|**83.12%**|**86.98%**| |credit-a (PMF)|59.57%|63.59%|59.28%|57.83%| |credit-a (PMF-TLC)|73.62%|79.91%|74.78%|82.03%| |credit-a (LDPLC)|**82.90%**|**80.48%**|**82.90%**|**83.19%**| |diabetes (PMF)|41.93%|68.43%|43.88%|37.89%| |diabetes (PMF-TLC)|74.87%|**80.43%**|77.08%|79.08%| |diabetes (LDPLC)|**79.04%**|79.18%|**79.04%**|**79.56%**| || From the experimental results, it can be observed that LDPLC outperforms PMF and PMF-TLC in most binary-class crowdsourcing tasks, further demonstrating its effectiveness and applicability in label completion tasks. In the final version of the paper, we will add the reasons why we only compare WSLC in the experiments. **Author Response to Q2:** To address the reviewer's concerns, we conduct two groups of experiments: (1) We compare the performance of Pearson correlation and mutual information for learning workers' feature vectors. We refer to the method with mutual information as LDPLC-MI. 
(2) We compare the performance of cosine similarity and Euclidean distance for estimating worker similarity using the learned feature vectors. We refer to the method with Euclidean distance as LDPLC-ED. We conduct the first group of experiments on all seven simulated datasets with discrete variables. The results are as follows: |Dataset|MV|GTIC|DEWSMV|MNLDP| |--|--|--|--|--| |audiology (LDPLC-MI)|80.53%|79.65%|80.53%|78.32%| |audiology (LDPLC)|80.53%|79.65%|80.53%|78.32%| |breast-cancer (LDPLC-MI)|74.48%|74.48%|74.48%|71.68%| |breast-cancer (LDPLC)|**75.17%**|**75.17%**|**75.52%**|**75.52%**| |car (LDPLC-MI)|83.56%|83.56%|83.56%|**85.76%**| |car (LDPLC)|83.56%|83.56%|83.56%|85.65%| |kr-vs-kp (LDPLC-MI)|**85.54%**|**85.51%**|**85.51%**|**85.70%**| |kr-vs-kp (LDPLC)|85.45%|85.42%|85.45%|85.64%| |mushroom (LDPLC-MI)|**91.31%**|**91.30%**|91.30%|91.49%| |mushroom (LDPLC)|91.30%|91.29%|91.30%|91.49%| |tic-tac-toe (LDPLC-MI)|**81.63%**|**81.63%**|**81.63%**|80.48%| |tic-tac-toe (LDPLC)|81.52%|81.42%|81.42%|**80.69%**| |vote (LDPLC-MI)|82.53%|82.53%|82.53%|84.83%| |vote (LDPLC)|**82.99%**|**82.99%**|**82.99%**|**85.06%**| || From the results, it can be found that there is no significant difference in performance between Pearson correlation and mutual information. However, it's worth noting that mutual information can only be used to measure the correlation of discrete variables. With the aim of broadening the applicability of the proposed LDPLC, we choose Pearson correlation as the preferred method to learn workers' feature vectors. We conduct the second group of comparative experiments on the "LabelMe" datasets. The results are as follows: |Dataset|MV|GTIC|DEWSMV|MNLDP| |--|--|--|--|--| |LabelMe (LDPLC-ED)|81.4%|81.4%|81.4%|81.7%| |LabelMe (LDPLC)|**81.7%**|**81.7%**|**81.6%**|**82.5%**| || The experimental results clearly indicate that cosine similarity is better than Euclidean distance in estimating worker similarity. 
Therefore, we choose cosine similarity as the preferred method to estimate worker similarity. In the final version of the paper, we will give related analysis and discussion of the methods employed in this paper. **Author Response to WSLC's Weaknesses:** The core assumption of WSLC is that workers with similar cognitive abilities will annotate similar labels on the same instances. Therefore, it completes each missing label solely based on the labels annotated by similar workers on this corresponding instance. However, in real-world crowdsourcing scenarios, each instance usually has few labels, thus WSLC struggles to complete its missing labels. By introducing the assumption that the same worker will also annotate similar labels on similar instances, the missing labels can not only be inferred from labels of similar workers on the same instance but also absorb the distribution information of labels from all workers across neighboring instances. In the final version of the paper, we will further analyze the reasons why WSLC has these weaknesses in the introduction.
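The two similarity measures compared in this rebuttal can be illustrated with a small sketch (hypothetical feature vectors, not the paper's implementation); it highlights that cosine similarity is scale-invariant while Euclidean distance is not:

```python
import numpy as np

# Hypothetical illustration of estimating worker similarity from learned
# feature vectors, comparing cosine similarity and Euclidean distance.
def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy feature vectors for three workers; in the paper these would be
# learned (e.g. via Pearson correlation), here they are made up.
w1 = np.array([0.9, 0.8, 0.7])
w2 = 2.0 * w1                      # same annotation "direction", larger scale
w3 = np.array([0.7, -0.8, 0.9])

# Cosine similarity judges w1 and w2 identical (scale-invariant),
# whereas Euclidean distance would separate them.
assert abs(cosine_similarity(w1, w2) - 1.0) < 1e-12
assert cosine_similarity(w1, w3) < cosine_similarity(w1, w2)
assert np.linalg.norm(w1 - w2) > 1.0
```

This scale-invariance is one plausible reason cosine similarity behaves more robustly here, consistent with the comparison against LDPLC-ED reported above.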
Summary: This paper proposes a novel label distribution propagation-based label completion (LDPLC) algorithm to address the sparsity issue in crowdsourced label matrices. The existing worker similarity-based label completion (WSLC) algorithm only considers the correlation of labels annotated by different workers on individual instances, ignoring the correlation of the labels annotated by different workers among similar instances. To fill this gap, LDPLC initializes label distributions using worker-similarity-weighted majority voting and then propagates these distributions iteratively to absorb information from neighboring instances. Finally, LDPLC completes each missing label based on the converged label distributions. Experimental results on both real-world and simulated datasets validate the effectiveness of LDPLC. Claims And Evidence: The most important claim made in the paper is that the proposed LDPLC considers not only the correlation of the labels annotated by different workers on each individual instance, but also the correlation of the labels annotated by different workers among similar instances. By doing so, LDPLC can further improve the performance of label completion. To support this claim, LDPLC initializes label distributions through worker-similarity-weighted majority voting, which utilizes the correlation of the labels annotated by different workers on each individual instance. Subsequently, LDPLC iteratively propagates these distributions to absorb information from neighboring instances, which utilizes the correlation of the labels annotated by different workers among similar instances. The results shown in Figure 3, Table 1, and Table 2 demonstrate that LDPLC can further improve the performance of label completion. Methods And Evaluation Criteria: Yes. LDPLC utilizes the correlation of the labels annotated by different workers among similar instances through label distribution propagation, which effectively fills the gap left by WSLC. 
Regarding evaluation criteria, this paper adopts integration accuracy, which is commonly used in other label completion studies. Theoretical Claims: Yes. The paper provides a detailed theoretical analysis of the convergence properties of LDPLC. I have verified the correctness of the theoretical analysis. Experimental Designs Or Analyses: Yes. The paper conducts extensive experiments on both real-world and simulated datasets, providing strong empirical evidence to support the effectiveness of LDPLC. Supplementary Material: Yes. The paper conducts extensive experiments on both real-world and simulated datasets, providing strong empirical evidence to support the effectiveness of LDPLC. Relation To Broader Scientific Literature: This paper makes a contribution to the field of label completion for crowdsourcing by addressing a critical limitation of existing work. The proposed LDPLC utilizes the correlation of the labels annotated by different workers among similar instances through label distribution propagation, which effectively fills the gap left by existing work and thus further improves the performance of label completion. Essential References Not Discussed: No. All essential references have been cited/discussed in the paper. Other Strengths And Weaknesses: Strengths: 1) This paper reveals a critical limitation of existing work: WSLC considers solely the correlation of the labels annotated by different workers on each individual instance, while totally ignoring the correlation of the labels annotated by different workers among similar instances. 2) To fill this gap, this paper proposes a label distribution propagation-based label completion (LDPLC) algorithm. The proposed LDPLC utilizes the correlation of the labels annotated by different workers among similar instances through label distribution propagation. 3) This paper provides a detailed theoretical analysis of the convergence properties of LDPLC (subsection 3.4), ensuring the algorithm's reliability. 
4) This paper provides extensive experiments on both real-world and simulated datasets, providing strong empirical evidence to support the effectiveness of LDPLC. The results show that LDPLC significantly outperforms WSLC in terms of label integration accuracy across multiple datasets and label integration algorithms. Weaknesses: 1) The key motivation for proposing LDPLC in this paper lies in the limitation that WSLC does not take into account the correlation of the labels annotated by different workers among similar instances. However, why WSLC has this limitation and what consequences it may lead to are not discussed. To further clarify the motivation of this paper, it is necessary to address these aspects. 2) As shown in Tables 1 and 2, the proposed LDPLC performs significantly worse than WSLC only on the “tic-tac-toe” dataset when the label integration algorithm is MNLDP. Why does this happen? A detailed explanation is necessary. Other Comments Or Suggestions: I have found a small issue in the writing: 1) On line 107 on page 2, the full name of the “EM” framework should be provided when it first appears. Questions For Authors: See the Weaknesses part: 1) The key motivation for proposing LDPLC in this paper lies in the limitation that WSLC does not take into account the correlation of the labels annotated by different workers among similar instances. However, why WSLC has this limitation and what consequences it may lead to are not discussed. To further clarify the motivation of this paper, it is necessary to address these aspects. 2) As shown in Tables 1 and 2, the proposed LDPLC performs significantly worse than WSLC only on the “tic-tac-toe” dataset when the label integration algorithm is MNLDP. Why does this happen? A detailed explanation is necessary. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** The key motivation for proposing LDPLC in this paper lies in the limitation that WSLC does not take into account the correlation of the labels annotated by different workers among similar instances. However, why WSLC has this limitation and what consequences it may lead to are not discussed. To further clarify the motivation of this paper, these aspects are necessary to address. **Author Response to Q1:** Thanks for your valuable comments. The core assumption of WSLC is that workers with similar cognitive abilities will annotate similar labels on the same instances. Therefore, it completes each missing label solely based on the labels annotated by similar workers on this corresponding instance. However, in real-world crowdsourcing scenarios, each instance usually has few labels, thus WSLC struggles to complete its missing labels. By introducing the assumption that the same worker will also annotate similar labels on similar instances, the missing labels can not only be inferred from labels of similar workers on the same instance but also absorb the distribution information of labels from all workers across neighboring instances. In the final version of the paper, we will include these discussions on why WSLC has this limitation and what consequences it may lead to. Thanks again for your valuable comments. **Q2:** As shown in Tables 1 and 2, the proposed LDPLC performs significantly worse than WSLC only on the “tic-tac-toe” dataset when the label integration algorithm is MNLDP. Why does this happen? A detailed explanation is necessary. **Author Response to Q2:** Thanks for your valuable comments. Through an in-depth understanding of the “tic-tac-toe” dataset, we found that this dataset has a unique feature structure. This dataset encodes the complete set of possible board configurations at the end of tic-tac-toe games. 
Each instance contains 9 structured features, corresponding to the nine positions on a 3×3 game board, precisely recording the occupancy state of each cell: player X (marked as ‘x’), opponent O (‘o’), or unoccupied (‘b’). A different occupancy state of even a single cell on the board can directly change the outcome of the game. However, two such instances are still considered similar because only a small number of features differ, leading to incorrect information being propagated across these similar instances. Since MNLDP also has a propagation stage, it propagates more erroneous information, which leads to a decrease in integration accuracy. In the final version of the paper, we will include these discussions on why the proposed LDPLC performs significantly worse than WSLC only on the “tic-tac-toe” dataset when the label integration algorithm is MNLDP. Thanks again for your valuable comments. **Other Comments Or Suggestions:** On line 107 in page 2, the full name of the “EM” framework should be provided when it first appears. **Author Response:** Thanks for your valuable comments. In the final version of the paper, we will provide the full name of EM, i.e., Expectation-Maximization. Thanks again. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. After considering the other reviewers' comments, I have decided to maintain my original rating.
Summary: 1. The paper primarily addresses the shortcomings in WSLC, which traditionally considers only the correlations among labels annotated by different workers for individual instances. The authors propose the LDPLC algorithm, which additionally accounts for correlations among labels annotated by different workers across similar instances. 2. Overall, the reported results demonstrate superior performance, consistently outperforming most existing methods on both real-world and simulated datasets. 3. The robustness of the proposed algorithm is well-supported by thorough theoretical analysis and comprehensive parameter sensitivity analysis. Claims And Evidence: The method proposed in the paper was verified in the experimental stage. Methods And Evaluation Criteria: The choice of comparison methods, evaluation criteria, and datasets is appropriate and well-justified. Theoretical Claims: I am not an expert in this field, but upon reviewing the theoretical claims briefly, I found no obvious errors. One minor limitation is the absence of a consistency analysis for other architectures (e.g., kernel methods or deep neural networks). Experimental Designs Or Analyses: 1. The experimental design is comprehensive, including extensive comparisons across multiple datasets and thorough parameter analysis. 2. A potential limitation is the absence of detailed descriptions of dataset characteristics. Therefore, it remains unclear whether the proposed method can perform equally well in scenarios involving a large number of categories. Supplementary Material: There are no supplementary materials for this work. Relation To Broader Scientific Literature: This paper integrates manifold consistency with crowdsourcing learning. 
Essential References Not Discussed: The core contribution of this paper is its consideration that different workers with similar cognitive abilities tend to annotate similar labels for the same instance, and that the same worker tends to annotate similar labels across similar instances. In other words, the authors address manifold consistency among workers and labels, as well as among instances and labels. Therefore, I suggest that the paper should include discussions on relevant literature related to manifold consistency. Other Strengths And Weaknesses: Strengths: (1) The paper considers multiple manifold consistency issues within the crowdsourcing learning process and employs local linear embedding for Label Distribution Propagation. (2) Experiments conducted across various datasets demonstrate the feasibility and effectiveness of the proposed method. (3) The paper also provides a convergence analysis. Weaknesses: (1) The current work is limited to linear scenarios. It would be valuable to discuss whether this approach can be extended to kernel-based methods or deep neural networks. (2) The authors have not discussed the computational complexity of the algorithm. Specifically, computing neighbors in feature space can be time-consuming, which raises my concerns about practical applicability to real-world scenarios. Other Comments Or Suggestions: The text in Figures 1 and 3 is too small, reducing readability. I suggest enlarging the font size or redesigning these figures to enhance clarity. Questions For Authors: 1. In practical experiments (rather than theoretical analysis alone), what is the convergence efficiency of this algorithm? 2. Can the proposed approach be extended to more complex scenarios, such as kernel-based methods or deep neural networks? This is particularly relevant because, in practice, feature data often do not satisfy linear assumptions. 3. Could the authors discuss the computational complexity of the proposed algorithm? 
If the authors address these questions clearly, I am willing to raise my score. ## update after rebuttal The authors' response has partially addressed my concerns. After reviewing the other reviewers' comments, I think the article should be supplemented with a kernel-based extension method to improve its completeness, which would not be difficult. Therefore, I maintain my original score. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks a lot for your comments. Please find our detailed responses to your concerns as follows. **Author Response to Convergence Efficiency:** As shown in Figure 4, LDPLC converges after just 4 iterations on the “LabelMe” dataset. To address the reviewer's concerns, we further observe the convergence efficiency of LDPLC on the first five simulated datasets. The integration accuracies (%) of MV after label completion by LDPLC as T varies are as follows:

|Dataset\T|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|
|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|
|anneal|89.64|94.65|94.99|95.66|95.55|95.55|95.55|95.55|95.55|95.55|95.55|95.55|95.55|95.55|95.55|
|audiology|89.38|93.36|94.69|94.25|94.69|94.25|94.69|94.25|94.25|94.25|94.25|94.25|94.25|94.25|94.25|
|autos|88.29|84.88|90.73|90.73|90.73|91.22|91.22|91.22|91.22|91.22|91.22|91.22|91.22|91.22|91.22|
|balance-scale|76.80|87.20|86.72|87.52|87.04|87.36|87.68|87.36|87.36|87.36|87.36|87.52|87.20|87.52|87.20|
|biodeg|70.14|72.42|72.80|72.32|71.94|72.23|72.32|72.13|72.13|72.23|71.85|71.85|72.13|72.13|72.23|

These results confirm that LDPLC can achieve convergence within 10 iterations. In the final version of the paper, we will add the convergence efficiency analysis of LDPLC. **Author Response to Extension:** To the best of our knowledge, local linear embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method [1]. For most datasets, the assumption of local linear correlation in LLE is satisfied, which is why LDPLC performs well on most datasets. However, if the local space is particularly complex, the assumption of LLE may not be satisfied. Therefore, to handle more complex nonlinear problems, we think LDPLC can be extended to more complex scenarios, such as kernel-based methods or deep neural networks.
By leveraging kernel methods such as Kernel LLE, nonlinear feature information can be extracted by mapping the input data into a high-dimensional feature space. Moreover, deep neural networks can map a nonlinear feature space to a linear feature space. In the new feature space, we can better find neighbors and optimize their weights. These will become important directions for our future work. [1] Theoretical Connection between Locally Linear Embedding, Factor Analysis, and Probabilistic PCA. **Author Response to Computational Complexity:** Below, we provide a detailed analysis of LDPLC’s computational complexity. In Algorithm 1, lines 3-8 learn feature vectors with a time complexity of $O(R(N+M(n_ln_a|D_r|)))$, where $n_l$ and $n_a$ are the average number of values for a label variable and an original feature variable, respectively. Lines 9-13 estimate worker similarity with a time complexity of $O(R^2M)$. Lines 14-22 initialize label distributions with a time complexity of $O(NRQ)$. Lines 23-26 identify neighbors and optimize their weights with a time complexity of $O(N^2M+NK^3)$. Lines 27-29 propagate the label distributions with a time complexity of $O(TNKQ)$. Finally, lines 30-36 complete missing labels with a time complexity of $O(NRQ)$. Keeping only the highest-order terms, the overall time complexity of LDPLC is $O(RM(n_ln_a|D_r|)+R^2M+NRQ+N^2M+NK^3+TNKQ)$. In addition, we compare the run times of WSLC and LDPLC, as follows:

|Dataset|WSLC|LDPLC|
|--|--|--|
|LabelMe|2.48s|12.40s|

The experimental results show that LDPLC takes more time than WSLC. Although LDPLC increases the run time, it achieves better performance. In the final version of the paper, we will add the computational complexity of LDPLC. **Author Response to A Large Number of Categories:** Among 34 simulated datasets, “audiology” and “letter” contain 24 and 26 categories, respectively.
In addition, to further address the reviewer’s concerns, we simulate a dataset with 10000 instances and 100 categories, named “Simulation_100”, to conduct a new experiment. The results are as follows:

|Dataset|MV_WSLC|MV_LDPLC|GTIC_WSLC|GTIC_LDPLC|DEWSMV_WSLC|DEWSMV_LDPLC|MNLDP_WSLC|MNLDP_LDPLC|
|--|--|--|--|--|--|--|--|--|
|Simulation_100|90.50%|**92.70%**|90.50%|**92.66%**|90.52%|**92.64%**|91.56%|**92.24%**|

From these results, it can be found that the integration accuracy of each integration algorithm improves significantly after label completion using LDPLC compared to WSLC. These results further verify that LDPLC can perform equally well in scenarios involving a large number of categories. In the final version of the paper, we will include more detailed descriptions of dataset characteristics. **Author Response to Essential References:** In the final version of the paper, we will add a discussion of the relevant literature related to manifold consistency. **Author Response to Figures 1 and 3:** Indeed, the text in Figures 1 and 3 is too small, reducing readability. In the final version of the paper, we will enlarge the font size and optimize the layout of the figures and tables.
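As background on the neighbor-weight optimization step that the LLE-based discussion above relies on, here is a minimal NumPy sketch of solving for locally linear reconstruction weights. This is a generic illustration of standard LLE, not the authors' exact implementation; the function name `lle_weights` and the regularization constant `reg` are illustrative assumptions.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights w minimizing ||x - sum_j w_j * neighbor_j||^2
    subject to sum(w) = 1, via the regularized local Gram system C w = 1."""
    K = len(neighbors)
    Z = neighbors - x                        # shift so x sits at the origin
    C = Z @ Z.T                              # K x K local Gram matrix
    trace = np.trace(C)
    C += reg * (trace if trace > 0 else 1.0) * np.eye(K)  # stabilize C
    w = np.linalg.solve(C, np.ones(K))       # solve C w = 1
    return w / w.sum()                       # enforce the sum-to-one constraint

rng = np.random.default_rng(0)
nbrs = rng.normal(size=(4, 3))               # 4 neighbors in a 3-D feature space
x = nbrs.mean(axis=0)                        # a point in their affine span
w = lle_weights(x, nbrs)
print(np.round(w, 3))                        # → [0.25 0.25 0.25 0.25]
```

When the point lies in the affine span of its neighbors, as here, the weights reconstruct it exactly; kernelizing this step (computing C from kernel evaluations instead of raw features) is one way to realize the kernel LLE extension mentioned in the rebuttal.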
Summary: This paper proposes a crowdsourcing label completion method to complement subsequent truth-inference/label-integration methods. The proposed method primarily focuses on improving the existing method WSLC, which “considers solely the correlation of the labels annotated by different workers on per individual instance while totally ignoring the correlation of the labels annotated by different workers among similar instances”. ## update after rebuttal Thanks for the feedback from the authors. I have raised my scores accordingly. Claims And Evidence: Regarding this point, my main concern lies in the experiments. Please refer to the reviews on “Experimental Designs Or Analyses”. Methods And Evaluation Criteria: Regarding this point, my main concern lies in the experiments. Please refer to the reviews on “Experimental Designs Or Analyses”. Theoretical Claims: Yes, I have reviewed the methodology part of this paper. Experimental Designs Or Analyses: The experiment section has the following main issues: 1) Only one real-world dataset, LabelMe, is used. In fact, there are many real-world crowdsourcing annotation datasets available, e.g., [1, 2]. 2) The truth inference methods considered lack the classic DS method and other methods. It is recommended to include the DS method and the methods used in benchmark [1]. 3) The experimental analysis is insufficient. In summary, for experiments on real-world datasets, all results only involve Figure 3. [1] Truth inference in crowdsourcing: Is the problem solved? VLDB 2017. [2] Deep learning from crowds. AAAI 2018. Supplementary Material: The paper does not include supplementary materials. Relation To Broader Scientific Literature: The key contributions of the paper are important for research on supervised learning. Essential References Not Discussed: It is recommended to cite Reference [1]. [1] Truth inference in crowdsourcing: Is the problem solved? VLDB 2017.
Other Strengths And Weaknesses: Strengths: 1) First, the overall narrative of the paper is clear and easy to follow, with logical flow (especially in the introduction section). The paper avoids excessive embellishment. 2) In the experiment section, particularly on the real-world dataset LabelMe, the proposed method outperforms the method it improves upon, WSLC. Weaknesses: 1) Regarding the research motivation and the research problem. The research motivation and the research problem are clearly introduced, but the significance of the addressed problem is not sufficiently compelling. This is because the paper primarily focuses on improving the existing method WSLC to eliminate its limitation: it “considers solely the correlation of the labels annotated by different workers on per individual instance while totally ignoring the correlation of the labels annotated by different workers among similar instances”. Furthermore, the proposed method aims to enhance the accuracy of subsequent truth inference processes. However, the problem of truth inference for crowdsourced annotations is a long-standing issue, with mature benchmark datasets and review papers already available since 2017, e.g., [1]. As a long-standing research problem, publishing high-level papers on truth inference requires higher standards and the resolution of sufficiently significant pain points. 2) Regarding the experiments. Please refer to the above reviews on “Experimental Designs Or Analyses”. [1] Truth inference in crowdsourcing: Is the problem solved? VLDB 2017. Other Comments Or Suggestions: none. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks a lot for your comments. Please find our detailed responses to your concerns as follows. **Author Response to Research Motivation and Research Problem:** Label integration (truth inference) is indeed a long-standing research problem. Over the past decades, numerous algorithms have been proposed to improve the performance of label integration. These algorithms have gradually reached a consensus: when worker annotation is more accurate than random annotation, the more noisy labels an instance receives, the easier it becomes to infer its unknown true label. However, in real-world scenarios, each worker typically annotates only a small number of instances, and few labels are typically collected per instance to reduce cost, resulting in a highly sparse crowdsourced label matrix. This fact leads to label integration failing to achieve the expected performance relying solely on the existing labels in the label matrix. To address this issue, label completion has been proposed to fill in missing labels in sparse label matrices and is gaining increasing attention. To the best of our knowledge, WSLC is currently the most advanced and convincing benchmark for label completion. However, WSLC solely considers the correlation of the labels annotated by different workers on per individual instance. Our proposed algorithm, LDPLC, is the first to jointly model both worker similarity and instance similarity. Specifically, LDPLC first estimates worker similarity by learning feature vectors, enabling a more accurate initialization of label distributions. Next, it finds neighbors and employs locally linear embedding (LLE) to optimize their weights, capturing the geometric structure among instances. Finally, LDPLC propagates label distributions across instances, allowing missing labels to absorb information from similar instances, thereby overcoming the limitations of WSLC. 
**Author Response to Experimental Designs:** We have carefully considered these issues and conducted additional experiments to support our work. Below are the updates we make: (1) In addition to “LabelMe”, we have now included two real-world crowdsourced datasets, “Ruters” and “Leaves”, which are also collected from the AMT platform. “Ruters” contains 1799 instances, 5410 labels, 8 classes, and 38 workers. “Leaves” contains 384 instances, 3840 labels, 6 classes, and 83 workers. (2) We have added the DS, KOS, and IWMV algorithms to the experiments to provide a more comprehensive comparative analysis. The experimental results are as follows:

|Dataset|MV_WSLC|MV_LDPLC|GTIC_WSLC|GTIC_LDPLC|DEWSMV_WSLC|DEWSMV_LDPLC|MNLDP_WSLC|MNLDP_LDPLC|DS_WSLC|DS_LDPLC|KOS_WSLC|KOS_LDPLC|IWMV_WSLC|IWMV_LDPLC|
|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|
|LabelMe|76.40%|**81.70%**|76.40%|**81.70%**|76.60%|**81.60%**|80.50%|**82.50%**|76.70%|**81.50%**|76.30%|**81.70%**|76.30%|**81.70%**|
|Ruters|70.37%|**79.32%**|70.48%|**79.32%**|70.37%|**79.32%**|73.49%|**79.54%**|70.43%|**79.32%**|70.37%|**79.32%**|70.48%|**79.32%**|
|Leaves|63.80%|**65.10%**|63.80%|**65.10%**|63.80%|**65.10%**|64.58%|**65.36%**|63.80%|**65.10%**|64.06%|**65.10%**|63.80%|**65.10%**|

From these results, it can be found that the integration accuracies of both classic algorithms (MV, DS, KOS) and the latest algorithms (GTIC, IWMV, DEWSMV, MNLDP) improve significantly after label completion using LDPLC compared to WSLC across all three datasets. These findings confirm that LDPLC approximates crowds more accurately than WSLC and further improves the performance of all label integration algorithms. These results once again verify the effectiveness and robustness of LDPLC. In the final version of the paper, we will further extend the experiments and analysis on real-world datasets.
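For context, the majority voting (MV) baseline appearing in the table above simply takes each instance's most frequent crowd label. A minimal sketch, assuming a hypothetical dict-of-lists layout for the crowdsourced label matrix (the names `majority_vote` and `votes` are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(labels_per_instance):
    """Infer each instance's integrated label as its most frequent crowd label.
    labels_per_instance: dict mapping instance id -> list of worker labels."""
    return {inst: Counter(labels).most_common(1)[0][0]
            for inst, labels in labels_per_instance.items() if labels}

votes = {"img1": ["cat", "cat", "dog"], "img2": ["dog"]}
print(majority_vote(votes))  # → {'img1': 'cat', 'img2': 'dog'}
```

Label completion methods such as WSLC and LDPLC fill in missing entries of this matrix before such an integration step, which is why a denser completed matrix raises MV's accuracy.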
**Author Response to Essential References:** By studying this reference, we have gained a clearer understanding of the differences between various classic algorithms and the future directions of crowdsourcing research, which is highly beneficial for our future work. In the final version of the paper, we will cite [1] and discuss its relationship with our work.
GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via Multi-Step Reasoning
Accept (poster)
Summary: The paper introduces GRADEO, a model for evaluating text-to-video generation using multi-step reasoning. The authors propose the GRADEO-Instruct dataset, which contains 3.3k videos and 16k human annotations, to train GRADEO to mimic human evaluation. The evaluation metrics cover multiple dimensions, including quality, aesthetic, consistency, alignment, rationality, safety, and creativity. The paper evaluates various T2V models with the proposed GRADEO and shows that GRADEO aligns better with human evaluations than existing methods. --- **[Update after Rebuttal] Final Comment by Reviewer RczA** I thank the authors for providing a detailed response. Overall, this paper explores an interesting idea by incorporating multi-step reasoning into T2V evaluation. However, the rebuttal does not fully address my concern about the limited scale of the benchmark dataset, which was also pointed out by Reviewer hvju. Specifically, the table included in the rebuttal clearly shows that the model performance improves with more training data, indicating the current dataset size is insufficient. While the authors explained that it is limited by resources, I believe there is potential to explore automated pipelines to scale up data collection more efficiently. As such, I will maintain my original rating as weak accept. Claims And Evidence: - As one of the motivations of GRADEO, the authors claim that current LLM-based evaluation methods only provide scores without rationale, which is a reasonable claim and provides an inspiring insight for involving the idea of multi-step reasoning in T2V evaluation. - The authors claim that current T2V models struggle with high-level semantic understanding. Such weaknesses are confirmed by the proposed benchmark, which shows that most of the state-of-the-art models perform poorly in rationality and creativity scores.
Methods And Evaluation Criteria: - To fine-tune an LLM for video evaluation, the authors collected the GRADEO-Instruct dataset with 3.3k videos and 16k human annotations. However, I doubt that a dataset of this scale is large enough to train a good evaluator. It would be interesting to see an experiment ablating different scales of the training dataset (e.g., including 1k / 2k / 3k videos for training) and checking how the dataset scale affects the model performance. - The GRADEO-Instruct dataset only includes AI-generated videos, limiting its perspective and knowledge on synthetic videos. Most LLM-based methods, such as VideoScore, include both real and fake videos for training, which is a more reasonable setting and could teach the model to distinguish videos from these two distributions. - The evaluation criteria are comprehensive and well-structured. They include seven evaluation dimensions, covering both low-level perception (e.g., quality, aesthetic) and high-level semantic reasoning (e.g., rationality, creativity). [1] "VIDEOSCORE: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: - The experiments are comprehensive. To evaluate the proposed GRADEO evaluator model, the authors provide a correlation analysis with human scores and compare against a wide variety of models. Such experiments are also conducted on multiple datasets. - Meanwhile, the authors also benchmark the current T2V models using the proposed GRADEO evaluator model and provide some insights (e.g., most models perform poorly in creativity and rationality) based on the results. Supplementary Material: The authors do not provide supplementary material. Relation To Broader Scientific Literature: The paper introduces a novel and well-motivated T2V evaluation model by addressing the limitation that current evaluation models usually provide only scores without the rationale behind them. By annotating the videos with multi-step evaluation reasoning, the model's performance can be further improved through chain-of-thought reasoning. Essential References Not Discussed: This paper cites most of the related works well. Other Strengths And Weaknesses: - The overall presentation is good. The paper is well written, and the figures are properly plotted, which helps readers quickly understand the proposed method. Other Comments Or Suggestions: - Line 305, with those of "GRADE" -> there is a missing "O" Questions For Authors: - Have the authors considered adopting an automatic pipeline to expand the dataset scale at a smaller cost? - What is the model architecture of GRADEO? Since the GRADEO model is one of the main contributions of this paper, it would be better to elaborate on how the model backbone is designed. For example, how is the video encoded and input to the proposed vision-language model? Different model designs could also significantly affect the model performance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your comprehensive comments and constructive advice. We are very excited to see that the reviewer finds our work (1) "provides an **inspiring insight**", (2) "evaluation criteria is **comprehensive and well-structured**", (3) "**comprehensive experiments**", (4) and "paper is well written". Your suggestions and questions are valuable, and we address your concerns as follows. **[Q1] Concerns about the size of the dataset.** **[A1]** We conducted the following experiments and analyses to allay your concerns as much as possible. We trained on different amounts of training data and tested on the test set. The similarity and consistency with human evaluation for each dimension are shown in the table below. We found that the models trained on 1k and 2k data are able to learn the entire assessment format, and their outputs are formatted correctly. There is a performance improvement as the data increases.

| | Quality | Aesthetic | Consistency | Alignment | Rationality | Safety | Creativity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1k | 0.589/0.533/0.526 | 0.697/0.666/0.404 | 0.502/0.238/0.500 | 0.549/0.511/0.821 | 0.552/0.513/0.633 | 0.727/0.765/0.360 | 0.675/0.654/0.792 |
| 2k | 0.626/0.600/0.474 | 0.667/0.619/0.333 | 0.565/0.414/0.477 | 0.560/0.497/0.786 | 0.551/0.510/0.571 | 0.712/0.756/0.380 | 0.771/0.756/0.792 |
| all | 0.743/0.715/0.404 | 0.717/0.719/0.351 | 0.634/0.641/0.341 | 0.601/0.418/0.439 | 0.606/0.515/0.560 | 0.747/0.762/0.360 | 0.797/0.759/0.542 |

In fact, we believe that orders-of-magnitude more data (10k or even 100k videos with human assessment reasons) and a better training approach (as opposed to cost-saving LoRA fine-tuning) may result in a more robust T2V evaluation model. Our work has been constrained by GPU computational resources and funding limitations, preventing further expansion.
In addition to this, **some of the previous AIGC quality assessment datasets are also not very large**; the T2VQA-DB [1] dataset contains 10,000 videos, the MQT [2] dataset has only 10,005 videos, the FETV [3] dataset has only 2,476 videos, and the LGVQ [4] dataset has only 2,808 videos. Moreover, the above datasets only contain human subjective ratings, which cannot be used for training interpretable models. **[Q2] Typo error.** **[A2]** Thank you for your careful reading. We will correct this issue and carefully check for any other typo errors in the revised version of the paper. **[Q3] Automatic pipeline to expand the dataset scale.** **[A3]** Your comments are very insightful. We used an automated approach based on GPT-4o when generating the instruction tuning dataset from human assessment reasons, reducing the cost of organizing human assessment reasons into assessment CoT data. The main bottleneck in our dataset construction is human annotation. Instruction tuning for MLLMs requires human-assessed instruction datasets, and collecting human assessments for each video is time-consuming and costly. In order to minimize the impact of annotators' own biases on the evaluation dataset, we exclude videos from the dataset if there is a significant disparity between annotators' ratings. Overall, human evaluations are the **gold standard** and proof of the credibility of the assessment model. Collecting human assessment reasons is a step that we believe cannot be replaced by an automated approach, which is why the **high dataset construction cost limits the size of the dataset**. We also hope for more efficient methods to scale human annotation in the future. **[Q4] Model architecture.** **[A4]** We adopt Qwen2-VL-7B [5] as our base model.
Qwen2-VL-7B adopts a tandem structure of a ViT and Qwen2, and introduces two major architectural innovations: (1) It fully supports native dynamic resolution and can handle image inputs of arbitrary resolution; images of different sizes are converted into dynamic numbers of tokens, with a minimum of only 4. (2) It proposes M-RoPE, which decomposes the rotary position embedding into three parts: time, height, and width, so that the large language model can integrate the positional information of one-dimensional text, two-dimensional images, and three-dimensional video. [1] Subjective-aligned dataset and metric for text-to-video quality assessment. ACM-MM 2024. [2] Measuring the quality of text-to-video model outputs: Metrics and dataset. [3] FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation. NeurIPS 2023. [4] Benchmarking Multi-dimensional AIGC Video Quality Assessment: A Dataset and Unified Model. [5] Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution. --- Finally, we deeply appreciate the thoughtful questions and suggestions, which have greatly contributed to improving our work and will be incorporated into our revised manuscript. **We sincerely hope it sufficiently addresses your concerns and earns your recognition.**
While the authors explained that it is limited by resources, I believe there is potential to explore automated pipelines to scale up data collection more efficiently. As such, I will maintain my original rating as weak accept. --- Reply to Comment 1.1.1: Comment: Thanks for your reply. We would like to further clarify that our training dataset is sufficient to fully stimulate the model's video evaluation ability. To address your concern, we respond from the following two perspectives: - (1) The table presented in the previous rebuttal A1 was too coarse-grained to effectively demonstrate how performance improves as the data volume increases, which may have caused some confusion. To address this, we provide comparisons of SROCC and PLCC across a broader range of training data amounts, presented through a curve plot (Figure 1: SROCC (↑) and PLCC (↑) with respect to varying training dataset sizes. Both metrics improve with larger datasets, though the performance gains gradually plateau as the dataset size increases. Anonymous link: https://imgur.com/a03N3iG) along with Table 1, showing the performance growth clearly. It can be noticed that at around 3K samples, the performance improvement from additional data becomes relatively small, so we chose 3.3K samples as a good balance between the cost of data collection and the performance of the model. Therefore, we believe that our 3.3K samples are sufficient to stimulate the model's video evaluation ability, especially given that the marginal effect of adding more data becomes increasingly negligible. - (2) We would like to emphasize that **our work introduces one of the first human preference-based video evaluation datasets, aiming to teach the model human-like video evaluation capabilities, especially in high-level semantic understanding and reasoning**. Our work establishes a foundation and benchmark in this area, upon which future research can build to further advance this important and promising direction.
Notably, after aligning with human preferences using our data, our model's video evaluation capabilities significantly surpass SOTA MLLMs, such as GPT-4o and Gemini-1.5-Pro (please kindly check Tables 2,3 in our main paper). Table 1: SROCC (↑) and PLCC (↑) comparison on various sizes of training datasets. The Δ SROCC and Δ PLCC represent the performance improvement between the current samples and the -100 samples metrics. |training data size|SROCC(↑)|ΔSROCC|PLCC(↑)|ΔPLCC| |-----|-------|-------|-------|-------| | 2000 | 0.636 | - | 0.593 | - | | 2100 | 0.652 | 0.016 | 0.608 | 0.015 | | 2200 | 0.663 | 0.011 | 0.619 | 0.011 | | 2300 | 0.669 | 0.006 | 0.626 | 0.007 | | 2400 | 0.674 | 0.005 | 0.633 | 0.007 | | 2500 | 0.682 | 0.008 | 0.641 | 0.008 | | 2600 | 0.686 | 0.004 | 0.644 | 0.003 | | 2700 | 0.688 | 0.002 | 0.645 | 0.001 | | 2800 | 0.690 | 0.002 | 0.645 | 0.000 | | 2900 | 0.692 | 0.002 | 0.647 | 0.002 | We sincerely hope our response has addressed your concerns, and we would be happy to provide further clarification if needed.
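For reference, the SROCC and PLCC reported above are the standard Spearman rank-order and Pearson linear correlation coefficients between model scores and human scores. A minimal NumPy sketch with hypothetical score arrays (tied scores are ignored for simplicity; production code would use average ranks, e.g. `scipy.stats.spearmanr`):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between two score arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def srocc(x, y):
    """Spearman rank-order correlation: PLCC computed on ranks (no tie handling)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return plcc(rank(x), rank(y))

model = [3.1, 2.4, 4.8, 1.0, 3.9]   # hypothetical evaluator scores
human = [3.0, 2.0, 5.0, 1.5, 4.0]   # hypothetical human ratings
print(round(srocc(model, human), 3))  # → 1.0 (the two orderings agree exactly)
```

SROCC is insensitive to any monotonic rescaling of the scores, while PLCC rewards exact linear agreement, which is why benchmarks typically report both.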
Summary: This paper introduces a benchmark for evaluating T2V models. The authors sample 10 video generation models and employ five distinct annotators to perform CoT labeling on the outputs, providing both reasoning and scores. The resulting dataset is then used to fine-tune Qwen2-VL-7B. Across the seven evaluation dimensions and eight tested models, the proposed approach achieves performance surpassing all existing image-based LLMs and video-based LLMs. Notably, the evaluation framework presented in this work encompasses all essential aspects of high-level criteria, offering a comprehensive and well-rounded assessment methodology. ## update after rebuttal The authors show a generalization experiment compared with foundation models (i.e., GPT-4o, Gemini), and the performance is quite good, which addresses my concern. I raise my rating to 4. Claims And Evidence: Yes. Methods And Evaluation Criteria: 1. The most critical aspect of evaluation is the model's generalization capability, especially after fine-tuning. In this paper, the authors use LoRA for fine-tuning, which, while cost-effective, inherently limits the model's generalization ability from the outset. This constraint introduces a potential bias and may lead to unfair evaluations for future video generation models, as the fine-tuned model might not adequately represent the broader capabilities of the original architecture or other competing approaches. 2. The decision to sample only three frames for constructing the CoT data during data collection is problematic. While this limitation arises from GPT-4o's constraint of only supporting image inputs, three frames may not sufficiently capture the temporal segments that annotators focus on. As a result, the derived CoT chains could become inconsistent or chaotic.
Although the authors mention retaining only cases where both scores and reasoning are correct, this approach risks discarding challenging or fine-grained samples, which could otherwise provide valuable insights into model performance on more complex scenarios. Theoretical Claims: No theoretical claim. Experimental Designs Or Analyses: There is a generalization flaw in the comparison with other foundational VLMs and in the evaluation of video generation models. 1) Although the authors demonstrate superior performance across all seven evaluation dimensions compared to mainstream VLMs, this advantage may largely stem from the gap between real-world video data and synthetic video data. The high performance reported in this work could be attributed to a domain shift toward synthetic video data, which does not adequately validate the method's universality and generalization capabilities. A significant issue lies in the fact that the training data includes videos generated by the same models used for testing, resulting in an identical data domain. Consequently, the authors fail to provide evidence of the method's effectiveness on unseen video generation models, raising concerns about its applicability to broader or more diverse scenarios. 2) The fine-tuned model is trained on video data with a maximum duration of only 5 seconds, which limits its temporal understanding capability to extremely short time spans. However, the original foundational models are capable of comprehending videos at the minute level. This approach not only weakens the temporal understanding ability of the base model but also raises concerns about the scalability of the proposed method for future video generation models that may extend to durations of 30–60 seconds. The authors should provide an analysis of the model's performance on 10-second videos and compare it with the capabilities of the foundational model to demonstrate the generalization potential of the proposed method.
Without such evidence, the applicability of the approach to longer videos remains questionable. Supplementary Material: All of it. Relation To Broader Scientific Literature: This paper provides a promising direction for more efficient and effective evaluation of video generation models, an increasingly critical challenge as these models continue to advance. Traditional evaluation algorithms struggle to comprehensively assess high-level semantic dimensions, which often require the use of VLMs to address. Additionally, there is currently a lack of robust automated evaluation methods for diverse assessment dimensions. By fine-tuning a foundational model to tackle this issue, the authors demonstrate preliminary feasibility, showcasing its potential to streamline and enhance the evaluation process. This paradigm is also applicable to video understanding tasks in MLLMs, highlighting its broader relevance and utility in addressing challenges at the intersection of video generation and multi-modal reasoning. Essential References Not Discussed: No. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: My primary focus is on the generalization performance of the method proposed in this paper, as well as the difficulty level of the benchmark it introduces. Below, I outline three experiments that I believe are necessary to thoroughly evaluate these aspects. If the authors can provide reasonable and satisfactory results for these experiments, I would be inclined to raise my score. Conversely, the absence of such results would lead me to maintain or lower my current evaluation. 1) A performance comparison with the foundation model in the case of longer videos is needed, which should include both models seen in the training data and unseen models, such as the 10-second version of CogVideo-1.5-5B or CogVideoX-2. 
2) A performance comparison between video generation models not included in the training set and foundational models, such as CogVideoX, HunyuanVideo, and Wanx, is necessary. 3) Since the training data only uses three frames processed by GPT-4o to construct the CoT, I believe that the retained samples are likely to be relatively coarse-grained and easy. I request the authors to showcase challenging samples from the test set and provide a performance comparison on these difficult samples relative to the foundational model. 4) For evaluation, reproducibility is very important, but it is also very challenging for VLMs. Please provide the mean and variance of the results for one dimension based on five repeated trials. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed and constructive feedback. We are grateful that you recognized the strengths of our work, including (1) a "**comprehensive and well-rounded**" method, (2) that it "provides a **promising** direction", and (3) "highlighting its **broader relevance and utility** in addressing challenges at the intersection of video generation and multi-modal reasoning". Your comments and questions are highly valuable, and we would like to address your questions as follows. **[Q1] LoRA fine-tuning.** **[A1]** Thank you for your insightful comments. We acknowledge that full fine-tuning can offer greater performance and generalization. However, due to computational resource constraints, we opted for LoRA fine-tuning, which enables efficient adaptation with fewer trainable parameters. We believe that fine-tuning should introduce task-specific adaptation while leveraging the video comprehension and reasoning capabilities of MLLMs for semantic-based evaluation. LoRA provides a sufficiently efficient and cost-effective solution in this context. **[Q2] Long AIGC videos.** **[A2]** Considering time and computational constraints, we first analyzed the T2V model benchmarking results and selected the 20 prompts with the lowest mean scores across all model-generated video evaluations in each dimension as a “difficult benchmark”. Using an RTX 3090, we generated 81-frame, 8 FPS, 10-second videos with CogVideoX1.5 based on these prompts as long video data. Three annotators rated these videos on a scale of 1-5, and we then evaluated and compared the results using both our evaluation model and the base model, reporting their respective correlations with human annotations. (See https://imgur.com/sc2BMVa) **[Q3] More recent T2V models benchmarking.** **[A3]** We evaluated more advanced T2V models on the previously defined “difficult benchmark”.
Kling 1.6 generated 5-second standard videos using the official API, while Hunyuan and Wan 2.1 generated 5-second videos via the SiliconCloud API. Three annotators rated these videos on a scale of 1-5, and we then assessed and compared their correlation with human annotations using both our evaluation model and the base model. The results are as follows. (See https://imgur.com/71BbB6A) **[Q4] Challenging samples.** **[A4]** Thank you for your professional and insightful suggestions. Further optimization of thought chains is indeed a promising direction for improvement. Our work serves as an initial exploration of thought chaining in video evaluation, showcasing the potential of MLLM reasoning in this domain. Future research can build upon our approach for further refinements. To address your concerns, we tested our model on a challenging sample set (10 longest prompts per dimension) and observed a significant improvement in correlation compared to the base model (values reported as SROCC/PLCC/MAE).

| | Avg | Quality | Aesthetic | Consistency | Alignment | Rationality | Safety | Creativity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base | 0.462/0.202/1.109 | 0.595/0.527/0.636 | 0.294/-0.135/0.8 | 0.261/-0.392/0.727 | 0.479/0.123/0.9 | 0.436/0.345/1.3 | 0.697/0.375/2.6 | 0.473/0.569/0.8 |
| GRADEO | 0.733/0.673/0.296 | 0.889/0.853/0.273 | 0.864/0.764/0.1 | 1.0/1.0/0.0 | 0.612/0.395/0.4 | 0.503/0.405/0.5 | 0.758/0.673/0.200 | 0.509/0.623/0.600 |

Below is an example along with a simple evaluation comparison between GRADEO and the Base Model: https://imgur.com/a/nCu81gv (Dimension: Rationality, Prompt: Water at 100°C, Human: 2) (Base Model: …The event depicted, water being poured into a glass, is a common-sense action that aligns with real-world expectations and logical consistency in common sense knowledge and reasoning…I would rate the rationality of this video as 5…) (GRADEO: …prompt involves water at 100°C, which is the boiling point of water…The splash is not sustained, and there is no
visible steam rising, which is a critical aspect of boiling water…Rating Score: 2…) **[Q5] Model reproducibility.** **[A5]** Thank you for raising this important point. We conducted five repetitions on Rationality, which resulted in some variations in individual data points, as shown in the table. Given that **human evaluations also exhibit disagreement** and that our **scores are coarse-grained (integers from 1 to 5)**, we believe the model demonstrates sufficient stability.

| | exp1 | exp2 | exp3 | exp4 | exp5 | mean | variance |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SROCC,PLCC,MAE | 0.606,0.515,0.56 | 0.613,0.55,0.56 | 0.635,0.541,0.54 | 0.585,0.492,0.56 | 0.634,0.562,0.54 | 0.615,0.532,0.552 | 3.5e-4,6.39e-4,9.6e-5 |

--- Finally, we deeply appreciate the thoughtful questions and suggestions, which have greatly contributed to improving our work and will be incorporated into our revised manuscript. **We sincerely hope it sufficiently addresses your concerns and earns your recognition.** --- Rebuttal Comment 1.1: Comment: Thank you very much for the authors' supplementary experiments. However, I still have some questions regarding the implementation details of these supplementary experiments. What is the base model you are comparing against? If it is Qwen2-VL-7B, then I believe there might be a misunderstanding of my point. The generalization comparison I am referring to should be relative to all the foundational models you are comparing, not just one specific model. You should be comparing against models like GPT-4V or Gemini-1.5. When conducting benchmarks, the two most critical aspects are always generalization and accuracy. If current foundational models like GPT-4o already surpass your method in terms of generalization, then the entire evaluation system becomes meaningless.
--- Reply to Comment 1.1.1: Comment: **To address the reviewer's concern, we further compare our method against the most advanced MLLMs, GPT-4o and Gemini 1.5 Pro, on both real-world datasets and SOTA video generation models, demonstrating the superior generalization capability of our approach.** **[A] Generalizability on Real-World Video and New T2V Models (Comparison with GPT-4o and Gemini 1.5 Pro)** - To better demonstrate the generalizability of our model on video scoring tasks, we constructed a real-world video dataset derived from Panda-70M and OpenVid-1M. Though our model was not trained on any real-world data, the video assessment capability learned by our GRADEO generalizes well to real video data. In fact, our model significantly outperforms SOTA models such as GPT-4o and Gemini 1.5 Pro (see Table 1). - As discussed in Rebuttal A2 and A3, we also evaluated our model on **unseen**, AI-generated videos produced by recent models including Kling-1.6, Hunyuan, and Wan 2.1-14B, none of which were included in our training set. Despite this, our model still achieves significant performance improvements, demonstrating its robust generalization (see Table 2). - While GPT-4o and Gemini 1.5 Pro are powerful, they are not specifically aligned with human video preferences, so they cannot be used directly for video evaluation, which highlights the importance of our human-annotated data. Our approach shows better human alignment, offering a stronger path for T2V evaluation. **[B] Generalizability on Long Videos (Comparison with GPT-4o and Gemini 1.5 Pro)** - As there are currently few AIGC datasets with 30s videos, we test on real-world 30s+ videos. Our model achieves significantly better performance than GPT-4o and Gemini 1.5 Pro, demonstrating its superior capability to generalize to longer videos (see Table 1). - Additionally, in Rebuttal A2, we evaluate our model on generated long videos by introducing 10s videos from CogVideoX (as the reviewer suggested).
Our model also achieves the best results, further supporting its generalizability to longer videos (see Table 3).

Table 1: Generalizability on real-world video data (SROCC/PLCC/MAE).

||Avg|Quality|Aesthetic|Consistency|Alignment|Rationality|Safety|Creativity|
|---|---|---|---|---|---|---|---|---|
|GPT-4o(5s)|.344/.311/1.143|.516/.489/1.000|.150/.105/1.360|.273/.234/1.240|.506/.481/.920|.134/.080/1.280|.460/.444/1.100|.371/.342/1.100|
|Gemini-1.5-Pro(5s)|.411/.387/.980|.368/.319/1.120|.195/.156/1.220|.550/.536/.740|.575/.567/.860|.377/.330/1.040|.382/.351/.960|.431/.449/.920|
|Qwen2-VL-7B (5s)|.361/.333/1.106|.378/.356/1.060|.331/.308/1.100|.380/.352/1.080|.491/.458/1.100|.462/.436/.920|.401/.352/1.100|.086/.072/1.380|
|GRADEO(5s)|.727/.715/.489|.769/.742/.500|.730/.723/.480|.843/.831/.420|.786/.772/.420|.552/.533/.700|.686/.681/.440|.725/.721/.460|
| | | | | | | | | |
|GPT-4o(30s-60s)|.329/.293/1.063|.349/.306/1.100|.422/.397/.960|.174/.118/1.200|.276/.221/1.100|.251/.228/1.020|.425/.402/1.020|.407/.377/1.040|
|Gemini-1.5-Pro(30s-60s)|.354/.310/1.011|.315/.282/.960|.489/.451/.820|.367/.329/.880|.252/.192/1.220|.241/.175/1.100|.340/.293/1.180|.475/.450/.920|
|Qwen2-VL-7B (30s-60s)|.274/.231/1.191|.283/.265/1.220|.241/.152/1.260|.210/.170/1.220|.390/.372/1.020|.171/.127/1.280|.269/.215/1.260|.352/.316/1.080|
|GRADEO(30s-60s)|.666/.644/.529|.681/.664/.540|.616/.595/.520|.730/.702/.340|.705/.682/.520|.564/.530/.620|.673/.650/.580|.695/.685/.580|

Table 2: Average score on new T2V models (Kling1.6, Hunyuan, Wan2.1-14B).
||Avg|Quality|Aesthetic|Consistency|Alignment|Rationality|Safety|Creativity|
|---|---|---|---|---|---|---|---|---|
|GPT-4o|.375/.267/.947|.412/.335/.767|.308/.190/.833|.281/.002/.833|.386/.300/.967|.420/.394/1.033|.429/.402/1.117|.384/.248/1.083|
|Gemini-1.5-Pro|.384/.298/.895|.228/.101/1.083|.404/.247/.650|.269/.200/.783|.378/.278/1.033|.445/.391/.983|.444/.432/.967|.522/.441/.767|
|Qwen2-VL-7B|.322/.191/1.033|.333/.247/.917|.350/.215/.933|.452/.201/.717|.405/.196/.783|.055/-.017/1.483|.395/.382/1.133|.263/.114/1.267|
|GRADEO|.629/.529/.548|.701/.628/.483|.514/.426/.617|.534/.306/.667|.498/.359/.683|.662/.626/.500|.794/.751/.483|.699/.610/.400|

Table 3: Generalizability on long CogVideoX data (10s).

||Avg|Quality|Aesthetic|Consistency|Alignment|Rationality|Safety|Creativity|
|---|---|---|---|---|---|---|---|---|
|GPT-4o|.261/.163/.893|.323/.207/.900|.216/.192/.800|.481/.347/.700|.315/.259/.700|.184/.103/1.050|.128/.018/1.250|.179/.013/.850|
|Gemini-1.5-Pro|.384/.325/.779|.340/.228/.950|.352/.318/.700|.506/.582/.600|.315/.259/.700|.311/.218/.950|.274/.134/1.000|.590/.538/.550|
|Qwen2-VL-7B|.363/.253/.850|.246/.212/1.000|.335/.266/.750|.512/.481/.550|.657/.543/.600|.337/.360/.950|.219/-.069/1.100|.236/-.020/1.000|
|GRADEO|.602/.496/.586|.482/.297/.650|.618/.495/.550|.691/.730/.500|.638/.483/.500|.776/.747/.550|.550/.400/.650|.462/.323/.700|

We have made every effort to address your concerns and hope the improvements demonstrate the strengths of our approach. We kindly ask for your reconsideration.
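A note on reading the tables above: each cell is a triplet which, per the reproducibility table in [A5] earlier in this thread, appears to correspond to SROCC/PLCC/MAE. A minimal numpy-only sketch of computing such a triplet might look as follows (function names are illustrative; `scipy.stats.spearmanr`/`pearsonr` would give the same correlations):

```python
import numpy as np

def pearson(x, y):
    # Pearson linear correlation coefficient (PLCC)
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def rank(v):
    # average ranks with tie handling, like scipy.stats.rankdata
    v = np.asarray(v, float)
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v))
    sv = v[order]
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and sv[j + 1] == sv[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average rank of the tie group
        i = j + 1
    return ranks

def srocc_plcc_mae(pred, human):
    # SROCC is Pearson correlation computed on ranks; MAE is mean absolute error
    srocc = pearson(rank(pred), rank(human))
    plcc = pearson(pred, human)
    mae = float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(human, float))))
    return srocc, plcc, mae
```

For a perfectly monotonic but nonlinear predictor, SROCC is 1 while PLCC falls below 1, which is one reason the two columns can diverge in the tables.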
Summary: This paper introduces GRADEO, a novel approach for evaluating text-to-video (T2V) generation models using human-like multi-step reasoning. The authors identify key limitations of existing evaluation methods, which often lack high-level semantic understanding and reasoning capabilities, making them inadequate for comprehensive video assessment. To address this gap, they created GRADEO-Instruct, a dataset containing 3.3k videos with human annotations across seven evaluation dimensions (Quality, Aesthetic, Consistency, Alignment, Rationality, Safety, and Creativity). They trained GRADEO to employ a four-step reasoning process: Overview, Description, Analysis, and Assessment, which mimics human evaluation practices. Experiments show that it is effective in both correlation with human preference and pairwise comparison accuracy.

## update after rebuttal

The authors' rebuttal resolved most of my concerns and I think it's a good paper to accept. However, there is still a limitation in the efficiency of the model, which, as the authors state, "takes 30 seconds and generates about 400 tokens per video". Since these kinds of scoring models will eventually be used for cases like RL for diffusion models and BoN selection, 30 seconds per generation seems hardly acceptable. However, this is not a big concern, as there are many acceleration frameworks like vLLM and SGLang, and I encourage the authors to try integrating the model into them. Overall, I think it is a good paper to be accepted.

Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The introduction of CoT in video evaluation is a great attempt. The definition of the 7 evaluation aspects also makes sense, as do the pairwise benchmarks used for selection. Theoretical Claims: There is no theoretical claim in the paper. Experimental Designs Or Analyses: The experimental designs are generally sound. The comparison with baseline methods includes both automated metrics and LLM-based approaches.
And the pairwise evaluation on existing datasets (T2VQA-DB, GenAI-Bench-Video, TVGE) helps validate the model's effectiveness. However, the paper mentions filtering videos with "unsuitable dynamics" but doesn't fully define what criteria were used, which could introduce selection bias. Supplementary Material: The supplementary material provides valuable additional details: - Comprehensive dimension definitions and examples - Data collection methodologies - Implementation details for baselines - Qualitative comparisons of T2V models Relation To Broader Scientific Literature: - There are already previous works like VideoScore that develop specific models for text-to-video evaluation. However, none of these works have ever employed CoT reasoning during model training. This paper's most significant contribution is the CoT-based evaluation dataset and the demonstration that CoT-based evaluation works well. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths: - The Chain-of-Thought reasoning process creates interpretable assessments beyond simple scores. - The qualitative examples effectively demonstrate the issues in current T2V models. ### Weaknesses: 1. The dataset size (3.3k videos) is relatively small compared to some other benchmark datasets. 2. The seven evaluation dimensions may have some overlap or correlation, which isn't deeply analyzed. 3. The video generation sources, i.e., the T2V models, are limited to Sora, VideoCrafter-1, VideoCrafter-2, Latte, LaVie, and ZeroScope-576w. From my experience, these models are usually not good enough, and nowadays there are many more powerful T2V models like Wanxiang, Stable Video Diffusion, etc. Whether GRADEO can generalize to these models' outputs is unknown. Other Comments Or Suggestions: 1. A discussion on the cost-efficiency tradeoff compared to other evaluation approaches would be valuable, e.g., how many CoT tokens GRADEO usually outputs. Questions For Authors: 1.
How did you determine the four-step reasoning process (Overview, Description, Analysis, Assessment)? Did you experiment with other reasoning structures, and if so, how did they compare? 2. The VideoScore has less competitive results on Table 2 (your developed benchmark) compared to Table 3 (existing benchmarks). Are there any biases in the results of Table 2? Do you have any insights on this? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your time and appreciate your valuable comments. We are encouraged that the reviewer recognizes (1) the **novelty and validity** of introducing CoT, (2) the **comprehensiveness and soundness** of our dimension definitions and experimental setups, and (3) that the CoT reasoning process creates **interpretable assessments beyond simple scores**. We address your concerns point by point below. **[Q1] Definition of inappropriate dynamics.** **[A1]** We filtered video data with "inappropriate dynamics" based on our observations during data collection. In AIGC-video datasets like T2VQA-DB[1], some videos, generated by earlier models, showed abrupt frame transitions with poor dynamic consistency, while others were nearly static. Human evaluators easily identified these low-quality videos, which could affect model evaluation accuracy. To ensure dataset quality, we applied SSIM and FLOW scores (see Appendix A.2), with thresholds based on human visual perception. **[Q2] Concerns about the size of the dataset.** **[A2]** To minimize individual evaluator bias, we have five evaluators assess each video and exclude data points where there are significant discrepancies in evaluations. Though annotation cost limits the current scale, our pipeline itself is scalable and generalizable, and we hope it will inspire further advancements in video generation. Additionally, previous AIGC quality assessment datasets are relatively small: T2VQA-DB [1] contains 10,000 videos, MQT [2] has 10,005, FETV [3] includes 2,476, and LGVQ [4] has 2,808. While previous datasets mainly rely on human ratings, our dataset includes both ratings and annotated assessment rationales, requiring deeper analysis and increasing costs. Those datasets only provide human subjective ratings, which are not suitable for training interpretable models.
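The dynamics filter described in [A1] could be sketched as follows, assuming per-video mean inter-frame SSIM and mean optical-flow magnitude have already been computed (e.g., with `skimage` and OpenCV upstream); the threshold values here are illustrative placeholders, not the paper's calibrated ones:

```python
def filter_dynamics(video_stats, ssim_hi=0.98, flow_lo=0.5, flow_hi=25.0):
    """Split videos into kept/dropped sets based on precomputed dynamics stats.

    video_stats maps a video id to (mean inter-frame SSIM, mean optical-flow
    magnitude). Near-static videos (very high SSIM, negligible flow) and
    videos with abrupt transitions (very large flow) are dropped. All three
    thresholds are illustrative assumptions, not the paper's actual values.
    """
    kept, dropped = {}, {}
    for vid, (ssim, flow) in video_stats.items():
        near_static = ssim > ssim_hi and flow < flow_lo  # nearly frozen frames
        abrupt = flow > flow_hi                          # chaotic frame jumps
        (dropped if near_static or abrupt else kept)[vid] = (ssim, flow)
    return kept, dropped
```

In practice the thresholds would be tuned against human judgments of which videos look frozen or chaotic, as the rebuttal describes.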
**[Q3] Overlap or correlation among seven dimensions.** **[A3]** Human assessment is the gold standard, but fully isolating evaluation dimensions remains challenging. To address this, we define more granular sub-dimensions for each assessment dimension, aiming to provide a more comprehensive and mutually independent evaluation framework. **[Q4] Limitations of video generation sources.** **[A4]** To address your concerns, we benchmarked recently introduced T2V models using our model (see https://imgur.com/wzQubJK). Considering time and computational constraints, we first aggregated the T2V models' benchmarking results and, for each dimension, selected from the 100 prompts the 20 with the lowest average scores across all model-generated video evaluations as the difficult benchmark. **[Q5] The cost-efficiency tradeoff.** **[A5]** On average, GRADEO takes 30 seconds and generates about 400 tokens per video. While GRADEO is more computationally expensive than automatic metrics, those metrics fail to capture high-level semantics and lack explainability. **[Q6] Four-step reasoning process.** **[A6]** First, we identified the need for a two-step process: "Analysis" and "Assessment", where the score is the goal and the analysis extends human reasoning, aligning the process more closely with human thought for more accurate evaluations. With the introduction of the "Description" step, the model’s outputs are more grounded in video content analysis. The sub-dimensions of reasoning and assessment vary across evaluation dimensions, and the "Overview" step ensures reasoning stays within the intended dimension. We conducted ablation experiments to highlight the importance of the four-step reasoning framework.
For example, given a pair of videos with human scores of <video 1>: 4 and <video 2>: 5, the model only needs to rank video 1 lower than video 2 to align with human judgment. This results in outputs like (1,2), (2,3), or (3,4), ensuring consistency in pairwise comparison but not necessarily in absolute scoring. (2) VideoScore produces scores in the range of 1-4, and we applied linear mapping and rounding to align them with our assessment scale, which may have introduced additional inconsistencies. [1] Subjective-aligned dataset and metric for text-to-video quality assessment. ACM-MM 2024. [2] Measuring the quality of text-to-video model outputs: Metrics and dataset. [3] FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation. NeurIPS 2023 [4] Benchmarking Multi-dimensional AIGC Video Quality Assessment: A Dataset and Unified Model. --- Finally, we deeply appreciate the thoughtful questions and suggestions, which have greatly contributed to improving our work and will be incorporated into our revised manuscript. **We sincerely hope it sufficiently addresses your concerns and earns your recognition.**
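An aside on the score alignment mentioned in [A7] above: the linear mapping and rounding of VideoScore's 1-4 range onto a 1-5 scale might look like the following sketch (a hypothetical reconstruction; the authors' exact formula is not given):

```python
def map_videoscore(score, src=(1.0, 4.0), dst=(1.0, 5.0)):
    """Linearly rescale a VideoScore value from its 1-4 range to a 1-5 scale,
    then round to an integer grade. A hypothetical reconstruction of the
    mapping described in [A7], not the authors' exact implementation."""
    lo, hi = src
    a, b = dst
    return int(round(a + (score - lo) * (b - a) / (hi - lo)))
```

Because of the rounding step, nearby raw scores can collapse onto one grade or jump between grades (e.g., raw scores of 2.5 and 3.0 map to grades 3 and 4), which is consistent with the additional inconsistencies the rebuttal mentions.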
Summary: This paper addresses the challenge of evaluating video generation models by introducing GRADEO, a novel video evaluation model designed to provide explainable scores and assessments through multi-step reasoning. The authors curate GRADEO-Instruct, a multi-dimensional dataset with 3.3k videos and 16k human annotations, enabling the model to better align with human evaluations. Experiments demonstrate that GRADEO outperforms existing automated metrics in capturing high-level semantic understanding and reasoning, revealing limitations in current video generation models' alignment with human reasoning and complex real-world scenarios. Claims And Evidence: Yes. Methods And Evaluation Criteria: 1. The authors select 7 aspects for evaluation. What is the rationale for choosing these dimensions? Are these dimensions suitable and sufficient to comprehensively evaluate T2V models? For example, how can one evaluate whether objects in a video adhere to physical laws, or whether the text, color, and font generated in the video according to the user's prompt are correct? 2. Descriptions of some dimensions are ambiguous. For example, how can one assess the creativity and diversity of T2V models when the prompts for evaluation are pre-designed and fixed? The tested models may generate more diverse results based on more detailed, varied, and fine-grained prompts. 3. T2V models differ in the video lengths and resolutions they can generate. As the authors note in Table 9, tested models generate videos at different durations, frame rates, and resolutions. How can evaluation fairness be guaranteed under differing experimental configurations? Besides, for models that can generate videos of significantly longer duration (such as more than 30 seconds) or much higher resolution, can the proposed benchmark highlight these aspects in one or more dimensions? More clarification is needed. 4. The multi-step reasoning process is over-claimed.
I believe the so-called multi-step reasoning process is actually single-step fine-grained perception plus single-step reasoning, since I do not find any multi-step reasoning chains in either the CoT prompt templates or the model responses. Theoretical Claims: The paper does not propose any new theoretical claim. Experimental Designs Or Analyses: The authors conduct two parts of experiments to verify the human alignment and evaluate current frontier T2V models, respectively. 1. For the human alignment experiments, the authors report the SROCC, PLCC, and mean absolute error and compare the proposed metric with several MLLMs. The proposed method outperforms all baseline methods in all metrics, yet I believe such results are not surprising, since these baseline MLLMs are neither specially designed for T2V evaluations nor widely adopted as T2V evaluation metrics by previous methods. In absolute terms, the Spearman correlation coefficients of all dimensions fall within 0.6-0.8, which I believe cannot be recognized as a strong correlation, considering that the aesthetic, quality, and consistency correlation scores of VBench are more than 0.90. Therefore, compared with other T2V evaluation protocols, the proposed method does not seem to show strong human preference correlations. 2. The authors select the Qwen2-VL-7B model as the evaluation agent. Since there are some strong alternatives such as InternVL2.5 and Gemini, how would these models perform with a similar instruction-tuning strategy (for open-source MLLMs) or a few-shot CoT prompt (for proprietary models)? Would these models achieve higher human correlation scores? 3. For the benchmarking results, I find the score distribution in each dimension very narrow, typically within 10 points, indicating the proposed benchmark may not effectively distinguish between models with significantly different performance levels.
I suggest the authors conduct additional experiments on two models that perform significantly differently, such as Sora vs. some early-stage models, to show whether the evaluation agent can give extremely high or low scores. 4. The benchmarking is not sufficient. The authors should evaluate more recent models such as HunyuanVideo, StepVideo-T2V, Sora, Kling, etc. Results of the current models are too similar to yield valuable insights. Supplementary Material: Yes, I have read the appendix attached to the main manuscript. Relation To Broader Scientific Literature: The key contribution could benefit the evaluation of text-to-video generation models. Essential References Not Discussed: The preprint Evaluation Agent [1] is a similar evaluation protocol that also adopts an agent-based assessment strategy. I suggest the authors briefly discuss how their work differs from this paper. [1] Zhang, Fan, et al. "Evaluation Agent: Efficient and Promptable Evaluation Framework for Visual Generative Models." arXiv preprint arXiv:2412.09645 (2024). Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A Questions For Authors: 1. What is the evaluation performance of Qwen2-VL-7B without instruction tuning? The authors should report the correlation in Table 2 to probe the effectiveness of the instruction tuning. 2. How does the proposed evaluation protocol prevent the negative influence of LLM hallucinations? Though the authors explicitly discuss the issues in Appendix E, I believe some possible measures could be adopted to minimize the harm as much as possible, such as using prompts with clear and straightforward instructions, or using self-reflection to automatically fix potential mistakes. 3. As the authors claim in the main manuscript, the evaluation process requires high reasoning capability; how about selecting MLLMs that are trained with reasoning abilities, such as LLaVA-CoT, InternVL-MPO, or Video-of-Thought?
Conversely, to what extent does the deficiency in reasoning capabilities of existing MLLMs affect the overall evaluation accuracy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and appreciate that they valued the thorough evaluation. We would like to address your questions as follows. Owing to space constraints, the tables are provided via an anonymous link: https://imgur.com/a/VzqdmYY. **[Q1] Experimental setup.** **[A1]** We would like to clarify and analyze the experimental setup through the following points. (1) Dimensions setup: Our seven evaluation dimensions are designed to comprehensively assess both low-level perceptual quality and high-level semantic consistency in AI-generated videos. Our framework remains extendable for future enhancements. (2) Evaluation fairness: Many studies [1,2,3,4] lack uniform preprocessing for evaluating AIGC videos (EvalCrafter only adds watermarks for fairness). Preprocessing can alter video features, affecting model evaluation. Enforcing a uniform frame rate may distort motion quality in models optimized for high-frame-rate generation [5,6]. Segmenting long videos for comparison with short-video models disrupts temporal coherence and narrative fluency. To ensure fairness, we only remove small, corner-positioned watermarks, preserving video integrity. **[Q2] Human correlation.** **[A2]** We would like to clarify the following points: (1) It is not fair to directly compare VBench's correlations with those in Table 2. In Section 3.3 of VBench [1], relative comparisons were conducted using four T2V models, whereas our evaluation is based on an absolute 1-5 scoring scale. Aligning absolute scores with human assessments is inherently more challenging than aligning relative rankings. (2) Other T2V evaluation methods [4,7,8,9,10] also exhibit limited correlation with human assessments, likely due to the inherent variability in human judgments and the complexity of the evaluation task. (3) The MLLM-as-a-Judge [11] study further supports the difficulty of absolute scoring tasks for MLLMs compared to pairwise comparisons.
(4) We conducted LoRA fine-tuning on InternVL2.5-4B due to computational resource constraints (see anonymous link).

**[Q3] Narrow score gap and benchmarking more recent T2V models.** **[A3]** This phenomenon is expected. Achieving a baseline quality score is easy, but high scores are harder. Advanced T2V models like Hunyuan and Wanxiang now focus on fine details. Thank you for your suggestions. We benchmarked more T2V models (hunyuan, wan2.1, kling1.6, cogx1.5; see anonymous link). Due to time and computational constraints, we constructed a "difficult benchmark" by selecting the 20 lowest-scoring prompts from previous benchmarking papers across 100 prompts per dimension.

**[Q4] Evaluation Agent.** **[A4]** Thank you for highlighting this related work. (1) Evaluation Agent (contemporaneous with our work) introduces a dynamic, agent-based framework with hierarchical assessments but raises fairness concerns in evaluating diverse models. It lacks human annotations but offers flexibility and efficiency. (2) Our work develops MLLM-based models focused on reasoning with a curated human assessment dataset. Through instruction fine-tuning, our model demonstrates novelty and scalability in advanced semantic assessment.

**[Q5] Performance of base model.** **[A5]** We will include the correlation between the base model and human evaluations in Table 2 of the revised version (see anonymous link).

**[Q6] MLLM hallucinations.** **[A6]** We categorize hallucinations into two types: (1) Detachment from video content or inclusion of nonexistent elements: We address this with the "Description" step, ensuring assessments stay grounded in actual video content, reducing hallucinations that affect reasoning and scoring. (2) Lack of focus on the intended assessment dimension: We mitigate this with the "Overview" step, structuring the evaluation process and emphasizing sub-dimensions to maintain focus.
Ablation experiments show that removing these steps leads to hallucinations and a quantitative drop in consistency with human evaluations.

**[Q7] Four-step evaluation and reasoning MLLMs.** **[A7]** We agree that an MLLM trained on reasoning datasets could improve evaluation accuracy. Future work could enhance accuracy by combining MLLMs with stronger reasoning and video comprehension capabilities alongside human evaluation datasets.

[1] VBench. CVPR 2024
[2] Towards A Better Metric for Text-to-Video Generation.
[3] AIGCBench. TBench 2024
[4] EvalCrafter. CVPR 2024
[5] ZeroSmooth: Training-free Diffuser Adaptation for High Frame Rate Video Generation.
[6] StreamingT2V. CVPR 2025
[7] FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation. NeurIPS 2023
[8] Subjective-aligned Dataset and Metric for Text-to-Video Quality Assessment. ACM-MM 2024
[9] VideoScore. EMNLP 2024
[10] Evaluating Text-to-Visual Generation with Image-to-Text Generation. ECCV 2024
[11] MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. ICML 2024

---

Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. After reading through the authors' rebuttal and other expert reviewers' comments, most of my original concerns are addressed, and I raise my rating to weak acceptance.

---

Reply to Comment 1.1.1: Comment: Thank you very much for raising your score! We're glad that our response has addressed your concerns. We truly appreciate your valuable comments, which will continue to guide us in improving our work.
Maximum Total Correlation Reinforcement Learning
Accept (poster)
Summary: This paper proposes an algorithm called MTC, which is a SAC-style approach that utilizes total correlation regularization. The basic idea behind MTC is that reducing unnecessary variations in states and actions increases robustness; accordingly, the authors propose an algorithm that maximizes the total correlation—a generalized mutual information among all states and actions within an episode. For the total correlation regularization term, a variational lower bound is derived, and MTC employs this lower bound as the regularization term. Experimental results demonstrate that MTC exhibits higher performance compared to other SAC-based algorithms and is particularly beneficial for enhancing robustness. Claims And Evidence: The main claim of this paper is that reducing unnecessary variations in states and actions enhances robustness. This claim is supported by various experiments. In particular, the paper presents graphs showing that maximizing the proposed total correlation increases the consistency of the state and action trajectories of the learned policy, and it provides results demonstrating robust performance against action noise, state noise, and mass variations, thereby offering persuasive evidence for the main claim. Methods And Evaluation Criteria: The total correlation maximization in MTC enables more efficient prediction of states and actions, and it is intuitively expected to work well for periodic tasks like those in DMC. However, in non-periodic tasks, there may be inherent limitations to the benefits of increasing total correlation. For instance, in Figure 4, while MTC shows superior performance compared to other baselines in the button-press-wall environment, the differences in other environments appear less significant. A more detailed explanation of the role and effects of total correlation maximization in non-periodic tasks would be appreciated. 
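One way the lower-bound regularizer described above could enter a SAC-style update is as a per-step reward bonus for actions that were predictable from the trajectory history. The following is entirely my own minimal sketch, not the authors' implementation; the function name, the Gaussian predictor, and the weight `beta` are all assumptions:

```python
import numpy as np

def augmented_reward(task_reward, action, pred_mean, pred_std, beta=0.1):
    """Task reward plus a Gaussian log-likelihood bonus measuring how well
    the executed action was predicted from the trajectory history.
    Illustrative only: names and the Gaussian predictor are assumptions."""
    log_prob = -0.5 * np.sum(
        ((action - pred_mean) / pred_std) ** 2
        + 2.0 * np.log(pred_std)
        + np.log(2.0 * np.pi)
    )
    return task_reward + beta * log_prob
```

Actions that match the history-conditioned prediction receive a larger bonus, so the policy is nudged towards consistent, predictable behavior while still trading the bonus off against task reward via `beta`.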
Theoretical Claims: In Appendix A.1’s derivation of eq (2), it appears that eq (11) may be incorrect. Specifically, at L589, the terms $p(z\_{t+1})$ and $p(a\_t)$ should be defined as $\int\_{s\_{t+1}} p(z\_{t+1} | s\_{t+1} ) p(s\_{t+1}) ds\_{t+1}$ and $\int\_{s\_{t}} p(a\_{t} | s\_{t} ) p(s\_{t}) ds\_{t}$, respectively. Therefore, it does not seem appropriate to simply substitute them with $p(z\_{t+1} | s\_{t+1} )$ and $p(a\_{t} | s\_{t} )$ without including the integrals. Experimental Designs Or Analyses: 1. In Figure 2, the magnitude of the observation/action noise appears to be too small. With such minimal noise, there is not enough significant change to observe robustness, and the performance of all baselines seems to drop linearly. It might be more informative to increase the magnitude of the observation/action noise, as shown in Figure 7, to better demonstrate robustness. 2. The robustness experiment results regarding mass scale changes seem inconsistent between Figure 2 and Figure 7. This appears to be due to differences in the hyperparameter $I\_p$. If $I\_p$ is set to -1.0, as in Figure 7, the results might show improved robustness. 3. The experiments in non-periodic environments, such as those in metaworld, appear to have been conducted on too few environments. Expanding the evaluation to a broader range of environments could more convincingly demonstrate the effectiveness of MTC in non-periodic settings. 4. It would be beneficial to include a graph depicting behavior in non-periodic environments. Such a graph could help verify whether unnecessary state and action noise is effectively reduced in these cases as well. Supplementary Material: I carefully reviewed Appendix A, which includes detailed derivations of the equations, and I skimmed through Appendices B–C to get an overview of their content. 
Relation To Broader Scientific Literature: In the existing literature, algorithm performance and robustness are improved by removing unnecessary noise from either state or action sequences. This study also follows that research direction; however, while previous works have considered only the state sequence or the action sequence, this study takes into account the entire state-action trajectory. To achieve this, total correlation is introduced, and a lower bound is derived and utilized for the practical algorithm. Essential References Not Discussed: An approach with a similar motivation is the use of Fourier transform, which has been explored in previous studies [1, 2]. These methods also aim to remove unnecessary information from state and action sequences to facilitate faster learning, which is aligned with the motivation of MTC. Explaining the differences between these studies and MTC, and incorporating additional experiments, could further enhance the novelty of the paper. [1] A. Li & D. Pathak, “Functional regularization for reinforcement learning via learned fourier features,” NeurIPS 2021. [2] M. Ye, et al., “State sequences prediction via fourier transform for representation learning,” NeurIPS 2023. Other Strengths And Weaknesses: There are no additional strengths or weaknesses to mention. Other Comments Or Suggestions: 1. Figure 2 is reported only in normalized scores; it would be preferable to also report episode rewards so that the results are consistent with the other evaluations. 2. Including the $I\_p$-related experiments from Figure 7 in the main paper could be beneficial, as the graph demonstrating that a stricter trajectory consistency constraint increases robustness serves as compelling evidence for the paper’s main claim. Questions For Authors: 1. Could you also show the learning curve for Table 1? 
While the final performance is impressive, observing the stability and learning speed during training might provide more detailed insights into MTC’s performance. 2. Have you conducted experiments on other metaworld tasks with non-periodic environments (e.g., drawer-close, reach, etc.)? Presenting evaluations across a wider range of environments could further demonstrate the method’s high performance in non-periodic settings. 3. In Figure 4, does the right-hand graph include experimental results regarding robustness in non-periodic environments? Code Of Conduct: Affirmed. Overall Recommendation: 3
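The eq. (11) concern raised in this review can be framed via the non-negativity of mutual information. A hedged sketch in my own notation (the paper's exact derivation may differ; I assume the marginal-entropy terms enter the total correlation with positive sign):

```latex
\mathbb{E}_{p(s_t)}\!\left[\mathrm{KL}\big(p(z_t \mid s_t)\,\|\,p(z_t)\big)\right]
  = \mathbb{E}_{p(s_t,z_t)}\!\left[\log p(z_t \mid s_t)\right]
  - \mathbb{E}_{p(z_t)}\!\left[\log p(z_t)\right]
  = I(S_t;Z_t) \;\ge\; 0
\;\;\Longrightarrow\;\;
H(Z_t) \;=\; -\,\mathbb{E}_{p(z_t)}\!\left[\log p(z_t)\right]
       \;\ge\; -\,\mathbb{E}_{p(s_t,z_t)}\!\left[\log p(z_t \mid s_t)\right].
```

Under that sign assumption, replacing each marginal log-density with its conditional counterpart only shrinks the entropy terms, so the overall expression remains a lower bound, which is consistent with the KL argument the authors make in their rebuttal.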
Rebuttal 1: Rebuttal: Thank you for thoroughly reviewing our work and appreciating the performance and robustness of our method and the persuasive evidence for our main claims.

> benefit of total correlation maximization in non-periodic tasks

Our main hypothesis that simple behavior that does not overfit to slight variations is less brittle is not restricted to periodic tasks, but is also well-motivated for non-periodic tasks, such as manipulation tasks. For example, consider two different policies for grasping an object. The first policy is able to achieve high reward by always using essentially the same grasping motion and no adaptation. The second policy achieves similar reward, but relies more strongly on feedback and adapts even to slight variations. We expect the first behavior, which would have higher total correlation and better predictability but no periodicity, to be more robust to perturbations that have not been observed during training.

> more non-periodic experiments

We conducted additional experiments on five non-periodic MetaWorld tasks. On all tasks we achieved similar or better success rates than the baselines, supporting our hypothesis that regularizing towards simpler behavior is also beneficial for non-periodic tasks. A table with the results can be found in our reply to reviewer ts9N.

> Derivation of eq (11)

Our derivations do not assume that $\log(p(z\_t))$ is equal to $\log(p(z\_t|s\_t))$. However, due to the non-negativity of the expected KL $E\_{p(s\_t)} [ \text{KL}(p(z\_t|s\_t) || p(z\_t))]$, substituting $p(z\_t|s\_t)$ for $p(z\_t)$ or $\pi(a\_t|s\_t)$ for $p(a\_t)$ does not invalidate our lower bound.

> magnitude of noise

Thank you for this constructive feedback. We now use stronger state and action noise. The following tables compare the performance (mean and 90% confidence interval over 20 seeds) on all eight tasks. Overall, MTC obtains better performance in the presence of strong noise.
| Action noise with scale 0.5 | Acrobot Swingup | Hopper Stand | Finger Spin | Walker Walk | Cheetah Run | Quadruped Walk | Walker Run | Walker Stand |
|:---------------------------:|:---------------:|:------------:|:-----------:|:-----------:|:-----------:|:--------------:|:----------:|:------------:|
| MTC | **159 ± 19** | **711 ± 36** | **643 ± 14** | **921 ± 9** | **637 ± 8** | **928 ± 8** | **396 ± 12** | **975 ± 3** |
| RPC | 111 ± 22 | 300 ± 115 | **609 ± 24** | 832 ± 54 | 539 ± 46 | 825 ± 80 | **388 ± 25** | 967 ± 4 |
| LZ-SAC | 27 ± 7 | 19 ± 3 | 346 ± 26 | 715 ± 47 | 278 ± 12 | 430 ± 84 | 295 ± 9 | 905 ± 20 |
| SAC | 108 ± 19 | 485 ± 132 | **627 ± 27** | 857 ± 30 | 502 ± 24 | 686 ± 111 | 356 ± 18 | 970 ± 6 |

| Obs noise with scale 0.3 | Acrobot Swingup | Hopper Stand | Finger Spin | Walker Walk | Cheetah Run | Quadruped Walk | Walker Run | Walker Stand |
|:------------------------:|:---------------:|:------------:|:-----------:|:-----------:|:-----------:|:--------------:|:----------:|:------------:|
| MTC | **8 ± 2** | **190 ± 70** | **138 ± 12** | **851 ± 10** | 48 ± 4 | **901 ± 23** | **351 ± 18** | **975 ± 2** |
| RPC | **6 ± 1** | 44 ± 30 | 28 ± 8 | 769 ± 35 | 54 ± 9 | 825 ± 82 | **341 ± 22** | 961 ± 8 |
| LZ-SAC | **3 ± 5** | **218 ± 22** | 97 ± 11 | 755 ± 34 | **101 ± 5** | 567 ± 113 | **334 ± 15** | 955 ± 13 |
| SAC | **7 ± 1** | 102 ± 44 | 104 ± 13 | 775 ± 29 | 48 ± 5 | 672 ± 110 | **332 ± 26** | 945 ± 21 |

> Inconsistency between Figure 2 and Figure 7

The inconsistency is caused by using different $I_p$, which we did not tune for each task in our robustness experiments.

> depicting behavior in non-periodic environments

As we are not able to show a graph in our reply, we evaluated the action-predictability to demonstrate that our regularizer also improves simplicity for non-periodic tasks. The table can be found in our reply to Reviewer 8GSd.
> Related approaches using Fourier transform [1,2]

Thank you for suggesting these methods, which are related to MTC and will be discussed in our revision. However, they are not closely related, since they do not consider the simplicity of behavior.

> Normalized scores and learning curves

We cannot show the learning curves here, but will add them to the revision with original rewards. MTC shows good learning stability.

> Figure 7

We will put Figure 7 in the main paper in our revision.

> Figure 4

No, Fig. 4 does not show robustness in non-periodic tasks. We focused on DMC for our robustness experiments to follow the setup of closely related work (RPC and LZ-SAC).

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I now understand the authors' point that reducing unnecessary perturbations can be beneficial even in non-periodic environments. The results showing improved robustness in the presence of environmental noise support this claim. All of my concerns have been addressed, and I will update my score from 2 to 3. Lastly, regarding the predictability experiment table in the rebuttal to reviewer 8GSd: are the reported values negative log-likelihoods? If the prediction error is low, we would expect the log-likelihood to be high, or equivalently, the negative log-likelihood to be low. Could the authors clarify this?

---

Reply to Comment 1.1.1: Comment: > are the reported values negative log-likelihoods?

Sorry for the lack of clarity. The tables indeed show the negated log-likelihood, indicating that the actions generated by our policies are more easily predicted. We sincerely appreciate that you raised your score and are now leaning toward accepting our work. Since this is our final opportunity to engage in this thread under this year's ICML review process, we would like to take the opportunity to provide additional evidence that we could not fit in our last reply due to character constraints: 1.
To demonstrate the learning stability of MTC, we now show the intermediate rewards (mean and 90% confidence interval over 20 seeds) on all 8 tasks every 200K steps.

| Score | Acrobot Swingup | Hopper Stand | Finger Spin | Walker Walk | Cheetah Run | Quadruped Walk | Walker Run | Walker Stand |
|:-----:|:---------------:|:------------:|:-----------:|:-----------:|:-----------:|:--------------:|:----------:|:------------:|
| 200K | 43 ± 10 | 481 ± 129 | 866 ± 21 | 897 ± 40 | 472 ± 47 | 294 ± 69 | 595 ± 28 | 959 ± 10 |
| 400K | 97 ± 17 | 792 ± 97 | 956 ± 12 | 959 ± 3 | 757 ± 28 | 724 ± 95 | 700 ± 21 | 976 ± 2 |
| 600K | 150 ± 22 | 867 ± 79 | 975 ± 9 | 962 ± 2 | 826 ± 26 | 792 ± 90 | 741 ± 17 | 972 ± 5 |
| 800K | 190 ± 21 | 923 ± 14 | 984 ± 2 | 962 ± 2 | 856 ± 24 | 869 ± 60 | 774 ± 12 | 979 ± 5 |

2. We now evaluate the robustness to observation noise, action noise, and mass changes in all 8 manipulation tasks from Metaworld. Overall, MTC achieves better performance (mean and 90% confidence interval over 20 seeds) when actions and dynamics are perturbed, while being comparable to baselines in the presence of observation noise.
| Action noise with scale 2.0 | Handle-pull-side-v2 | Drawer-open-v2 | Plate-slide-back-v2 | Peg-insert-side-v2 | Sweep-v2 | Button-press-wall-v2 | Door-lock-v2 | Push-back-v2 |
|:---------------------------:|:-------------------:|:--------------:|:-------------------:|:------------------:|:--------:|:--------------------:|:------------:|:------------:|
| MTC | **0.98 ± 0.01** | **0.60 ± 0.13** | **0.71 ± 0.06** | **0.01 ± 0.01** | **0.01 ± 0.01** | **0.24 ± 0.04** | **0.84 ± 0.03** | **0.10 ± 0.04** |
| RPC | 0.72 ± 0.05 | 0.21 ± 0.13 | **0.67 ± 0.08** | 0.00 ± 0.00 | 0.00 ± 0.00 | **0.26 ± 0.05** | 0.69 ± 0.06 | **0.08 ± 0.03** |
| SAC | 0.62 ± 0.12 | 0.19 ± 0.10 | 0.58 ± 0.07 | 0.00 ± 0.00 | 0.00 ± 0.00 | **0.31 ± 0.06** | 0.71 ± 0.06 | 0.01 ± 0.01 |

| Mass change with scale 1.75 | Handle-pull-side-v2 | Drawer-open-v2 | Plate-slide-back-v2 | Peg-insert-side-v2 | Sweep-v2 | Button-press-wall-v2 | Door-lock-v2 | Push-back-v2 |
|:---------------------------:|:-------------------:|:--------------:|:-------------------:|:------------------:|:--------:|:--------------------:|:------------:|:------------:|
| MTC | **0.98 ± 0.02** | **0.62 ± 0.13** | **0.98 ± 0.01** | **0.07 ± 0.06** | **0.29 ± 0.10** | **0.51 ± 0.08** | **0.96 ± 0.02** | **0.18 ± 0.07** |
| RPC | 0.71 ± 0.07 | 0.22 ± 0.16 | 0.87 ± 0.08 | 0.00 ± 0.00 | **0.18 ± 0.09** | **0.59 ± 0.12** | 0.84 ± 0.05 | **0.12 ± 0.04** |
| SAC | 0.65 ± 0.12 | 0.21 ± 0.15 | 0.93 ± 0.03 | 0.00 ± 0.00 | **0.18 ± 0.12** | **0.62 ± 0.10** | 0.86 ± 0.05 | 0.00 ± 0.00 |

| Obs noise with scale 0.05 | Handle-pull-side-v2 | Drawer-open-v2 | Plate-slide-back-v2 | Peg-insert-side-v2 | Sweep-v2 | Button-press-wall-v2 | Door-lock-v2 | Push-back-v2 |
|:-------------------------:|:-------------------:|:--------------:|:-------------------:|:------------------:|:--------:|:--------------------:|:------------:|:------------:|
| MTC | **0.95 ± 0.04** | **0.45 ± 0.15** | 0.29 ± 0.09 | **0.00 ± 0.00** | **0.02 ± 0.01** | **0.27 ± 0.04** | **0.76 ± 0.07** | **0.00 ± 0.00** |
| RPC | 0.74 ± 0.07 | 0.18 ± 0.11 | 0.29 ± 0.08 | **0.00 ± 0.00** | **0.02 ± 0.02** | **0.24 ± 0.08** | **0.74 ± 0.04** | **0.00 ± 0.00** |
| SAC | 0.69 ± 0.13 | 0.13 ± 0.08 | **0.46 ± 0.07** | **0.00 ± 0.00** | **0.01 ± 0.01** | **0.16 ± 0.07** | **0.73 ± 0.04** | **0.00 ± 0.00** |

We hope this additional evidence fully convinces you of the significance of our work, and we kindly ask you to consider whether it might warrant an even higher score. Thank you again for your thoughtful feedback and engagement throughout the review process.
Summary: This paper proposes a method that learns compressible policies via a lower bound on the total correlation over a trajectory. This is motivated by aiming to increase robustness by learning simpler policies. In practice, the method trains a recurrent latent state and action predictor and adds the predictability of actions to the reward function. In experiments, the authors show that their method leads to more robust and compressible policies. Claims And Evidence:
* The main claim of the paper is that policies with higher compressibility are more robust to disturbances, and that their method creates more compressible policies. The improved robustness is well supported by experiments with distribution shifts.
* They also claim that it improves robustness to spurious correlations. This claim seems reasonable, but the experiments do not directly evaluate it. The authors write that Gaussian noise is a strong distractor, but that is qualitatively different from an actual spurious correlation / an actual distractor.
* The correlation or predictability of the learned policy is evaluated by compressing trajectories with an off-the-shelf lossless compression method and by visual inspection. A more interpretable way to measure it might be to train predictors for each and evaluate their accuracies, or to actually evaluate the lower bound of the total correlation derived by the authors.

Methods And Evaluation Criteria: As stated above, the main experiments and evaluation are reasonable, but especially the claim of reducing spurious correlation should be investigated more directly or removed. Overall it's reasonable. Showing 90% CIs instead of the usual 95% is surprising, but it is clearly stated in the figure captions so that's fine. Theoretical Claims: I checked the derivation of the proof for the lower bound and did not find any mistakes.
My main concern is that the reward formulation in (4) depends on all previous actions and states in an episode, as well as on the learned predictors. This makes the reward formulation non-Markovian and dependent on the predictor parameters $\phi$ and $\theta$, and the notation as $r^*(s_t,a_t,s_{t+1})$ is questionable; it should be $r^*(s_{1:t},a_{1:t},\theta,\phi)$. The paper claims that this can be "optimized straightforwardly with existing RL methods". This is in no way obvious; it is not clear why we can simply ignore this dependency. Experimental Designs Or Analyses: The main experiments are sound. Some experiments related to the evaluation of correlation could be done more directly, and the evaluation of spurious correlation only with observation noise is insufficient. Supplementary Material: Appendix A.1, B, C Relation To Broader Scientific Literature: The relation to prior methods is discussed appropriately in the paper. Essential References Not Discussed: I am not aware of missing references. Other Strengths And Weaknesses: The presentation/writing is very good. Other Comments Or Suggestions: L432 (right): "goal post" should be "goalpost" Questions For Authors: My main concern is the described non-Markovianness of the reward as well as the non-stationarity of the reward due to the changing predictor parameters. How was this addressed in practice, or why is this not a problem? Maybe I am misunderstanding something. If this can be addressed I am willing to raise my score. Code Of Conduct: Affirmed. Overall Recommendation: 4
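The compression-based predictability measurement this review discusses can be reproduced in a few lines. A minimal sketch, with zlib as the off-the-shelf lossless compressor and a fixed rounding precision (both my own choices, not necessarily the paper's setup), comparing a smooth periodic trajectory against the same trajectory with added jitter:

```python
import zlib

import numpy as np

def compressed_size(trajectory, decimals=3):
    """Round a trajectory to fixed precision, serialize it, and report the
    losslessly compressed size in bytes (smaller = more regular behavior)."""
    rounded = np.round(np.asarray(trajectory), decimals)
    return len(zlib.compress(rounded.tobytes(), level=9))

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
periodic = np.sin(t)                                  # clean, repetitive gait
jittery = np.sin(t) + 0.1 * rng.normal(size=t.shape)  # same gait plus noise
```

A smoother, more repetitive trajectory compresses to fewer bytes than the jittery version of the same gait, which is the sense in which the learned policies are called "compressible".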
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our submission, and acknowledging the soundness of the main experiments, the good presentation, and the empirical results that show more robust and compressible policies.

> the claim of reducing spurious correlation should be investigated more directly or removed

We evaluated the effect of spurious correlations in Appendix C4. We did not add Gaussian noise to the observation, but added additional state dimensions that are not controllable by the actor, but instead follow a fixed Gaussian transition model. These distractors are not correlated with the remaining states, the actions, nor the reward that the agent receives, although by coincidence it might appear that such correlations exist, resulting in spurious correlations. Our results show that MTC achieves higher rewards than RPC and SAC on the Walker Stand task, suggesting that MTC improves robustness to spurious correlations.

> Some experiments related to the evaluation of correlation could be done more directly

Thank you for suggesting evaluating the predictability to more directly assess the total correlation. We trained a t-step-ahead predictive model for t=[3,5,8,10] time steps on datasets collected with the policies learned by the different methods on a locomotion task and a manipulation task. The model was trained to predict the action t steps ahead when given the current action. For all tested time differences, the prediction error of MTC was the smallest among all baselines.
| test log-likelihood (Finger-spin) | t=3 | t=5 | t=8 | t=10 |
|:---------------------------------:|:---:|:---:|:---:|:----:|
| MTC | **0.22** | **0.32** | **0.48** | **0.67** |
| RPC | 0.34 | 0.58 | 0.72 | 0.76 |
| LZ-SAC | 0.96 | 1.31 | 1.48 | 1.53 |
| SAC | 0.78 | 1.13 | 1.47 | 1.57 |

| test log-likelihood (Drawer-open-v2) | t=3 | t=5 | t=8 | t=10 |
|:------------------------------------:|:---:|:---:|:---:|:----:|
| MTC | **-4.84** | **-4.40** | **-4.05** | **-3.35** |
| RPC | -2.62 | -3.05 | 0.14 | 0.17 |
| SAC | -2.37 | -2.07 | -2.7 | -1.7 |

> non-Markovianness of the reward as well as the non-stationarity of the reward

Although we clearly state that our augmented reward depends on the history-based decoder, we agree that the notation should reflect this dependency, and that the implications for the optimization should be discussed, and we will revise our submission accordingly. Our reward function depends on past states and actions to encourage the agent to act consistently with its previous actions. For our experiments, however, we did not provide the past states and actions to the policy, which limits its ability to perform optimally due to the non-Markovianity of its state-action space. We did not provide the history to the policy solely to improve the simplicity of our method and the fairness of the comparisons. However, we performed an additional experiment on the Finger Spin task for the rebuttal to evaluate our approach in a Markovian setting (by using a recurrent policy), which did not result in significant differences.

| Score (mean and 90% CI over 10 seeds) | MTC | MTC-History |
|:-------------------------------------:|:-----------:|:-----------:|
| Performance | **985 ± 2** | **986 ± 3** |
| Trajectories Size in Bytes | **2993 ± 292** | **3031 ± 259** |

We also agree that the non-stationarity of the reward should be discussed, and we will revise the submission accordingly.
Non-stationary rewards are quite common nowadays and are used effectively in different subfields, such as imitation learning and representation learning, and---most relevant for us---by related works such as RPC and LZ-SAC. While establishing convergence guarantees for MTC (and the related works) would be desirable, we unfortunately have to leave this for future work.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for their comprehensive reply. All my concerns were addressed; I'm raising my score (2->4).

---

Reply to Comment 1.1.1: Comment: We greatly appreciate your approval of our work. Thank you!
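The t-step-ahead predictability check described in the rebuttal above can be sketched with a simple linear stand-in for the learned predictive model. This is a hedged illustration only (the authors' actual model, data, and likelihood are not specified here): fit a predictor of the action t steps ahead on the first half of an action sequence, then report held-out mean squared error.

```python
import numpy as np

def t_step_prediction_error(actions, t):
    """Fit a linear predictor a_{i+t} ~ W a_i + b on the first half of an
    action sequence and report held-out mean squared error on the rest.
    A simple stand-in for a learned t-step-ahead predictive model."""
    actions = np.asarray(actions, dtype=float)
    X, Y = actions[:-t], actions[t:]
    n = len(X) // 2
    with_bias = lambda M: np.hstack([M, np.ones((len(M), 1))])
    W, *_ = np.linalg.lstsq(with_bias(X[:n]), Y[:n], rcond=None)
    return float(np.mean((with_bias(X[n:]) @ W - Y[n:]) ** 2))
```

On a smooth, near-periodic action sequence the t-step-ahead error is tiny, while adding jitter to the same sequence makes it much harder to predict, mirroring the rebuttal's finding that more consistent behavior is more predictable.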
Summary: An RL framework is presented that encourages total correlation within trajectories, leading to smoother, more periodic-looking, and more compressible trajectories. This leads to robustness to observation noise, action noise, and changes in dynamics. The proposed algorithm is employed on DeepMind Control environments, where it surpasses related work in expected reward, robustness, and compression. Claims And Evidence: Claims:
- MTC leads to robustness
- MTC leads to less "unnecessary variation", and more periodic trajectories
- MTC leads to compressibility

Good Evidence:
- Robustness evidence presented for one environment
- Observation noise is added and MTC seems clearly more robust
- Action noise is added and MTC seems clearly more robust
- Dynamics are modified and MTC seems clearly more robust
- MTC generally better than other algorithms in DMC tasks

But Questions Arise
- Is MTC better because of total correlation, or because of it being non-Markovian? While it is true that total correlation would induce more periodic behaviour, it is harder for a Markovian policy to exhibit periodicity. We see this trend in this paper as well: SAC is Markovian, LZ-SAC uses a history of some states, RPC uses the complete history but only of states, and MTC uses the complete history of state-action pairs. And the trend in performance is similar from worst to best: SAC, LZ-SAC, RPC, MTC. In other words, my question is: is a good policy (regardless of total correlation) just more periodic, to be learnt whenever the policy is non-Markovian, or does total correlation itself have a significant role to play?
- Robustness claims would be more solid if the other algorithms did not start worse off. A counterpoint can be made that since other algorithms start worse off, the addition of noise in actions/observations/dynamics causes degradation relative to how good a policy was originally.
- The main hypothesis, that a policy that exhibits more periodic behaviour is less brittle, I think needs better substantiation. It might be very true that some domains favor such policies, where there is repetitive behaviour, but there might be others that don't. That does not take away anything from the core contribution of the paper, but the language I feel is a bit exaggerated if we don't have a strong theoretical proof that periodicity <-> robustness. (But I see that compressibility has been proposed as a good thing to have in other related works.)

Methods And Evaluation Criteria: Yes. Popular benchmarks. Very thorough experimentation. Comparison to relevant work. Theoretical Claims: Not in depth. No issues from a surface-level reading except one: I don't understand why there is the term $r(s_T, a_T)$ in equation (3), i.e., the reward of the last state-action pair? Experimental Designs Or Analyses: Did not run any code. The design description in the paper and appendix seems sound. Supplementary Material: No. Relation To Broader Scientific Literature: A good contribution where we desire policies that exhibit more periodic behaviour. Good contribution in continuous control domains. Possibly the next natural step after RPC etc. that the work is compared to. Essential References Not Discussed: none that I am aware of Other Strengths And Weaknesses: Strengths:
- Good, thorough experiments that give empirical evidence of the claims in DMCS tasks.

Other Comments Or Suggestions: I am not so sure about the use of the word "Coherence" for trajectories that show periodic behaviour or more total correlation. But maybe the authors are a better judge of that. But I highly suggest reviewing this term. I don't understand p. 5, col. 2, last paragraph, where it is said that rounding is necessary, as otherwise it leads to high variance in the results. Since rounding ignores the least significant bits, shouldn't it be the case that variance is not much affected by rounding?
Questions For Authors:
- Is periodicity -> robustness, or is it that periodicity in these domains = good policy and therefore high reward and more robustness?
- How much of this is due to the non-Markovian nature, and how much due to the explicit introduction of the total correlation term?

Code Of Conduct: Affirmed. Overall Recommendation: 3
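The reviewer's rounding question above can be checked empirically. A hedged sketch, with zlib and a 3-decimal precision as my own choices: jitter far below control precision makes the low-order mantissa bits of every 8-byte float essentially random, so the raw byte stream barely compresses (and its compressed size varies with how those random bits happen to fall), while rounding first removes the incompressible bits.

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
# a clean periodic gait plus tiny numerical jitter, far below control precision
traj = np.sin(t) + 1e-9 * rng.normal(size=t.shape)

# raw float64 bytes: low-order mantissa bits are essentially random
raw_size = len(zlib.compress(traj.tobytes(), level=9))
# rounding to 3 decimals discards those bits before compression
rounded_size = len(zlib.compress(np.round(traj, 3).tobytes(), level=9))
```

The rounded stream compresses to far fewer bytes, which is consistent with the authors' later explanation that without rounding the compressed size is dominated by digits beyond the precision of the control.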
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our submission and appreciating our contributions and thorough experimentation.

> is MTC better because of total correlation, or because of it being non-Markovian?

While our total correlation objective results in a reward function that depends on the past in order to encourage the agent to perform actions that are consistent with its previous actions, we did not choose a history-based policy in our experiments for a simpler and fairer comparison. We ran additional experiments to test the effect of history-based observations for standard SAC and our method on the Finger Spin task. The results are consistent with non-history-based policies, indicating that the improvements are caused by our total correlation objective, whereas non-Markovianity is not sufficient.

| Score (mean and 90% CI over 10 seeds) | MTC | MTC-History | SAC | SAC-History |
|:-------------------------------------:|:-----------:|:-----------:|:----------:|:-----------:|
| Performance | **985 ± 2** | **986 ± 3** | 955 ± 18 | 951 ± 23 |
| Trajectories Size in Bytes | **2993 ± 292** | **3031 ± 259** | 5156 ± 526 | 5259 ± 442 |

> Robustness claims would be more solid if the other algorithms did not start worse off

Maintaining high performance in the face of noise or dynamics mismatch is significantly harder than maintaining low performance. Indeed, the performance of the worst possible policy could not degrade when subjecting it to observation noise. Instead, we argue that it is more relevant to consider the absolute performance in such test settings, rather than considering the changes relative to the training performance.

> The main hypothesis, that a policy that exhibits more periodic behavior is less brittle, needs better substantiation.

We want to clarify that our primary interest is in learning ***simple*** behavior, which may, but does not need to, manifest in periodic behavior.
We tested the hypothesis that simpler behavior tends to be less brittle on various periodic and non-periodic tasks, and did not encounter detrimental effects. We did not intend to claim that periodicity always increases robustness, and will carefully revise our submission to improve clarity. We will also revise the introduction to better motivate our hypothesis: During training, the reinforcement learning agent, for example a robot, typically observes many slight variations in the state, for example due to sensor noise or unmodelled dynamic effects. In some cases, it can be crucial for the agent to drastically adapt its behavior to react to these variations, but sometimes they can be safely ignored. We aim to bias the agent towards ignoring such variations whenever doing so does not seriously decrease the expected return, and thereby follow Newton's first rule from Book III of the Principia: "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances". Intuitively, we expect a given behavior that performed well for previous variations to also perform well for future variations. We introduce this inductive bias by means of the additional objective of maximizing the total correlation within the trajectory produced by the agent. This total correlation corresponds to the amount of information that we can save by using a joint encoding of all (latent) states and actions within trajectories, compared to compressing all time steps independently. By maximizing total correlation, the agent is encouraged to produce compressible and predictable trajectories, and is thereby biased towards open-loop behavior such as clean periodic gaits, without being prevented from performing adaptations when necessary.

> why is there the term $r(s_T, a_T)$ in equation (3)?

Thank you for spotting this mistake. The term belongs outside of the summation.
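For concreteness, the compression view of total correlation described above can be written out. In the notation we assume here ($z_t$ latent states, $a_t$ actions, horizon $T$; notation ours, not necessarily the paper's), the total correlation of a trajectory is the gap between encoding each time step independently and encoding the trajectory jointly:

$$
\mathrm{TC}(z_{1:T}, a_{1:T}) \;=\; \sum_{t=1}^{T}\big[H(z_t) + H(a_t)\big] \;-\; H(z_1, a_1, \ldots, z_T, a_T).
$$

This quantity is non-negative and equals zero exactly when all time steps are independent, so maximizing it rewards trajectories whose joint encoding is much cheaper than per-step encoding, i.e. compressible and predictable behavior.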
> I am not so sure about the use of the word "Coherence" for trajectories that show periodic behaviour or more total correlation

Thank you for this feedback. We used coherence mostly interchangeably with consistency, but will only refer to consistency in the revision.

> [why rounding is necessary]

From the point of view of lossless compression, the least significant bits are just as relevant as the most significant ones. When directly writing the floating point numbers of the simulator into a file, most of the bits in that file are used to encode digits that are beyond the precision of the control. When compressing such a file, the resulting file size will largely depend on how effectively those essentially random bits can be compressed, which significantly increases the variance of the resulting file size.

> Is periodicity -> robustness?

We do not believe that periodicity implies robustness and did not intend to make such a claim. We hypothesize that an inductive bias towards simpler behavior that avoids ***unnecessary*** adaptations can often improve robustness.

---

Rebuttal Comment 1.1: Comment: Thank you for these experiments and other experiments that I see in other rebuttals, and also for responding to our concerns with clarifications and explanations. I am increasing my score. Some questions still remain: In SAC-History, are the rewards non-Markovian as given in MTC, or just the observations? I do not want to push for further experiments, but this would further solidify the claims or give a direction for further investigation. About robustness, indeed you are right that a bad policy will not be made worse by further addition of noise, and it is easier to maintain the level of a worse policy than a better policy, but I am hesitant to fully accept the claim in fig 2-left, for example. (Maybe we can imagine these to be comparable in the small region where all are performing well.) An increase in noise strength of 0.02 results in an average reward drop of 0.2.
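The rounding point above can be illustrated with a small, self-contained sketch (our own illustration, not the authors' code): a smooth, control-like trajectory with tiny sensor-like noise is nearly incompressible when its raw float64 bytes are stored, because the low-order mantissa bits are essentially random, but it compresses well once rounded to the precision relevant for control.

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0 * np.pi, 10_000)
# Smooth control-like signal plus small "sensor noise" in the low bits.
traj = np.sin(t) + 0.01 * rng.standard_normal(t.size)

raw_size = len(zlib.compress(traj.tobytes(), level=9))
rounded_size = len(zlib.compress(np.round(traj, 2).tobytes(), level=9))

# The raw bytes are dominated by essentially random mantissa bits,
# so rounding first makes the compressed size far smaller and less noisy.
print(raw_size, rounded_size)
```

The rounding precision (2 decimals) is an arbitrary choice for illustration; the point is only that unrounded low-order bits dominate the compressed file size, which is why the rebuttal rounds trajectories before measuring them.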
But then again, there is no reason there should be a linear relationship between noise strength and reward, even in this region. But a case for robustness is more strongly made when one method drops off to near-random / random quickly (e.g. in fig2-center) but the robust one does not.

---

Reply to Comment 1.1.1: Comment:

> [Rewards for SAC-History]

We indeed only used the original rewards for SAC-History because we did not understand which non-Markovian reward we could provide. We now understand that you are suggesting to use the MTC reward, which would lead to an ablation of MTC that does not backpropagate the lower bound directly into the policy and encoder, but only uses its effects on the rewards to come. We agree that such an ablation would be interesting, in particular as the resulting algorithm would be better decoupled from the underlying RL algorithm. However, we argue that such an experiment would not be suitable to test whether the improvements are caused by our total correlation objective, since the MTC reward is derived from total correlation maximization after all. In MTC, the total correlation regularizer has two effects: 1) modifying the reward function and 2) directly modifying the gradient to the policy & encoder during the policy improvement step. So far, we have only demonstrated that both effects in total positively impact the performance, robustness, compressibility and predictability of the resulting behavior. As our lower bound with history-based prediction models causes the desired effects, we believe that it would also be beneficial when only using it for 1) or 2). But, of course, we would need to investigate this in future work. For the current submission, we focused on the algorithm that is consistent with our derivations. We argue that an algorithm that is derived from a principled objective is desirable by providing more insights into the underlying mechanics (e.g.
biasing towards history-based predictions improves total correlation) and may also lead to more stable algorithms (updating all models with respect to the same bounded objective ensures convergence under the assumption---which admittedly is strong in the RL setting---that this objective is indeed improved during every update). However, we agree that only using the lower bound for computing the reward could perform similarly well in practice, which would be of great interest to the community, for example, by providing a simple modification to PPO (which is almost exclusively used for real-robot locomotion) that can induce simpler, more robust behavior, without manual reward engineering.

> there is no reason there should be a linear relationship between noise strength and reward

We agree that there is no reason to expect the relationship between noise strength and reward to be linear. A constant increase in noise could result in moderate effects relative to the zero-noise level, larger effects when applied on top of some existing noise (due to compounding effects), and negligible effects when applied at a very large noise level where the policy is not able to perform reasonably anyway. Hence, it is difficult to compare the robustness to noise when the different policies already start at different reward levels. However, we noticed that MTC, RPC and SAC perform very similarly on Walker-Stand and investigated the robustness while focusing on this single environment. As shown below, robustness increases with respect to observation noise, action noise as well as mass change.
| Obs noise | Scale=0.0 | Scale=0.1 | Scale=0.2 | Scale=0.3 | Scale=0.4 | Scale=0.5 |
|:------------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|:------------:|
| MTC | **983 ± 2** | **984 ± 1** | **980 ± 2** | **975 ± 2** | **960 ± 7** | **924 ± 12** |
| RPC | **980 ± 5** | **981 ± 2** | **978 ± 3** | 961 ± 8 | 911 ± 18 | 829 ± 24 |
| LZ-SAC | 977 ± 2 | 976 ± 2 | 972 ± 3 | 956 ± 13 | 898 ± 24 | 779 ± 35 |
| SAC | **985 ± 2** | **983 ± 1** | **975 ± 8** | 945 ± 21 | 895 ± 29 | 799 ± 33 |

| Action noise | Scale=0.0 | Scale=0.2 | Scale=0.4 | Scale=0.6 | Scale=0.8 | Scale=1.0 |
|:------------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|:------------:|
| MTC | **983 ± 2** | **982 ± 1** | **977 ± 2** | **967 ± 5** | **893 ± 15** | **718 ± 21** |
| RPC | **980 ± 5** | **980 ± 2** | **976 ± 3** | **960 ± 4** | 850 ± 15 | 661 ± 12 |
| LZ-SAC | 977 ± 2 | 975 ± 2 | 961 ± 8 | 790 ± 25 | 588 ± 19 | 459 ± 15 |
| SAC | **985 ± 2** | **981 ± 2** | **976 ± 4** | **949 ± 15** | 842 ± 22 | 666 ± 16 |

| Mass change | Scale=0.1 | Scale=0.5 | Scale=1.0 | Scale=1.5 | Scale=2.0 |
|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|
| MTC | **935 ± 6** | **982 ± 1** | **983 ± 2** | **962 ± 10** | **767 ± 34** |
| RPC | 850 ± 24 | **978 ± 3** | **980 ± 5** | **963 ± 6** | **738 ± 32** |
| LZ-SAC | 706 ± 28 | 970 ± 3 | 977 ± 2 | 938 ± 20 | **708 ± 52** |
| SAC | 858 ± 28 | **979 ± 2** | **985 ± 2** | **960 ± 7** | **759 ± 33** |

Thank you for your thoughtful feedback and engagement!
Summary: The paper proposes an extension to standard reinforcement learning by introducing a regularization objective that maximizes the total correlation across latent state representations and actions in an agent’s trajectory. It derives a variational lower bound on this total correlation, which is incorporated into the soft actor-critic framework to jointly optimize the policy, state encoder, and prediction models. Experimental evaluations on simulated robotic control tasks show that this approach yields more coherent, periodic, and compressible trajectories, leading to improved performance and enhanced robustness against observation noise, action perturbations, and dynamics changes. Overall, the paper argues that embedding total correlation as an inductive bias can foster simpler and more robust behaviors in reinforcement learning agents. Claims And Evidence: Most of the claims are supported by the experiments in simulation, which show improvements in consistency, robustness, and performance. However, one issue is that the method uses a lower bound for total correlation that is always negative, making it unclear how accurately it reflects the true value. Additionally, since the evidence comes only from simulated environments, it's not fully clear if the same benefits would apply in real-world situations. Methods And Evaluation Criteria: Most claims are based on DMControl results. Results should be evaluated across a wider range of benchmarks. Theoretical Claims: Did not check. Experimental Designs Or Analyses: Experiments are insufficient, as claims are entirely supported by DMControl experiments. The authors are encouraged to apply their approach to a wider variety of environments. Supplementary Material: Did not check. Relation To Broader Scientific Literature: The method attempts to imbue existing RL algorithms with priors that encourage temporal consistency and periodicity of movements.
However, optimal policies in DMC environments exhibit periodic behavior regardless of additional priors. Unclear how total correlation affects this. Essential References Not Discussed: Algorithms such as TD-MPC use BYOL losses to learn latent representations of the environment. Comparison to this literature would be relevant. Other Strengths And Weaknesses: Strengths: - Introduces a creative method that combines total correlation with reinforcement learning to improve policy robustness and consistency. Weaknesses: - Results limited to DMC. Other Comments Or Suggestions: None for now. Questions For Authors: None for now. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for providing valuable comments and acknowledging the novelty and empirical performance of our approach.

> Method uses a lower bound for total correlation that is always negative, making it unclear how accurately it reflects the true value.

As stated under limitations, our lower bound is not meant for approximating the total correlation, but it is still effective for optimization, as demonstrated in our experiments that show simpler, more compressible, and more robust behavior. To further investigate the ability of our method to maximize the total correlation, we conducted additional experiments on the Finger Spin task for this rebuttal. Namely, we collected data sets using the policies that have been learned with the different methods, and trained a model to predict the action t steps ahead, based on the current action. As indicated in the following table, our lower bound objective significantly increases the predictability, further suggesting that MTC is effective in increasing the consistency, predictability and thereby total correlation of the emerging behavior.

| test log-likelihood | t=3 | t=5 | t=8 | t=10 |
|:--------------------------:|:---------------:|:---------------:|:---------------:|:-----------:|
| MTC | **0.22** | **0.32** | **0.48** | **0.67** |
| RPC | 0.34 | 0.58 | 0.72 | 0.76 |
| LZ-SAC | 0.96 | 1.31 | 1.48 | 1.53 |
| SAC | 0.78 | 1.13 | 1.47 | 1.57 |

> Not fully clear if the same benefits would apply in real-world situations.

Our experiments show that policies that are trained with our regularizer exhibit a simpler behavior that is more robust to mismatches in the system dynamics. We argue that these properties are highly desirable when attempting to transfer these policies to a real robot. However, we unfortunately were not able to perform experiments on a real robot.

> The authors are encouraged to apply their approach to a wider variety of environments.
We now additionally evaluated our approach on 5 more tasks from Metaworld. Also on these non-periodic manipulation tasks, MTC outperforms SAC and RPC in terms of average success rate, demonstrating the benefits of maximizing the total correlation for solving manipulation tasks. All in all, our experiments are quite diverse, spanning periodic and non-periodic, vision-based and non-vision-based tasks; they closely follow the testbeds of the most related works, and show strong performance in all of these settings.

| Success rate at 500K steps (mean and 90% CI over 10 seeds) | Handle-pull-side-v2 | Drawer-open-v2 | Plate-slide-back-v2 | Peg-insert-side-v2 | Sweep-v2 |
|:----------------------------------------------------------:|:-------------------:|:---------------:|:-------------------:|:------------------:|:---------------:|
| MTC | **0.98 ± 0.02** | **0.68 ± 0.18** | **0.99 ± 0.02** | **0.06 ± 0.05** | **0.28 ± 0.14** |
| RPC | 0.79 ± 0.10 | 0.23 ± 0.21 | **0.98 ± 0.03** | 0.00 ± 0.00 | **0.23 ± 0.14** |
| SAC | 0.76 ± 0.19 | 0.22 ± 0.21 | **0.93 ± 0.07** | 0.00 ± 0.00 | **0.19 ± 0.17** |

> However, optimal policies in DMC environments exhibit periodic behavior regardless of additional priors. Unclear how total correlation affects this.

While DMC tasks are characterized by periodic motions, our results in Fig.1, Fig.3, Fig.9 and Fig.10 clearly show that maximizing the total correlation improves the consistency and periodicity of behaviors compared to leading baselines. Furthermore, we are primarily interested in learning ***simple*** behavior, which does not necessarily imply periodicity, for example, for manipulation tasks.

> [Comparison to literature such as TD-MPC and BYOL]

We now provide the following discussion of these two approaches in our Related Work Section: Our approach is also related to previous approaches that extract temporally consistent representations from observations by learning latent dynamics models, such as TD-MPC and BYOL.
Different from these approaches, which only consider temporal consistency in state representations, our total correlation objective aims to maximize the consistency across whole trajectories of state representations and actions. As shown in our experiments (Fig.5), additionally enforcing consistency within action sequences by learning the action prediction model improves performance in the presence of environmental perturbations.
Anytime-Constrained Equilibria in Polynomial Time
Accept (poster)
Summary: This paper introduces anytime constraints to the Markov game setting and develops a comprehensive theory of anytime-constrained equilibria (ACE). The authors present three main contributions: (1) a computational characterization of feasible policies, (2) a fixed-parameter tractable algorithm for computing ACE, and (3) a polynomial-time algorithm for approximately computing ACE. The work also develops efficient algorithms for action-constrained Markov games. Claims And Evidence: See the theoretical claim section below. The experimental evaluations are missing. Methods And Evaluation Criteria: Equilibrium criteria are used for the evaluation. Theoretical Claims: 1. Extension of anytime constraints to multi-agent reinforcement learning 2. Computational characterization of feasible policies using AND/OR trees 3. Fixed-parameter tractable algorithm for computing subgame-perfect ACE 4. Polynomial-time algorithm for approximately computing ACE 5. Development of efficient algorithms for action-constrained Markov games Experimental Designs Or Analyses: The experiments are not provided in the paper. It would be beneficial if the authors could provide at least some basic experiments that demonstrate the usefulness of the method and compare it with other state-of-the-art methods. Supplementary Material: I have read the proofs in the appendix; I am largely convinced of their correctness, however, there might be some points that I have missed. Relation To Broader Scientific Literature: The paper is well written and is the first to provide anytime constraints for multi-agent systems. The idea is therefore novel, and the authors did a fairly good job of clearly distinguishing the work from the existing literature. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: See the theoretical claims section above. Weakness: 1. Lack of experimental results or case studies 2.
Limited discussion on the practical implications and potential applications 3. Assumption of finite support for cost distributions may be restrictive for some scenarios Other Comments Or Suggestions: The paper is generally well-written and structured. The introduction provides a clear motivation for the work and situates it within the existing literature. The technical sections are detailed and rigorous. However, it is desirable to 1. More intuitive explanations or examples to illustrate key concepts 2. A dedicated section on potential applications or empirical evaluations 3. A more extensive discussion of limitations and future work directions Overall, the paper presents a significant theoretical contribution to the field of constrained multi-agent reinforcement learning. However, it could be strengthened by including practical demonstrations and a more thorough discussion of real-world implications. Questions For Authors: 1. How does the performance of the proposed algorithms compare to existing methods in practice? 2. Can the approach be extended to handle continuous state and action spaces? 3. How sensitive are the algorithms to the choice of parameters, particularly in the approximate case? 4. Are there specific real-world scenarios where this approach provides significant advantages over existing methods? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback! We will make sure to improve the paper using the suggestions mentioned. Please see our general rebuttal in the rebuttal section for Reviewer Gjeu, which addresses your concerns about empirical evaluation and existing methods. For your other concerns, please see below. **[Q1 + Q4]** These are addressed thoroughly in the general rebuttal, but we also mention here that there are no existing methods to compare to, as our work is the very first to design any algorithm whatsoever for anytime-constrained Markov games and for action-constrained Markov games. **[Q2]** Yes, the methods do extend, but the analysis is more involved, so we have deferred it to future work. **[Q3]** Which parameters is the reviewer referring to here? The only parameter in the control of the user is the violation parameter $\epsilon$. In terms of $\epsilon$, Corollary 6.7 and 6.8 indicate the running time scales as $1/\epsilon^n$ where $n$ is the number of agents. We conjecture this scaling is unavoidable due to the usual curse of multi-agents. Thanks again, and please let us know if there is anything else we can clarify!
Summary: This paper introduces the concept of anytime-constrained equilibria in the context of constrained Markov games, where agents must adhere to strict budget constraints at every time step. The authors extend the notion of anytime constraints from single-agent settings to multi-agent settings. The authors provide a method to determine whether a feasible policy exists under anytime constraints. They propose an FPT algorithm for computing subgame-perfect ACE, which runs in polynomial time when the cost precision is small. Finally, the authors also present an algorithm for computing approximately feasible ACE in polynomial time, provided the maximum supported cost is bounded by a polynomial factor of the budget. Claims And Evidence: Yes, all the claims are well justified. Methods And Evaluation Criteria: No experiments are included in this paper. Theoretical Claims: The proofs look good to me. I checked the proof of the main theorem (Theorem 4.4). Experimental Designs Or Analyses: N/A Supplementary Material: The proof of the main theorem. Relation To Broader Scientific Literature: The paper studies cMDPs and constrained MARL, introducing concepts like ACE and providing efficient algorithms for their computation. By addressing the challenge of enforcing constraints at every time step in multi-agent settings, the paper contributes to the broader goals of safe and reliable AI, bridging the gap between theoretical research and practical applications. Its connection to prior work on approximation methods and equilibrium concepts further solidifies its relevance to the scientific literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper introduces ACE for multi-agent systems and provides rigorous proofs for their existence, NP-hardness, and efficient computation via FPT and approximation algorithms.
However, the paper lacks empirical evaluation, which limits its practical validation, and the reliance on low-cost precision for the FPT algorithm potentially restricts scalability. The complexity can also be very large, since $D_G$ could be significantly large. Other Comments Or Suggestions: The paper defines feasibility via realizable traces, ensuring all histories satisfy budget constraints at every step, which is ideal for safety-critical applications. An alternative is expectation constraints, where only the expected cost must stay within bounds. The authors could discuss the trade-offs between these definitions and their suitability for different applications. Questions For Authors: The paper assumes the existence of equilibria but does not discuss uniqueness. Are there conditions under which the anytime-constrained equilibrium is unique, and how does non-uniqueness affect the proposed algorithms? The paper briefly mentions handling multiple constraints. How does the complexity of the algorithms scale with the number of constraints, and are there specific challenges when constraints are conflicting? For infinite-horizon problems, how would the feasibility tree and backward induction approach need to be adapted, and what are the theoretical guarantees in this setting? Code Of Conduct: Affirmed. Overall Recommendation: 3
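The contrast raised above between the two constraint notions can be made explicit. Writing $c_t$ for the per-step cost, $B$ for the budget, and $H$ for the horizon (notation ours), an anytime constraint requires every realized partial sum to respect the budget, while an expectation constraint only bounds the average total cost:

$$
\Pr\!\Big(\textstyle\sum_{k=1}^{t} c_k \le B \ \text{ for all } t \le H\Big) = 1
\qquad \text{vs.} \qquad
\mathbb{E}\Big[\textstyle\sum_{t=1}^{H} c_t\Big] \le B .
$$

In particular, a policy satisfying the expectation constraint can still exceed the budget with positive probability on some realized trace, which is why the two notions behave very differently in safety-critical applications.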
Rebuttal 1: Rebuttal: Thank you for your feedback! We will make sure to improve the paper using the suggestions mentioned. Please see our general rebuttal in the rebuttal section for Reviewer Gjeu, which addresses several of your concerns, including practical applicability. For your other concerns, please see below. **[Scalability]** First, we would like to emphasize that the FPT result can be restrictive if the precisions are high, but this is a fundamental limit in the problem itself rather than a flaw in our particular algorithm design. This potential scalability issue is exactly why we then derive provable approximation algorithms that run in polynomial time regardless of the precision of the input numbers. We also note that this is the very first set of algorithms for the problem, and we hope our insights will lead to faster algorithms in the future. **[Uniqueness]** To clarify, we do not assume existence in this paper. In fact, developing an algorithm to determine the existence of an ACE for an acMG is the primary purpose of section 3, which culminates in Algorithm 1. In terms of uniqueness, we do not have any characterization for the uniqueness of ACE; to our knowledge, a characterization of uniqueness is an open question for standard CCE as well. Nevertheless, the non-uniqueness of ACE is not an issue for our algorithms. **[Multiple Constraints]** Handling multiple constraints is immediate by adding dummy agents that capture each constraint. Alternatively, one could think of each agent as keeping a cumulative cost vector for each of its constraints. Either way, whether the constraints conflict or not only affects the existence of a feasible policy, not our algorithmic approach. The complexity of our methods then scales exponentially with the number of constraints instead of the number of agents.
This seems to be a fundamental bottleneck, as recent works have shown that anytime feasible policies are NP-hard to even find approximately when the number of constraints is too large, reflecting the standard curse of multi-agents. **[Infinite Discounting]** The infinite discounted setting is immediate just by using the standard trick of effective horizon. We can truncate the game at the point where the future discounted cost is at most $\epsilon/2$ for all players and then run our approximation algorithm with parameter $\epsilon/2$. This guarantees us a solution that only violates the budget by $\epsilon$ as before and retains polynomial complexity since the discounting always ensures this effective horizon will be polynomial-sized. Thus, the infinite discounted case enjoys the same theoretical claims. Thanks again, and please let us know if there is anything else we can clarify!
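The effective-horizon truncation sketched in the reply above follows the standard geometric-tail bound. Assuming per-step costs bounded by $c_{\max}$ and discount factor $\gamma < 1$ (notation ours), the discounted cost remaining after step $T$ satisfies

$$
\sum_{t=T}^{\infty} \gamma^{t} c_{\max} \;=\; \frac{\gamma^{T} c_{\max}}{1-\gamma} \;\le\; \frac{\epsilon}{2}
\quad \Longleftarrow \quad
T \;\ge\; \frac{\log\!\big(2 c_{\max} / (\epsilon (1-\gamma))\big)}{\log(1/\gamma)},
$$

so the truncation point grows only polynomially in $1/(1-\gamma)$ and logarithmically in $1/\epsilon$, consistent with the claimed polynomial complexity of the truncated game.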
Summary: The paper extends anytime constraints to the Markov game setting and the corresponding solution concept of anytime-constrained equilibrium (ACE). The authors provide: (1) a computational characterization of feasible policies, (2) a fixed-parameter tractable algorithm for computing ACE, and (3) a polynomial-time algorithm for approximately computing ACE. The approximation guarantees are the best possible unless P = NP. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: N.A. Theoretical Claims: I went through the correctness of many proofs but not in depth. Experimental Designs Or Analyses: N.A. Supplementary Material: I went through the supplementary material but not in depth. Relation To Broader Scientific Literature: The paper studies the solution concept of anytime-constrained equilibria in constrained Markov games. The key contributions of the paper in the context of related work are based on the definition of anytime-constrained equilibria, and essentially extend to the multi-agent setting a previous work [1], which studies single-agent constrained MDPs. The proposed constrained Markov game setting differs from the more established safe MARL setting (e.g., see [2]) introducing an extra cost function with anytime aggregated cost constraints, instead of explicitly using constraints for the value function. In the context of the proposed setting, the main results of the paper are interesting and novel. [1] McMahan, Jeremy, and Xiaojin Zhu. "Anytime-constrained reinforcement learning." International Conference on Artificial Intelligence and Statistics. PMLR, 2024. [2] Chen, Ziyi, Shaocong Ma, and Yi Zhou. "Finding correlated equilibrium of constrained markov game: A primal-dual approach." Advances in Neural Information Processing Systems 35 (2022): 25560-25572. Essential References Not Discussed: The authors have included the important references related to their work. 
Other Strengths And Weaknesses:

### Other strengths

- The paper is well-written and easy to follow.
- The paper provides an efficient computation for action-constrained Markov games, which can be of independent interest.
- The authors use interesting, and different, techniques, as well as non-trivial reductions to prove the main results.
- The main results hold for more general settings (e.g., the infinite horizon setting)

### Weaknesses

- The anytime-constrained equilibria studied in the paper are not a well-established solution concept in the literature.

Other Comments Or Suggestions:
- It may be helpful for the authors to provide a table with related settings in constrained Markov games and similar solution concepts.

Questions For Authors:
- What is the relation between the anytime-constrained equilibrium and the more established constrained equilibria of [1]?
- What can we say about Markovian/non-Markovian and stationary/non-stationary anytime equilibria in the proposed constrained Markov game setting?

[1] Chen, Ziyi, Shaocong Ma, and Yi Zhou. "Finding correlated equilibrium of constrained markov game: A primal-dual approach." Advances in Neural Information Processing Systems 35 (2022): 25560-25572.

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback! Please see our general rebuttal below, which addresses the relevance of anytime constraints in the literature and comparisons to standard expectation constraints. We also provide additional commentary below the general rebuttal.

--- *General Rebuttal* ---

We appreciate the reviewers' thoughtful feedback and would like to address the general concerns regarding the relevance of the ACE concept and the absence of empirical evaluation. **[Anytime Constraints Literature]** Anytime and almost-sure constraints were introduced to the Constrained Reinforcement Learning (CRL) literature to handle safety and resource management tasks and have received much recent attention. Anytime constraints are important to many applications that we mentioned in the introduction, including the popular settings:

1. Self-driving cars with fuel and safety constraints,
2. Autonomous rescue vehicle teams for disaster relief scenarios with safety and rescue constraints.

Observe that both of these applications, in addition to many others, are naturally multi-agent settings, yet previous works discussing these applications have only designed algorithms for the single-agent setting. This discrepancy between the motivating applications and those works' algorithms motivated our investigation of anytime constraints in the multi-agent setting. Even though previous works have already motivated anytime-constrained multi-agent applications, we are the first to formalize and develop algorithms for these constraints in the multi-agent context. We would be happy to add more motivating applications from previous works if the reviewers feel this would be beneficial! **[Empirical Evaluation]** Being a work on theoretical foundations, we excluded empirical evaluations for two key reasons. 1.
First, the main purpose of our work is to address the problem's intrinsic computational complexity: "For what class of cMGs can ACE be computed (approximately) in polynomial time?" This mirrors the question posed in the original paper on anytime constraints for the single-agent setting [1], allowing us to fairly compare the computational complexity of the single-agent and multi-agent settings. In particular, we showed that while exact computation is harder in the multi-agent setting, approximate computation is not much harder. Importantly, we emphasize that establishing worst-case complexity bounds requires mathematical proof; empirical evaluation cannot establish such bounds. 2. Second, we designed the first-ever algorithm for anytime-constrained Markov games. Consequently, there are no previous works to compare against. The most similar setting to ours is the classical expectation-constrained Markov game, but anytime constraints and expectation constraints are fundamentally different, both structurally and computationally [1]. Notably, expectation-constrained policies can arbitrarily violate anytime constraints and can be computed in polynomial time, whereas anytime-constrained policies are NP-hard to compute. Although we do not perform any empirical evaluation, we acknowledge their importance and hope our theoretical insights will be beneficial for the design of practical algorithms in the future. **[Our Contributions]** We would like to emphasize our theoretical contributions to the literature. Not only is our algorithm the *first ever* algorithm for anytime-constrained Markov games, but it is also provably *optimal* in terms of approximation guarantees. On the path to creating this algorithm, we developed new insights about anytime-constrained policies through the notion of realizability trees and established fixed-parameter tractability bounds for anytime-constrained Markov games. 
We also developed the first-ever polynomial-time algorithm for solving action-constrained Markov games that we use as a subroutine in our main algorithm. Our results imply the complexity of approximating anytime-constrained Markov games is at most polynomially larger than for single-agent anytime-constrained MDPs. **References** [1] "Anytime-Constrained Reinforcement Learning", Jeremy McMahan and Xiaojin Zhou. AISTATS 2024. --- *End of General Rebuttal* --- **[Q1]** As mentioned in the general rebuttal, these two constraint types are generally incomparable. In terms of applications, anytime constraints are preferred in safety contexts since an expectation-constrained policy could be arbitrarily unsafe under some realizations. **[Q2]** Generally, non-Markovian policies are not feasible for anytime constraints [1]. The policies we construct are augmented, and so are simply compact history-dependent policies. Similarly, non-stationarity is required for finite-horizon settings [1], but stationary ACEs are possible in infinite discounted settings (though again Markovian only in the augmented space). Thank you for your feedback, and please let us know if you have any other questions!
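To make the Q1 distinction concrete, here is a small illustrative sketch (our own toy example, not from the paper; the cost trajectories and budget are invented): an anytime constraint must hold at every step of every realization, while an expectation constraint only bounds the average cumulative cost, so an expectation-feasible policy can still violate the anytime constraint on some realizations.

```python
# Toy contrast between anytime and expectation constraints.
# Trajectories, probabilities, and the budget are invented for illustration.

def satisfies_anytime(costs, budget):
    """Anytime constraint: cumulative cost within budget at EVERY step."""
    total = 0.0
    for c in costs:
        total += c
        if total > budget:
            return False
    return True

def satisfies_expectation(trajectories, probs, budget):
    """Expectation constraint: only the AVERAGE total cost is bounded."""
    expected = sum(p * sum(costs) for costs, p in zip(trajectories, probs))
    return expected <= budget

budget = 5.0
# Two equally likely cost realizations under some fixed policy.
trajectories = [[0.0, 0.0, 0.0], [3.0, 3.0, 3.0]]
probs = [0.5, 0.5]

print(satisfies_expectation(trajectories, probs, budget))  # True: E[cost] = 4.5
print(satisfies_anytime(trajectories[1], budget))          # False: 3 + 3 > 5
```

The second realization exceeds the budget after two steps, so the policy is expectation-feasible yet anytime-infeasible, matching the point that expectation-constrained policies can violate anytime constraints.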
Improving Out-of-Distribution Detection with Markov Logic Networks
Accept (poster)
Summary: This paper proposes a novel framework for improving out-of-distribution (OOD) detection using Markov Logic Networks (MLNs), which combine probabilistic reasoning with human-interpretable logical constraints. The approach addresses limitations of traditional OOD detectors, such as reliance on superficial statistical patterns and lack of explainability. The approach is empirically validated across benchmarks, demonstrating superior performance and computational efficiency. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable, as this paper does not put forward any theoretical claims or provide corresponding proofs. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. This submission does not provide supplementary materials in zip format. The Appendix section contains some analysis of the proposed method and some additional experimental results, both of which I have checked. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the fields of out-of-distribution (OOD) detection and Markov Logic Networks (MLNs). The authors propose using MLNs to combine first-order logic constraints with probabilistic reasoning, allowing the model to incorporate human-understandable semantic constraints into the OOD detection process. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The authors propose using Markov Logic Networks (MLNs) to combine first-order logic constraints with probabilistic reasoning. This allows the model to incorporate human-understandable semantic constraints (e.g., "stop signs are red and octagonal") into the detection process, improving explainability and enabling the integration of prior knowledge. 2. The framework combines MLN-derived semantic scores with traditional neural representation-based detectors through score normalization and multiplication. 
This hybrid approach leverages both semantic plausibility and neural pattern recognition, significantly boosting performance. Also, the authors propose an algorithm to automatically learn logical constraints from data when prior knowledge is unavailable. The greedy search strategy selects constraints that maximize detection performance while regularizing complexity, ensuring robustness against overfitting. 3. Extensive experiments on datasets like GTSRB (traffic signs) and Celeb-A (face attributes) demonstrate that MLN-augmented detectors achieve higher AUROC and lower FPR95 scores compared to standalone methods. Weaknesses: 1. The inline formula in Line 181 is missing a label and fails to clearly explain how the specific values are calculated. 2. Section 4 needs to be reorganized and polished, as some sections lack clarity. For example, the phrase "maximizes the weighted sum of the discriminative power J" leaves readers unclear about the meaning of "discriminative power." 3. There is a lack of explanation and transition for the interpretation of C(\phi) in Eq. (8). For example, why we need to penalize the complexity of a solution candidate, how is it calculated, and what is its relationship to Eq. (9)? 4. The authors propose a greedy search strategy outlined in Alg. 1 to solve Eq. (8). However, it is unclear why the strategy can solve the optimization in Eq. (8). 5. In Section 1, it mentions that “neuro-symbolic approaches for OOD detection have shown promise in addressing these limitations”. This suggests that the work represents an incremental advance over prior efforts by employing a different model to express logical rules in the context of OOD detection. 6. Additionally, it is recommended to provide a complete algorithmic workflow rather than focusing solely on the algorithmic steps of a minor module. 
Alternatively, present an overall algorithmic framework first, and if the detailed steps of a specific component are complex, include supplementary algorithmic details for clarity. Other Comments Or Suggestions: Please refer to the weaknesses above. Questions For Authors: 1. In Line 274, it mentions that “Additionally, we train a DNN for the ID-predicate”. The training methodology for this component is not described, and it remains unclear whether it follows the same training paradigm as the previously mentioned network. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for the comprehensive and constructive feedback. Below, we address the specific points raised by the reviewer. ### Comment 2 > the phrase "maximizes the weighted sum of the discriminative power J" leaves readers unclear about the meaning of "discriminative power." The term "discriminative power" is an established concept in machine learning and refers to the classifier's ability to distinguish between classes - in our case, ID and OOD. For binary classification, common performance measures include accuracy and AUROC, with AUROC being particularly prevalent for OOD detection tasks. This definition was explicitly stated immediately following the cited phrase: > "As a measure of the discriminative power of the resulting OOD detector, we use the AUROC of the trained detector." A formal description of AUROC follows this explanation. ### Comment 3 > why we need to penalize the complexity of a solution candidate, how is it calculated, and what is its relationship to Eq. (9)? The complexity penalty serves as regularization to avoid overfitting by limiting model complexity. As described in Algorithm 1, we achieve this by only adding rules that enhance performance by at least a threshold $\delta_{min}$. Appendix A formally demonstrates that this regularization corresponds to using the number of rules $| \varphi |$ as a measure of complexity. We agree with the reviewer that this relationship could be clarified further and will explicitly include this clarification in a revised manuscript. Figure 4 in our ablation study illustrates that omitting the complexity penalty leads to models with high complexity (many rules) and decreased generalization performance. Eq. (9) is the definition of the AUROC, which is the measure of discriminative power, $J$, that we optimize for. ### Comment 4 > The authors propose a greedy search strategy outlined in Alg. 1 to solve Eq. (8). 
However, it is unclear why the strategy can solve the optimization in Eq. (8). Indeed, solving Equation (8) exactly is computationally intractable, as we would have to evaluate $2^{702}$ solutions. Our proposed greedy algorithm runs for $\approx 1$ hour (702 evaluations), and empirical results demonstrate that it finds solutions yielding strong performance. However, we concur (and do not claim otherwise) that this approach does not guarantee a global optimum compared to exhaustive search strategies, which would be computationally prohibitive. ### Comment 5 > This suggests that the work represents an incremental advance over prior efforts by employing a different model to express logical rules in the context of OOD detection. We respectfully disagree with this characterization. Previous research employed strictly logical rules, whereas our method introduces probabilistic rules through Markov Logic Networks, providing significantly greater expressive power. The advantage of probabilistic over purely logical approaches is clearly demonstrated by our results on Celeb-A, where LogicOOD (Kirchheim et al.) fails, performing no better than random chance, while our method achieves state-of-the-art results. Moreover, the recent survey "Recent Advances in OOD Detection: Problems and Approaches" references only LogicOOD (Kirchheim et al.) within this methodological paradigm. We explicitly compare against this baseline and surpass it in all experimental setups, underscoring the novel and substantial contribution of our work. ### Comment 6 > present an overall algorithmic framework first, and if the detailed steps of a specific component are complex, include supplementary algorithmic details for clarity. We kindly request the reviewer to consider that we have provided complete source code for all experiments as supplementary material, ensuring transparency and reproducibility. 
Due to space constraints, explicitly detailing every algorithmic aspect within the main paper has been challenging. To clarify, our method follows a straightforward process: 1. We first pass the input x through several deep neural networks (DNNs). 2. Based on the outputs of these DNNs, we calculate a baseline outlier score. 3. We normalize this baseline outlier score by estimating the probability that ID inputs receive an outlier score that is greater $P(D(x) > X)$. 4. Concurrently, we compute a separate outlier score using weights of our trained MLN. 5. Finally, we obtain our overall outlier score by multiplying these two scores. In the additional page permitted for revisions, we will include an explicit and concise algorithmic description of these steps to further enhance clarity. ### Question 1 > [...] “Additionally, we train a DNN for the ID-predicate”. The training methodology for this component is not described, [...] The ID-predicate is produced by a DNN component trained in a supervised manner, as described in Section 3.3. We agree that this connection was not sufficiently explicit and will clearly state this in the revision.
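The five-step scoring pipeline outlined in this rebuttal can be sketched as follows. This is our own minimal illustration, not the authors' implementation: the concept DNN is faked with random outputs, the baseline detector is an assumed MSP-style score, and the constraint thresholds and weights are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (assumed): DNNs map the input to concept predictions; faked here.
def dnn_concept_outputs(x):
    return rng.random(5)

# Step 2 (assumed): baseline outlier score, e.g. negative max probability.
def baseline_score(outputs):
    return -float(np.max(outputs))

# Step 3: normalize via the empirical probability that the score exceeds
# a random ID score, an estimate of P(D(x) > X) over ID inputs.
def normalize(score, id_scores):
    return float(np.mean(score > id_scores))

# Step 4 (assumed form): MLN outlier score as a weighted sum of
# constraint violations, mirroring the weighted-sum structure of the MLN.
def mln_score(outputs, weights, thresholds):
    violations = (outputs < thresholds).astype(float)
    return float(weights @ violations)

# Step 5: combine the two scores by multiplication.
def combined_score(x, id_scores, weights, thresholds):
    out = dnn_concept_outputs(x)
    return normalize(baseline_score(out), id_scores) * mln_score(out, weights, thresholds)
```

Higher values of both factors indicate a more outlier-like input, so their product serves as the overall outlier score.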
Summary: The paper presents a novel approach to OOD detection by integrating Markov Logic Networks (MLNs) with existing OOD detectors. This fusion of probabilistic reasoning with logical constraints over human-understandable concepts distinguishes it from traditional statistical or neural representation-based methods. Claims And Evidence: + The authors suggest that MLNs provide "improved explainability" due to their use of human-understandable constraints. Yet, this is poorly substantiated. Figure 6 offers an anecdotal example of an OOD decision, but there’s no systematic demonstration or metric. + The claim of maintaining efficiency is contradicted by Figure 5, where inference time for a batch size of 1 rises from ~2ms (baseline) to ~10ms (MLN-augmented), a 5x increase. While the overhead diminishes with larger batches, this undermines the efficiency claim for real-time or resource-constrained applications. + The assertion of "significant" performance improvement is overstated. Table 1 shows that for the GTSRB dataset, the AUROC increases from 99.8% (Ensemble) to 99.9% (MLN+Ensemble), a marginal 0.1% gain. Methods And Evaluation Criteria: + Testing is confined to two datasets, which is insufficient to claim broad applicability. GTSRB (traffic signs) and Celeb-A (faces) are visually distinct and relatively simple compared to diverse OOD challenges (e.g., ImageNet). This narrow scope questions the method’s robustness. + Algorithm 1 employs a greedy strategy to select constraints, but its optimality is not justified against alternatives (e.g., exhaustive search, reinforcement learning). The paper acknowledges this limitation superficially without exploring its impact, weakening confidence in the method’s effectiveness. Theoretical Claims: The paper does not introduce new theoretical proofs or claims, relying instead on established MLN theory applied to OOD detection. 
Experimental Designs Or Analyses: + The supervised variant uses auxiliary outliers, but the choice of these outliers (e.g., tiny images) is not justified, nor is their impact on performance explored. + For Celeb-A, the constraint search uses a validation set with OOD data (Textures), potentially overfitting the constraints to this specific OOD type. The paper does not test against other OOD distributions (e.g., iNaturalist, LSUN) during constraint learning, risking poor generalization. Supplementary Material: All. Relation To Broader Scientific Literature: The paper positions itself as an advance over statistical OOD detectors and neuro-symbolic methods like LogicOOD (Kirchheim et al., 2024) by introducing probabilistic MLNs. Essential References Not Discussed: The paper cites neuro-symbolic works (e.g., Besold et al., 2021) but engages superficially, missing a detailed discussion of how MLNs differ from or build on prior probabilistic or symbolic OOD approaches. Other Strengths And Weaknesses: **Strengths**: + The concept of blending semantic reasoning with OOD detection is creative and could inspire future work. + The constraint search algorithm (Algorithm 1) offers a practical step toward automating constraint derivation, a potential asset in knowledge-scarce domains. Other Comments Or Suggestions: + Beyond "cutlier" and "Mathemetically," minor grammatical errors persist (e.g., "then ID data points" in Section 2.1 should be "than"). These suggest a lack of polish. + An ablation on components (e.g., normalization, GED choice) would clarify their contributions. Questions For Authors: Refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Explainability Quantitatively measuring explainability remains notoriously challenging, thus we rely primarily on qualitative and structural justification. However, we would highly appreciate it if the reviewer could suggest suitable metrics. By construction, our method's explainability stems directly from Equation 6, which explicitly represents the outlier score as a weighted sum of human-understandable constraints. This structure allows clear statements about how much each constraint violation or satisfaction increased or decreased the outlier score. Additionally, we exemplify the practical benefits of our explainable rule sets explicitly in the "Impact" section. ### Significance of Improvement > The assertion of "significant" performance improvement is overstated. Table 1 shows that for the GTSRB dataset, the AUROC increases from 99.8% (Ensemble) to 99.9% (MLN+Ensemble), a marginal 0.1% gain. We respectfully disagree. The improvement is significant for two reasons: 1. The improvement is statistically significant (t-test $p < 0.05$) with a Cohen's D of $\approx 1.27$, which indicates an effect size that is considered "very large" by conventional standards. 2. At a baseline performance (99.8% AUROC), a 0.1% absolute increase means effectively halving the remaining error rate. ### Latency > The claim of maintaining efficiency is contradicted by Figure 5, where inference time for a batch size of 1 rises from ~2ms (baseline) to ~10ms (MLN-augmented), a 5x increase. [...] this undermines the efficiency claim for real-time or resource-constrained applications The reviewer compares our method with the simplest baseline (MSP). However, this comparison neglects substantial accuracy improvements: FPR95 error decreases from approximately 3% (MSP) to 0.5% (MLN) - an 83% error reduction. As Figure 5 shows, the primary overhead originates from using an ensemble rather than the MLN itself. 
Compared to the Ensemble, our method reduces FPR95 from 0.86% to 0.54% (a 37% reduction) at a cost of 2ms (+20%) per image. We kindly emphasize that even under the slowest inference scenario (batch size 1), our method achieves more than 80 frames per second, arguably sufficient for typical real-time application requirements. ### Ablation on OOD Training Distribution Our experiments confirm that constraints extracted from datasets exhibiting sufficient complexity (e.g., Textures, iNaturalist, ...) generalize reliably across various OOD scenarios, while constraints mined from e.g., Gaussian Noise, fail to generalize. We conducted additional experiments to systematically assess this concern: constraints were mined using specific OOD datasets, explicitly excluding those same datasets from subsequent evaluation. | Dataset | Method | AUROC | FPR95 | | -------------- | --------------- | ----- | ----- | | iNaturalist | Ensemble | 84.06 | 42.54 | | iNaturalist | MLN+Ensemble | 87.44 | 33.52 | | iNaturalist | Mahalanobis | 95.20 | 17.66 | | iNaturalist | MLN+Mahalanobis | 95.21 | 17.07 | | Places365 | Ensemble | 84.76 | 41.17 | | Places365 | MLN+Ensemble | 86.14 | 27.90 | | Places365 | Mahalanobis | 95.34 | 17.07 | | Places365 | MLN+Mahalanobis | 96.06 | 13.89 | | Gaussian Noise | Ensemble | 82.70 | 47.46 | | Gaussian Noise | MLN+Ensemble | 79.74 | 60.19 | | Gaussian Noise | Mahalanobis | 94.81 | 19.51 | | Gaussian Noise | MLN+Mahalanobis | 94.99 | 18.91 | ### Alternative Constraint Optimisers We discuss that exhaustive search is infeasible for realistic problems due to computational constraints. While we concur that more sophisticated search strategies might yield improved results, our straightforward greedy approach already achieves state-of-the-art performance and generalises effectively across multiple DNNs and OOD datasets. 
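The per-constraint attribution enabled by the weighted-sum structure discussed in the Explainability paragraph of this rebuttal can be sketched as follows; the constraint names and weights are invented for illustration (the color/shape example echoes the reviews' "stop signs are red and octagonal" constraint):

```python
# Illustrative per-constraint attribution for a weighted-sum outlier score.
# Constraint names and weights are invented, not taken from the paper.

def explain_outlier_score(weights, satisfied):
    """Return the total score plus each constraint's contribution.

    A violated constraint adds its weight to the outlier score;
    a satisfied constraint contributes nothing.
    """
    contributions = {name: (0.0 if satisfied[name] else w)
                     for name, w in weights.items()}
    return sum(contributions.values()), contributions

weights = {"stop_sign_is_red": 2.1, "stop_sign_is_octagonal": 1.4}
satisfied = {"stop_sign_is_red": False, "stop_sign_is_octagonal": True}

score, report = explain_outlier_score(weights, satisfied)
# score == 2.1, attributable entirely to the violated color constraint
```

The `report` dictionary is exactly the kind of statement the rebuttal describes: how much each constraint changed the score.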
### Auxiliary Outlier Distribution We selected the "tiny images" dataset as auxiliary outliers because it is well-established and commonly used in relevant literature, including foundational works such as Outlier Exposure by Hendrycks et al., Energy-based OOD Detection by Liu et al., and the related work on LogicOOD by Kirchheim et al. Thus, the precedent for using this dataset is firmly established. ### Ablation on Score Normalisation Here are results (averaged over 10 random seeds) on GTSRB with different distribution choices for Mahalanobis and ViM: | Normalization | MLN+Mahalanobis | MLN+ViM | | ------------------ | --------------- | ------- | | No Normalization | 72.57 | 35.25 | | Normal Dist. | 96.89 | 98.13 | | Uniform Dist. | 99.21 | 99.46 | | Log Normal | 99.67 | 99.79 | | Generalized Normal | 99.08 | 99.63 | | GED | 99.72 | 99.80 | --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal and the additional results. The results are compelling. I will raise my score.
Summary: This work explores enhancing out-of-distribution (OOD) detection using Markov Logic Networks (MLNs), which integrate probabilistic reasoning with logical constraints for improved structure and interpretability. The proposed framework augments existing OOD detectors by incorporating MLNs to define human-understandable logical constraints, enhancing both detection accuracy and explainability. Additionally, a greedy algorithm is introduced to automatically learn logical constraints from data. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper does not have a theoretical contribution. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, I reviewed all of the supplementary materials. Relation To Broader Scientific Literature: This work builds upon and extends previous research by enhancing OOD detection, a crucial aspect of ensuring the reliability of deep learning models in real-world applications. By referencing and innovating upon existing OOD detection techniques, the paper introduces a novel approach that integrates probabilistic reasoning with logical constraints. Essential References Not Discussed: The work discusses relevant works. Other Strengths And Weaknesses: Strengths: 1. Integrating MLNs with OOD detection is a novel approach. 2. Traditional OOD detection methods (e.g., MSP, Mahalanobis distance) provide only a confidence score. This paper incorporates logical constraints (e.g., "A stop sign should be red") and enhances them with MLNs, improving the explainability of the detection process. Weaknesses: 1. MLNs introduce additional computational overhead to existing deep learning models. Evaluating constraints and computing MLN weights increase inference time, as shown in Fig. 5 (Inference Time vs. Batch Size). While the paper omits the partition function (Z) in Equation (3) to accelerate inference, this compromises probabilistic interpretability. 2. 
The evaluation is limited to two datasets: GTSRB (German Traffic Sign Recognition Benchmark) and Celeb-A (Face Attribute Prediction), both featuring clear, discrete attributes (e.g., "STOP sign is RED," "Male vs. Female"). However, the study does not test other OOD detection benchmarks such as CIFAR-10 vs. SVHN or ImageNet-O. Additionally, no NLP or tabular datasets are considered. Other Comments Or Suggestions: Here are some typos: line 110: "refered to" -> "referred to" line 96: "were higher weights correspond" -> "where higher weights correspond" Questions For Authors: 1. How does the algorithm handle contradictory constraints? 2. Have you tested this method on natural images, NLP or tabular data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for the comprehensive and constructive feedback. Below, we address the specific points raised by the reviewer. ### Contradictory Constraints > How does the algorithm handle contradictory constraints? During training, the algorithm can automatically down-weight the conflicting constraints. As a result, the network balances and resolves contradictions based on their relative statistical support in the data. ### NLP or Tabular > Have you tested this method on natural images, NLP or tabular data? We seek clarification regarding the term "natural images," as we have conducted experiments on image datasets. Regarding NLP and tabular data, we have not yet tested our approach in these domains. We would greatly appreciate any recommendations from the reviewer regarding specific NLP or tabular datasets they consider valuable for additional evaluation. ### Other Image Datasets > However, the study does not test other OOD detection benchmarks such as CIFAR-10 vs. SVHN or ImageNet-O. Indeed, our approach relies explicitly on underlying "structure" or "prior knowledge" inherent to the training data. Such structure is, to our knowledge, absent in benchmarks like CIFAR-10 vs. SVHN or ImageNet-O, as these datasets do not provide semantic constraints or structured knowledge our MLN-based approach is designed to leverage. ### Latency > Evaluating constraints and computing MLN weights increase inference time, as shown in Fig. 5 (Inference Time vs. Batch Size). While the paper omits the partition function (Z) in Equation (3) to accelerate inference, this compromises probabilistic interpretability. The reviewer’s observation is accurate. Nonetheless, we kindly highlight that our approach introduces an additional computational overhead of only 2ms/image.
A Certified Unlearning Approach without Access to Source Data
Accept (poster)
Summary: This work focuses on certified unlearning when source data are unavailable due to resource limitations or regulatory constraints. Instead, this work uses surrogate datasets to guide the unlearning process, where the surrogate dataset mimics the original set to a certain extent. The indistinguishability guarantee is based on the distance between the two datasets, and applies to classification problems. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed method is based on Newton updates and DP noise, which are two important components used in the machine unlearning area, so it makes sense. Theoretical Claims: The theoretical claims look good. Starting from Assumption 4.1 on the loss function, this work develops Hessian estimation, a model update (with Newton updates), and Gaussian noise injection. Finally, it bounds the distance between the true model from retraining and the approximated model, and defines (epsilon, delta)-certified unlearning. Experimental Designs Or Analyses: The datasets (e.g., CIFAR-10 and Stanford Dogs) used in this work are quite outdated. The numbers for its method are on par with fully-retrained models. Supplementary Material: I have read all appendices. Relation To Broader Scientific Literature: 1. This work discusses some popular works on certified unlearning with different approaches. For example, Guo et al., 2019, Sekhari et al., 2021 and Zhang et al., 2024 focus on single-step Newton updates, while Neel et al., 2021 and Chien et al., 2024 focus on projected gradient descent algorithms. 2. As the closest problem to the proposed one, the authors discuss zero-shot unlearning approaches. Chundawat et al., 2023 requires the forget set, while this work does not. Similarly, Cha et al., 2023 needs to apply gradient ascent on the forget set. This work also mentions Foster et al., 2024 and Bonato et al., 2025. However, none of these formally provide a theoretical guarantee for the problem. 
Essential References Not Discussed: To my best knowledge, authors covered most essential works related to this topic. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Typos on Line 150 “athat”. Questions For Authors: I am not sure that the difference between DP and DP-inspired unlearning, even after reading appendix B. Why is the statistical indistinguishability of two approaches different? Can you clarify this point more in the text? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. Please see our responses below. **About the datasets:** It is worth noting that prior works in certified unlearning have similarly employed datasets such as CIFAR‑10 and StanfordDogs [R5-R7]. Our primary objective in using these datasets was to provide a controlled experimental validation of our theoretical results rather than to claim state‑of‑the‑art empirical performance. The fact that our method performs comparably to fully‑retrained models confirms that our unlearning approach preserves model utility while offering certification guarantees. [R5] A. Golatkar, A. Achille, A. Ravichandran, M. Polito, and S. Soatto, ‘Mixed-Privacy Forgetting in Deep Networks’, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 792–801. [R6] E. Chien, H. P. Wang, Z. Chen, and P. Li, ‘Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning’, in The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. [R7] B. Zhang, Y. Dong, T. Wang, and J. Li, ‘Towards Certified Unlearning for Deep Neural Networks’, in Forty-first International Conference on Machine Learning, 2024. **Difference between DP and DP-inspired unlearning:** The key difference between DP and DP‑inspired unlearning lies in the nature of their statistical guarantees. In traditional Differential Privacy (DP), the guarantee is based on the statistical distance between the output distributions of an algorithm when run on two adjacent datasets. In contrast, DP‑inspired unlearning focuses on ensuring statistical indistinguishability between the output of a fully‑retrained model and the output of the unlearning mechanism. 
This distinction means that while both approaches aim to limit the leakage of sensitive information, DP‑inspired unlearning calibrates noise based on the distance between these two different mechanisms rather than on adjacent datasets. We elaborate on this point further in Appendix B. **Typo:** Thanks for pointing this out; we will fix it in the revised version.
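To illustrate the noise-calibration idea described above with the textbook Gaussian mechanism (a generic sketch; the paper's own calibration constants may differ): given a bound Δ on the distance between the retrained model and the approximate unlearned model, adding Gaussian noise with standard deviation σ = Δ·√(2·ln(1.25/δ))/ε yields (ε, δ)-indistinguishability for ε < 1.

```python
import math
import random

def gaussian_noise_std(sensitivity, eps, delta):
    """Classical Gaussian-mechanism calibration (valid for eps < 1).

    Here `sensitivity` plays the role of the bound between the
    retrained model and the approximate (e.g. Newton-updated) model.
    """
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def perturb(params, sigma, rng):
    """Add i.i.d. Gaussian noise to every model parameter."""
    return [p + rng.gauss(0.0, sigma) for p in params]

sigma = gaussian_noise_std(sensitivity=0.1, eps=0.5, delta=1e-5)
noisy = perturb([0.5, -1.2, 3.0], sigma, random.Random(0))
```

This makes concrete why a tighter bound on the distance between the two mechanisms (e.g. via a closer surrogate dataset) directly reduces the noise needed, and hence the utility loss.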
Summary: In this paper, the authors studied an unlearning problem with the assumption that at the time of unlearning, the model provider lost access to the original training dataset, but has access to a surrogate dataset that’s very close to the original training dataset in terms of distribution. They adapted the second-order Newton update algorithm to perform unlearning and also provide theoretical guarantees for their proposed method. Moreover, they conducted preliminary experiments to demonstrate the performance of their proposed method on synthetically generated datasets and real-world datasets. Claims And Evidence: See my main comments in **Theoretical Claims** and **Experimental Designs Or Analyses** sections. Methods And Evaluation Criteria: 1. Most of the surrogate datasets in the experiments are synthetically generated; does this mean that for real-world use cases, surrogate datasets should also be generated in this way? Can we use the trained model to understand the original distribution and generate a surrogate dataset based on model predictions? 2. Can the authors provide a few use-case examples where one would have accessed or created a surrogate dataset and lost access to the original dataset in real-world scenarios? I think the motivation for this problem setting is not clear. Theoretical Claims: I have some questions about the assumptions stated in this paper. 1. Though Assumption 4.1 may be typical for second-order methods, it is a very strong assumption on the loss function. To me, common loss functions used for training deep neural networks do not satisfy Assumption 4.1, which makes the theoretical result stated in this paper unrealistic. 2. I am a bit confused about the assumption of not having access to the original training dataset. 
What I understand is that one does not have access to the full original dataset $\mathcal{D}$; however, it seems the authors only assume that one does not have access to the retain portion of the original dataset $\mathcal{D_r}$, which seems to make the problem easier: at least computing an approximation of $H_{\mathcal{D}_r}$ is easier under Assumption 4.1 and with the knowledge of $\mathcal{D}_u$. Experimental Designs Or Analyses: 1. In the experimental results presented in Tab 1, it seems like the model after unlearning performs similarly on the test and forget data splits. In this case, does that mean the test data has a distribution shift? Or is the original model not sufficiently trained on the “desired to learn” distribution? Moreover, it would be good to provide the original model's performance on those data splits, so we have a good understanding of the unlearning algorithms’ performance and also whether unlearning makes sense in this setting. 2. Since there is randomness introduced by noise calibration, it would be good to repeat the experiments multiple times and report the means and standard deviations for all experiments. 3. Instead of unlearning a randomly sampled subset of the original dataset, is it possible to unlearn a certain class from the original dataset, or unlearn all poisoned samples from the original dataset? In these problem settings, it seems to be easier to evaluate the performance of the proposed algorithm. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: The paper studied unlearning in a specific problem scenario; however, I am not very convinced that the problem setup is realistic, see my comments in the **Methods And Evaluation Criteria** section. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: 1. Page 3 line 150, "åthat" should be "that"? Questions For Authors: See my main comments in **Theoretical Claims** and **Experimental Designs Or Analyses** sections. 
Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. **Justification of the motivation:** In our experiments, most surrogate datasets were synthetically generated to provide a controlled environment for evaluating our theoretical results. However, for real-world applications, surrogate datasets need not be generated purely synthetically. In practice, surrogate data can be obtained from publicly available sources or even generated by leveraging the trained model to capture aspects of the original distribution. Regarding the motivation, there exist numerous real-world scenarios where the original dataset is no longer accessible. For example, regulatory policies may require data deletion after a fixed retention period, while models trained on this data continue to be deployed. Similarly, in collaborative or federated learning settings, data privacy constraints often prevent access to the full original dataset, leaving only a surrogate or limited statistical summary available. Finally, we clarify the assumption regarding access to the original training dataset. In our framework, we assume that the unlearning mechanism does not have direct access to the retain portion of the original dataset ($\mathcal{D}_r$). It only has access to the samples to be forgotten ($\mathcal{D}_u$), which may for instance be provided by a specific user. Although one might expect that having access to the forget samples ($\mathcal{D}_u$) would simplify approximating the Hessian for the retained data ($\mathcal{D}_r$), in practice, this is nontrivial. The distribution of $\mathcal{D}_u$ can differ significantly from that of $\mathcal{D}_r$ (as in class unlearning), making Hessian estimation challenging when relying solely on the forget samples. This reflects our intent to keep the unlearning framework broadly applicable. 
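To make the role of the surrogate Hessian concrete, the second-order update can be sketched as follows. This is an illustrative sketch only, not the implementation in the paper: it uses a ridge-regression loss, our own variable names, and lets the surrogate size stand in for the inaccessible retain-set size.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form minimizer of (1/n) * sum_i [0.5*(x_i@w - y_i)^2 + 0.5*lam*|w|^2]."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

def newton_unlearn(w_star, X_s, X_u, y_u, lam, sigma, seed=0):
    """One second-order unlearning step in the spirit of Sekhari et al. (2021):
    the Hessian is computed on SURROGATE data X_s in place of the inaccessible
    retain set, the forget-set gradient is added back, and Gaussian noise
    (variance calibrated from the surrogate/true distance bound) is injected."""
    d = w_star.shape[0]
    H_s = X_s.T @ X_s / len(X_s) + lam * np.eye(d)                 # avg Hessian on surrogate
    g_u = X_u.T @ (X_u @ w_star - y_u) + lam * len(X_u) * w_star   # sum of forget-set gradients
    w_bar = w_star + np.linalg.solve(H_s, g_u) / len(X_s)          # len(X_s) ~ retain-set size
    return w_bar + np.random.default_rng(seed).normal(0.0, sigma, d)

# Sanity check: with surrogate == retain set, a quadratic loss, and no noise,
# the update recovers retraining from scratch exactly.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
X_u, y_u, X_r, y_r = X[:20], y[:20], X[20:], y[20:]
w_star = ridge_fit(X, y, 0.1)
w_hat = newton_unlearn(w_star, X_r, X_u, y_u, 0.1, sigma=0.0)
```

In this idealized case the update matches the retrained model; the noise `sigma` is precisely what absorbs the gap between the surrogate and the true retain distribution.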
**Assumptions on loss functions and practicality:** Yes, we agree with the reviewer that Assumption 4.1 is indeed strong; while it may not hold for all deep neural network loss functions, it is important for the tractability of the theoretical analysis of second-order unlearning methods, which is a key first step towards the analysis of the more general setting. To satisfy these conditions while still achieving competitive performance in practice, we leverage a mixed linear network [R4] implementation. Mixed linear networks combine the simplicity and theoretical tractability of linear models with select non-linear components, which enables them to meet the convexity and smoothness criteria. As demonstrated in the table below, our implementation achieves strong unlearning performance across the accuracy metrics while also maintaining high training accuracy. Extending our theoretical guarantees to general non-convex settings in the source-free setting is an important future direction.

[R4] A. Golatkar, A. Achille, A. Ravichandran, M. Polito, and S. Soatto, ‘Mixed-Privacy Forgetting in Deep Networks’, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 792–801.

||Method|Train|Test|Retain|Forget|MIA|RT|$\Delta$|$\hat{\Delta}$|
|-|-|-|-|-|-|-|-|-|-|
||Original|94.7±0.2%|88.5±0.3%|95.3±0.2%|95.0±0.4%|||||
|0.1|Retrain|93.6±0.4%|86.4±0.9%|95.6±0.7%|84.7±0.6%|51.2±0.1%|53±2|||
||Unlearn(+)|93.7±0.3%|86.4±0.6%|94.8±0.4%|87.2±0.6%|51.3±0.1%|54±3|0.02||
||Unlearn(-)|94.1±0.3%|85.2±0.3%|94.9±0.7%|86.8±0.1%|52.1±0.8%|54±2|0.31±0.06|0.35±0.03|
|0|Retrain|81.7±0.4%|72.3±0.1%|92.7±0.1%|0%||142±3|||
||Unlearn(+)|82.2±0.4%|72.5±0.9%|93.2±0.2%|4.2±0.2%||135±5|0.02||
||Unlearn(-)|82.4±0.5%|72.4±0.1%|93.5±0.8%|5.1±0.8%||132±6|0.49±0.03|0.52±0.02|

**Class unlearning:** Thanks for raising this interesting question. Yes, it is possible to unlearn an entire class.
We conducted experiments to unlearn class 0 from the CIFAR10 dataset, using our mixed linear network implementation, which satisfies the convexity assumptions while maintaining competitive accuracy. We show our results in the second row of the table above. We observe that we can effectively remove the influence of the targeted class while preserving performance on the remaining data. We believe the proposed method not only facilitates class unlearning but can also be extended to tackle scenarios such as unlearning all poisoned samples. **Original model results:** We added the original model results to every table we provided in this rebuttal (tables under Reviewer a1HE and the table provided above). **Experiments with different seeds:** To address the inherent randomness introduced by noise calibration, we conducted our experiments over multiple runs and report the averaged values with their error margins (table under Reviewer a1HE, the table provided above, Figures 1-4 and Tables 1-2 in [link](https://limewire.com/d/a68qm#RoLgJHE3KN)). This evaluation demonstrates that our method is robust, with performance variations well within acceptable margins. **Typo:** Thanks, we will fix this in the revised version.
Summary: This paper studies certified unlearning in a setting where the original training data is inaccessible. Prior work on certified unlearning guarantees that the unlearned model is statistically indistinguishable from a model retrained without the deleted data; however, it requires access to the original training data to perform the update. To get around this limitation, the authors of this paper assume access to a surrogate dataset and provide bounds on indistinguishability based on the total variation (TV) distance between the original training data distribution and surrogate data distribution. This TV distance is also used to calibrate the noise added to the model parameters to ensure indistinguishability. Since the TV distance cannot be computed in practice (as it depends on the unknown training data distribution), the authors propose a heuristic method to estimate it using the original model and a model trained on surrogate data. The approach is evaluated empirically on real and synthetic datasets, where it is found to maintain utility and privacy guarantees. Claims And Evidence: - The paper claims that achieving certified unlearning without training data access is an open problem. I agree based on my knowledge of the literature. - The paper claims that the proposed method is certified, however this is only true for the method described in Sec 4.1 which is not realizable in practice. While the practical approach described in Sec 4.2 appears to perform well empirically, the bound on the unlearning error $\hat{\Delta}$ is not guaranteed to hold. - The paper claims that extensive experiments demonstrate the effectiveness of the approach, however I have some concerns (see below). Methods And Evaluation Criteria: - The proposed method builds on Sekhari et al. (2021), except that the original training dataset is replaced with a surrogate. This makes sense, provided a close surrogate is available (I’m not sure how reasonable this assumption is).
It makes sense to me to account for the error in approximating the training dataset with the surrogate, however, I wonder whether it would be possible to estimate an upper bound on the TV distance, rather than using an estimator with unknown properties. If this were possible, the “practical” method could be certified. - The evaluation broadly makes sense. The proposed “practical” method (Unlearn $-$) is compared with two baselines with training data access: Sekhari et al. (2021) and retraining from scratch. A variety of standard metrics for assessing unlearning performance are used across four datasets (three real, one synthetic). Due to strong assumptions on the loss function (Assumption 4.1), the evaluation focuses on linear models or pre-trained encoders with a learned linear head. This is also a limitation in prior work. Theoretical Claims: The main theoretical claim is stated in Theorem 4.2 and proved in Appendix C. I did not check the proof, however it seems to make use of the triangle inequality, various assumptions on the loss function, and properties of the Hessian. I don’t have any reason to doubt the correctness. Experimental Designs Or Analyses: As stated above (Methods and Evaluation Criteria) the experiments broadly make sense to me. However, I have the following concerns: - I couldn’t spot the size of the forget set in Sec 5. I suspect this may have an impact on the performance of Unlearn $-$. It would be great to see results reported as a function of the size of the forget set. - The error in the estimate of the KL divergence between the training and surrogate data does not seem to be reported. If this information were available, it would be possible to more directly assess the validity of the estimation approach described in Sec 4.2. - The empirical estimate of the unlearning error $\hat{\Delta}$ (Corollary 4.4) is not reported. As a result, it’s not possible to assess whether the “approximate” certificate is practically useful, or vacuous. 
- Fig 1(a): I believe the curve corresponds to the exact estimate of the required variance (assuming training data access)? If so, it would be good to additionally report the heuristic estimate (not assuming training data access). - For the synthetic experiments, it would be good to repeat the experiment multiple times, averaging over multiple synthetic datasets. The results could be reported with error bars. Supplementary Material: I skimmed Appendix C. Relation To Broader Scientific Literature: - The paper builds on Sekhari et al. (2021), adopting their definition of certified unlearning and adapting their unlearning method. Sekhari et al.’s formulation of unlearning differs from prior work by Guo et al. (2019) and Neel et al. (2021), in that they focus on ensuring the unlearned model achieves a low (generalization) test error rather than a low training error. There are also heuristic approaches that aim to perform unlearning without access to the training set. - The practical version of the method estimates the KL divergence from the original model and a model trained on surrogate data using: - An implicit energy model to estimate the marginal distribution over inputs $P(X)$ (Grathwohl et al., 2019). - A Donsker-Varadhan variational representation of the KL divergence (Donsker & Varadhan, 1983). - The empirical evaluation employs a variety of unlearning metrics by Kurmanji et al. (2023), Golatkar et al. (2020) and Triantafillou et al. (2024). Essential References Not Discussed: I’m not aware of any essential references that were not discussed. Other Strengths And Weaknesses: S1. I found the paper a pleasure to read. S2. The problem seems well-motivated given the existence of several heuristic approaches that aim to perform unlearning without training data access. S3. Apart from the suggestions above, the experiments seem well-executed. W1. The paper focuses on single batch unlearning for classification models, which limits its applicability.
However, this also appears to be a limitation of prior work. Other Comments Or Suggestions: N/A Questions For Authors: 1. How does the method perform for different sized forget sets? 2. How large is the empirical bound on the unlearning error? Is it practically useful? 3. Is it possible to approximate a (non-vacuous) upper bound on the KL divergence? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. **Experiments with different forget ratios:** We conducted extensive experiments on the StanfordDogs dataset with varying forget ratios to assess how forget set ratio impacts unlearning. The results in the table below show that our method Unlearn(-) scales well across different forget set ratios (FR). Also, results under the MIA and RT columns indicate that similar unlearning performance is achieved across FR with Unlearn(+) (the model achieved after unlearning with the exact data samples) and Retrain models. These findings confirm the robustness of our approach. |FR|Method|Train(%)|Test(%)|Retain(%)|Forget(%)|MIA(%)|RT|$\boldsymbol{\Delta}$|$\boldsymbol{\hat{\Delta}}$| |-|-|-|-|-|-|-|-|-|-| |0.01|Original|87.3±0.2|73.7±0.3|87.2±0.2|87.9±0.4||||| ||Retrain|87.1±0.1|73.7±0.7|87.2±0.3|73.8±0.6|52.1±0.2|10±2||| ||Unlearn(+)|87.3±0.4|74.1±0.6|87.3±0.1|74.5±0.7|53.2±0.3|10±2|0.0002|| ||Unlearn(-)|87.1±0.6|74.1±0.4|87.2±0.3|74.1±0.6|53.1±0.2|10±1|0.023±0.006|0.032±0.007| |0.2|Original|87.3±0.2|73.7±0.3|87.6±0.2|86.4±0.4||||| ||Retrain|85.6±0.1|72.4±0.6|88.7±0.7|73.3±0.1|50.6±0.8|40±2||| ||Unlearn(+)|84.9±0.8|71.8±0.5|88.3±0.1|71.5±0.3|51.8±0.2|40±1|0.08|| ||Unlearn(-)|85.0±0.5|71.4±0.2|88.0±0.9|72.6±0.6|52.0±0.1|40±2|0.82±0.06|0.91±0.03| **Justification of the heuristic method and practicality:** We conducted experiments on both synthetic and real datasets to justify the heuristic KL‐divergence estimation (Sec. 4.2) and the corresponding empirical unlearning error $\hat{\Delta}$. In Figs. 1 and 3 [(link)](https://limewire.com/d/a68qm#RoLgJHE3KN), we plot both the “exact” and “approximated” results (our heuristic method) for the KL‐divergence and the respective noise $\sigma$. For the synthetic data experiments, the exact KL divergence is computed using its closed-form for Gaussians. 
For real data experiments, since the exact KL divergence was not available, we estimated it using the Donsker-Varadhan bound as a reference, leveraging both exact and surrogate data samples. Figs. 1–4 and Tables 1–2 [(link)](https://limewire.com/d/a68qm#RoLgJHE3KN) show our approximations closely match exact values. To address whether the approximate certificate is practically useful, we report both $\Delta$ (exact) and $\hat{\Delta}$. The results confirm that even if the estimated bounds grow, model performance aligns with Unlearn(+). For completeness, we ran multiple trials with different seeds, included error bars in all figures, and included error margins in the tables, demonstrating consistency across repeated experiments. Overall, these findings validate the heuristic’s reliability and practical utility in estimating KL‐divergence and unlearning error. **Non-vacuous upper bound for statistical distances:** While a trivial upper bound on the TV distance is available ($TV(\rho \| \nu) \leq 2$), extending this to a non-vacuous bound on the KL divergence is challenging without further assumptions on the retained data. Our primary goal was to maintain generality and avoid introducing new restrictive assumptions about the data distributions. To that end, we proposed a heuristic method to approximate the KL distance using the model parameters. Using this approximation, we showed that as the distance between exact and surrogate distributions increases, achieving certified unlearning requires adding more noise calibrated using Corollary 4.4. Figs. 1 and 3 [(link)](https://limewire.com/d/a68qm#RoLgJHE3KN) further show that our method can approximate the exact KL distance using only the model parameters, without access to the original data samples, making it suitable for source-free unlearning.
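To make the Donsker-Varadhan step concrete, here is a minimal self-contained sketch of the estimator on a toy pair of one-dimensional Gaussians. It is illustrative only: it uses a linear critic, which suffices for two equal-variance Gaussians (their log-density ratio is linear), whereas our actual implementation optimizes a neural critic over the Langevin samples of model parameters.

```python
import numpy as np

def dv_kl_estimate(xp, xq, steps=300, lr=0.1):
    """Donsker-Varadhan lower bound KL(P||Q) >= E_P[T] - log E_Q[exp(T)],
    maximized by gradient ascent over a linear critic T(x) = a*x."""
    a = 0.0
    for _ in range(steps):
        w = np.exp(a * xq - np.max(a * xq))          # stabilized tilting weights
        grad = xp.mean() - (w * xq).sum() / w.sum()  # d/da of the DV objective
        a += lr * grad
    t_q = a * xq
    # log E_Q[exp(T)] computed with the log-sum-exp trick for stability.
    return a * xp.mean() - (np.log(np.mean(np.exp(t_q - t_q.max()))) + t_q.max())

rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, 20000)  # samples from P = N(1, 1)
xq = rng.normal(0.0, 1.0, 20000)  # samples from Q = N(0, 1)
est = dv_kl_estimate(xp, xq)      # closed-form KL for this pair is 0.5
```

Because the bound is variational, the estimate approaches the true KL from below as the critic improves, which is the behavior we rely on when calibrating the noise.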
**Limitations:** Regarding the assumptions, we build on the mixed linear network which enables us to meet the strong convexity criteria while also achieving competitive accuracy results. We believe that this choice satisfies both the necessary theoretical conditions and provides a solid foundation for our experimental evaluation. Please also see our response to Reviewer mUHz, where we provide further details about this network under "Assumptions on loss functions and practicality". While our current formulation focuses on single-batch unlearning for classification models as noted by the reviewer, this also provides an important and tractable first step. Extending our mechanisms to sequential unlearning requires new theoretical insights on the certification budget considering the noise added after each unlearning step [R2, R3]. This is an interesting future direction we are actively exploring. [R2] E. Chien, H. P. Wang, Z. Chen, and P. Li, ‘Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning’, in The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. [R3] B. Zhang, Y. Dong, T. Wang, and J. Li, ‘Towards Certified Unlearning for Deep Neural Networks’, in Forty-first International Conference on Machine Learning, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my questions around the experimental evaluation. I'm satisfied with the response and will update my score. --- Reply to Comment 1.1.1: Comment: We are glad to have answered your questions, thanks for the insightful comments.
Summary: This paper proposes a novel certified unlearning framework that enables the unlearning process without requiring access to the original dataset. The authors use an estimated surrogate dataset Hessian to approximate the second-order unlearning process. The surrogate dataset is generated by estimating the model's original target distribution and using SGLD and Donsker-Varadhan techniques to sample and estimate the distribution of the new surrogate dataset. The work also provides a theoretical upper bound proof that supports the theoretical correctness of the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, the authors provide results on various real-world and synthetic datasets. Theoretical Claims: Appear correct. Experimental Designs Or Analyses: Sufficient validation on various datasets and model architectures, though the tested architectures and dataset complexity are very simple. Supplementary Material: No Relation To Broader Scientific Literature: The use of surrogate datasets and second-order unlearning is interesting and promising, and can be beneficial in a broader context. Essential References Not Discussed: No Other Strengths And Weaknesses: Pros: 1. The paper addresses an important issue by introducing a framework for unlearning without access to the original dataset. 2. The use of surrogate datasets and second-order unlearning seems novel. 3. The theoretical analysis provides a strong foundation for the proposed method. Cons: 1. The part on sampling and estimating the surrogate dataset could be presented earlier. 2. The network architectures used in the empirical evaluation are too simple. I wonder about the effectiveness when applied to models with moderate complexity. Other Comments Or Suggestions: No Questions For Authors: How did you compute the noise variance and forget scores for the various datasets shown in Figure 2? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. Please see our responses below. **Experiments on complex models:** Our main focus in the current work was the theoretical foundations, to explore provable certified unlearning mechanisms in the source-free setting and provide rigorous theoretical guarantees for certified unlearning using surrogate datasets. To that end, our empirical evaluations ranged, as a proof of concept, from simple linear models to networks with one or two convolutional layers (see Table 5 in the paper), which demonstrate that our certified unlearning mechanism maintains competitive performance across various complexity levels. To further address the concern about moderately complex architectures, we introduce in this rebuttal a mixed linear network implementation. This model satisfies the strong convexity assumptions made in our analysis, particularly regarding the loss function, and provides a more expressive and realistic evaluation setting than standard linear models. Despite its increased complexity, it remains analytically tractable and directly reflects the practical applicability of our theory. We gave more details about mixed linear networks in our responses to Reviewer mUHz under “Assumptions on loss functions and practicality”. As shown in the table given under the same response, our certified unlearning mechanism performs effectively in this mixed linear setting. This strengthens our claim that the theoretical results extend meaningfully to models beyond simplistic settings, including those with moderate complexity. In response to the reviewer’s comment, we evaluate our method on a ResNet-18 model by computing the full-model Hessian using the Conjugate Gradient method. The results are shown in the table below. We observe a performance drop on the forget set after the unlearning update for both Unlearn (+) (retrained with exact data) and Unlearn (−) (our method).
Importantly, Unlearn (−) performs comparably to Unlearn (+), demonstrating that our method achieves similar unlearning behavior without access to the retain dataset, even on complex models like ResNet-18. |Method|Train|Test|Retain|Forget|MIA|RT| |-|-|-|-|-|-|-| |Unlearn(+)|96.5%|90.4%|97.1%|94.4%|56.2%|45| |Unlearn(-)|96.1%|89.7%|96.2%|94.2%|57.3%|49| **Noise variance and forget score:** - Noise Variance ($\sigma$): Our method derives $\sigma$ based on the theoretical bounds provided in Theorems 4.2 and 4.3 using an upper bound on the statistical distance between the true data distribution and the surrogate distribution, which control the term $\|w_{r}^* - \hat{w}_r\|_2$ (see Equation (8)). In practice, when we do not know the exact upper bound on the statistical distance between the two datasets, we estimate it using the heuristic method explained in Sec. 4, and this estimate is then substituted into Equation (6) to compute the calibrated noise variance. To do so, we first sample from the trained model parameters using Langevin dynamics (details in App. D3) and then estimate the KL distance using the Donsker-Varadhan bound (Equation (12)). - Forget Scores: To quantify the quality of forgetting, we compute per-example $\epsilon$ estimates that capture the strength of an optimal attacker's ability to distinguish between the outputs of the unlearned model and a model retrained from scratch. These per-example estimated $\epsilon$ values are then aggregated into an overall forget score using a binning approach to ensure granularity and robustness. Our forget score methodology and implementation follows along the lines of Triantafillou et al. [R1], and we report three variants: “FS1(+)” (using noise calibrated with the original data), "FS1(-)” (using noise calibrated with the surrogate data, our method), and “FS2(-)” (adding noise calibrated with the original data but applied without re-calibration to the surrogate data). 
We hope this clarifies how the noise variance and forget scores are computed for the datasets in Figure 2 in our paper. Finally, we agree with the reviewer’s suggestion to present the surrogate dataset part earlier; we will do so in the revised version. [R1] Triantafillou, E., Kairouz, P., Pedregosa, F., Hayes, J., Kurmanji, M., Zhao, K., Dumoulin, V., Junior, J. J., Mitliagkas, I., Wan, J., et al. Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition, arXiv:2406.09073, 2024.
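As a concrete reference for the Conjugate Gradient computation mentioned above: CG solves $Hx = g$ using only matrix-vector products $v \mapsto Hv$, so the full Hessian is never materialized. Below is a minimal sketch with an explicit SPD matrix standing in for the Hessian; in the ResNet-18 experiments the product would be an autodiff Hessian-vector product, and this code is illustrative rather than our actual implementation.

```python
import numpy as np

def cg_solve(hvp, g, iters=50, tol=1e-10):
    """Conjugate Gradient for H x = g, where H is symmetric positive definite
    and only the product v -> H @ v (hvp) is available."""
    x = np.zeros_like(g)
    r = g - hvp(x)        # residual
    p = r.copy()          # search direction
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:  # squared residual small enough
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy check with an explicit SPD matrix standing in for the Hessian.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
H = A @ A.T + 10 * np.eye(10)   # symmetric positive definite
g = rng.normal(size=10)
x = cg_solve(lambda v: H @ v, g)
```

The key point is that `cg_solve` only ever calls `hvp`, so the memory cost is that of a few parameter-sized vectors rather than a parameter-squared Hessian.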
NBDI: A Simple and Effective Termination Condition for Skill Extraction from Task-Agnostic Demonstrations
Accept (poster)
Summary: The paper considers the problem of learning the termination conditions of the Option framework from task-agnostic demonstrations (demonstrations that are either exploratory or for other tasks). The key idea of the paper is to use state-action novelty to find states where it is likely reasonable to terminate and choose an alternative skill: when a skill is finished or a decision should be made at an important point, the error of predicting the next state is lower. The paper's approach enables skills to be variable length based on the learned termination condition. The experiments show that the proposed approach improves performance on learning in unseen tasks by utilizing the termination condition to make essential decisions at appropriate states. The authors also conduct comprehensive benchmark tests against other termination condition learning approaches as well as ablations of its components. The results strongly support the claims made by the paper. In general, the paper presentation is very clear and easy to follow. Although the novelty of the paper is somewhat limited (by utilizing a previously known method of estimating novelty to identify termination conditions), I believe the paper presents some meaningful contribution in the novel idea of applying a novelty measure in the option framework, and the results show the benefit of doing so. Claims And Evidence: Yes. Methods And Evaluation Criteria: The idea of using novelty to detect termination conditions for the options framework makes sense. The evaluation is done to see if the better termination conditions help the options framework adapt to unseen tasks faster and achieve better performance. 1. One concern I do have is for Section 6.6 and Figure 8. Why is "mean value of state-action novelty for 25 trajs" a good metric to state "proposed approach is robust to various numbers of parameters or sizes of datasets"? To me, learning the mean value is the easiest thing to do for ML, and of course with an i.i.d.
sampled sub-dataset or a smaller neural network, it can still learn it. The key of ML is to learn the variances among data (some data corresponds to a high predicted value vs. some data corresponds to a low predicted value). Please clarify if I misunderstood; otherwise, I believe the experiment for Section 6.6 should be redone with a different metric. Theoretical Claims: The paper does not have theoretical claims. Experimental Designs Or Analyses: Yes, the experimental design is sound and thorough in comparing with benchmarks and ablations of the proposed approach. Supplementary Material: Yes, supplementary B and G. Relation To Broader Scientific Literature: The contribution of the paper is to relax the previous option framework's fixed-length skills to variable-length skills according to the learned termination function. Essential References Not Discussed: Not that I am aware of Other Strengths And Weaknesses: Although I find the paper overall well-written, I do have a few suggestions for the structure of the paper. Currently, the technical details of Sections 5.1 and 5.2 are largely in the supplementary, which I believe hinders the ability of readers who are less familiar with the topic to appreciate the work. The learning of the novelty function \chi(s,a), and the significant-novelty criterion for determining termination, \beta, are important details for understanding and reproducing the approach. As such, I believe moving them into the main paper would be beneficial, and for the page limit's sake, some contents of Section 6.1 could be moved to the supplementary, as the illustrations in Figures 2a and 2b seem very similar. Similarly, Figure 3 seems hard to interpret without reading Appendix G.1, and I recommend at least providing a brief explanation of how the figure is generated. Other Comments Or Suggestions: Figure 2 should be placed later in the paper, as it is not introduced until Section 6. Questions For Authors: 1.
According to Section 5.1, z is the embedding of the s-a pairs and \beta from time t to t+H-1. It is unclear to me how this information is available during execution time - how can one predict the future trajectory in order to calculate z? 2. According to line 318 (right column), "a large set of task-agnostic agent experiences is collected" - how large is this dataset? Is it assumed that the demonstration set covers the state space well? Code Of Conduct: Affirmed. Overall Recommendation: 4
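To check my reading of the mechanism, the termination signal as I understand it amounts to roughly the following sketch, written in my own notation: a linear dynamics model stands in for the learned novelty module, and a simple quantile threshold stands in for the paper's significant-novelty criterion (the actual formulation is in Section 5 and the appendix, and I may have the thresholding convention inverted relative to the paper).

```python
import numpy as np

def novelty_termination(states, actions, next_states, frac=0.1):
    """Illustrative sketch: score each (s, a) by the prediction error of a
    simple dynamics model fitted to the demonstrations, and emit a termination
    flag (beta = 1) at the most novel fraction of steps, i.e., treat those
    state-action pairs as decision points where a skill should end."""
    sa = np.concatenate([states, actions], axis=1)
    # Least-squares linear dynamics model as a stand-in for the learned predictor.
    W, *_ = np.linalg.lstsq(sa, next_states, rcond=None)
    novelty = np.square(next_states - sa @ W).sum(axis=1)  # per-step error
    threshold = np.quantile(novelty, 1.0 - frac)
    return (novelty >= threshold).astype(int)              # beta_t in {0, 1}

# Toy trajectory data just to exercise the function.
rng = np.random.default_rng(0)
S = rng.normal(size=(500, 4))
A = rng.normal(size=(500, 2))
S_next = S + 0.1 * np.pad(A, ((0, 0), (0, 2))) + 0.01 * rng.normal(size=(500, 4))
beta = novelty_termination(S, A, S_next)
```

If this is roughly right, then the variable-length property follows directly: a skill simply runs until the first step with beta = 1.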
Rebuttal 1: Rebuttal: Thank you for your review and constructive suggestions. We address your questions below. **Q1: Details in skill termination and execution** Thank you for your feedback. To clarify, the variable-length skill embedding space $z$ is learned **offline** during the skill extraction phase using a deep latent variable model. During this phase, the low-level policy (i.e., the skill decoder in Figure 12) receives a randomly sampled experience from the task-agnostic demonstrations, along with a termination condition vector $\beta$ provided by the pre-trained state-action novelty module. The latent model is trained to reconstruct the corresponding action sequence and its length (i.e., the termination point) by maximizing the evidence lower bound (ELBO) (Appendix B.1). During the execution phase (or when solving downstream tasks), the pre-trained low-level policy (i.e., the skill decoder) takes as input a latent skill vector $z$ from the high-level policy and reconstructs the action and termination signal $\beta$ at each step, without needing to observe or predict future states or actions (Appendix B.2). Thus, while future trajectory segments are used during skill extraction to learn variable-length skill embeddings and terminations, they are **not required at execution time**, ensuring our approach remains practical and deployable. **Q2: Task-Agnostic Dataset Size and Coverage** Thank you for your feedback. The number of task-agnostic trajectories collected for each environment was as follows: maze (85,000 trajectories), block stacking (37,000 trajectories), and kitchen (400 trajectories).
Since we used **image-based observations** in the maze and block stacking environments, and the downstream tasks involve significant configuration changes (e.g., entirely new maze layouts or a larger-scale block stacking environment with more blocks in random positions), we require a larger set of task-agnostic demonstrations compared to the kitchen environment, where the overall layout remains consistent across tasks (Appendix F). Since our goal in the maze and block stacking environments is to solve downstream tasks with significantly different environment configurations, we do not assume that task-agnostic demonstrations fully cover the **state space** of the downstream tasks. However, it is important that they provide good coverage of the **observation space**. Note that we use cropped images centered on the agent from the task-agnostic demonstrations as observations. This way, we can extract consistent structural information across multiple task-agnostic trajectories, which can then be used to detect state-action-novelty-based decision points even in downstream tasks with significantly different environment configurations (Appendix F). **Q3: Impact of model capacity and dataset usage** Thank you for the insightful feedback regarding the experiments. We agree with the reviewer that simply reporting the mean and standard deviation of state-action novelty across 25 trajectories may not sufficiently demonstrate the robustness of our method with respect to model capacity or dataset size. The concern that machine learning models can trivially capture mean values under i.i.d. sampling is valid, and we appreciate the opportunity to clarify our intent. To address this, we revised the analysis in Figure 8 to present box plots that visualize the full distribution of state-action novelty scores across 25 sampled trajectories. The revised version of Figure 8 is available at: https://imgur.com/a/9gQ8IrU.
This allows us to assess the variance, range, and consistency of state-action novelty estimates across different network sizes (left) and dataset usage levels (right). From the revised plots, it can be observed that the spread (variance, median and interquartile range) of state-action novelty remains relatively stable across different model sizes and dataset usage. **Q4: Paper structure rearrangement** Thank you for your thoughtful suggestions regarding the structure of the paper. We agree that the current presentation of Sections 5.1 and 5.2 may limit accessibility for readers who are less familiar with skill extraction from task-agnostic demonstrations. To address this, we will incorporate concise summaries of Appendix B.1 and B.2 into Sections 5.1 and 5.2. Additionally, we will expand the caption of Figure 3 to briefly explain the visualization procedure used to generate the figure. This added context will help readers better interpret the figure and understand its significance. To accommodate these changes within the page limit, we will move Figure 2(b) and its accompanying explanation (currently in Section 6.1) to the appendix. Furthermore, we will relocate Figure 2 to Section 6, where it is first referenced, to improve the flow of the paper. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for providing detailed answers to my questions. Q1: clearly answered. Q2: I think it is important to provide an operational definition for "a good coverage of the observation space", such that future works could have a guidance of how much data is required to replicate the success of NBDI on other domains. Q3: clearly addressed with the new evidence. Q4: clearly answered. As the authors have addressed my main concerns, I increase my overall score from 3 to 4.
Summary: This paper presents NBDI (Novelty-based Decision Point Identification), a state-action novelty-based decision point identification method that allows an agent to learn terminated skills from task-agnostic demonstrations. The key advancement presented in this work is the mechanism for determining critical decision points, allowing variable-length skills to be learned. The authors present a cohesive methodology section followed by an abundance of results. Overall, the work concludes with several key advancements and interesting insights, displaying the importance of the formulation of the state-action novelty-based termination module.

### Post-Rebuttal

Thank you for your responses regarding the diversity of the task demonstration set and the average length of skills learned. Please include this in the updated manuscript.

Claims And Evidence: Yes, the claims are supported by an abundance of evidence.

Methods And Evaluation Criteria: Yes, the paper has sufficient evaluation for the problem at hand.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes, the validity of the experimental design was checked thoroughly.

Supplementary Material: The supplementary material was briefly reviewed.

Relation To Broader Scientific Literature: Yes, the work is compared to several related works in Section 2 and compared against several works in Section 6.

Essential References Not Discussed: None

Other Strengths And Weaknesses:

Strengths:
+ The paper contains abundant results and support for the proposed method.
+ There are several intuitive explanations that help with understanding the paper.

Weaknesses:
- Due to the amount of material within the paper, many important details are found in the Appendix. The paper could be improved with longer figure captions.

Other Comments Or Suggestions:
- Could you comment on how diverse the initial task demonstration set can be (or needs to be)?
Questions For Authors:
- Could you provide further details regarding the average length of skills learned?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your valuable feedback. Please find our responses to your concerns below.

**Q1: Diversity of the task demonstration set**

Thank you for your insightful suggestion. In our experiments, we found that the initial task-agnostic demonstration set needs to be diverse enough to cover the **observation space** (i.e., cropped images centered around the agent (Figure 2)), rather than fully covering the **state space** of downstream tasks. This diversity is especially important in environments like maze and block stacking, where downstream tasks involve significant changes in configuration (e.g., unseen maze layouts or random block arrangements in a larger-scale block stacking environment). To ensure sufficient diversity, we used 85,000 trajectories in the maze environment and 37,000 trajectories in the block stacking environment. In contrast, the kitchen environment, where the scene layout is consistent across tasks, required only 400 trajectories.

**Q2: Details on the average length of skills learned**

Thank you for your feedback. In Appendix A.4, we explored the impact of fixed skill trajectory lengths by evaluating whether NBDI still outperforms SPiRL [1] when SPiRL is configured to use the average skill length observed in NBDI (referred to as SPiRL (avg-fixed)). We found that NBDI consistently outperforms SPiRL even when SPiRL uses these average fixed skill lengths. Interestingly, SPiRL (avg-fixed) performed worse than SPiRL with a fixed skill length of 10 in the block stacking environment, but better in the maze environment. However, across all fixed skill lengths tested (ranging from 10 to 30), there was no instance where SPiRL outperformed NBDI. These findings support our claim that NBDI can effectively leverage critical decision points in the environment compared to fixed-length approaches.
Furthermore, in Table 2 of Appendix A.4, we report the skill lengths of the high-level policy for each environment during downstream reinforcement learning. The results show that our method maximizes temporal abstraction by frequently executing longer skills (lengths 21–30). At the same time, it retains the flexibility to terminate earlier (lengths 1–10) when critical decision points are encountered. This adaptability enables more efficient exploration of the state space and enhances the transfer of knowledge across various tasks.

**Q3: More informative figure captions**

Thank you for your suggestion. We will revise the captions for Figure 1 and Figure 2 to provide additional context about the tasks and domains depicted. Additionally, we will expand the caption for Figure 3 to briefly explain the visualization procedure, thereby helping readers better interpret the figures and their significance.

[1] Pertsch, Karl, et al. "Accelerating reinforcement learning with learned skill priors." Conference on Robot Learning. PMLR, 2021.

---

Rebuttal Comment 1.1:

Comment: Thank you for your responses regarding the diversity of the task demonstration set and the average length of skills learned.
Summary: This paper introduces NBDI (Novelty-based Decision Point Identification), a novel approach for learning termination conditions to extract skills from task-agnostic demonstrations. The method consists of using state-action novelty to identify critical decision points where skills should terminate, allowing for variable-length skill execution. The method leverages an intrinsic curiosity module (ICM) to estimate novelty, considering state-action pairs with high novelty scores as potential decision points. NBDI is evaluated across multiple environments (maze navigation and robotic manipulation tasks) and consistently outperforms fixed-length skill extraction methods like SPiRL. The authors demonstrate that by executing variable-length skills that terminate at meaningful decision points, agents can explore more efficiently and better transfer knowledge across different tasks.

Claims And Evidence: The claims are generally well-supported by evidence. The authors demonstrate NBDI's greater performance compared to fixed-length skill approaches (like SPiRL) across multiple environments. The claim that state-action novelty can identify meaningful decision points is supported by visualizations showing that high novelty points correspond to intuitive decision points (e.g., crossroads in mazes, and completed subtasks in manipulation tasks). The authors are transparent about limitations regarding sensitivity to dataset quality. They provide experiments (Section 6.5) and visualizations (Appendix C) to demonstrate how varying levels of stochasticity in demonstrations affect decision point detection.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate. The authors evaluate NBDI on two navigation tasks and two robotic manipulation tasks, providing diverse testing environments with continuous state and action spaces.
They compare against multiple relevant baselines (SAC, BC+SAC, IQL+SAC, SPiRL, and other variable-length skill methods). The metrics used (success rate for navigation, number of stacked blocks for manipulation, and number of completed subtasks for the kitchen environment) are suitable for the respective tasks. The ablation studies (with "NBDI-x" baselines, or in Appendix A) isolate contributions from different novelty measures and design choices.

Theoretical Claims: Not applicable (no new theoretical claims).

Experimental Designs Or Analyses: The analyses appear sound. The authors conduct experiments across multiple seeds (five different seeds per experiment) and report confidence intervals. They compare against ablations, fixed-length skill methods (SPiRL), other variable-length skill approaches (LOVE, relative novelty), and even algorithms that do not leverage temporal abstractions (flat RL methods). The analyses of (1) how dataset quality affects performance and (2) where the decision points are placed in maze environments both add valuable insights.

Supplementary Material: I reviewed the Appendix. I found the extra experiments (Appendix A) and visualizations (Appendix C and D) particularly interesting. The additional algorithmic details in Appendix B helped me get a better understanding.

Relation To Broader Scientific Literature: The paper positions itself well within the literature on skill-based RL, particularly regarding skill extraction and termination condition learning. It builds upon the options framework (Sutton, 1998) and SPiRL (Pertsch et al., 2021), a recent fixed-length skill extraction method. The authors acknowledge prior work on identifying sub-goals through novelty (Şimşek & Barto, 2004), but extend this to continuous state-action spaces. The work also relates to research on curiosity-driven exploration (Pathak et al., 2019) by repurposing novelty estimation for termination conditions rather than exploration bonuses.
Essential References Not Discussed: I am not aware of any essential references not discussed in this work.

Other Strengths And Weaknesses:

Strengths:
- The approach is simple and clear. The theoretical motivation linking state-action novelty to termination improvement is well-justified.
- Visualizations nicely illustrate how the detected decision points align with intuitive critical points in the environment.

Weaknesses:
- The novelty threshold is a critical hyperparameter requiring environment-specific tuning, which may limit generalizability.
- The effectiveness appears sensitive to the quality of demonstration data, potentially limiting applicability with highly stochastic or suboptimal demonstrations.

Other Comments Or Suggestions: The paper would benefit from a clearer explanation of the termination condition implementation in Section 5.1.

Questions For Authors:
1. Given the significant variation in novelty thresholds across environments (0.3 for kitchen vs. 50 for maze tasks), have you explored adaptive threshold-setting approaches? For instance, would it be effective to set a coefficient $\alpha$ as a hyperparameter, track the maximum novelty score $m$ observed during training, and consider all state-actions whose novelty is higher than $\alpha m$ as critical decision points?
2. How would you expect NBDI to compare against LOVE on discrete action environments, and against "relative novelty" approaches on discrete state spaces? This could help clarify the method's advantages across different types of environments.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your thoughtful comments. Please find the responses to your questions below.

**Q1: Details on the implementation of termination conditions**

Thank you for the suggestion. To improve clarity, we will revise Line 232 (second column) to include a more detailed explanation of the termination condition implementation, which is currently described in Appendix B.1. The revised sentence will read: “During the training of the low-level policy, the deep latent variable model receives a randomly sampled experience from the training dataset, along with a termination condition vector $\beta$ provided by the state-action novelty module. The model is trained to reconstruct the corresponding action sequence and its length (i.e., the termination point) by maximizing the evidence lower bound (ELBO).”

**Q2: Adaptive thresholds for environments**

Thank you for your insightful suggestion. In practice, the thresholds we tuned through experiments (Appendix A.1) approximately correspond to the 97th percentile of the novelty values computed over task-agnostic demonstrations—e.g., kitchen = 0.3 (97.47th percentile), maze = 50 (96.86th percentile), and block stacking = 40 (96.12th percentile). We will include this percentile-based guidance in the paper to help readers more easily apply our method to new environments.

**Q3: Comparison of NBDI to other methods on discrete state/action spaces**

Thank you for your valuable suggestion. To provide an intuitive comparison between NBDI, Relative Novelty [1], and LOVE [2] in discrete settings, we conducted a case study using a grid-based maze environment. This allowed us to directly visualize and compare termination points across methods in a controlled, discrete domain. To align with our task-agnostic setup, we collected diverse expert-level demonstrations from random start and goal positions, and used these datasets to extract skills and termination points. The figure is available at: https://imgur.com/a/aRFrZ30.
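For concreteness, the percentile-based guidance described under Q2 can be sketched in a few lines. This is a minimal illustration only: the novelty scores below are random stand-ins (lognormal draws), not outputs of the actual ICM-style novelty module, and only the thresholding logic is shown.

```python
import numpy as np

# Stand-in novelty scores for illustration; in practice these would come
# from the state-action novelty module evaluated on the task-agnostic
# demonstration dataset.
rng = np.random.default_rng(0)
novelty_scores = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

# Percentile-based rule: treat roughly the top 3% most novel
# state-action pairs as critical decision points.
threshold = np.percentile(novelty_scores, 97)
decision_points = novelty_scores > threshold

print(f"threshold = {threshold:.3f}, flagged = {decision_points.mean():.1%}")
```

Under this rule, a single percentile hyperparameter would approximately recover the environment-specific thresholds reported above (0.3 for kitchen, 50 for maze, 40 for block stacking), since each sits near the 97th percentile of its environment's novelty distribution.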
[1] defines novelty at a given state as the ratio between the average novelty of future and past states, measured within a fixed-size sliding window (n_lag). A high value indicates that the agent transitions from a familiar region to a less familiar one. Following the original formulation, we only evaluated states with sufficient trajectory context on both sides of the window. As shown in the visualization, Relative Novelty is highly dependent on transition history, often identifying only a subset of bottleneck states.

[2] uses a variational inference framework to extract skills based on the Minimum Description Length (MDL) principle, in that its objective is to “effectively compress a sequence of data by factoring out common structure”. While [2] employs a variational inference framework to implement the MDL principle, we used Byte Pair Encoding (BPE) to extract skills in the discrete setting, as BPE is a specific formulation of MDL [3] and offers a more intuitive and interpretable formulation of compression. Termination points were visualized based on where the segmented skills terminated during 100 goal-reaching tasks. The results show that terminations vary significantly with the number of trajectories used to extract skills, as [2] focuses on capturing common structure rather than consistent bottlenecks.

NBDI terminates skills based on both conditional action novelty and state novelty (Section 4.1). The visualization demonstrates that NBDI consistently identifies key bottleneck states and exhibits robustness to the number of trajectories collected, in contrast to the other methods. Moreover, states with high state novelty often correspond to regions that are rare or hard to reach within the task-agnostic dataset. By increasing the decision frequency in such unfamiliar states, NBDI promotes more effective exploration of the state space.

[1] Şimşek, Özgür, and Andrew G. Barto. "Using relative novelty to identify useful temporal abstractions in reinforcement learning."
Proceedings of the Twenty-First International Conference on Machine Learning. 2004.

[2] Jiang, Yiding, et al. "Learning options via compression." Advances in Neural Information Processing Systems 35 (2022): 21184-21199.

[3] Gallé, Matthias. "Investigating the effectiveness of BPE: The power of shorter sequences." Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019.

---

Rebuttal Comment 1.1:

Comment: Thank you very much for your responses. They are all very clear and address all the questions I had. Furthermore, I completely agree with the suggested additional content and experiment.
Summary: The paper is on the topic of skill learning in reinforcement learning. Its contribution concerns when skills should be terminated. The authors build on existing work in the literature on skill learning, where the learned skills are executed for a fixed number of time steps. Here the authors add the flexibility of terminating the skills at other times. Specifically, they focus on identifying states that would serve as particularly useful points of termination based on the novelty of the state and the state-action pair. The authors present empirical results in several domains.

Claims And Evidence: The main claims are as follows:
(1) The proposed skill termination condition for task-agnostic demonstrations remains effective even when the environment configuration of complex, long-horizon downstream tasks undergoes significant changes.
(2) The paper presents "a novel finding in reinforcement learning, which is the identification of state-action novelty-based critical decision points."
(3) Executing terminated skills, based on state-action novelty, can substantially enhance policy learning in both robot manipulation and navigation tasks.

It is not clear how well the paper supports claim (1). There is not a clear description of the test environments and how they change in future tasks. The second claim is confusing because "identification" of critical decision points is not a "finding". For the third claim, there is some supporting evidence in the paper. However, it must be noted that the experimental evaluation does not consider a broad range of skill-based approaches to policy learning.

Methods And Evaluation Criteria: The environments and the tests that are carried out make sense for the problem at hand. However, ideally a broader set of skill-based methods would be used in the evaluation. Currently, only a single such approach is tested (SPiRL).

Theoretical Claims: N/A

Experimental Designs Or Analyses: No particular problems I have noticed.
Supplementary Material: No.

Relation To Broader Scientific Literature: The paper implements an intuitive idea for skill termination in reinforcement learning. This proposed approach to skill termination is an alternative to executing skills for a pre-determined fixed period of time. There are a large number of approaches in the literature to skill discovery -- here I am using the term "skill" broadly to denote behaviours that can take multiple time steps to execute. Many of these approaches identify termination conditions explicitly, which means that these skills terminate in accordance with their termination conditions rather than after executing for a fixed number of time steps. So the ideas in this paper will not necessarily be relevant for those approaches. However, there are also some existing methods that learn skills with no explicit termination conditions; with these methods, the skills are executed for a fixed amount of time. The proposed approach can be useful for such skills.

Essential References Not Discussed: The paper proposes a termination condition for skills, which could potentially be useful with many existing approaches to skill discovery that do not identify termination conditions for the discovered skills and instead execute them for fixed time lengths. But the paper examines only one such approach (SPiRL). A few other approaches are mentioned in the paper but not included in the experimental evaluation. I suggest that the authors include a broader discussion of existing approaches to skill learning for which their approach may be useful and that they include some of these methods in their experimental evaluation.

Other Strengths And Weaknesses: The main idea in the paper -- that skills should be terminated at points where renewed decision making may be useful -- is I think a good one. It may be useful in combination with many different ways of discovering skill policies. The writing could be more organised and clear.
I have given some specific comments in the next section, which I hope the authors will find useful.

Other Comments Or Suggestions:

Figure 1 -- The figure shows a number of images from the kitchen environment but without sufficient context. The reader needs a better understanding of the domain in order to get meaningful information from these images.

Figure 2 -- The last figure here has low resolution. More importantly, in both figures showing states from the environment, it is not fully clear what these figures are showing. The domains, and their visual depiction, need to be explained to the reader.

Line 198 -- "where a subtask has been completed" -- What is a "subtask"? How is it defined? Additionally, this domain needs to be explained in more detail.

Figure 3 -- Needs a larger font size on the side bar showing the color scale.

Line 185-190, second column: "When the skills (or options) are discovered from diverse trajectories (e.g., trajectories gathered from a diverse set of goals), termination improvement is typically observed in states where a multitude of actions have been executed, such as crossroads." It is not clear what the basis is for this observation.

Line 264-268, column 1 -- Here a discussion of the agent policy would be useful. Does the policy matter? Are we making any assumptions about the agent policy when trajectories are collected? Would there be a difference if the agent is acting optimally within the current task versus acting randomly?

Figure 5 -- Why is the success rate so low in the two mazes? In sparse block stacking, what is the maximum number of stacked blocks possible?

Line 325 -- The problem should be specified using the reward function, not in terms of what the agent needs to do. How specifically is the agent being rewarded?

Line 318: "A large set of task-agnostic agent experiences is collected from each environment" -- Please state what the agent policy was in these environments. Was it the optimal policy for the task? Something else?
There is very little description of the environments in the main paper. A large part of the evidence provided for the claims in the paper comes from empirical evaluation. Therefore the structure of the environments is important in understanding the strengths and weaknesses of the proposed approach. Ideally the main paper would provide a sufficient high-level description of the environments, and this is currently not the case. This would include, for example, the states, actions, reward function, and some explanation of the domain dynamics, such as the level of stochasticity.

For baseline skill-based algorithms with fixed execution length, it would be useful to explore the impact of the skill trajectory length on the results.

Questions For Authors: Do you think your proposed approach would be useful in conjunction with other existing methods of determining skill policies? Which ones? Have you done any analysis of the use of the proposed skill termination with different ways of setting skill policies?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We thank the reviewer for the thorough and constructive comments. We hope we can address your concerns below.

**Q1: Description of the train/transfer environments**

In Appendix E, we introduced the train/transfer domain similarity metrics for the environments used in our experiments. To demonstrate the effectiveness of our method in challenging downstream tasks involving significant configuration changes, we tested on a maze with an unseen layout and a larger-scale block stacking environment with more blocks in random positions—both of which differ significantly from the task-agnostic offline datasets. The significant performance improvements reported in Section 6 clearly demonstrate the effectiveness of our method.

**Q2: Confusing expression**

We apologize for the confusion caused by our wording. We agree that the term "finding" may have been misleading in this context. We will revise the word to “a novel method”.

**Q3: Applying NBDI to other skill-based approaches**

Thank you for your constructive suggestion regarding the experiments. Our work focuses on extracting meaningful termination points from task-agnostic demonstrations. To explore the broader applicability of our approach, we investigated its compatibility with a skill-based approach that leverages task-agnostic demonstrations to solve challenging long-horizon, sparse-reward meta-RL tasks [1]. Specifically, we applied our method during the skill extraction phase of [1] to learn variable-length skills in place of fixed-length skills. In the maze environment, we randomly sampled 10 goal locations for meta-training. The table below compares the success rate of meta-policies trained with [1] (fixed-length skills) and with our method during the meta-training phase across episodes.
The results show that the extracted variable-length skills allow the meta-policy to better promote knowledge transfer between different tasks, helping it combine skills to complete complex tasks. We report mean success rates on the 10 meta-training tasks across 3 different seeds with standard errors.

| |ep200|ep400|ep600|ep800|ep1000|
|---|---|---|---|---|---|
|[1]|0.193$\pm$0.030|0.329$\pm$0.037|0.501$\pm$0.049|0.436$\pm$0.048|0.582$\pm$0.037|
|[1]+NBDI|**0.341**$\pm$0.035|**0.554**$\pm$0.037|**0.737**$\pm$0.028|**0.828**$\pm$0.028|**0.896**$\pm$0.025|

Furthermore, during the target task learning phase, the meta-policy learned through our approach leads to **significantly better sample efficiency** on the unseen target task. These results indicate that our approach can be effectively integrated with a broader class of skill-based methods that leverage task-agnostic demonstrations (3 seeds).

| |ep20|ep100|ep300|ep500|
|---|---|---|---|---|
|[1]|0.143$\pm$0.063|0.593$\pm$0.191|0.667$\pm$0.193|0.990$\pm$0.006|
|[1]+NBDI|**0.560**$\pm$0.121|**0.960**$\pm$0.021|**0.980**$\pm$0.011|**0.993**$\pm$0.003|

**Q4: The trajectory-collecting policy**

As our work primarily focuses on learning termination conditions from task-agnostic demonstrations, we will revise Lines 264–268 and 318 to clarify that we assume access to **task-agnostic, expert-level demonstrations**. In Section 6.5, we analyzed the impact of dataset quality on our method’s performance. The results indicate that the level of stochasticity and the suboptimality of the dataset influence the effectiveness of our approach, which remains a limitation of our work.

**Q5: Skill length impact on baselines**

Due to space limitations, we kindly refer you to our response to **Reviewer bZqs’s Q2**, where we provide a detailed discussion.

**Q6: Details of the environment settings**

We apologize for not including information about the environments (Appendix E, F) in the main paper.
In the kitchen environment, the agent completes an unseen sequence of object manipulation subtasks, like opening the cabinet or turning on the oven. In the maze environment, it receives a binary reward for reaching the goal. In the sparse block stacking environment, the agent is rewarded based on the height of successfully stacked blocks at the end of the episode. We will include a brief introduction to the environments in the experiments section.

**Q7: Other comments and suggestions**

**(Line 185-190)** We apologize for the confusion. Since the statement in Line 185–190 is based on Figure 3, we will move it to the paragraph where Figure 3 is discussed in detail.

**(Figure 5)** Figures 5(a) and 5(b) illustrate the performance of the baselines and our method on a challenging downstream maze task, which involves reaching the goal in a maze with an unseen layout (Appendix E). Due to the complexity of this task, performance is highly sensitive to random seeds, leading to generally lower success rates compared to previously reported results (Appendix F). In the sparse block stacking environment, the maximum number of blocks that can be stacked is five.

[1] Nam, Taewook, et al. "Skill-based meta-reinforcement learning." ICLR, 2022.
Compute Optimal Inference and Provable Amortisation Gap in Sparse Autoencoders
Accept (poster)
Summary: This paper performs a systematic investigation of the various modelling choices for SAEs, particularly the choice of the encoder. The paper shows both theoretically and empirically (on synthetic data) that there exists an "amortization gap", in the sense that SAEs are unable to recover latent features due to the linearity of their encoders. Then, the paper analyses the interpretability of representations under various choices of encoder (LCA, MLP) and finds that MLP features are far more interpretable than either canonical SAE or LCA features, contradicting the folk belief that non-linear encoders in SAEs lead to non-interpretable features.

Claims And Evidence: The claims made by this paper are well supported both theoretically and empirically.

Methods And Evaluation Criteria: The proposed methods and criteria make sense for this study.

Theoretical Claims: The theoretical claims seem correct based on my check of the proof. However, I believe a detailed discussion of the setting (num. data sources N > dimensionality M), especially in relation to its practical significance, would help readers. Please also see my clarification question at the end regarding this issue.

Experimental Designs Or Analyses: The experimental design is mostly sound. However, there seem to be some inconsistencies between the experiments on synthetic data and those on real data. In particular, while the experiments on synthetic data involve sparse coding and SAE + ITO, those baselines were missing from the real data experiments. Instead, another baseline, LCA, was introduced without discussion in that section. It might be better to have the same set of methods evaluated on both synthetic and real data to understand the trade-offs between latent variable recovery and interpretability.

Supplementary Material: I only reviewed the proof of Theorem 1 in the supplementary material.
Relation To Broader Scientific Literature: This paper adds to the nascent literature on the theoretical understanding of SAEs (e.g.: Menon et al., "Analyzing (In)Abilities of SAEs via Formal Languages"). While most SAE work has been empirical in nature (e.g.: Gao et al., "Scaling and evaluating sparse autoencoders"), this work sheds light on a key aspect, namely the choice of the encoder, and its relation to classical dictionary learning.

Essential References Not Discussed: Nothing "essential" is missed to the best of my knowledge.

Other Strengths And Weaknesses:

Other strengths:
- The paper performs a systematic evaluation of SAE modelling choices and shows that MLP-based encoders seem to outperform the more commonly used linear encoders. If these findings hold for larger-scale models and datasets, this could influence the choice of SAE architectures in future work.

Other weaknesses:
- The experiments are performed on (1) synthetic data, and (2) real data, but from small-scale toy GPT-2 models. Given this, it is unclear whether the findings translate to larger-scale models and datasets.
- The paper does not include a qualitative discussion or visualization of the features discovered by the MLP models, SAEs, and LCA, to help better judge the benefits of using the MLP encoder. This is especially important as recent work (Heap et al., "Sparse Autoencoders Can Interpret Randomly Initialized Transformers") has raised questions regarding the efficacy of commonly used auto-interpretability pipelines. Hence, only presenting benefits in terms of auto-interpretability scores (Figure 7) may lead to spurious findings.

Other Comments Or Suggestions: N/A

Questions For Authors: **Clarification question**: An interesting aspect of the theory was that the statement holds only when the number of sources N > the data dimensionality M, which seems to be the exact scenario for modern SAEs, i.e., their latent dimensionalities are much larger than their feature size.
On the contrary, usual assumptions for the data generating process involve assuming M > N, which seems to better correspond to the settings in which contractive autoencoders operate. My question is: does this imply that SAE-type architectures might be better suited to the M > N setting as opposed to the N > M setting that is studied in this paper?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

# Response to Reviewer WhEs

We thank the reviewer for their thoughtful engagement with our paper and their recognition of its theoretical and empirical contributions. We address their specific concerns below.

## Regarding inconsistencies between synthetic and real data experiments

The reviewer correctly notes that there are differences in the methods evaluated on synthetic versus real data. This was primarily due to implementation and computational constraints:

1. **SAE+ITO on real data**: We did not implement SAE+ITO for the GPT-2 experiments due to the significant computational cost of performing inference-time optimisation on hundreds of millions of tokens. While feasible on synthetic data, this would have been quite expensive at LLM scale, particularly in addition to our LCA method.

2. **LCA versus sparse coding**: The LCA method used in our real data experiments is an implementation of sparse coding that uses competitive dynamics to achieve sparse inference. We chose LCA specifically because it has an established history in the sparse coding literature and offered a more efficient implementation path for our large-scale experiments than our synthetic sparse coding method.

We acknowledge that using the same set of methods across all experiments would have provided better consistency. In future work, we plan to implement a more unified experimental framework that can efficiently scale to larger models and datasets.

## Qualitative feature visualisation

We agree that providing qualitative visualisations and examples of the features discovered by different methods would strengthen our paper. While our automated interpretability evaluation provides quantitative evidence for the superior interpretability of MLP features, examples would offer readers a more intuitive understanding of these differences.
In the final version, we will add an appendix section with representative feature examples from each method, including: - Raw activation patterns on top activating tokens - Generated feature interpretations - Validation examples showing correct/incorrect feature activation predictions We note that while recent work by Heap et al. (2024) raises important questions about interpretability pipelines, their concerns primarily relate to interpreting features in randomly initialised transformers, not to the comparison of different encoding methods applied to the same model. In addition to this, there have been demonstrations that trained transformers still out-perform randomly initialised transformers on autointerp baselines. Our work focuses on the relative performance differences between encoding strategies rather than absolute claims about interpretability. ## Clarification on dimensionality settings (N > M versus M > N) The reviewer raises an excellent question about the dimensionality settings and their implications. To clarify: Our theorem addresses the case where N (number of sources/latent dimensions) > M (observed dimensions), which indeed matches the typical SAE setting where the latent space is larger than the activation space. The key insight is that in this regime, a simple linear-nonlinear encoder cannot perfectly recover all possible sparse codes from their lower-dimensional projections, even when such recovery is theoretically possible with iterative methods. Regarding the contractive autoencoder settings with M > N, this represents a different regime than what SAEs typically address. In that case: 1. The encoding is undercomplete rather than overcomplete 2. The primary goal is often dimensionality reduction rather than disentanglement 3. The sparsity constraint becomes less critical since there is no inherent superposition Our theoretical result does not imply that SAE architectures would be better suited to the M > N setting. 
Rather, it suggests that in the standard N > M setting where SAEs operate, more expressive encoders (like MLPs) can reduce the amortization gap and improve feature recovery. ## Scaling to larger models We acknowledge the limitation that our experiments are conducted on GPT-2 Small. While computational constraints prevented us from scaling to larger models, we have encouraging evidence that our findings would generalise: 1. Our synthetic experiments at larger scales (Appendix A.3) show that the amortisation gap becomes more pronounced as dimensionality increases 2. The theoretical result is independent of scale 3. Recent work applying SAEs to larger models (e.g., Gao et al., 2024) has shown qualitatively similar features and challenges across model scales We believe our contributions provide valuable insights despite these limitations, and we hope to validate our findings on larger models in future work. Thank you again for your thoughtful review and constructive suggestions for improving our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal! I agree with the authors' comments and believe discussing (1) the N>M vs M>N issue; and (2) the reliability of auto interp pipelines, would make the paper more complete. I will keep my score unchanged at this time.
Summary: The authors prove that typical SAEs (ReLU, JumpReLU, TopK) cannot recover the optimal encoding (sparse coefficients) compared to sparse coding methods that solve individual examples iteratively. They empirically demonstrate this using multiple synthetic experiments. Finally, they apply sparse coding using the locally competitive algorithm (LCA) to real LLM activations and find that non-linear encoders (an MLP) and LCA lead to more interpretable features than a ReLU encoder, as evidenced by automatic interpretability scoring methods. Claims And Evidence: Claim: Simple linear-nonlinear SAE encoders (linear layer + ReLU, for example) are provably worse at sparsely encoding data than iterative sparse coding algorithms. This is a theorem that's proved in Appendix A. Claim: This same optimality gap holds on SAEs applied to large language model activations. This is demonstrated via training a ReLU SAE, an MLP SAE and an LCA encoder on GPT-2 small activations, then evaluated using existing automated scoring methods. The MLP SAE and the LCA encoder are more interpretable. I believe this claim is overstated. The authors note that their optimality gap is only globally suboptimal, with adversarially chosen sparse codes. In practice, I am suspicious that LLM activations are so evenly spaced that this optimality gap is meaningful. I would be much more convinced that the optimality gap matters in practice if: * The authors used a more recent or larger LLM. GPT-2 small is very small (120M parameters) and is 6 years old at this point. Qwen2 has a 500M parameter model, Pythia has a 160M parameter model, OpenELM has 270M and 450M parameter variants, etc. If model size is not an issue, then something like Llama 3 8B would be great. * The authors used an improved baseline. While their proof is for any linear-nonlinear encoder, more recent works in SAEs (JumpReLU, TopK, BatchedTopK) notably improve upon the MSE and L0 tradeoff.
Perhaps a stronger baseline means that the optimality gap is not an issue in practice. * While FLOPs and efficiency are extensively compared in the synthetic experiments, there is no comparison of compute efficiency at scale. How does LCA compare to the amortized encoders when applied to real LLMs? Methods And Evaluation Criteria: The authors use synthetic datasets to demonstrate that their proof holds empirically (great!). Then they use GPT-2 small as a benchmark for "real-world" evaluation. As I stated above, I think the LLM is too small/old and the baseline SAE (a vanilla ReLU) is not a strong enough baseline. Theoretical Claims: I did not check the correctness of the proof. Experimental Designs Or Analyses: The experimental design is fine. Like I said before, I think the SAE baseline is too weak. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The work is well-positioned in the scientific literature. I think it is better-positioned than many other SAE works because it specifically discusses sparse coding and pre-SAE methods for solving this kind of problem. Essential References Not Discussed: No essential references are missing. Other Strengths And Weaknesses: * I appreciate the theoretical analysis of SAEs. Current SAE works are overwhelmingly empirical at the moment. * The synthetic experiments are well-designed to demonstrate a gap. Other Comments Or Suggestions: I am happy to accept this paper if ICML values theoretical contributions. While I personally am more concerned with empirical, "practical" results, I understand if this venue has different goals. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer STGo We appreciate the reviewer's thoughtful analysis of our work and their recognition of the theoretical contribution. We would like to address several points regarding the practical implications of our findings. ## Regarding the claim about optimality gap in LLM activations We agree that the connection between our theoretical results and practical LLM applications deserves further elaboration. Our work demonstrates that the amortisation gap exists not just theoretically but also empirically across various settings, including LLM activations. ### Model choice and scale While GPT-2 Small (124M parameters) is indeed older than more recent models, it remains a standard benchmark in the mechanistic interpretability literature for several reasons: 1. **Established baseline**: Numerous interpretability papers continue to use GPT-2 as a testing ground, including recent work by Anthropic, EleutherAI, and others. This facilitates comparison with existing literature. 2. **Computational accessibility**: Our experiments involved training multiple models on hundreds of millions of tokens, which becomes prohibitively expensive with larger models. 3. **Feature stability**: The principles of superposition and sparse feature representation appear consistent across model scales, as demonstrated by recent work finding similar phenomena in models from GPT-2 to GPT-4 and Claude. That said, we acknowledge that verifying our findings on larger models would strengthen our claims, and we plan to pursue this in future work. ### Baseline improvement Regarding the baseline SAE implementation, we intentionally tested a standard ReLU SAE as it represents the most widely used architecture in current interpretability research. We agree that newer architectures like JumpReLU and TopK offer improvements, and we discuss these in Appendix A.7.2. 
Our theoretical result applies to this entire class of models, as they all rely on fixed function approximations rather than iterative optimisation. The empirical gap we observe provides an explanation for why these architectural innovations improve performance - they partially address the amortisation gap through more sophisticated function approximation. ### Computational efficiency at scale The reviewer raises an excellent point about computational efficiency comparisons at scale. While our LCA implementation was not fully optimised, we can provide some context: - Training the LCA model took approximately 3x the computation time of the SAE model due to the additional gradient steps per batch. - During inference, LCA required approximately 20x the computation of the SAE. However, as noted in recent work (e.g., Nanda et al., 2024), inference-time optimisation approaches can be made significantly more efficient through techniques like matching pursuit and careful algorithm selection. The fundamental trade-off between amortised and iterative approaches remains, but the computational gap can be narrowed considerably. ## On empirical significance beyond theory While our theoretical contribution stands independently, we believe the empirical results are meaningful for several reasons: 1. The significant interpretability improvement of the MLP encoder (median F1 of 0.83 vs 0.6 for SAE) suggests that even modest increases in encoder expressivity can substantially improve feature extraction. 2. Our work offers an explanatory framework for why recent SAE variants achieve better performance, connecting theoretical understanding with practical innovations. 3. The results suggest promising directions for further research, such as developing more expressive encoders or hybrid approaches that balance computational efficiency and encoding quality. 
We thank the reviewer for highlighting areas where our practical evaluation could be strengthened, and we hope to address these in future work with larger-scale evaluations across multiple model families and more sophisticated baselines. --- Rebuttal Comment 1.1: Comment: That's an excellent point about GPT-2 being used broadly in mechanistic interp. work and I agree completely. Thank you. I completely understand about computational feasibility as well. While I am not familiar with recent work on inference-time optimization, a 20x slowdown cannot be ignored (3x absolutely can be ignored, no issues from me there). I think this is an important limitation that should be included in the final work, along with any citations you'd like to include about obvious implementation improvements. Can you say more about this? > Our work offers an explanatory framework for why recent SAE variants achieve better performance. What parts of your work look at recent SAE variants, and what variants specifically? I remain at a score of 3 in favor of other, more theoretically-inclined reviewers to decide on the value in your theoretical results. Thank you for your hard work and detailed rebuttal.
Summary: The authors study Sparse Autoencoders (SAE), first showing that simple linear-nonlinear encoding leads to an amortisation gap. Next, the authors compare different SAE architectures on synthetic settings, showing that better architectures can beat standard SAE in this setting. Finally, they study the interpretability by comparing different models on the pre-activations of a GPT2 layer. Claims And Evidence: No concern Methods And Evaluation Criteria: The proposed methods seem sensible, see questions below for potential issues Theoretical Claims: Skimmed the proofs, there is a question about Theorem 3.1 in questions down below Experimental Designs Or Analyses: Did not check the experiments in detail. Supplementary Material: Skimmed the appendix Relation To Broader Scientific Literature: This paper furthers the understanding of sparse autoencoders, and may be interesting to both theorists and practitioners in the field. Essential References Not Discussed: None that I know of Other Strengths And Weaknesses: Strengths: - Good presentation - Clearly written, easy to read Weaknesses: - The theoretical claim seems to be a heuristic rather than a theorem, see questions below - It is unclear how much the experiments support the claims made; see discussion below. Given that there is little in terms of theoretical results, the experiments feel a bit weak. Other Comments Or Suggestions: - A more close-up version of Figure 2 would be good to see the differences better at convergence Questions For Authors: - The proof of Theorem 3.1 does not seem rigorous to me; in particular I have the following questions. What exactly is the assumption on the sparse prior $P_S$? If we choose a deterministic function, for example projection onto the first $K$ components, then this is a very different setting than, say, i.i.d. sparse entries; I am not sure that the proof is true for the former. The conclusion after equation (8) should be made rigorous; right now I am not sure how this follows.
- In Figure 3, why does SAE exhibit non-monotone behaviour? - What do the circles in Figure 7 represent? This should be explained in the description. - Why is the metric used for Figure 7 a good choice? Given that it is not a very interpretable metric, it would be good to have more points of comparison; are there other known baselines that one could compare this to? - The main contributions of this paper are the experiments on the synthetic data, but the used data seems quite far from real data both in terms of dimension and complexity. Why do you believe that these results reflect well what is going on, and why should the intuition from these examples translate to practical problems? - Related to the above point, the models as introduced in 3.3 seem rather simple. I am not an expert in the area, so I do not know what the most sensible SOTA models would be, but why are more practical models not used to evaluate on the synthetic setting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer JWnB We thank the reviewer for their time spent evaluating our paper. We believe there are several misunderstandings in the review that we would like to address, as they appear to have led to an incomplete assessment of our work. ## Regarding Theorem 3.1 The theorem is indeed rigorous and makes minimal assumptions about the sparse prior $P_S$. The only requirement is that the support of $P_S$ consists of vectors with at most $K$ non-zero entries. This is explicitly stated in the theorem: "Let $S=\mathbb{R}^N$ be $N$ sources following a sparse distribution $P_S$ such that any sample has at most $K \geq 2$ non-zero entries, i.e., $||s||_0 \leq K, \forall s \in \text{supp}(P_S).$" The proof holds regardless of whether $P_S$ is deterministic or stochastic. It's a constructive proof that shows a specific set of vectors (the standard basis vectors and their sums) cannot be correctly encoded simultaneously by a linear-nonlinear encoder. Since these vectors are valid under any sparse distribution with max sparsity $K \geq 2$, the conclusion holds generally. Regarding the conclusion after equation (8): This follows directly from linear algebra. If $S'$ must be diagonal to correctly represent the sparse codes after ReLU, but also must have rank ≤ M < N due to the dimensionality constraints, we have a contradiction since a diagonal matrix with non-zero diagonal entries has rank N. ## Figure 3 Non-monotone Behaviour The non-monotonic behaviour observed in Figure 3 (particularly for SAE+ITO) is a deliberate focus of our analysis, not an oversight. As stated in Section 4.2: "SAE+ITO initialised with SAE latents exhibits distinct, stepwise improvements throughout training, ultimately achieving the highest MCC." This pattern reveals interesting dynamics of how inference-time optimisation interacts with the learned dictionary during training. 
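To make the rank argument above concrete: composing any linear encoder $E \in \mathbb{R}^{N \times M}$ with the decoder (dictionary) $D \in \mathbb{R}^{M \times N}$ gives an effective latent-to-latent map $S' = ED$ of rank at most $M$, while exact recovery of all 1-sparse codes would force $S'$ to be a full-rank diagonal matrix. A minimal numerical illustration (our own sketch, not taken from the paper's materials):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 4  # latent dimension N exceeds observed dimension M

D = rng.normal(size=(M, N))   # decoder / dictionary: x = D @ s
E = rng.normal(size=(N, M))   # any linear encoder

S_prime = E @ D               # effective latent-to-latent map, shape (N, N)
assert np.linalg.matrix_rank(S_prime) <= M  # rank is bounded by M

# For the encoder to reproduce every 1-sparse code exactly (before the
# nonlinearity), S_prime would need to be diagonal with non-zero diagonal,
# i.e. have rank N -- impossible, since rank(S_prime) <= M < N.
target = np.diag(rng.uniform(1, 2, size=N))
assert np.linalg.matrix_rank(target) == N
```

The contradiction in the proof is exactly the gap between these two ranks.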
## Figure 7 Visualisation The circles in Figure 7 represent statistical outliers in the distribution of F1 scores, following standard boxplot conventions. We will add this clarification to the figure caption to avoid confusion. ## Interpretability of F1 Score Metric The F1 score is a widely recognised and highly interpretable metric for classification tasks. In our case, it measures how well a second instance of GPT-4o can predict neuron activations based on explanations from a first instance. This is explicitly described in Section 6: "The model predicted which examples should activate the feature based on the first instance's explanation, allowing us to compute an F1-score against the ground truth." The approach is standard in the field and follows established methods from Anthropic, EleutherAI, and other research groups cited in Appendix A.7. ## Relevance of Synthetic Experiments While our synthetic experiments use moderate dimensionality for clarity and computational efficiency, we demonstrate their relevance to practical problems in multiple ways: 1. We explicitly test larger-scale experiments in Appendix A.3 (N=1000, M=200, K=20), showing that our findings hold and even strengthen at larger scales. 2. We examine non-uniform feature distributions in Appendix A.4 to better match real-world latent spaces. 3. Most importantly, we validate our findings on actual GPT-2 residual stream activations in Section 6, showing that the principles discovered in synthetic settings transfer to real neural networks. The synthetic experiments provide a controlled environment where ground truth is known, allowing us to rigorously evaluate the amortisation gap that forms the theoretical foundation of our work. ## Model Complexity and SOTA Comparison Our focus is on the fundamental mechanisms behind sparse encoding and dictionary learning, rather than specific architectural innovations. 
The models in Section 3.3 represent core architectural categories (linear-nonlinear encoders, MLPs, sparse coding) that underlie most advanced SAE variants. In Appendix A.7.2, we discuss how our findings relate to advanced SAE architectures like top-k SAEs, JumpReLU, Gated SAEs, and ProLU activations. These are cutting-edge developments in the field, many published in the past six months. Our work provides theoretical and empirical foundations that explain why these architectural innovations yield performance improvements. We appreciate the opportunity to clarify these points and believe our work makes significant contributions to both the theoretical understanding and practical application of sparse autoencoders for neural network interpretability. --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying my concerns. I still have some concerns about (the proof of) Theorem 3.1 (A.1 in the appendix). Specifically: - Let "$S=\mathbb{R}^N$ be N sources following a sparse distribution". $S$ as defined here is a set; I assume it is meant that there is some random variable on $S$ that can be understood as N sources. What exactly is $P_S$, i.e., what is its domain and what are the precise assumptions made on it? - What does sparse distribution mean here? This is not defined. I assume it means that there are only K non-zero entries, but this is in principle not uniquely clear from the way it is written. - The proof starts with redefining S, which is confusing since $S=\mathbb{R}^N$ is already defined in this scope. - What I do not understand in the proof is that we are choosing a specific $S$; however, in the statement of the theorem we assumed a generic distribution given some properties. Why can this structure be assumed here? While I cannot confidently judge the experiments, I don't feel confident accepting a paper with a Theorem statement and proof this confusing. Without additional evidence this is usually a sign of shallow analysis.
--- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful suggestion regarding the use of the notation $S$ in our paper. Your comment prompted us to carefully review the entire manuscript, and we have now clarified that $S$ in the proof is used exclusively as a diagonal matrix representing a collection of 1-sparse signals, and it does not conflict with any other usage of sparse codes (which are denoted by lowercase $s$) elsewhere in the paper. See the updated proof statement below: Let $K \geq 2$ and $P_K$ be a sparse distribution over $\mathbb{R}^N$, i.e., $\forall s \in \mathbb{R}^N: s \in \text{supp}(P_K) \iff \|s\|_0 \leq K$. This means that any sample has at most $K$ non-zero entries or, equivalently, the support of $P_K$ is a union over $K$-dimensional subspaces. The sources are linearly projected into an $M$-dimensional space, satisfying the restricted isometry property, where $K \log \frac{N}{K} \leq M < N$. A sparse autoencoder (SAE) with a linear-nonlinear (L-NL) encoder must have a non-zero amortisation gap. We appreciate your helpful feedback in clarifying these important details. Could you please confirm if this revision addresses your concerns fully, or if there are additional points you would like us to clarify further? Your suggestions have significantly improved our manuscript, and we greatly value your input. Thank you again for your valuable review.
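As an illustration of the amortisation gap discussed throughout this thread, the following toy sketch (our own construction; the dimensions and the ReLU(A^T x) encoder are hypothetical stand-ins, not the paper's setup) compares a one-shot linear-ReLU encoder against iterative sparse inference (ISTA) on the same random dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 20, 12, 2              # overcomplete: N sources observed in M < N dims
A = rng.normal(size=(M, N))
A /= np.linalg.norm(A, axis=0)   # unit-norm dictionary columns

# one K-sparse non-negative ground-truth code and its projection
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.uniform(1, 2, size=K)
x = A @ s

# (a) amortised one-shot encoder: ReLU(A^T x), a stand-in for an SAE encoder
s_relu = np.maximum(A.T @ x, 0.0)

# (b) iterative inference: ISTA on 0.5*||x - A z||^2 + lam*||z||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient (sigma_max^2)
z = np.zeros(N)
for _ in range(2000):
    g = A.T @ (A @ z - x)
    z = z - g / L
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err_relu = np.linalg.norm(s_relu - s)
err_ista = np.linalg.norm(z - s)
assert err_ista < err_relu  # iterative inference closes most of the gap
```

On this toy instance the one-shot encoder leaks the active coefficients onto inactive coordinates via the dictionary's cross-correlations, while ISTA converges to a near-exact sparse code, mirroring the amortisation gap the paper measures.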
Decision-aware Training of Spatiotemporal Forecasting Models to Select a Top-K Subset of Sites for Intervention
Accept (poster)
Summary: In a spatiotemporal forecasting setting, the authors consider training prediction models adapted to the task of selecting the top-K sites for intervention. The authors consider BPR as the desired metric and develop an algorithm for training models using a gradient-based approach. The main difficulty of this approach lies in the uninformative gradients that the BPR objective, combined with the top-K selection, provides. To differentiate through the sampling of discrete variables $r$, the authors adapt a REINFORCE-like approach, while differentiating the loss (which includes the top-K ranking) is done using a stochastic smoothing approach. In a final step, a combination of BPR and log-likelihood is used to propose DAML, which is intended to train decision-aware models which provide good likelihood estimates. Claims And Evidence: The claims are well supported. Methods And Evaluation Criteria: The evaluation methods are suitable. Theoretical Claims: I followed the mathematical derivation in the main text, which looks correct to me. Experimental Designs Or Analyses: As far as I can tell, the experimental designs make sense. Supplementary Material: The appendix mainly contains experimental details. I believe the title of Algorithm A.1 is incorrect, as it gives the algorithm for DAML and not BPR. Relation To Broader Scientific Literature: The authors use a decision-aware objective for training a model adapted to the top-K selection problem. I am not aware of any other works that tackle decision-aware training for the top-K problem. Decision-aware training is, however, a growing trend in the literature; see Sadana et al. (2024). Essential References Not Discussed: I think it would be valuable to discuss the connection to other works in the literature related to decision-aware learning, end-to-end learning and contextual optimization, see for instance Sadana et al. (2024) and references therein. Sadana, Utsav, et al.
"A survey of contextual optimization methods for decision-making under uncertainty." *European Journal of Operational Research* (2024). Other Strengths And Weaknesses: The authors tackle an interesting problem and propose a new training scheme which is well adapted to the problem and improves performance compared to the baselines. The paper is well written and explained; I specifically appreciated the interesting and practically relevant experiments section. Other Comments Or Suggestions: The uncertainty plots in Figure 3 are quite confusing. Do they provide uncertainty only in the BPR direction? If not, why does uncertainty increase for lower log-likelihood values? Questions For Authors: 1. On line 91ff, right side, you state that the framework is flexible and can be applied to neural networks. How would you propose to sample from the joint model $p_\phi$ if it was parametrized by a multilayer neural network? 2. In Figure 2, left panel, “OPT BPR only”. I think this result is quite interesting, especially looking at the weights $\pi$. The model identifies the IDs to select (high $\pi$) and the ones not to select (low $\pi$). Intuitively, one would expect this to carry over to any top-K setting: No matter the complexity of the task, a mixture of 2 Gaussians should be enough as it is a proxy for identifying the selected IDs (being identified with the Gaussian with higher mean) and non-selected IDs (being identified with the Gaussian with lower mean). Is this intuition true? 3. In Figure 2, on the right panel: DAML attains the same BPR as directly training on the BPR loss, but with better log-likelihood. The structure of the BPR loss in equation (4) suggests that minimizing (9) could have multiple global optima. I think one can view DAML (with appropriately chosen $\varepsilon$) as “finding the global optimum of $\min(9)$ that has the highest log-likelihood”. 4.
On line 252ff, left side: “For convenience and reliability, we use all T records in the training set in every estimate, avoiding minibatching over time.” Do the results remain similar if minibatching is used? I understand that for the datasets used minibatching may not be necessary, but for larger datasets one would certainly like to use it. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments about our work. We try to address key points below. > RE Question about “the title of Algorithm A.1” Algorithm A.1 is meant to summarize the decision-aware ML training (DAML) approach described in Sec. 4.3. The current title is “Decision-aware ML training for top-K tasks judged by BPR”. We will revise to just say “Decision-aware ML training” in case the latter half of the title was confusing. ## Essential References (Common Issue) > I think it would be valuable to discuss the connection to other works in the literature related to decision-aware learning, end-to-end learning and contextual optimization We agree there’s a broader set of related work, especially from the OR community. We plan to cite and discuss the survey by Sadana et al. (2024), as well as valuable works pointed out by ZTv9. Specifically, we plan revisions that will * cite and discuss the application of “decision-aware denoising” to the problem of where to place speed humps to calm traffic (Gupta et al., SSRN preprint from 2024) * cite and discuss the health supply chain task of Chung et al. (ML4H 2022) * cite and discuss the PyEPO package (Tang & Khalil '24), especially how it can support solving problems like ours * cite the survey on “decision-focused learning” by Mandi et al. (2024) and the survey by Sadana et al. (2025), and relevant references therein > uncertainty plots in Figure 3 are quite confusing. Do they provide uncertainty only in the BPR direction? Yes, the uncertainty in this plot is only in the BPR direction. We will revise the caption to clarify. In Figure 3, we intend to show how each method’s estimated parameter performs across two metrics: log likelihood and BPR. Log likelihood is well-summarized by one deterministic number (vertical position of the marker). In contrast, BPR is not deterministic, as it depends on calculating the ratio estimator for r in Eq. 7, which requires an average over M *samples* of y.
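For context on what such a sampling-based estimator looks like: Eq. 7 itself is not reproduced in this thread, so the sketch below only illustrates the generic shape, under our own assumption that the "reach" of a top-K selection is the fraction of the best achievable outcome total it captures (all numbers and distributions here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
S, K, M = 7, 5, 1000          # sites, selection size, Monte Carlo samples

mu = np.array([3., 5., 8., 12., 20., 35., 100.])  # hypothetical per-site means

# draw M samples of y from a hypothetical model, rank sites by sampled mean
y_samples = rng.poisson(mu, size=(M, S))
r = y_samples.mean(axis=0)                # ranking score for each site
selected = np.argsort(r)[-K:]             # top-K sites under the model

y_true = np.array([4., 6., 7., 11., 22., 30., 95.])  # hypothetical outcomes
best = np.sort(y_true)[-K:].sum()         # best possible reach with K sites
reach = y_true[selected].sum() / best     # in (0, 1], 1 = perfect selection
assert 0.0 < reach <= 1.0
```

Because `r` is itself a Monte Carlo average, repeating the whole computation with fresh samples gives a distribution of reach values, which is what the histograms in Figure 3 visualise.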
Each method in Fig 3 has BPR visualized as a histogram showing the distribution across 1000 trials, with each trial using M = 1000 samples. > 1. … How would you propose to sample from the joint model p if it was parametrized by a multilayer neural network? There’s a substantial literature on deep probabilistic models, which combine neural networks for flexible parameterization with classic statistical distributions that have well-known sampling routines. For example, suppose our joint model for vector y was multivariate Gaussian with some mean and some covariance. The mean vector could be parameterized as the output of a neural network. So could the covariance matrix, assuming symmetry and positive-definiteness constraints were enforced. Sampling from a Gaussian given its mean and covariance is then straightforward. > 2. In Figure 2, left panel, “OPT BPR only”. I think this result is quite interesting, … one would expect this to carry over to any top-K setting: No matter the complexity of the task, a mixture of 2 Gaussians should be enough … for identifying the selected IDs (being identified with the Gaussian with higher mean) and non-selected IDs … Is this intuition true? Yes, we agree with this intuition. Deliberate construction of component weights $\pi$ for each site to assign the top K sites to a “high mean” component and other sites to a “low mean” component should be enough to get good BPR on training data, regardless of the task. Naturally, whether this can generalize to test data depends on the model’s (mis)match to the true data-generating process. > 3. Can we view DAML as “finding the global optima” (in terms of BPR) that “has the highest log likelihood”? Yes, we agree with this view. There can be different models $\phi$ that have equivalent rankings r and thus equivalent BPR value. We can view DAML with a high BPR constraint as seeking $\phi$ that have both high BPR and high log likelihood. > Do the results remain similar if minibatching is used?
I understand that for the datasets used minibatching may not be necessary, but for larger datasets one would certainly like to use it. Minibatch gradient descent to minimize the BPR-only loss in Eq. 9 or the DAML objective in Eq. 13 appears straightforward in both cases. In each case, the overall “whole dataset” loss is a sum over the loss at each timestep indexed by t. Taking a random subset of timesteps as a minibatch and computing the gradient of that minibatch should be an unbiased estimate of the gradient of the whole-dataset loss. We have not tested minibatching in our experiments, as it wasn’t necessary and would introduce extra noise to our gradient-based learning, which is already noisy (due to the score-function trick and perturbations). Tuning learning rates and batch sizes carefully would be important, but if done well, minibatching seems feasible.
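The deep-probabilistic-model recipe described in the answer to Question 1 can be sketched in a few lines (an illustrative toy with a neural-network mean and a fixed covariance; this is not the paper's actual per-site mixture model):

```python
import numpy as np

rng = np.random.default_rng(3)
D_in, S = 4, 7                      # context features, number of sites

# a tiny two-layer network parameterising the mean of p(y | x)
W1 = rng.normal(size=(16, D_in))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(S, 16))
b2 = rng.normal(size=S)

def mean_net(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

# fixed positive-definite covariance, sampled via its Cholesky factor
Lchol = np.linalg.cholesky(np.eye(S) + 0.1 * np.ones((S, S)))

def sample_y(x, n):
    """Draw n samples from N(mean_net(x), Lchol @ Lchol.T)."""
    z = rng.normal(size=(n, S))
    return mean_net(x) + z @ Lchol.T

x = rng.normal(size=D_in)
ys = sample_y(x, 100_000)
assert np.allclose(ys.mean(axis=0), mean_net(x), atol=0.05)
```

Any covariance parameterisation works as long as positive-definiteness is enforced (e.g. by learning the Cholesky factor directly), and the same pattern extends to mixtures by first sampling a component index.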
Summary: This paper studies measures of best possible reach (BPR) to select the best subset of interventions. They analyze different measures based on a probabilistic model, as well as different ways to train these probabilistic models from historical data. They also propose new methods for training, including a decision-aware maximum likelihood solution which empirically achieves the best trade-off between BPR and test log-likelihood. They evaluated their methods on synthetic data and real overdose forecasting data. Claims And Evidence: The claims made in the submission are clear, and convincingly supported by experiments. Methods And Evaluation Criteria: The methods (ranking and training) make sense for the application at hand. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs and analyses are sound. Supplementary Material: I have not reviewed the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Despite not being familiar with this literature, this paper seems to be a strong practical contribution for ICML. The pedagogical approach of the different methods for ranking and training is greatly appreciated. The rigor shown in the experimental results is valuable, and the results convincing on the advantages of DAML as a trade-off between test log-likelihood and BPR. The authors are also very explicit in the limitations of their work, and possible (realistic) directions for future work. Other Comments Or Suggestions: One suggestion I could make is to expand on the minimization of $J^{BPR}$ when the model is misspecified (lines 255-260), and why the forecasts have questionable utility. This seems to be related to the observation in Figure 2 (left), but this is not obvious to a reader less familiar with this field.
Another suggestion is to highlight the fact that DAML can also integrate prior knowledge if available (which I imagine might be the case in practice?). This is similar to how we can move from MLE to MAP (as mentioned by the authors), but is crucially different from the BPR loss where there is a priori no obvious way to incorporate this type of knowledge. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and helpful feedback. We are glad to hear the overall story of our manuscript made sense to you. We offer a few responses to the questions you raised below: > One suggestion I could make is to expand on the minimization of JBPR when the model is misspecified (lines 255-260), and why the forecasts [for BPR only] have questionable utility. Thanks for this idea, we will revise to improve the clarity of this point. Essentially, the issue is that when only training to maximize BPR, all that matters is the ranking of each site in the r vector. Nothing about the implied distribution over y values is forced to match the true distribution of y, beyond just getting the ranking order correct so the predicted set of top K sites aligns with the true top K. Consider the seven sites in the toy example in Fig 2. Imagine two possible r vectors, each defined as a per-site mean of a distinct model for $p(y_1, … y_7)$:
```
site  1   2   3   4   5   6   7
rA:  10  20  30  40  50  60 100
rB:   1   2   3   4   5   6   7
```
Both rA and rB would have the same BPR, reaching a perfect 1.0, because they rank the top 5 sites (#3 - #7) correctly. However, only model A is at all near the true per-site means indicated in the true-distribution histograms in Fig. 2. Model B has very questionable utility for forecasting y values, because it says the per-site mean of site 7 is 7, which is far lower than the actual mean of site 7 (around 100). > Another suggestion is to highlight the fact that DAML can also integrate prior knowledge if available (which I imagine might be the case in practice?). This is similar to how we can move from MLE to MAP (as mentioned by the authors), but is crucially different from the BPR loss where there is a priori no obvious way to incorporate this type of knowledge. Thanks, we will revise to highlight the ability to integrate prior knowledge via the model.
Indeed, in the real applications, we do incorporate a prior on the random effect coefficients (see App. E), as recommended by past work on this negative binomial regression model in conventional settings.
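The BPR tie between rA and rB in the toy example above can be checked numerically. A hedged sketch (the `top_k_mask` and `bpr` helpers and the outcome vector `y` are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def top_k_mask(v, k):
    """Binary mask selecting the k largest entries of v."""
    mask = np.zeros(len(v))
    mask[np.argsort(v)[-k:]] = 1.0
    return mask

def bpr(r, y, k):
    """Best Possible Reach of ranking r against outcomes y, with budget k."""
    return (y @ top_k_mask(r, k)) / (y @ top_k_mask(y, k))

# Outcomes set to the (assumed) true per-site means of the toy example
y  = np.array([10., 20., 30., 40., 50., 60., 100.])
rA = np.array([10., 20., 30., 40., 50., 60., 100.])
rB = np.array([ 1.,  2.,  3.,  4.,  5.,  6.,   7.])

print(bpr(rA, y, 5), bpr(rB, y, 5))  # 1.0 1.0: both rank the top 5 correctly
```

A ranking that misses any top-5 site would score below 1.0, which is exactly the sense in which a BPR-only loss ignores everything except the ordering.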
Summary: The paper proposes an approach for learning a potentially misspecified spatiotemporal probabilistic model for decision-making settings. They specifically focus on the decision problem of optimizing best possible reach (BPR), which closely corresponds to the problem of ranking the top-K items. Their approach consists of first proposing a metric similar to BPR in order to rank the items. They then show how to compute the metric given a probabilistic model. Given the metric, they then show how to compute the BPR loss and how to optimize the loss relative to the probabilistic model. Putting it all together, they propose a loss that blends decision loss (BPR loss) with traditional ML loss (negative log likelihood). This approach allows users to trade off interpretability (ML loss) with decision performance (BPR loss) via a tuning parameter. Finally, they test their approach on three real-world datasets to highlight the benefits of their approach. Claims And Evidence: In general, this paper provides examples and empirical evidence to support their proposed approach. The paper does not explicitly provide any theoretical justification for their approach, but does cite other papers when deriving key quantities like the gradient for the ratio estimator. The paper does not have any obvious problematic claims. Methods And Evaluation Criteria: The high-level idea makes sense as the method provides users a heuristic approach for trading off decision quality and prediction quality. It makes sense to construct a surrogate for decision-loss in order to incorporate it into learning the underlying probabilistic model. The datasets provided by the authors also fit the setting of optimizing best possible reach and seem to have been used in previous papers for the same application. Theoretical Claims: The paper makes no theoretical claims.
Experimental Designs Or Analyses: I reviewed the synthetic data experiment, the opioid-related overdose forecasting experiment, and the endangered bird forecasting experiment. The synthetic data experiment provided a simple example highlighting the potential pitfalls of only using traditional ML loss and only using BPR loss. The real-world datasets seem to closely follow the experimental set-up of previous works. Supplementary Material: I did not review any of the supplementary material, which was the code linked in the appendix. Relation To Broader Scientific Literature: The key contribution of the paper is showing how to solve the best possible reach (BPR) problem in a decision-aware way. This builds on existing decision-focused and end-to-end learning literature [1][2]. They use compelling real-world data sets and show that decision-aware methods can be effective. The paper outlines the challenges of solving the decision-aware problem and highlights solutions that leverage the score function trick [3] and perturbed optimization [4]. In the decision-aware literature, this work most closely resembles [5], which also focuses on applying decision-aware approaches to a concrete real-world problem. ----- [1] Mandi, Jayanta, et al. "Decision-focused learning: Foundations, state of the art, benchmark and future opportunities." Journal of Artificial Intelligence Research 80 (2024): 1623-1701. [2] Tang, Bo, and Elias B. Khalil. "Pyepo: A pytorch-based end-to-end predict-then-optimize library for linear and integer programming." Mathematical Programming Computation 16.3 (2024): 297-335. [3] Mohamed, S., Rosca, M., Figurnov, M., and Mnih, A. Monte carlo gradient estimation in machine learning. Journal of Machine Learning Research, 21(132), 2020. [4] Berthet, Q., Blondel, M., Teboul, O., Cuturi, M., Vert, J.-P., and Bach, F. Learning with Differentiable Perturbed Optimizers. In Advances in Neural Information Processing Systems (NeurIPS), 2020. [5] Chung, Tsai-Hsuan, et al.
"Decision-aware learning for optimizing health supply chains." arXiv preprint arXiv:2211.08507 (2022). Essential References Not Discussed: The paper does not really cite the decision-aware literature, which is well summarized in [1], [2], and [3]. The last paper especially provides computational approaches in a Python package for solving the decision-aware BPR problem. They also do not cite applications of decision-aware approaches such as [4]. Finally, the authors may also benefit from comparing their approach to [5], which also proposes a decision-aware approach and a case study that closely resembles the BPR problem. [1] Mandi, Jayanta, et al. "Decision-focused learning: Foundations, state of the art, benchmark and future opportunities." Journal of Artificial Intelligence Research 80 (2024): 1623-1701. [2] Tang, Bo, and Elias B. Khalil. "Pyepo: A pytorch-based end-to-end predict-then-optimize library for linear and integer programming." Mathematical Programming Computation 16.3 (2024): 297-335. [3] Sadana, Utsav, et al. "A survey of contextual optimization methods for decision-making under uncertainty." European Journal of Operational Research 320.2 (2025): 271-289. [4] Chung, Tsai-Hsuan, et al. "Decision-aware learning for optimizing health supply chains." arXiv preprint arXiv:2211.08507 (2022). [5] Gupta, Vishal, Michael Huang, and Paat Rusmevichientong. "Decision-aware denoising." Available at SSRN 4714305 (2024). Other Strengths And Weaknesses: Strengths 1. The paper provides compelling real-world datasets and experiments to highlight the benefits of their approach. 2. The paper provides good motivation for blending traditional ML loss with decision-aware loss. Weaknesses 1) The paper provides limited theoretical or practical justification for their proposed BPR loss. My main concern is that the loss proposed by the authors seems to equally weight the BPR loss of each time period $t$. Thus, they are maximizing the average historic BPR.
However, since they consider temporal elements, a more sensible decision loss to optimize would be the BPR loss of time period T+1. Optimizing over average historic BPR instead of the BPR of time period T+1 may reduce the decision quality of the learned probabilistic models. [1] highlights how to estimate the decision loss at time period T+1, which may be an interesting benchmark to consider. 2) The paper does not consider other more popular decision-aware approaches. It can be shown that the BPR problem could be formulated as a 0-1 knapsack problem, since the denominator term $\mathbf{y} \cdot \text{TopKMask}(\mathbf{y},K)$ is a constant (does not depend on $r$) and $\text{TopKMask}(\mathbf{r},K)$ selects the $K$ largest elements in $\mathbf{r}$. This should allow the authors to optimize the BPR loss with decision-aware approaches found in the Python package PyEPO [2]. The approaches in the package should be compatible with the BPR problem since the package only requires users to provide the gradients of $r^{*}(\phi)$ and the 0-1 knapsack formulation. As shown in [3], the choice of the approach can affect the resulting decision quality of the learned prediction model. 3) The paper's novelty seems limited. The paper's contributions can be grouped into i) computation and ii) formulation. From a computational perspective, the challenges can be addressed by existing work as discussed in the previous point 2), and the authors do not compare their approach to these existing approaches. From a formulation perspective, the paper's constrained optimization problem reduces to an unconstrained optimization problem that takes a weighted combination of the ML loss and decision loss. This has been previously proposed in [4]. Moreover, the construction of the decision loss is not well justified, as highlighted in point 1). _____ [1] Gupta, Vishal, Michael Huang, and Paat Rusmevichientong. "Decision-aware denoising." Available at SSRN 4714305 (2024). [2] Tang, Bo, and Elias B. Khalil.
"Pyepo: A pytorch-based end-to-end predict-then-optimize library for linear and integer programming." Mathematical Programming Computation 16.3 (2024): 297-335. [3] Huang, Michael, and Vishal Gupta. "Decision-focused learning with directional gradients." Advances in Neural Information Processing Systems 37 (2024): 79194-79220. [4] Kao Yh, Roy B, Yan X (2009) Directed regression. In: Bengio Y, Schuurmans D, Lafferty J, Williams C, Culotta A (eds) Advances in Neural Information Processing Systems, Curran Associates, Inc., vol 22 Other Comments Or Suggestions: 1. It may help the paper to better justify the choices in Section 3. The paper seems to present two methods of ranking, but the experiments only present the results of the second approach. Section 3 seems to be the most novel component of the decision-aware approach, so explaining the modeling choices would help highlight the differences from existing decision-aware learning approaches. 2. The paper's title mentions spatiotemporal decision-making settings; however, aside from Eq. (15), there seems to be little consideration of the spatial or temporal aspects of the data. Highlighting components of the paper that leverage the structure of space and time would help differentiate it from existing work. Questions For Authors: Below is a summary of some of the questions mentioned in the rest of the review: 1. Is top K not just a 0-1 knapsack problem of size K? In that case, why can you not reformulate the problem as a linear program? You have the gradient of the estimator in (10). Methods found in the PyEPO package (https://github.com/khalil-research/PyEPO) like SPO+, DBB, and PG Loss are compatible. SPO+ is a convex surrogate while PG Loss works well for misspecified models. 2. Why is your proposed loss (9) a good measure of decision loss? Is it some unbiased estimate of the decision loss of time T+1? 3. Why does using only BPR loss provide worse Test BPR for the MA opioid-related overdose forecasting?
This seems different from the other two Pareto frontiers shown in Figure 3. 4. In your results, can you show that you can control the BPR with $\epsilon$? 5. Can you apply the score function trick to the quantity $\frac{y_i}{\mathbf{y} \cdot \text{TopKMask}(\mathbf{y},K)}$? Using this quantity instead of the ratio estimator would be a more direct way to rank the items. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank ZTv9 for their thoughtful review, especially in introducing related work and PyEPO. We offer brief replies to key points below. We will revise to address all points raised. ## Essential References Thanks for several useful references. Please see “Essential References (Common Issue)” in Response to Reviewer His4 for our revision plan. ## W1 & Q2: Does decision loss optimize for time T+1? > Optimizing over average historic BPR instead of the BPR of time period T+1 may reduce the decision quality Empirically, we find decisions at heldout times are good (Fig 3). Our revision will discuss the justification below, and mention the out-of-sample bounds of Gupta et al (SSRN ‘24). We assume the true data-generating distribution of outcomes at time t given the recent past, $p(y_t | y_{t-W:t-1})$, is unchanged across train and test time periods. We will make this explicit in revision. Many forecasting methods use this assumption and use average loss to extrapolate to the future. We could weight recent timesteps more. As further protection, we already use a “leave future out” experimental design: e.g. for the MA overdose task, we train on 2011-18, tune/validate on 2019, and test on 2020-21. ## W2 & Q1: 0-1 Knapsack for BPR / PyEPO baseline Thanks for suggesting the 0-1 knapsack formulation. The problem of “how to rank” for BPR given outcome y can reduce to a 0-1 knapsack where each site has weight 1 and the budget constraint allows K of S sites. As suggested, we tried both SPO+ loss and PG loss from PyEPO on the 3 datasets in our Fig. 3. Both losses are direct competitors to our BPR-only (Sec 4.2), as they try only to improve top-K decisions and do not account for likelihood. Results are in [revised Fig 3](https://anonymous.4open.science/r/DecisionAwareMaximumLikelihood/Figure3Update.pdf) in our anonymous code repo. We find SPO+ and PG do not advance any panel’s Pareto frontier compared to our methods.
In terms of top-K decisions, each method can sensibly beat NLL-only but not our DAML or BPR-only. PG loss gets higher BPR than NLL-only on Cook, and SPO+ does the same for MA. However, these objectives ignore likelihood, so they naturally produce models with low likelihood. ## W3: Novelty Our DAML approach in Sec 4.3 is not wholly new in combining classic objectives with decision-aware losses. We will revise to cite Kao et al. (NeurIPS 2009)’s convex combination of these two loss types, albeit for directed regression rather than our top-K where-to-intervene tasks. Compared to that work, DAML is distinct in its *constrained* approach: as soon as the desired BPR is achieved, training focuses only on likelihood. Another novel aspect of our work is the ratio estimator for the how-to-rank problem (Sec 3). While previous works used BPR as a metric, they defaulted to the per-site mean, unaware that it may be suboptimal (see App. B for a demo of suboptimality). A final novel aspect is our analysis of 3 real where-to-intervene datasets, covering whooping-crane and opioid-overdose forecasts, each with 1000+ sites. These tasks are not yet in any decision-aware literature to our knowledge, and use models much bigger than in Gupta et al. (SSRN ‘24). ## Q3: Why does using only BPR loss provide worse Test BPR for the MA overdose forecasting? Yes, the left panel of Fig 3 shows a different relative ranking of BPR-only vs. our DAML method. Optimizing BPR is prone to local optima and difficult loss landscapes. The extra NLL loss in DAML appears to find better BPR solutions sometimes. ## Q4: Can you control BPR with epsilon? Yes, see the Pareto frontier in our Fig 2. On this toy task, as we raise epsilon from low to 0.86 to 1.0, the BPR on test data from the true generating process follows the expected trajectory. On real data (Fig. 3), the epsilon at training has a looser relationship with BPR on a limited test set due to mismatched assumptions.
## Q5: Can you use a ranking with top-K in denominator? The proposed estimator is: $$ r(\phi) = \mathbb{E}\left[ \frac{ y }{ y \cdot \text{TopKMask}(y, K) } \right] $$ This differs from our ratio estimator (Eq 7) by including the top-K binary mask in the denominator, instead of an all-ones vector. In expectation, our ratio estimator will have the same ranking of the S sites as this proposal. Thus, this estimator will produce the same top-K decisions as our ratio estimator. We prefer our ratio estimator because it is simpler and faster (avoids top-K for each sample y). We will update the Appendix to discuss our reasons for preferring our ratio estimator. ## Other > better justify the choices in Sec. 3 See App. B for detailed examples where, given the same parameters, our ratio estimator can deliver BPR 2x higher than the per-site mean. We will revise to clarify further. > consideration of the spatial or temporal aspects All models in Fig. 3 use as a feature for site s the gravity of its spatial neighborhood, that is, a recent average of events in sites spatially near to site s.
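The claim that the two estimators induce the same site ranking can be sanity-checked by Monte Carlo on a toy outcome distribution. This sketch assumes independent gamma outcomes with well-separated means; the distribution and all names here are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
S, K, N = 6, 3, 200_000
means = np.array([1., 2., 4., 8., 16., 32.])
# y: N samples of S independent nonnegative outcomes (a toy stand-in)
y = rng.gamma(shape=2.0, scale=means / 2.0, size=(N, S))

# Ratio estimator (Eq 7 style): all-ones vector in the denominator
r_ratio = (y / y.sum(axis=1, keepdims=True)).mean(axis=0)

# Reviewer's variant: top-K mass in the denominator
topk_sum = np.sort(y, axis=1)[:, -K:].sum(axis=1)
r_topk = (y / topk_sum[:, None]).mean(axis=0)

# Both estimators should order the sites the same way on this toy case
print(np.argsort(r_ratio), np.argsort(r_topk))
```

On this toy distribution both argsorts agree (and match the ordering of the true means); the ratio estimator simply avoids the extra per-sample top-K computation.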
Summary: The paper tackles two main issues related to a metric called Best Possible Reach (BPR): (1) the ranking problem, basically how to rank sites numerically to select the top K for intervention based on a probabilistic method; to solve this, the paper works on a tighter bound on BPR and utilizes the ratio estimator. (2) the training problem, basically how to optimize the model's parameters to maximize BPR performance. The paper tackles the difficult problem of training models to directly optimize the BPR metric, which involves a discrete top-K selection that leads to zero gradients. It uses perturbed optimizers and the score function trick to estimate gradients and enable training. Experiments are done on synthetic data and two real-world datasets: opioid overdose mitigation and endangered bird monitoring. Claims And Evidence: Some claims made by the paper: - Ranking via the per-site mean is suboptimal for BPR. Section 3.1 shows the per-site mean minimizes expected loss on a simplistic upper bound on BPR. - The effectiveness of the proposed ratio estimator. Intuitively, it works under the sparse vector setting, and does provide a tighter upper bound. This is also empirically verified in Fig 2. - DAML helps navigate the Pareto frontier; both experiments on synthetic data and real-world datasets consistently verify this empirically. Methods And Evaluation Criteria: - In the synthetic experiments, the approach is evaluated only on very specific models: Gaussian mixtures and negative binomial mixed effects, which might seem too simple for realistic use cases? Are these still the most commonly used models? Can the authors comment more on the model's performance on more complex model families? - I am concerned about the scalability of the proposed method; it seems evaluated on a small scale of parameters $T$, $S$, and $K$.
Could the authors comment more on possible approximate algorithms people might be able to use when facing larger-scale real-world problems? - The approach comes with a pre-defined $K$. How do people typically find this $K$ value? How robustly does the method perform across a wide range of $K$ values? Can we add more empirical results on this? - Both $\sigma$ and $\lambda$ are key hyper-parameters that need to be tuned. Could the authors provide some rule of thumb (or automatic selection methods) for choosing them under different use cases? - Though the paper explicitly discusses the benefits of their approach under model misspecification, the synthetic experiment uses a relatively simple form of misspecification. It would be interesting to see how the methods perform under more severe or diverse types of misspecification. Theoretical Claims: I roughly looked at the ones in the main paper, and did not identify any obvious issues. Experimental Designs Or Analyses: See Methods And Evaluation Criteria. Supplementary Material: No Relation To Broader Scientific Literature: Might be related to spatiotemporal forecasting topics; method-wise, might be related to BPR, direct loss minimization, and score function estimators. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength: - The paper is well-written and solves some key problems with BPR metrics, in terms of ranking and optimization. - The authors propose a "ratio estimator" for ranking sites based on a tighter bound of the BPR metric. It moves beyond the suboptimal per-site mean ranking and is theoretically justified. Empirically, this method significantly outperforms the per-site mean. - The proposal of DAML provides a nice way to balance the goals of achieving high BPR for decision-making and maintaining good likelihood for overall forecast accuracy. Weakness: - The scalability of the method.
- The effectiveness of the method beyond the two families of models examined in the paper. - The robustness with respect to the pre-determined K value. Other Comments Or Suggestions: See above. Questions For Authors: See Methods And Evaluation Criteria. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We are glad that they thought our DAML method “provides a nice way to balance the goals of achieving high BPR for decision-making and maintaining good likelihood for overall forecast accuracy.” > the approach is evaluated only on …Gaussian mixtures and negative binomial mixed effects, which might seem too simple for realistic use cases? Are these still the most commonly used models? For our real applications, we selected the negative binomial mixed effects regression with spatially-lagged features (Sec. 5.2), specifically because it had competitive heldout performance in a recent evaluation of methods for opioid-overdose forecasting published in 2024 by Heuton et al. [A]. In that study, the negative binomial model with conventional training (not the decision-aware training in our submission) tied with or beat an attention-based neural network as well as a boosted tree-ensemble model and a Gaussian process model in terms of test-set BPR on two different datasets. These spatiotemporal forecasting tasks are overall quite difficult; there is not often a very strong signal for predicting observed counts of overdose deaths or bird sightings given the limited available features. In such cases, somewhat simpler models tend to work well, even when substantial effort is put into tuning hyperparameters to avoid overfitting with complex tree ensembles or neural nets. **New Results with GNNs**. We have also added a [New Table](https://anonymous.4open.science/r/DecisionAwareMaximumLikelihood/NewTable.pdf) as a standalone PDF in our anonymous code repo. This compares our methods to a recent graph neural network for spatiotemporal forecasting (from Xie et al. ‘22 [B]) on the Crane and Cook County tasks, in terms of BPR, RMSE, and MAE. This GNN is trained to minimize squared error, and thus can do best on RMSE but is inferior to our DAML for top-K decisions as measured by BPR.
We do hope to investigate a broader class of probabilistic models directly trained by our DAML in future work. [A] Heuton et al. Spatiotemporal forecasting of opioid-related fatal overdoses. Amer. J. of Epidemiology, 2024. [B] Xie et al. EpiGNN: Exploring Spatial Transmission with Graph Neural Network for Regional Epidemic Forecasting. ECML PKDD ‘22. > concerned about the scalability of the proposed method, it seems evaluated on a small scale of parameters T, S, and K. … Could the author comment more on the possible approximate algorithms We agree that scalability is an important practical consideration. Our current code can scale to a number of timesteps T in the dozens, a budget K in the hundreds, a number of sites S in the thousands, and a number of parameters P in the thousands. We have found this sufficient for the practical problems in our paper. Our algorithm can further be scaled up to larger problems by processing one time record (out of T) at a time, and also by computing gradients with respect to one parameter (out of P) at a time. We have prototyped code to do this already in the last few weeks. With further implementation effort, our methods could be easily adapted to process minibatches of timesteps, to scale even further. > The approach comes with a pre-defined K. How do people typically find this value? … does the method perform robustly across a wide range? Typically, the practical intervention budget of the stakeholders determines K. For example, in the whooping crane monitoring application shown in Fig. 1, if the monitoring agency can only afford 10 new cameras, then we would set K=10. We will try to do some further experiments on different K values for one of the applied datasets in the next few weeks. If accepted, we promise to include such experiments in the appendix for the camera-ready deadline. > Both sigma and lambda are key hyper-parameters that need to be tuned. Could the authors provide some rule of thumb?
We will revise the paper to provide this information. To complete our experiments in Fig. 3, we select sigma from a grid of values between 0.001 and 0.1 based on validation-set loss. We observed the experiments to be insensitive to lambda, and it was fixed at 30 to make the BPR and likelihood components of the loss similar in magnitude during early training. > the synthetic experiment uses a relatively simple form of misspecification. It would be interesting to see how methods perform under more severe or diverse types of misspecification. We agree that exploring more types of misspecification would be interesting. However, with limited time in this rebuttal period, we elect to leave this to future work. We will revise to acknowledge this limitation in our Discussion section.
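The validation-based grid selection of sigma described above can be sketched in a few lines. The grid endpoints come from the rebuttal; `validation_loss` is a hypothetical placeholder for the real procedure of training a model at each sigma and scoring it on the validation set:

```python
import numpy as np

sigmas = [0.001, 0.003, 0.01, 0.03, 0.1]   # grid between 0.001 and 0.1

def validation_loss(sigma):
    """Placeholder: train with this sigma, return validation-set loss.
    Here a toy curve whose minimum sits at sigma = 0.01."""
    return (np.log10(sigma) + 2.0) ** 2

best_sigma = min(sigmas, key=validation_loss)
print(best_sigma)  # 0.01
```

The same pattern applies to any scalar hyper-parameter selected by validation loss, including lambda if it were tuned rather than fixed.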
EffiCoder: Enhancing Code Generation in Large Language Models through Efficiency-Aware Fine-tuning
Accept (poster)
Summary: This work develops a new instruction-tuning dataset called SwiftCode for efficiency-aware fine-tuning of LLMs for code generation. After fine-tuning on SwiftCode, LLMs are able to generate more efficient code on popular code generation benchmarks. ## Update after rebuttal The rebuttal has addressed my concerns, so I keep my positive score. Claims And Evidence: Weakness 1 (missing evidence): This work uses LLMs to generate candidate code for SwiftCode. However, it has been recently shown [1] that even the strongest LLMs still fall short of generating efficient code on most HumanEval tasks when compared with human expert solutions. Hence, it is unclear whether DeepSeek-Coder and GPT-4o have the ability to generate sufficiently efficient candidate code. This work would be more convincing if, for example, the authors evaluated DeepSeek-Coder and GPT-4o on the benchmark of [1] to provide evidence of how efficient the candidate code is. - [1] Qiu et al. How efficient is LLM-generated code? A rigorous & high-standard benchmark. ICLR, 2025. Methods And Evaluation Criteria: Strength 1 (dataset scale): This work develops a dataset consisting of 65710 tasks. To curate this large dataset, this paper proposes a versatile and scalable framework to process and select candidate solutions. Strength 2 (comprehensiveness): The SwiftCode dataset has integrated seven datasets that cover five popular programming languages (Python, C++, Java, Rust, and Go). Hence, this dataset has great potential to benefit a great many programming applications. Weakness 2 (evaluation): This work uses canonical solutions to evaluate efficiency. However, it has been recently shown [1] that many of the canonical solutions in existing benchmarks are not efficient. Thus, normalized efficiency metrics (like NET) do not fully capture the true efficiency of LLM-generated code.
Meanwhile, unnormalized metrics (like ET) alone are not as meaningful besides comparison purposes, because execution time differs across tasks and typically increases w.r.t. the input scale. For example, an $O(n)$ solution and an $O(n^2)$ solution might have similar execution time when $n$ is small but should have significantly different execution time when $n$ is large. Therefore, the evaluation results would reflect true efficiency more accurately if, for example, the authors evaluated the generated code on the benchmark of [1], which uses efficient canonical solutions and large-scale inputs in evaluation. - [1] Qiu et al. How efficient is LLM-generated code? A rigorous & high-standard benchmark. ICLR, 2025. Theoretical Claims: This paper does not make theoretical claims. Experimental Designs Or Analyses: Weakness 3 (negative results): This paper has no discussion of negative results, so the analysis in Sec 4 (Experiment) seems a little bit misleading. In particular, in a few cases, fine-tuning on SwiftCode yields only negligible improvement or even worsens the performance (see, e.g., Table 3). The authors should analyze and discuss why such negative results happen. Supplementary Material: I have briefly checked the entire supplementary material. Relation To Broader Scientific Literature: Strength 3 (new direction): SwiftCode is the first instruction-tuning dataset designed to improve the efficiency of LLM-generated code. This opens up an interesting new direction for code generation research. Essential References Not Discussed: This paper has discussed most of the essential references. Other Strengths And Weaknesses: There are no other strengths or weaknesses that I want to point out especially. Other Comments Or Suggestions: On line 72, the paper mentioned Qwen but cited DeepSeek. This seems like a typo. Questions For Authors: Question 1 (space-time tradeoff): Sec 3.2 mentions that the code with the lowest time and memory is selected as the final code.
However, the most time-efficient code might not be the most memory-efficient, and vice versa. For example, a dynamic programming algorithm may be faster than brute force but may need more memory, while a brute force algorithm may need less memory but may be much slower than dynamic programming. This is known as the *space-time tradeoff* in the algorithmic literature (see, e.g., [2]). How did you handle the space-time tradeoff when selecting the final code in your dataset? - [2] Hellman (1980). A cryptanalytic time-memory tradeoff. IEEE Transactions on Information Theory. 26(4): 401-406. Code Of Conduct: Affirmed. Overall Recommendation: 3
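The space-time tradeoff in Question 1 can be illustrated with a classic toy example (not from the paper under review): memoization trades extra memory, a cached table of results, for a large speedup over naive recursion.

```python
import time
from functools import lru_cache

def fib_naive(n):
    """Exponential time, but keeps no table of results in memory."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Linear time, at the cost of an O(n) cache of intermediate results."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

for f in (fib_naive, fib_memo):
    t0 = time.perf_counter()
    value = f(28)
    print(f"{f.__name__}: {value} in {time.perf_counter() - t0:.4f}s")
```

Both return the same value (317811), but the memoized version is far faster while holding more state; a selection rule that looks only at time picks `fib_memo`, one that looks only at peak memory could prefer `fib_naive`, which is exactly why some composite criterion is needed.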
Rebuttal 1: Rebuttal: We want to thank the reviewer for their insightful comments and suggestions. We provide detailed responses point by point. We hope our responses can address your concerns.

**W1: LLM-generated code may be inefficient**

Thank you for raising this important point about the efficiency of our candidate code generation. To address this concern directly, we evaluated both DeepSeek-Coder (Lite) and GPT-4o on the ENAMEL benchmark [1]. As shown in Table 1, these models achieve higher eff@k scores compared to most LLMs reported in Qiu et al. [1], confirming their capability to generate sufficiently efficient code candidates. Our optimization methodology further enhances this efficiency. After applying our techniques, we observed substantial improvements: average ET decreased from 1.14s to 0.27s, while average TMU dropped from 26.24 MB*s to 5.13 MB*s, representing 75-80% improvements across efficiency metrics (Figure 4 in our paper). These results demonstrate that our approach effectively generates and optimizes code that meets high efficiency standards, even when evaluated on benchmarks specifically designed to assess code efficiency.

*Table 1: Evaluation results of DeepSeek- and GPT-4o-generated code on the ENAMEL dataset [1]. Due to OpenAI token limitations, we only provide the results of GPT-4o for k=1 and 10.*

| Model | eff@1 | pass@1 | eff@10 | pass@10 | eff@100 | pass@100 |
|-|-|-|-|-|-|-|
| DeepSeek-Lite | 0.390 | 0.638 | 0.564 | 0.838 | 0.671 | 0.901 |
| GPT-4o | 0.300 | 0.465 | 0.572 | 0.845 | N/A | N/A |

**W2: This work uses canonical solutions to evaluate efficiency. The authors evaluate the generated code on the benchmark of [1]**

Thank you for highlighting this important methodological consideration regarding canonical solution efficiency.
We address this concern in two ways. First, EffiBench, our primary evaluation benchmark, uses optimized canonical solutions provided by the dataset constructors and employs sufficiently large test inputs to meaningfully differentiate between algorithmic complexities (e.g., O(n) vs. O(n²)). This allows reliable measurement of efficiency differences across implementations. Second, to further validate our approach, we conducted additional experiments using the ENAMEL benchmark [1], which specifically emphasizes efficient canonical solutions and large-scale inputs. Table 2 presents these results, comparing baseline models against SwiftCoder fine-tuned versions. The results show substantial improvements across all metrics. For example, eff@1 (measuring both efficiency and correctness on a single generation) increases from 0.373 to 0.458 (+22.8%) for Qwen2.5-Coder and from 0.179 to 0.393 (+119.6%) for DeepSeek-Coder. In our final version, we will include these ENAMEL benchmark results to provide a more comprehensive evaluation of SwiftCoder.

*Table 2: Evaluation results of baseline models vs. SwiftCoder on the ENAMEL benchmark.*

| Model | effi@1 | pass@1 | effi@10 | pass@10 | effi@100 | pass@100 |
|-|-|-|-|-|-|-|
| Qwen2.5-7B | 0.373 | 0.589 | 0.628 | 0.866 | 0.732 | 0.951 |
| + SwiftCoder | 0.458 | 0.739 | 0.653 | 0.905 | 0.763 | 0.972 |
| DeepSeek-6.7B | 0.179 | 0.299 | 0.549 | 0.822 | 0.727 | 0.922 |
| + SwiftCoder | 0.393 | 0.654 | 0.633 | 0.887 | 0.752 | 0.937 |

**W3: Discussion on negative results** We appreciate your noting the absence of discussion on negative results. You correctly identified that, in some instances, fine-tuning on SwiftCoder produced negligible improvements or slight performance decreases in certain metrics, as shown in Table 3. These variations stem from our optimization approach using the TMU metric for code selection. When optimizing for this composite metric, improvements in TMU may occasionally come at the expense of a slight degradation in individual metrics (either execution time or memory usage).
For example, a solution might achieve substantial memory efficiency while slightly increasing execution time, resulting in an overall improved TMU score but showing a negative trend in the execution-time metric when viewed in isolation. We will include a detailed discussion of these cases to better illustrate the complex relationships between efficiency metrics in the revised version.

**T1: Line 72 typo** Thank you for identifying this error. We will correct it in the revised version.

**Q1: Space-time tradeoff** When selecting the final code for our dataset, we addressed the space-time tradeoff using the TMU (Total Memory Usage) composite metric, which combines execution time and memory consumption into a single evaluation metric. Rather than treating time and memory as separate dimensions requiring individual optimization, TMU provides a holistic efficiency measure that accounts for both resources simultaneously. This approach acknowledges the inherent tradeoffs in algorithm design (such as dynamic programming vs. brute-force approaches) and allows us to identify solutions that achieve the most favorable overall resource balance.

---

Rebuttal Comment 1.1: Comment: Thank you for your reply. It has addressed most of my concerns. Regarding W1 & W2, the results show that SwiftCoder does improve eff@1 but barely improves eff@10 and eff@100, and the efficiency still seems far from ENAMEL's reference solutions. This seems to suggest that SwiftCoder only makes the LLM's output distribution concentrate more on efficient code but does not really enhance the LLM's capability in algorithm design or implementation optimization. Regarding Q1, while I agree that the TMU metric serves as a time-space tradeoff, this criterion still looks a bit arbitrary to me. As you mentioned in the response to W3, using the TMU metric can sometimes degrade the performance of the fine-tuned model. This seems to suggest that TMU might not be the best criterion in some cases.
What is the rationale for choosing TMU over other tradeoff criteria?

---

Reply to Comment 1.1.1: Comment: Thank you for your appreciation of our previous reply, which addressed most of your concerns. We provide additional responses in this thread to address your remaining concerns. We hope that our response can address all your concerns and lead you to consider increasing your rating of our work.

**1. Regarding W1 & W2, the results show that SwiftCoder does improve eff@1 but barely improves eff@10 and eff@100, and the efficiency still seems far from ENAMEL's reference solutions**

We would like to clarify that ENAMEL's reference solutions represent optimal efficiency because they rely on human experts to manually craft solutions, whereas our approach is **fully automated**. Despite this fundamental difference in methodology, our eff@1 results achieve state-of-the-art performance without requiring extensive manual effort for dataset creation. The eff@1 metric is most relevant for real-world applications, as users typically rely on a model's first solution rather than sampling multiple times. Regarding eff@10 and eff@100, Table 3 in ENAMEL [1] reports best results of eff@10 = 0.573 and eff@100 = 0.723; SwiftCoder achieves 0.653 and 0.763, respectively, demonstrating state-of-the-art performance through a fully automated process. Our improvements in eff@10 and eff@100 (e.g., 4% in eff@100) are still significant, even though supervised fine-tuning may have inherent limitations in enhancing fundamental algorithmic capabilities. In addition, our synthesis pipeline enables future work in multiple directions to enhance the fundamental coding capabilities of LLMs: (1) generating more high-efficiency training data for continued pretraining, and (2) using our efficient solutions during reinforcement learning fine-tuning to compute rewards between model-generated and efficient solutions, further enhancing algorithmic capabilities beyond what instruction tuning alone can achieve.

**2.
Regarding Q1, TMU might not be the best criterion in some cases. What is the rationale for choosing TMU over other tradeoff criteria?**

We would like to clarify that in our evaluations, we have not observed any significant performance degradation caused by the TMU metric. As shown in Table 2 of our paper, among the evaluated models, **only three exhibit a memory-peak decrease of less than 1%, while execution time still improves by an average of 40%**. During dataset construction, we adopted TMU following existing works (EffiLearner [2] and EffiBench [3]), which use TMU to balance time and memory usage; EffiLearner also uses TMU to rank the efficiency of LLM-generated code across optimization iterations while balancing time and memory usage.

Nonetheless, SwiftCoder is designed to allow easy substitution of other efficiency metrics. For instance, if one wishes to focus solely on execution time or memory peak, the SwiftCoder pipeline can be adapted by replacing the TMU criterion with the desired metric, as illustrated by the following Python snippet:

```python
overhead, memory_usage, execution_time, memory_peak = calculate_code_execution_efficiency(dataset[i])
if dataset[i]["memory_usage"] > memory_usage:
    dataset[i]["memory_usage"] = memory_usage
    dataset[i]["execution_time"] = execution_time
    dataset[i]["memory_peak"] = memory_peak
```

We hope this further clarification adequately addresses your remaining concerns. Our goal with SwiftCoder is to provide a versatile, **scalable**, and effective framework for encouraging LLMs to produce code with improved efficiency **without manual effort**. We believe our experimental results and methodology underscore the potential of this approach while leaving room for future work on alternative metrics, human-in-the-loop optimization, or domain-specific refinements. Thank you once again for your thoughtful evaluation, and we greatly appreciate your consideration of an improved overall rating.
[2] EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization (NeurIPS 2024).
[3] Huang, Dong, et al. "EffiBench: Benchmarking the Efficiency of Automatically Generated Code." Advances in Neural Information Processing Systems 37 (2024): 11506-11544.
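The TMU values discussed in this thread are reported in MB*s, which suggests memory usage integrated over execution time. As a rough, hypothetical sketch of how such a metric could be computed from profiler samples (this is an illustrative reading of the unit, not the benchmark's actual harness):

```python
def total_memory_usage(samples):
    """Approximate the integral of memory over time (trapezoidal rule).

    samples: list of (timestamp_s, memory_mb) pairs from a profiler.
    Returns a TMU-style value in MB*s.
    """
    tmu = 0.0
    for (t0, m0), (t1, m1) in zip(samples, samples[1:]):
        tmu += (m0 + m1) / 2.0 * (t1 - t0)
    return tmu

# Illustrative trace: a constant 40 MB held for 0.5 s of execution.
trace = [(0.0, 40.0), (0.25, 40.0), (0.5, 40.0)]
print(total_memory_usage(trace))  # 20.0
```

Under this reading, a program can lower its TMU either by finishing faster or by holding less memory while it runs, which is exactly the composite behavior the rebuttal attributes to the metric.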
Summary: This paper introduces SWIFTCODE, a method to enhance code generation in large language models (LLMs) through efficiency-aware fine-tuning. The approach leverages multiple LLMs to generate diverse code solutions for various tasks across different programming languages, then evaluates these solutions by directly measuring their execution time and memory usage through local execution, selecting the code with the lowest execution time and memory consumption as the final output for each task. Experimental results demonstrate significant improvements when fine-tuning with SWIFTCODE.

Claims And Evidence: Yes
Methods And Evaluation Criteria: The candidate solutions appear not to have undergone any correctness verification, which is perplexing.
Theoretical Claims: This paper is not concerned with the proof of theoretical claims.
Experimental Designs Or Analyses: Yes. The experimental design is sound and substantial, but I think some further elaboration and additions about generalizability are missing.
Supplementary Material: Yes, I checked the demo code generated by the model in this paper.
Relation To Broader Scientific Literature: I think the idea of this paper is generally easy to come up with: constructing high-quality expected training data to fine-tune the LLM so that it outputs higher-quality responses. It meets expectations and works well. In other words, this paper uses a popular idea to address the still relatively rarely solved problem of poor efficiency in LLM-generated code.

Essential References Not Discussed: Recently, several studies have been dedicated to improving the efficiency of code generated by large language models (LLMs). The authors should consider these works and select representative examples as baselines. For instance:
1. Effi-Code: Unleashing Code Efficiency in Language Models
2.
EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization (NeurIPS 24)

At the same time, the evaluation of the efficiency of LLM-generated code has received considerable attention. In addition to those discussed in this paper, I have identified several recent studies. Given that the literature on this topic is not yet extensive, it warrants thorough discussion. Examples include:
1. How Efficient is LLM-Generated Code? A Rigorous & High-Standard Benchmark
2. A Performance Study of LLM-Generated Code on Leetcode
3. From Effectiveness to Efficiency: Comparative Evaluation of Code Generated by LCGMs for Bilingual Programming Questions

Other Strengths And Weaknesses:
Strengths:
1. The paper is well structured.
2. It addresses a very important problem.
3. The experimental evaluation is extensive, and the empirical results are promising.

Weaknesses:
1. Limited novelty: Although I acknowledge that the paper tackles a critical issue and represents one of the pioneering efforts to enhance the efficiency of code generated by LLMs, it appears to lack distinctive technical contributions.
2. Unclear technical design: It seems that the approach for collecting and filtering candidate solutions relies solely on performance metrics obtained from profiling, without assessing correctness. If my understanding is correct, I am concerned that this step might restrict the fine-tuning of LLMs in generating correct code (the authors need to explain the rationale and motivation behind this). Otherwise, the authors should clarify how test cases and ground truth are obtained for tasks across different programming languages.
3. Generalizability requires further discussion: I have some reservations regarding the generalizability of the trained model. Although diverse datasets were used during training and evaluation, the authors did not discuss the distributional differences between these datasets, for instance, whether the evaluation dataset contains tasks absent from the training data.
A more thorough discussion of the distributional discrepancies between the datasets would further substantiate the generalizability of the proposed method in generating more efficient code.
4. Some essential references not discussed: as mentioned above.

I am very eager to engage in discussions with the authors and to revise my comments accordingly.

Other Comments Or Suggestions: NA
Questions For Authors:
1. Regarding dataset construction, how were the test cases and ground truth obtained? Alternatively, was there no verification of the correctness of the generated candidates?
2. What is the task distribution between the training data and the evaluation data?

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for their insightful comments and suggestions. We provide detailed responses point by point. We hope our responses can address your concerns and lead you to consider increasing your rating of our work.

**C1 & W1: Relation & limitations.** Thank you for acknowledging the practical value of our work on code efficiency optimization. While the concept may seem intuitive in retrospect, SwiftCoder's implementation required significant technical innovation to overcome multiple challenges. First, our multilingual efficiency dataset development required specialized infrastructure to measure code efficiency across five programming languages with consistent metrics, control for system-dependent execution variations, maintain consistent evaluation environments, and scale our pipeline to handle 70,000+ diverse code samples. Second, we established a previously unconfirmed relationship between training data efficiency and LLM-generated code efficiency. Unlike discrete correctness metrics, efficiency exists on a continuous spectrum with complex interactions with functionality, creating unique optimization challenges beyond merely applying the "quality in, quality out" principle. Finally, while substantial research has focused on code correctness, efficiency optimization remains critically underrepresented despite its practical importance. SwiftCoder provides both methodology and dataset contributions to this nascent research area. We believe these contributions collectively advance the state of efficiency-aware code generation and establish foundations for future research in this important domain.

**C2 & W4: Additional references** We could not find the latest Effi-Code [1] on arXiv. If the reviewer provides a link, we will gladly compare it with SwiftCoder. We have added an EffiLearner [2] comparison in the table below, showing SwiftCoder's superior performance.
On EffiBench, SwiftCoder reduces average execution time more effectively than EffiLearner. Unlike EffiLearner, which decreases pass@1 rates, SwiftCoder consistently improves pass@1 across all evaluations.

|Model|ET|NET|MU|NMU|TMU|NTMU|
|-|-|-|-|-|-|-|
|DeepSeek-6.7B|0.56|1.20|40.17|4.09|96.78|13.79|
|+EffiLearner|0.46|0.98|40.14|1.00|15.50|1.04|
|+Ours|0.39|0.83|40.16|1.00|15.03|0.90|

We also evaluated SwiftCoder on ENAMEL [3], which tests efficiency on enhanced HumanEval test cases. As shown in the table below, SwiftCoder consistently improves both pass@k and effi@k metrics across all models and k values (1, 10, and 100).

| Model|effi@1|pass@1|effi@10|pass@10|effi@100|pass@100|
|-|-|-|-|-|-|-|
|Qwen2.5-7B|0.373|0.589|0.628|0.866|0.732|0.951|
|+SwiftCoder|0.458|0.739|0.653|0.905|0.763|0.972|
|DeepSeek-6.7B|0.179|0.299|0.549|0.822|0.727|0.922|
|+SwiftCoder|0.393|0.654|0.633|0.887|0.752|0.937|

Coignion et al. [4] focus on LeetCode efficiency evaluation, which is partially covered by the EffiBench dataset we evaluate (see paper Table 2). Jiang et al. [5] collected 52 bilingual programming questions (Chinese and English) from existing benchmarks. Their work is under review, and the dataset is not publicly available yet. We will evaluate on their dataset when it becomes available and include all related discussions and results.

**W2 & Q1: Test cases & ground truth.** All tasks in our datasets have verified solutions (ground truth) that correctly fulfill their descriptions. For example, SelfCodeAlign only includes code that passes all test cases in a controlled environment. Since most datasets lack test cases, we used GPT-4 to generate them based on task descriptions and solutions. We validated these tests by running them against the original solutions from the datasets. Only tests that executed successfully were retained; failed tests were filtered out. Tasks without valid tests were removed entirely. During optimization, we ran LLM-generated code against these validated tests.
Solutions failing any test were eliminated. From the remaining valid solutions, we selected the most efficient implementation as the final optimized code.

**W3 & Q2: Data contamination** We addressed data contamination concerns in our SwiftCoder dataset. Analysis shows no exact duplicates between training and evaluation sets, with only 0.20% of evaluation samples having minimal vocabulary overlap (5-10%). Our dataset construction included decontamination processes, as did our source datasets such as Evol-Ins, which removed content from common benchmarks. Experiments on EvoEval, designed to avoid benchmark leakage, still demonstrate that our approach produces more efficient code than the original LLM-generated solutions.

*Table 1: Data Overlap Analysis*

| Metric | Value | Percentage |
|-|-|-|
|Training set size | 65,710| - |
|EffiBench size|1,000|-|
|Exact duplicates|0|0.00%|

*Table 2: Vocabulary Overlap Analysis*

| Overlap Range | Test Samples | Percentage |
|-|-|-|
|0.00-0.05|0|0.00%|
|0.05-0.10|2|0.20%|
|0.10-0.15|0|0.00%|
|0.15-0.20|0|0.00%|

---

Rebuttal Comment 1.1: Comment: Thank you for your thoughtful replies to my comments and questions. I appreciate the effort you've made to address my concerns. Regarding C2 & W4, the additional baseline comparison with EffiLearner effectively demonstrates the performance of your approach. The expanded discussion of existing code generation evaluation work will be a valuable addition to strengthen the related work section. I encourage you to incorporate the elements mentioned in your response in the next version. As for W2 & Q1 and W3 & Q2, your detailed explanations and data analysis regarding the test cases, ground truth, and data contamination concerns have adequately addressed my initial reservations on these matters. But can you provide more details about how you calculate the duplication between training and evaluation sets?
However, concerning C1 & W1 on relations and limitations: while I agree that your work addresses challenges in building a unified multi-programming-language framework and makes contributions in this area, I feel these contributions may be more engineering-oriented rather than providing novel technical insights. Your point about using efficiency to transform the optimization objective from discrete to continuous is interesting, but I have two questions. First, correctness can also seem approximable as continuous rather than binary, for instance by calculating test pass rates. Second, I wonder if efficiency as an optimization target might be susceptible to randomness, where small millisecond variations could be system-induced noise rather than meaningful differences. Could this potentially mislead the model with incorrect reward signals? Overall, while I still maintain some reservations about the innovation aspects and broader technical impact, I have decided to increase my score to 3.

---

Reply to Comment 1.1.1: Comment: Thank you for your appreciation of our previous reply and for increasing your overall score from 2 to 3. We provide additional responses in this thread to further address your concerns. We hope that our response can address all your concerns and lead you to consider increasing your rating of our work.

**Regarding C2 & W4: I encourage you to incorporate these elements mentioned in your response in the next version.** Thank you for your suggestion. We will add the additional evaluation results and the discussion of the mentioned baselines and benchmarks to our camera-ready version.

**As for W2 & Q1 and W3 & Q2: can you provide more details about how to calculate the duplication between training and evaluation sets?** Our data contamination analysis employs a multi-level approach to thoroughly assess potential overlap between the training set (SwiftCoder training set) and the evaluation set (EffiBench).
We first perform exact duplication detection by constructing hash tables of all training and evaluation samples and then computing their intersection. This method identifies perfect matches with O(n) efficiency and finds zero exact duplicates between our training set (65,710 samples) and evaluation set (1,000 samples). Beyond exact matching, we implemented a vocabulary overlap analysis that calculates the Jaccard similarity coefficient between tokenized samples. For each evaluation example, we compute the percentage of its vocabulary tokens that appear in any training sample. Results show that only 0.20% of evaluation samples have any meaningful vocabulary overlap (5-10% range), with most evaluation examples using completely distinct vocabularies. We also attempted character-level n-gram similarity analysis using TF-IDF vectorization and cosine similarity, but this proved computationally intensive at scale. To validate the findings statistically, we created a random baseline distribution by shuffling vocabulary tokens while maintaining token frequency distributions. This allows us to distinguish between incidental overlap and substantive contamination through statistical significance testing (p < 0.05). The vocabulary overlap analysis provides strong evidence that our evaluation set represents an independent task distribution from the training data, ensuring our assessment measures genuine generalization capability rather than memorization.

**Concerning C1 & W1 on relations and limitations** Thank you for acknowledging our framework's contributions to multi-programming-language support. While our work does involve substantial engineering effort, it also offers methodological innovations, particularly in our code efficiency optimization framework. For instance, our implementation of rejection sampling for code efficiency represents a novel technical approach that we perhaps did not sufficiently highlight as a key contribution.
Your observation about correctness potentially being viewed as continuous is insightful. However, we deliberately maintain a binary approach to correctness for critical reasons. In real-world deployments, partially correct code can lead to significant failures and potentially catastrophic outcomes; complete correctness is a non-negotiable prerequisite. Additionally, our approach aligns with established evaluation paradigms in code completion research, which predominantly use pass@1 metrics to evaluate whether generated code passes all test cases rather than calculating partial success rates.

Our framework employs a sequential optimization approach that first ensures correctness and then optimizes efficiency as a continuous variable, expressed as the relative overhead compared to benchmark solutions (e.g., 1x, 2x, or 3x the execution time of reference implementations). This two-phase approach allows us to maintain the binary requirement for correctness while leveraging the continuous nature of efficiency metrics for optimization.

Your concern about efficiency measurements being susceptible to system-induced noise is valid and one we anticipated in our work. As demonstrated in Appendix Table 8, multiple executions of LLM-generated code on EffiBench show remarkably consistent performance metrics: the standard deviation of execution times across five runs was zero in most cases, confirming the reliability of our measurements and ensuring our model receives accurate reward signals. For scenarios where randomness might be more pronounced, alternative methodologies are available, including measuring FLOPs instead of raw execution time, computing the time required for multiple iterations, or counting iterations completed in a fixed time. These approaches effectively normalize minor system fluctuations.
We appreciate your critical engagement with our work and hope these clarifications address your concerns regarding the technical depth and methodological soundness of our approach.
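The multi-level duplication check described in this thread (exact detection via content hashing, plus a Jaccard vocabulary-overlap pass) can be sketched roughly as follows. This is an illustrative reconstruction from the reply's description, not the authors' released code, and the tiny in-line samples are placeholders:

```python
import hashlib

def sample_hash(text):
    # Exact-duplicate detection via content hashing.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def jaccard(a_tokens, b_tokens):
    # Jaccard similarity coefficient between two token sets.
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

train = ["def add(a, b): return a + b", "def mul(a, b): return a * b"]
eval_set = ["def sub(x, y): return x - y", "def add(a, b): return a + b"]

# Level 1: exact duplicates via hash-set intersection (O(n) overall).
train_hashes = {sample_hash(t) for t in train}
exact_dups = [e for e in eval_set if sample_hash(e) in train_hashes]

# Level 2: per-eval-sample maximum vocabulary overlap against training samples.
train_vocab = [t.split() for t in train]
overlaps = [max(jaccard(e.split(), tv) for tv in train_vocab) for e in eval_set]

print(len(exact_dups))  # 1
```

In this toy run the shared `add` snippet is caught at level 1, and its overlap score at level 2 is 1.0; a real decontamination pass would threshold the level-2 scores as the reply describes.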
Summary: SwiftCode introduces a novel approach to improving code generation in large language models (LLMs) by focusing on both correctness and efficiency. Traditional methods primarily optimize correctness, often neglecting execution speed and memory usage. SwiftCode addresses this gap by fine-tuning LLMs with a curated dataset of high-quality, efficient code. The method involves generating multiple candidate solutions using various LLMs, measuring execution time and memory consumption, and selecting the most efficient option. Experimental results show substantial improvements. This efficiency-aware fine-tuning framework enables LLMs to produce more optimized code, benefiting both software development and computational efficiency.

Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes. The authors selected different LLMs to conduct experiments verifying the effectiveness of their method and validated it on multiple datasets.
Supplementary Material: Yes. The authors uploaded their generated fine-tuning data.
Relation To Broader Scientific Literature: This work is related to code generation. There have previously been baselines and evaluation datasets for assessing the execution efficiency of code generated by LLMs.
Essential References Not Discussed: As far as I know, no.

Other Strengths And Weaknesses:
**Strengths**
1. The method proposed by the authors is simple and effective.
2. This work has high reference value for code generation research and is relatively easy to follow.

**Weaknesses**
1. The contribution and novelty of this work are relatively limited.
2. The rejection-sampling fine-tuning method used in this work has been widely adopted in mathematics and code-related research, such as [1].
3. There is a lack of more in-depth analytical experiments, such as exploration of sampling parameters and comparison with the original dataset.
[1] https://arxiv.org/pdf/2308.01825

Other Comments Or Suggestions:
1. Typos: the column names of Table 7 should be kept consistent with those of the other tables.
2. What do the purple percentages in Table 7 mean? There seems to be no reference value.
3. The comparative experiment in Table 7 selected CodeLlama (inconsistent with the baseline models selected in the paper's other experiments), whose baseline performance is relatively poor. Has it been compared with the other two methods on DeepSeek and Qwen?
4. In addition, could you please briefly summarize the differences between your method and PIE and Mercury? As related work, I think these comparisons are very important.

Questions For Authors: Please see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We want to thank the reviewer for their insightful comments and suggestions. We provide detailed responses point by point. We hope our responses can address your concerns and lead you to consider increasing your rating of our work.

**W1 & Q4: Novelty and comparison with PIE and Mercury** Our paper's primary contribution is application-driven: we introduce the first multilingual code efficiency instruction-tuning dataset. As the ICML guidelines note, "originality need not mean wholly novel methods… a novel dataset... match the needs of the user." Our dataset addresses the critical real-world need for efficient code generation, enabling researchers to fine-tune models for improved performance. Compared to existing works:
1. SwiftCoder introduces a fully automated code optimization framework that transforms initial task descriptions into efficient solutions without human intervention. Unlike PIE, which relies on human programmers to write efficient solutions, or Mercury, which selects the most efficient solution from pre-existing human-written code, SwiftCoder can optimize code starting from just a task description. This automation enables researchers and developers to enhance their existing code generation datasets with minimal manual effort, making efficiency optimization more accessible and scalable.
2. SwiftCoder offers broader language coverage and greater generalizability by including optimized tasks across multiple programming languages (C++, Python, Java, Rust, and Go), in contrast to PIE's focus on C++ and Mercury's focus on Python, allowing models fine-tuned on SwiftCoder to perform effectively across diverse language environments. Additionally, SwiftCoder's significantly larger scale (65,710 unique tasks, compared to Mercury's 1,889 and PIE's 1,474) provides more comprehensive training data, resulting in superior pass@k and efficiency results.
**W2: Rejection sampling in math and code domains** We agree that rejection sampling has been widely used in the mathematical domain. However, this technique is not our key contribution. Our primary contributions are the empirical study establishing the correlation between training data efficiency and generated code efficiency, the development of a multilingual code efficiency dataset, and the end-to-end pipeline for improving code efficiency across multiple languages. Unlike prior code generation work that focuses on binary correctness metrics, efficiency in our work is a continuous metric requiring different optimization strategies. We would appreciate it if the reviewer could suggest specific code-related rejection sampling techniques for efficiency optimization; we would be happy to include them in our paper.

**W3: Sampling parameters** We conducted an ablation study on the sampling size. The evaluation results are shown in **Reviewer jr1T Ref Table 1**, where we evaluate the efficiency results for the most efficient code with 1 and 10 samples. Our results reveal that SwiftCoder consistently achieves higher results. We also compared baselines against models fine-tuned on the **original dataset** and on SwiftCoder. The table below shows that models fine-tuned on the original dataset decreased efficiency. For example, average execution time increases from 0.33s to 0.37s for DeepSeek.

|Model|ET|NET|MU|NMU|TMU|NTMU|
|-|-|-|-|-|-|-|
|deepseek-6.7B|0.33|2.48|34.32|1.00|13.09|2.36|
|+Original|0.37|2.99|34.18|1.00|10.65|3.04|
|+SwiftCoder|0.23|1.84|34.17|1.00|8.91|2.30|
|Qwen2.5-7B|0.30|2.50|26.35|1.00|5.22|2.43|
|+Original|0.32|2.68|26.23|0.99|5.11|2.54|
|+SwiftCoder|0.13|1.02|26.32|1.00|2.27|1.03|

**Q1 & Q2: Inconsistency of column names** The purple percentages in paper Table 7 indicate the reduction in overhead metrics compared to CodeLlama-7B-hf without instruction tuning from PIE, Mercury, or SwiftCoder.
We provide the detailed results in the table below, and we will add them to the camera-ready version of our paper.

|Method|ET|NET|MU|NMU|TMU|NTMU|
|-|-|-|-|-|-|-|
|CodeLlama-7b|0.39|1.94|61.68|1.00|12.78|1.83|
|+PIE|0.30|1.47|61.39|1.00|11.28|1.68|
|+SwiftCoder|0.21|1.03|61.33|1.00|7.17|1.04|
|CodeLlama-7b|0.39|1.94|61.69|1.00|12.78|1.83|
|+Mercury|0.31|1.51|61.94|1.00|10.24|1.47|
|+SwiftCoder|0.21|1.01|61.73|1.00|6.95|0.98|

**Q3: PIE and Mercury with DeepSeek and Qwen** All comparative experiments were conducted with fairness as a priority. For Table 7, we used CodeLlama because PIE only provides a fine-tuned CodeLlama. For Qwen and DeepSeek, we compared SwiftCoder with Mercury only. The table below shows that both Mercury and SwiftCoder improve the efficiency of LLM-generated code, while SwiftCoder achieves better results than Mercury for all LLMs.

|Model|ET|NET|MU|NMU|TMU|NTMU|
|-|-|-|-|-|-|-|
|deepseek-6.7B|1.62|2.57|40.92|1.00|49.64|3.03|
|+Mercury|1.42|2.24|40.82|1.00|46.12|2.78|
|+SwiftCoder|1.29|2.01|40.79|1.00|42.76|2.55|
|Qwen2.5-7B|1.23|1.91|46.48|1.00|39.00|1.91|
|+Mercury|0.86|1.24|47.51|1.01|34.65|1.30|
|+SwiftCoder|0.70|0.95|46.43|1.00|26.02|0.95|
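The pipeline defended in these rebuttals first gates candidates on binary test correctness and then keeps the most efficient survivor. A minimal, hypothetical sketch of that selection step (the predicate and profiling functions are illustrative placeholders, not the authors' code):

```python
def passes_all_tests(candidate, tests):
    # Binary correctness gate: every test must pass, as the rebuttal describes.
    return all(t(candidate) for t in tests)

def profile_tmu(candidate):
    # Placeholder profiler: returns a precomputed TMU stand-in for this sketch.
    return candidate["tmu"]

def pick_efficient(candidates, tests):
    # Keep only correct candidates, then take the one with minimal TMU.
    valid = [c for c in candidates if passes_all_tests(c, tests)]
    if not valid:
        return None  # the task is dropped when no candidate is correct
    return min(valid, key=profile_tmu)

tests = [lambda c: c["correct"]]
candidates = [
    {"id": "slow",  "correct": True,  "tmu": 19.0},
    {"id": "fast",  "correct": True,  "tmu": 5.8},
    {"id": "buggy", "correct": False, "tmu": 1.0},
]
print(pick_efficient(candidates, tests)["id"])  # fast
```

Note that the incorrect candidate is discarded even though it has the lowest TMU, which mirrors the sequential correctness-then-efficiency ordering the authors emphasize.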
Summary: This paper studies the problem of using an LLM to generate higher-performance code. The authors propose a pipeline that first constructs a training dataset by sampling the LLM and choosing generations that have higher performance. Then, they finetune the LLM on slow-fast pairs to get it to generate faster code. They evaluate their approach compared to several baselines.
Claims And Evidence: See below.
Methods And Evaluation Criteria: See below.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See below.
Supplementary Material: N/A
Relation To Broader Scientific Literature: See below.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
Strengths
* Important problem: Code performance is a key problem in programming languages and software engineering research, and LLMs show promise in helping with this problem.

Weaknesses
* Unclear novelty: It is not clear what exactly is new about the SwiftCoder approach. As far as I understand, they are drawing multiple samples from the LLM, evaluating their performance, and then choosing the best one as the target. There does not appear to be any significant methodological novelty in this pipeline.
* Unclear PIE baseline: The authors compare to PIE, a recent work on performance optimization for C++ programs. However, PIE actually studies several different strategies for using LLMs for code optimization. The authors should clarify which algorithm in the PIE paper they compared against. Several of the techniques studied in that paper involved finetuning and were highly effective.
* Lack of test-time search: I'm wondering why the authors only consider taking a single sample from the LLM at test time (i.e., they only study pass@1). For performance optimization, it is pretty typical to take multiple samples, since we can take the fastest program that passes all the test cases. How does the comparison to baselines scale with the number of samples taken?
* Local execution is noisy: Measuring performance by executing code on a local machine can be very high variance due to a number of factors, including other programs running on the same machine as well as stochasticity in context switches performed by the operating system. The recent PIE paper (Shypula et al., 2024) proposes to use system simulators to mitigate this problem (though it only works for C++, not Python).

Other Comments Or Suggestions: N/A
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: We would like to thank you for your insightful comments and suggestions. We provide detailed responses point by point below. We hope that our clarifications, additional experiments, and responses can address your concerns and lead you to consider increasing your rating of our work.

**W1 Limited novelty** Our paper's primary contribution is application-driven: we introduce the first multilingual code efficiency instruction tuning dataset. As the ICML guidelines note, "originality need not mean wholly novel methods… a novel dataset... match the needs of the user." Our dataset addresses the critical real-world need for efficient code generation, enabling researchers to fine-tune models for improved performance. Next, our empirical study provides valuable insights by establishing the correlation between training data efficiency and LLM-generated code efficiency, a relationship not previously well understood in the literature. This finding can inspire future construction of SFT datasets that optimize for both correctness and efficiency. Unlike prior methods such as PIE [1], which relies on human programmers to write efficient solutions, or Mercury [2], which selects efficient solutions from pre-existing human-written code, SwiftCoder introduces a fully automated framework that transforms task descriptions into efficient solutions without human intervention. Finally, SwiftCoder offers greater generalizability by including optimized tasks across 5 programming languages, contrasting with PIE's focus on C++ and Mercury's focus on Python. SwiftCoder's significantly larger scale (65,710 tasks compared to Mercury's 1,889 and PIE's 1,474) provides more training data, resulting in models that demonstrate superior performance.

[1] Learning performance-improving code edits. ICLR 2024
[2] Mercury: A code efficiency benchmark for code large language models.
NeurIPS 2024

**W2 Clarify the method in PIE was compared** We compared against the best-performing **All** strategy fine-tuned LLM from PIE in the paper. We have now included comparisons with the other PIE strategies in the table below. SwiftCoder outperforms all PIE variants on ET and TMU (e.g., it improves the ET and TMU reductions of the **All** strategy from **19.6%** and **72.3%** to **22.5%** and **75.9%**).

|Model|ET|NET|MU|NMU|TMU|NTMU|
|-|-|-|-|-|-|-|
|CodeLlama-7B|1.02|0.85|42.33|1.00|23.97|0.82|
|All|0.82|0.72|8.87|0.18|6.64|0.24|
|HQ|1.14|0.98|10.55|0.23|7.06|0.26|
|All w/Perf-Cond|0.92|0.81|8.91|0.19|6.99|0.25|
|HQ+Self-Play|0.92|0.80|12.46|0.27|7.80|0.28|
|SwiftCoder|0.79|0.70|11.06|0.24|5.77|0.21|

**W3 Results with more samples** Our evaluation follows the setup of prior works (EffiLearner, Mercury), which use greedy decoding. We conducted additional evaluations for pass@10 (T = 0.8) and will add pass@100 results in our camera-ready manuscript due to rebuttal time limitations. The table below presents our findings across different sampling scenarios. SwiftCoder maintains its efficiency advantage across all configurations. Even when baselines are sampled 10 times, they still cannot match the efficiency of 1 SwiftCoder sample. When both approaches use the same number of samples, SwiftCoder consistently generates more efficient solutions. Specifically, when both use 10 samples, SwiftCoder improves pass rates by 4-24 percentage points while maintaining better efficiency.

*Reviewer jr1T Ref Table 1*

|Model|ET|NET|MU|NMU|TMU|NTMU|Overlap|Pass@1|
|-|-|-|-|-|-|-|-|-|
|**Baseline sample 1 vs. SwiftCoder sample 1**|
|deepseek-6.7B|0.34|2.56|47.26|1.45|30.05|9.97|36.0|44.4|
|+SwiftCoder|0.22|1.71|36.31|1.00|9.48|2.11|36.0|51.7|
|Qwen2.5-7B|0.31|2.35|31.66|1.00|11.00|2.15|37.2|44.8|
|+SwiftCoder|0.16|1.12|31.67|1.00|8.28|1.18|37.2|57.7|
|**Sample 10 vs. sample 1**|
|deepseek-6.7B|0.39|3.00|43.86|1.24|20.14|6.49|48.1|66.8|
|+SwiftCoder|0.37|2.68|38.99|1.04|20.13|3.28|48.1|51.9|
|Qwen2.5-7B|0.33|2.56|30.99|1.00|9.74|2.33|41.3|49.9|
|+SwiftCoder|0.16|1.16|31.01|1.00|7.80|1.24|41.3|57.7|
|**Sample 10 vs. sample 10**|
|deepseek-6.7B|0.44|3.45|41.61|1.19|19.06|6.32|62.5|66.8|
|+SwiftCoder|0.40|3.09|42.24|1.19|17.74|5.54|62.5|70.8|
|Qwen2.5-7B|0.34|2.59|32.68|1.00|12.09|2.38|48.3|49.9|
|+SwiftCoder|0.20|1.49|32.67|1.00|7.82|1.43|48.3|73.8|

**W4 High variance on local machine** We use an open-source code efficiency platform, Monolith, for evaluation. Similar to the PIE system simulator, Monolith provides consistent performance evaluation but with broader language support. We also report variability results in Appendix Table 8, where we execute multiple tasks (32 concurrent tasks) from EffiBench 5 different times on Monolith. Our results demonstrate **consistent performance across different execution runs even under concurrent program load, with a coefficient of variation below 3% across all metrics**. This consistency shows that our measurement approach provides reliable results without requiring specialized system simulators.
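The pass@k discussion above amounts to best-of-n selection: sample several programs and keep the fastest one that passes all test cases. The following toy Python sketch is our own illustration of that idea (the function name `best_of_n` and the callables are hypothetical; a real harness such as the Monolith platform mentioned above would sandbox execution and also measure memory, not just wall-clock time):

```python
import time

def best_of_n(candidates, test_cases):
    """Among sampled programs, return the fastest one passing all test cases.

    Toy sketch: 'programs' are plain Python callables; timing here covers the
    correctness check itself, which is enough for illustration.
    """
    best, best_time = None, float("inf")
    for program in candidates:
        try:
            start = time.perf_counter()
            ok = all(program(x) == y for x, y in test_cases)
            elapsed = time.perf_counter() - start
        except Exception:
            continue  # a crashing candidate is simply discarded
        if ok and elapsed < best_time:
            best, best_time = program, elapsed
    return best

# Two "sampled" solutions to the same task: sum of 1..n.
slow = lambda n: sum(range(1, n + 1))
fast = lambda n: n * (n + 1) // 2
tests = [(10, 55), (100, 5050)]

chosen = best_of_n([slow, fast], tests)
assert chosen(1000) == 500500  # whichever wins the timing, it is correct
```

Note that single-run timings are noisy (the reviewer's W4 point), which is why a production harness would repeat measurements and control for load.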
It's My Data Too: Private ML for Datasets with Multi-User Training Examples
Accept (poster)
Summary: This paper discusses user-level privacy under the multi-attribution scenario, where each user is associated with multiple examples but also each example can be attributed to multiple users. It starts by defining the differential privacy notion used in this case, fixed-graph DP. Then it goes on to discuss how to use some of the known algorithms, DP-SGD and DP-MF, for the purpose of multi-attribution data. The paper discusses methods to choose the set S, a subset of the data where each user contributes at most k examples. This subset needs to be maximized while keeping this restriction.

## update after rebuttal
I keep my score.

Claims And Evidence: The paper in general follows well and is understandable; however, some claims raise some questions. I list them below:
- The claim that allowing duplicates allows for a larger dataset which reduces DP noise. I fail to understand why having duplicates leads to a larger dataset, as we would still have the same limit on contribution.
- In section 5.2, the authors hypothesize that the skew in the graph is because k more users will have <k examples than for the regular graph and then the dataset will be smaller. Again, I fail to see why this is true.

Methods And Evaluation Criteria: The proposed methods and metrics give a clear idea of the advantages of the used method. Also, the comparison of the greedy algorithm with more sophisticated techniques offers a lot of value to the paper.
Theoretical Claims: There are not really theoretical proofs in the paper, as the paper empirically justifies and tests the proposed methods, as well as compares the different techniques.
Experimental Designs Or Analyses: The experiments are sound and valid as far as I can tell, although I do have the questions I mentioned in the claims and evidence section.
Supplementary Material: I did review the supplementary material to get a clearer understanding of the paper, because, due to the space restriction, some of the algorithms and discussions are in the appendices.
Relation To Broader Scientific Literature: The problem at hand is a very important advancement in the literature as it discusses the important and natural extension of multi-attribution. In the real world, in many cases, such as the emails example in the paper, multiple users contributing to a single example needs to be taken into account along with the user-level privacy where a user contributes to multiple examples.
Essential References Not Discussed: One reference I found missing is Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016, October). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (pp. 308-318). When discussing the DP-SGD algorithm, it makes sense to cite this work.
Other Strengths And Weaknesses: One main strength of the paper is its originality and significance. I think this problem has a lot of important applications and this sheds a lot of light on the important techniques that can be used. The paper is generally also well written, although it has some typos which I list in the section below.
Other Comments Or Suggestions:
- In section 1.1: with multiple attributed data, not multiply
- In section 5.1: considered both regular and skewed graphs, not consider
- One suggestion: it is odd that the first figure mentioned is 2(b) instead of 2(a); is there a reason these figures are not flipped?

Questions For Authors: I first have the two questions from the claims section, which I copy here:
- The claim that allowing duplicates allows for a larger dataset which reduces DP noise. I fail to understand why having duplicates leads to a larger dataset, as we would still have the same limit on contribution.
- In section 5.2, the authors hypothesize that the skew in the graph is because k more users will have <k examples than for the regular graph and then the dataset will be smaller. Again, I fail to see why this is true.

Also, I have some other questions:
- In section 2, the definition of the attributed dataset: it seems like $x_i$ cannot be repeated in $e_i$? Can you clarify whether it can?
- Also in the same section, when defining edge-data DP, it is said that m=m' and $e_i=e'_i$ for all $i\in [m]$ but they differ in one $x_i$. But doesn't some $e_i$ change when changing $x_i$?

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Thanks for your support of the paper, and for the editing suggestions, which we plan to incorporate. Here we respond to some of the reviewer's questions. As space permits, we plan to include more detailed explanations along these lines in the revision:

* __"X_i cannot be repeated in e_i"__: We believe this question is asking whether two different edges can have examples with the same content. Our definition does not restrict the values of different x_i, even in relation to each other. This is in line with other DP definitions such as example-level DP or user-level DP in the single-attribution model, which do not make any assumptions on the contents of examples in relation to each other.
* __"Doesn't some e_i change when changing x_i"__: Here we are decoupling the edge (i.e. the users associated with an example) and the content of the example. E.g., for edge-data adjacent datasets D, D', in D Alice might send Bob the email "how was your surgery?" and in D' Alice might send the email "are you free for lunch tomorrow?". In both cases e_i = {Alice, Bob}, but the content x_i has changed (and for edge-data adjacency, the rest of the graph is unchanged).
* __"duplicates allows for a larger dataset which reduces DP noise…"__: We apologize for not elaborating further in the paper. As a simple example, suppose our users are {A, B, C, D} and our examples are associated with hyperedges {{A}, {A, B}, {A, C}, {B, C}, {A, D}}. If we use k = 3, without duplicates we can get a dataset of size at most 4, since we have to discard one of the examples including A. However, with duplicates we can use the dataset, e.g. {{A}, {A, B}, {A, D}, {B, C}, {B, C}}, which includes 5 examples and still satisfies the bound k = 3. That is, we can use duplication (equivalently, upweighting of some examples) to even out users' contributions and make full use of each user's allocation.
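The hyperedge example in the rebuttal above can be checked mechanically. The following is a toy Python sketch (our own illustration, not code from the paper; `contributions` is a hypothetical helper) confirming that duplication keeps all 5 examples while respecting the contribution bound:

```python
from collections import Counter

def contributions(dataset):
    """Count how many examples each user is attributed to."""
    c = Counter()
    for edge in dataset:
        c.update(edge)
    return c

k = 3  # per-user contribution bound

# Without duplicates: A appears in 4 of the 5 hyperedges, so one of A's
# examples must be discarded and the dataset has at most 4 examples.
no_dup = [{"A"}, {"A", "B"}, {"A", "C"}, {"B", "C"}]
assert max(contributions(no_dup).values()) <= k

# With duplication: drop one of A's edges but duplicate {B, C} instead,
# keeping 5 examples while every user still contributes at most k = 3.
with_dup = [{"A"}, {"A", "B"}, {"A", "D"}, {"B", "C"}, {"B", "C"}]
assert max(contributions(with_dup).values()) <= k
assert len(with_dup) > len(no_dup)  # 5 examples vs. 4
```

This directly mirrors the rebuttal's point: duplication (equivalently, upweighting) evens out contributions so each user's allocation is fully used.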
* __“more users will have <k examples than for the regular graph and then the dataset will be smaller.”__: For simplicity let’s focus on 2-uniform graphs, i.e. the usual graphs where every edge has two users. Consider the 2-regular graph {{A, B}, {B, C}, {C, D}, {D, E}, {E, F}, {F, A}}. For contribution bound k = 2, by including all edges, every user saturates their contribution bound and we get 6 edges. In contrast, consider the graph {{A, B}, {B, C}, {C, A}, {A, D}, {B, E}, {C, F}}. The number of users/edges is the same, and the average degree is still 2, but half the users have degree 1 and half have degree 3. So under a contribution bound of 2, the best we can do is to include 4 edges (e.g. {{A, B}, {A, D}, {B, E}, {C, F}}). In short, some of the users don’t have enough edges to saturate their contribution bound, and others have too many edges and must discard some, so we can include fewer edges than if the graph were perfectly regular.
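The regular-vs-skewed comparison above can be made concrete with a toy brute-force sketch (our own illustration, not the paper's algorithm; brute force is only viable for such tiny graphs, since contribution bounding is NP-hard in general):

```python
from collections import Counter
from itertools import combinations

def max_bounded_subset(edges, k):
    """Largest subset of edges in which every user appears at most k times.

    Exhaustive search from the largest size downward; exponential in the
    number of edges, so only suitable for tiny example graphs.
    """
    for size in range(len(edges), -1, -1):
        for subset in combinations(edges, size):
            counts = Counter(u for e in subset for u in e)
            if all(v <= k for v in counts.values()):
                return list(subset)

k = 2
# 2-regular graph: a 6-cycle, every user has exactly k edges.
regular = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("F", "A")]
# Skewed graph: same number of users and edges, but A, B, C have degree 3
# while D, E, F have degree 1.
skewed = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "D"), ("B", "E"), ("C", "F")]

print(len(max_bounded_subset(regular, k)))  # 6: every edge can be kept
print(len(max_bounded_subset(skewed, k)))   # 4: high-degree users force discards
```

This reproduces the counts in the example: the perfectly regular graph keeps all 6 edges, while the skewed graph with the same average degree keeps at most 4.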
Summary: The paper studies user-level differential privacy when each training sample can be attributed to multiple users, called the multi-attribution model. It proposes a new privacy definition called fixed-graph DP, where users are nodes and examples are hyperedges, and the neighboring database is defined via arbitrary changes to all examples associated with a single user, which is similar to node DP in graph DP. To apply DP-SGD and DP-MF types of algorithms, we need to select a subset of the dataset where each user contributes a limited number of examples. This is posed as the contribution bounding problem, which is NP-hard. The authors present a greedy algorithm for contribution bounding and empirically evaluate it. The paper compares DP-SGD and DP-MF, finding DP-SGD generally outperforms DP-MF in the multi-attribution setting due to privacy amplification challenges with DP-MF.
Claims And Evidence: I think there are two claims: 1) the authors propose a new setting called the multi-attribution model; 2) the proposed greedy algorithm could be used for practical training. The first claim is interesting and practical. The second claim is evaluated based on experiments.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theoretical proof.
Experimental Designs Or Analyses: Yes. There are two tasks: training a small transformer on the arXiv dataset, and a synthetic logistic regression task. Both are relatively small scale, but I think they are convincing.
Supplementary Material: I read the proof of the NP-hardness part.
Relation To Broader Scientific Literature: This new problem setting, the multi-attribution model, could potentially bring more study and research.
Essential References Not Discussed: None
Other Strengths And Weaknesses:
Strengths: presentation is clear. Problem is interesting.
Weaknesses: the greedy algorithms are somewhat simple and lack theoretical novelty. I guess this problem will encourage more research in the future.
Other Comments Or Suggestions: In those figures, I feel like epsilon larger than 10 does not make too much sense. It is nearly non-private.
Questions For Authors: I feel like favoring examples with fewer users could skew the model toward less active or less connected users, potentially harming algorithmic fairness. Do you have any insights or thoughts on this?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Thanks for your support of the paper. We respond to a few points below:

* __"The greedy algorithms are somewhat simple.”__: We agree the algorithms are simple, and we have framed them as baselines to emphasize this. We note that our empirical results, which demonstrate the baseline algorithms are quite competitive, suggest developing much more complicated algorithms may be unnecessary in practice (but may still be an interesting research direction). We also agree that part of the goal of this paper is to encourage more work on the multi-attribution setting.
* __“epsilon larger than 10 does not make too much sense. It is nearly non-private.”__: We agree that in practice epsilons larger than 10 may not give meaningful privacy guarantees. Our goal with studying large epsilons was mainly to extend our empirical understanding to low-noise regimes. We also note that e.g. an epsilon of 64 in one setting can correspond to a much smaller epsilon in a different setting with the same noise multiplier and batch size but e.g. a different dataset size or graph structure, so studying these low-noise regimes may provide useful insights even if one does not believe that the corresponding epsilon is meaningful in practice.
* __“Potentially harming algorithmic fairness”__: This is an interesting point. Note that contribution bounding also makes the contributions of different users more uniform, so even without differential privacy one might expect contribution bounding to actually increase fairness in this dimension. One could summarize the impact on fairness as: contribution bounding favors examples with fewer users and users with fewer examples. We expect that whether this leads to an overall positive or negative impact on fairness will be highly subjective and context-dependent, depending on what properties of the dataset these quantities (examples per user and users per example) align with.
Summary: The paper introduces a novel differential privacy (DP) definition for datasets with multi-user attribution, where each training example is associated with multiple users (e.g. emails attributed to both senders and recipients). Their proposed adjacency definition, termed "fixed-graph DP," protects the content of each edge (message) but not the graph structure. This approach enables the design of significantly more practical algorithms compared to the more conservative existing definition (Node DP, Fang et al., 2022). The authors address the challenge of bounding user contributions through example selection (an NP-hard problem). They first propose a greedy algorithm and then attempt to improve it using linear programming techniques, though with limited practical impact. The paper includes comprehensive experiments that demonstrate optimal strategies for both data selection and DP training algorithms, exploring the tradeoffs between bias and variance in the multi-attribution setting.

## Post-rebuttal update
I choose to maintain my high score. Although I agree that the lack of formal protection of the graph structure is concerning, this paper is a first of its kind and proposes a new DP formulation, paving the way for future work which can address the issue.

Claims And Evidence: The paper presents a comprehensive body of work: they introduce a new, well-justified definition of privacy, develop methods to bound user contributions, and effectively apply existing privacy accounting techniques. All aspects of their approach are thoroughly proven as necessary. The authors explore a good range of algorithmic design options to provide fixed-graph DP. Their investigation covers greedy algorithms, linear programming techniques, and variants that handle duplication and bias mitigation. The experimental results effectively demonstrate the viability of their approach across different settings, showing performance on both synthetic and real-world datasets.
Overall, I believe this is a very strong paper with a novel and highly relevant idea, backed by proper evidence and rigorous analysis. However, I note two significant gaps:
1. The paper argues that the additional public data (the graph structure) poses minimal risk because it would be difficult for a practical attacker to exploit given the algorithm design. However, graph structure can often be highly sensitive information (including in their email example). More evidence is needed to support their claim that this doesn't undermine the privacy guarantees; otherwise, the provided guarantees may not be as meaningful as presented.
2. The paper compares their approach to a stronger definition (Node DP), arguing that their definition relaxation allows for more practical algorithms. I would like to see a comparison on "nice" graphs (cases where Node DP has practical implementations) to better understand at what point fixed-graph DP becomes an essential tradeoff. This would help clarify when each approach is most appropriate.

Methods And Evaluation Criteria: The paper proposes a well-justified set of methods to address privacy in the multi-attribution setting. The experimental methodology is sound, utilizing both synthetic data (allowing for controlled investigation of specific parameters) and real-world arXiv data (demonstrating practical applicability). The authors thoughtfully explore key algorithmic tradeoffs, particularly the balance between noise reduction and bias mitigation across different privacy budgets. As noted previously, however, a more extensive comparison with node DP would significantly enhance the paper.
Theoretical Claims: The paper introduces a new privacy definition (fixed-graph DP) that's well-formulated for the multi-attribution setting. The authors rigorously prove that their contribution bounding algorithms satisfy this definition and demonstrate how existing DP-SGD and DP-MF accounting techniques can be appropriately applied in this context. The theoretical analysis is sound, properly establishing the properties of their algorithms while acknowledging computational limitations like the NP-hardness of the optimal solution. Overall, the theoretical foundation is solid and well-substantiated.
Experimental Designs Or Analyses: See "Methods And Evaluation Criteria".
Supplementary Material: I have not reviewed the appendix.
Relation To Broader Scientific Literature: The paper addresses a highly relevant question of providing DP guarantees on data simultaneously belonging to multiple users. Despite being common in many realistic and sensitive datasets (emails, messages, collaborative documents), the DP literature has lagged in providing appropriate guarantees for such scenarios. While user-level DP has been extensively studied, its application to multi-attribution settings remained largely unexplored. Fang et al. (2022) proposed Node DP to address this gap, but it had significant practical limitations. This paper makes a valuable contribution by introducing fixed-graph DP as a more practical alternative, bridging an important gap in the privacy literature.
Essential References Not Discussed: None
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Thanks to the reviewer for their support of the paper. We want to respond to the two gaps raised by the reviewer.

* Gap 1: We agree that the graph structure can be a privacy risk. We do not advocate that _every_ algorithm that achieves fixed-graph-DP is appropriate for ML settings, and do not advocate for actually publishing the graph in practice. While we do not formally protect the graph structure in a DP manner, as discussed at the end of Section 2, the structure of our algorithms (which discard the graph information during training) means they resist straightforward attacks (with non-pathological data). Part of our goal with this work is to invite future research on this problem, including designing attacks on our two-phase algorithms in practical settings.
* Gap 2: We agree such a comparison could be useful. However, to the best of our knowledge, the only setting where batch gradient queries with node DP would be readily applicable and feasible at scale is when the hypergraph is already bounded-degree, in which case contribution bounding is a no-op and fixed-graph DP/the comparison are unnecessary. While reductions (by truncating some nodes) to bounded-degree graphs are known, these reductions are only understood for "2-uniform" graphs (that is, the usual graphs, where edges involve only two nodes, as opposed to hypergraphs, which are the focus of our work).
Summary: The work describes how to apply two differentially private training methods to data in which more than one individual can contribute to each data instance (i.e. email senders and receivers, or the author set of a publication). The approach is based on building a subset of the dataset so as to upper bound the number of contributions (i.e. emails involved in, papers authored) each individual is involved in (as a configurable parameter). Several mostly greedy methods are used to find this subset. Once this subset is available, existing DP training methods can be used. The subsetting method does not incur any privacy cost due to the paper's interpretation of an individual's contribution to a dataset: their relationships to instances are not private, only the instance contents are (i.e. the contents of emails or papers are considered private but authorship or sender/receivers of emails are not). Experiments exemplify the method on arXiv paper abstracts with contribution defined by authorship. To investigate some aspects of global authorship metrics, synthetic datasets where authorship is generated are also analyzed. First, the greedy subsetting methods are demonstrated to be relatively close to the optimal by comparing them to solutions derived using integer programming. For the experimental task, masked token prediction is used. A small BERT model is fine-tuned using two DP SGD methods with data drawn as per the proposed methodologies. A baseline without DP is also included. The experiments show mostly expected results with respect to the privacy parameter, though it is also shown that both DP SGD methods have situations under which they perform better.

## update after rebuttal
Discussion regarding edge/node privacy suggested there is a path towards acceptance if the paper better presents its limitations and the consequences of those limitations, both in terms of privacy and what sorts of ML methods the formalism can be applied to (i.e. not graph ML).
I have raised my score to weak reject. The bulk of my other objections, the levels of novelty and contributions, will need to be evaluated by the AC for the final decision.

Claims And Evidence: Yes. Note that the paper seems to avoid claiming anything. Some speculation is done around the experimental results but nothing one could object to. This is a weakness which I discuss further in the weaknesses section.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The paper makes no theoretical claims. There is a point about NP-hardness in the appendix but it is mostly a citation.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Some parts of the appendix.
Relation To Broader Scientific Literature: The methodology's privacy arguments reduce to DP group privacy. Methods (graph building with attribution bounds) are simple enough that they are not based on any recent work. Experimentation uses DP-SGD and DP-MF training methods from private ML research.
Essential References Not Discussed: I did not check. The paper has sufficient weaknesses already.
Other Strengths And Weaknesses:
Strengths:
+ S1. Very well written introductory and background sections.

Weaknesses:
- W1. No take-aways or claims that can inform application of the methodology.
  - Examples of suboptimality of the greedy methods show a trivial reduction of the number of selected nodes as compared to the optimal method, but no claim is made about any suboptimality bounds. It is not clear to what degree one can generalize the experimental observations.
  - The impact of subsetting on the masked model is difficult to assess. No baseline is shown in Figure 2 to demonstrate how much utility is lost due to either the proposed method or DP-SGD/MF.
  - Suggestion G1. Perform sufficient experimentation to write some take-aways that generalize. If possible, prove conditions under which suboptimality can be bounded. Add baselines to all experiments, which should also help with take-aways.
Include a Conclusions section that summarizes the main take-aways.
- W2. Objectionable assumption about privacy in the setting.
  - The "Fixed-graph (multi-attribution user-level) DP" seems to make the assumption that the attribution graph itself is not private. A mechanism that outputs the graph adjacency in the plain (without the hyper-edge data xᵢ) would be ε=0 DP. While user identities (i.e. email addresses) are not explicitly modeled, the disregard of adjacency in the privacy calculus suggests that email sender/recipients or authorship are not private/sensitive information. The argument for why a recipient of an email should be considered a contributor is that the email might be "about" them. The fact that someone is a recipient may carry as much if not more information about them than the content of the message. Database pairs D, D' as per Definition 2.1 differ only in message/abstract contents. This is not the difference one would expect due to the choice of an individual to participate in the dataset or not. Under what circumstances is a user deciding whether or not to participate in an email/abstract dataset not making the choice of being included or not, but instead making the choice of the email contents/abstracts being included or not? Including empty emails/abstracts for emails sent to/received by an individual who wishes not to participate would already be a violation of their agency to consent.
  - Suggestion G2. Either provide scenarios under which the privacy assumptions make sense (email and authorship are not viable as per the above discussion) or switch to the Node privacy setting.
- W3. Limited novelty.
  - The novelty in the "multi-attribution" setting (each instance may have had contributions from more than 1 individual) doesn't seem to matter to the methodology. The same identical arguments and method could have been used in the group privacy setting without multi-attribution.
The impact of multi-attribution is on how large the subgraphs produced by the methods can be, but those greedy methods do not seem to be tailored in any way to multi-attribution. - Suggestion G3. One option is to significantly expand on the take-aways to demonstrate how viable subgraphs with k-bounded attribution are for DP-SGD/MF under a wide range of datasets, tasks, models, etc. This would be more or less a paper focused on addressing W1 above. Alternatively, alternative approaches or techniques that are specific to multi-attribution can be presented and experimented with. Other Comments Or Suggestions: See Weaknesses section above for suggestions. Comments - C1. It is difficult to connect how the synthetic dataset construction models aspects of the real arXiv dataset. One thing that can help is to report quantitative metrics about both the synthetic and real datasets, answering the question: the synthetic dataset with parameter choice $p$ yields metric value $m(p)$, and on the real dataset this metric is equal to $m'$. - C2. Consider repeating experiments that feature non-determinism (even in the train-split picking) and demonstrating results alongside error margins (under some chosen confidence). Many results as plotted seem very close to each other, making it difficult to derive conclusions without considerations of statistical confidence. Small things: - Smallest things: - Several uses of "natural" in the first 2 paragraphs of Section 4 could be reworded to explain the naturalness instead of assuming it is obvious. Questions For Authors: - Q1. Did the BERT-Tiny model already have masked token prediction layers pre-trained? If so, how did it perform the experimental task before fine-tuning on the arXiv abstracts? Code Of Conduct: Affirmed. Overall Recommendation: 2
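The ε=0 observation in W2 can be made concrete with a toy sketch (hypothetical data; `adjacency` is an illustrative stand-in, not a mechanism from the paper): under a neighboring relation where databases differ only in record contents, any function of the attribution structure alone is identical on all neighbors, so releasing it costs no privacy budget under that definition.

```python
# Sketch of the W2 argument: "fixed-graph" neighboring databases differ only
# in record *contents*, so a mechanism that releases only the attribution
# graph's adjacency is constant across neighbors, i.e. eps = 0 under that
# definition. All names and data here are hypothetical illustrations.

def adjacency(database):
    """Return only the attribution structure: which users touch which record."""
    return tuple(tuple(sorted(users)) for users, _content in database)

# Two neighboring databases: identical graph, record 2's content replaced.
D  = [({"alice", "bob"}, "email body A"), ({"bob"}, "email body B")]
D2 = [({"alice", "bob"}, "email body A"), ({"bob"}, "")]  # content redacted

assert adjacency(D) == adjacency(D2)  # output identical on all neighbors
```

The reviewer's point is that sender/recipient structure may itself be sensitive, yet this mechanism leaks it entirely while formally satisfying the definition.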
Rebuttal 1: Rebuttal: Thanks to the reviewer for their feedback. We agree that broadly, there is much more work to be done to fully understand the multi-attribution model, and part of our goal with this work is to motivate future research into this setting. Below we respond to individual points we disagree with in the review: * W1: * __“no claim is made about any suboptimality bounds”__: As mentioned in Appendix B.2, theoretically the problem is known to be hard to approximate to within large factors, even in some very simple settings. So while one could attempt to prove positive theoretical results, these would be overly pessimistic and not give meaningful guidance in practice. * __Q1/“add baselines”__: We use the checkpoint available [here](https://github.com/google-research/bert) as our initialization, which achieves a test loss of 4.955 for arXiv, i.e. is not competitive with DP fine-tuning even at small epsilons. We will state this as a reference point for the empirical results in the revision. Since we introduce fixed-graph DP (and as we discuss in our response to reviewer gGTR, the node-graph DP algorithms are not readily applicable except in trivial settings), besides the not-fine-tuned checkpoint and some other baselines we already included in the paper, we do not feel there are obvious and meaningful baselines to compare our greedy baseline to. However, we are open to suggestions from the reviewer for other baselines to consider adding in the revision. * W2: * We are glad the reviewer raised the question of appropriate DP formulations; in fact, one of the main goals for our paper is to provide a better framework for such discussions. Node-DP is not well known in the ML community currently, and hopefully our paper will provide a clear critique to simply applying example-level or user-level DP to settings where examples contain information from multiple users. 
* At the same time, we also agree that the fixed-graph DP is not as strong as one ideally would want. We spent considerable time looking for ways to scale node-DP, and nevertheless this approach seems very far from being feasible for large-scale ML applications. Given the explosion of GenAI, there is a pressing need for practical approaches in this space that can be offered now. * We do not advocate that _every_ algorithm that achieves fixed-graph-DP is appropriate for ML settings. Our algorithm carefully decouples the use of the graph from the use of the training data. This decoupling excludes the most obvious problematic algorithms (i.e., ones that publish the graph). In fact, despite considerable effort, we were unable to design any realistic attack to exploit our algorithms. At the same time, we also lack a fully satisfactory guarantee. (For example, one can prove statements assuming independence of the training data from the graph structure; but such independence does not hold in practice and we find it unsatisfactory.) * Hence, we propose an approach here that is practical and provides significantly better privacy than the naive application of existing DP ML solutions. Equally importantly, we hope our work highlights the problem, and stimulates future research. We do not advocate for our approach as a complete solution, but feel it is a valuable starting point that already highlights nontrivial choices and phenomena. * W3: * __On lacking novelty__: While our greedy algorithms are not very complicated, we believe the fixed-graph DP definition and the contribution bounding framework are meaningful contributions, and we believe our empirical results developing a better understanding of the greedy baseline raise and study non-trivial questions about contribution bounding. 
* __On tailoring to multi-attribution__: While the greedy algorithm retrieves the trivial strategy maximizing |S| in the single-attribution case, most of our empirical results answer questions which do not make sense to ask in the single-attribution case (e.g. in the single-attribution case, there is no difference from the ILP, no bias towards examples with fewer users). Furthermore, designing a more complicated method tailored to the multi-attribution case is unnecessary if the greedy algorithm is competitive with strong baselines like the ILP solver, which our empirical results provide evidence for.
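The greedy contribution-bounding baseline debated in this exchange can be sketched under one plausible reading of the problem: select examples (hyper-edges) so that each user is attributed to at most k selected examples. This is an illustrative assumption, not the paper's exact algorithm; the processing order and the bias towards examples with fewer attributed users echo the discussion above.

```python
# Hedged sketch of a greedy k-bounded-attribution pass (not the paper's exact
# method): keep an example only if every attributed user stays within the
# per-user bound k. Favoring examples with fewer users first illustrates the
# "bias towards examples with fewer users" mentioned in the rebuttal.
from collections import Counter

def greedy_k_bounded(examples, k):
    """examples: list of sets of user ids; returns indices of kept examples."""
    counts = Counter()
    kept = []
    order = sorted(range(len(examples)), key=lambda i: len(examples[i]))
    for i in order:
        if all(counts[u] < k for u in examples[i]):
            kept.append(i)
            for u in examples[i]:
                counts[u] += 1
    return kept

examples = [{"a"}, {"a", "b"}, {"b", "c"}, {"a", "c"}, {"c"}]
picked = greedy_k_bounded(examples, k=1)
# By construction, each user appears in at most one selected example.
```

An ILP solver would instead maximize the number (or weight) of kept examples globally subject to the same per-user constraints; the empirical question raised in the review is how far this greedy pass falls short of that optimum.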
LMAct: A Benchmark for In-Context Imitation Learning with Long Multimodal Demonstrations
Accept (poster)
Summary: LMAct is a benchmark designed to evaluate the multimodal in-context imitation learning capabilities of state-of-the-art, closed-source large multimodal foundation models (LMs). LMAct systematically evaluates model performance over extremely long-context inputs, testing how effectively these models utilize a varying number of expert demonstrations (spanning multiple orders of magnitude, up to context saturation). The benchmark includes six interactive decision-making tasks of varying complexity: Atari (Phoenix), Chess, Crosswords, DM Control (Cheetah Run), Grid World navigation, and Tic-Tac-Toe. Results suggest that current state-of-the-art multimodal LMs struggle on the benchmark and, in many cases, do not benefit from the additional context/demonstrations being provided. Claims And Evidence: Believe so. Methods And Evaluation Criteria: This is a benchmark paper. The paper could benefit from a more substantial comparison with prior LLM benchmarks, especially those designed for multi-turn agents. The paper also makes an effort to compare different multimodal representations of the state (i.e. ASCII and RGB observations). However, the benchmark contains 6 environments, which may not be broad enough and could be saturated quickly. In addition, all environments are fully observable. It would be beneficial to 1) expand the number of environments and 2) introduce more partially observable environments. Theoretical Claims: N.A. Experimental Designs Or Analyses: Since this is a benchmark paper, see the methods and evaluation criteria section above. Supplementary Material: I checked the additional tables and results presented in the appendix. Relation To Broader Scientific Literature: I think this is particularly relevant to both the current trend of reasoning models and agents.
It is great that this paper identifies areas/tasks where people can evaluate their reasoning models and language models on longer-horizon tasks, and seeing that many current LMs are unable to learn these tasks in context encourages the community to explore further how to enable better CoT and reasoning capabilities. Essential References Not Discussed: The paper seems to overlook the exploration that researchers have done in the direction of in-context learning for control. State-based ICL has a long history (and has been applied to some of the environments listed in 2.2); see the line of work following One-Shot Imitation Learning [1] and Prompting Decision Transformer [2]. Multimodal (i.e. image-action-proprio) ICL for control using next-token prediction / autoregressive models has either trained versions (ICRT [3]) or uses pre-trained LLMs (Moka [4], Prompt a Robot to Walk [6], and RoboPrompt [5]). [1] Y. Duan et al., "One-shot imitation learning," Advances in Neural Information Processing Systems, vol. 30, 2017. [2] M. Xu et al., "Prompting decision transformer for few-shot policy generalization," in International Conference on Machine Learning, PMLR, 2022, pp. 24631-24645. [3] L. Fu et al., "In-context imitation learning via next-token prediction," arXiv preprint arXiv:2408.15980, 2024. [4] K. Fang, F. Liu, P. Abbeel, and S. Levine, "Moka: Open-world robotic manipulation through mark-based visual prompting," Robotics: Science and Systems (RSS), 2024. [5] Y. Yin, Z. Wang, Y. Sharma, D. Niu, T. Darrell, and R. Herzig, "In-context learning enables robot action prediction in LLMs," arXiv preprint arXiv:2410.12782, 2024. [6] Y.-J. Wang, B. Zhang, J. Chen, and K. Sreenath, "Prompt a robot to walk with large language models," in 2024 IEEE 63rd Conference on Decision and Control (CDC), pp. 1531-1538, IEEE, 2024. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: N.A. Questions For Authors: Q1.
I believe ICL for control is an exciting direction. Many prior works have worked towards using autoregressive models for ICL, including but not limited to the ones in the Essential References Not Discussed section. Adding appropriate discussion of these methods can help readers find better footing in the field. Q2: The benchmark currently only has 6 environments. As mentioned earlier, the performance may saturate quickly given the current climate of LLM research. Maybe consider expanding the benchmark or introducing harder tasks (e.g., partially observable instead of fully observable environments). Q3: (Exploratory, less relevant to my evaluation of the paper) In all experiments, it seems that the temperature is set to zero except for the OpenAI models. Does temperature affect ICL capability? Or does it just introduce higher variance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough assessment and insightful feedback. We are pleased that they think that our `paper is particularly relevant to the current trend of reasoning models` and that `it is great that this paper identifies areas where people can evaluate their reasoning models on longer horizon tasks`. **The paper overlooks the relevant line of work on in-context learning for control.** Indeed, the papers the reviewer listed are relevant to the broader context of our work, and we have expanded the related work section to discuss all of these references. While the methods explored in that exciting line of work focus on developing novel agents with strong in-context learning capabilities, our paper's primary goal is to benchmark the current state of existing, general-purpose frontier LMs on such tasks. Understanding how to best leverage insights from specialized agent research to enhance these large, pre-trained models remains an open question, which underscores the importance of establishing clear benchmarks like LMAct. **The paper only has 6 environments, which could lead to quick saturation given the current speed of AI research.** As discussed in Section 4, we designed our benchmark such that it can easily be made more difficult as the capabilities of frontier models evolve. While our benchmark is currently in its “easiest” form (e.g., grid world without obstacles, chess against Stockfish level 0 (less than 1300 Elo)), our results demonstrate that even this version reveals significant limitations in current frontier models. Therefore, we believe that our benchmark provides a strong and high-resolution signal to measure the progress of frontier models. Once our benchmark is saturated (which is far from being the case right now), we can easily construct a more challenging v2. 
**Consider including partially observable environments to expand the benchmark.** We tried to cover a range of environments that are relatively easy for humans but where current models still struggle to reach expert performance. As a result, we chose fully observable tasks (technically, Atari requires at least two frames to determine the velocities) since they have simple, reactive policies and do not require exploration. Partially observable tasks require consistent integration of information across several potentially non-adjacent time steps, which is a great test of a model’s ability to densely attend to the information in the context. Moreover, there is the theoretical problem of self-delusions in imitation of an expert that has hidden information (e.g., the hidden belief state of the agent), which is necessary under partial observability (see Ortega et al. (2021)). We want to keep this benchmark free from these complicating issues and think that the right time to move to such harder tasks is when frontier models easily solve simple, fully observable tasks at expert level (we are currently at beginner level — see the previous response). Independent of that, we agree with the reviewer that it is great to see benchmarks on interactive decision-making tasks under partial observability (perhaps better suited for in-context RL, which avoids the self-delusion problem) and refer to, e.g., the BALROG benchmark. We have added a brief discussion to acknowledge the value of partially observable benchmarks while clarifying our focus on fully observable tasks in the revised manuscript. *Shaking the foundations: delusions in sequence models for interaction and control. Pedro Ortega et al. https://arxiv.org/abs/2110.10819.* **Does the temperature affect the in-context learning capabilities?** We deliberately set the temperature to 0 to ensure the reproducibility of our results wherever possible (the API for the o1 models does not support this option). 
Whether the temperature also affects the performance is an interesting direction for future work. Note that since OpenAI’s models perform relatively well in most of our tasks, the effect of the temperature value does not seem to be very significant in our benchmark.
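The in-context imitation protocol discussed throughout this exchange can be sketched as a simple evaluation loop. In this minimal sketch, `rollout`, the prompt format, and the environment interface are hypothetical placeholders, not LMAct's actual implementation: demonstration episodes are serialized into the context, the current observation is appended, and the model's completion is parsed as the next action.

```python
# Hedged sketch of an in-context imitation-learning evaluation loop.
# The prompt format, env interface, and `model` callable are illustrative
# stand-ins; the actual benchmark's serialization and APIs differ.

def rollout(env, model, demos, max_steps=100):
    """Run one episode, prompting the model with expert demos at each step."""
    # Serialize all demonstration episodes once into a shared context prefix.
    prefix = "\n\n".join(
        "\n".join(f"obs: {o}\nact: {a}" for o, a in episode) for episode in demos
    )
    obs, total_reward = env.reset(), 0.0
    for _ in range(max_steps):
        prompt = f"{prefix}\n\nobs: {obs}\nact:"
        action = model(prompt)  # e.g. greedy decoding at temperature 0
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```

Scaling the number of demonstration episodes in `demos` is what drives the context length up by orders of magnitude, which is the axis the benchmark varies.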
Summary: The paper presents a benchmark to evaluate the capabilities of today's frontier models on multimodal decision-making tasks in the very long context regime. The paper investigates the in-context learning abilities of these models. The authors compare a variety of the latest multimodal LM models on tasks like chess, crosswords, and grid world, among others. Claims And Evidence: The paper investigates the performance of the latest LM models in very large context settings and studies the effect of the number of in-context demonstrations, different observation encoding schemes, and chain-of-thought prompting on the performance of these models. While the authors provide results for each, some insights about how to improve these models on the tasks being studied would be valuable for the community. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Going through the paper at large, the experimental design seems sound. However, it would be great if the authors could include more analysis explaining the results and potentially provide pointers for improving the current models. Supplementary Material: Yes, but not in great detail. Relation To Broader Scientific Literature: The results provided in the paper are valuable to the community for understanding the performance limitations of the latest frontier models. As mentioned above, more analysis of the results would be valuable to the community. For instance, since a lot of tasks have performances much lower than the expert performance, an analysis of common failure modes along with potential improvement strategies would be useful. Essential References Not Discussed: Being research in a closely related but not the same field, the references seem adequate to me. Other Strengths And Weaknesses: Strengths - The authors address an important problem of evaluating the latest frontier models on a series of long-context tasks.
- The paper covers a wide range of tasks including games (Atari, chess, tic-tac-toe), grid environments (grid world, tic-tac-toe, crosswords), and physical simulators (DMC cheetah run). - The paper also considers a wide variety of frontier models for the comparisons, making for a comprehensive study. - The paper also studies the effect of different observation encoding strategies, number of in-context demonstrations, and chain-of-thought prompting. Weaknesses - It would be great if the authors could include more analysis about failure modes and potential improvement strategies to further advance the current state of these frontier models. Other Comments Or Suggestions: - From section 2.1, the o1 models use a larger context length (8192 tokens) than the other models (2048 tokens). Could this be the reason why they perform better than the other models on tasks like Crosswords and Tic-Tac-Toe? - Are rewards from the demonstrations also given as input in the context? If yes, are they given as per-step rewards or as an accumulated value (similar to value functions in reinforcement learning)? - Are 100 steps in an episode enough for most tasks? What about tasks that require more steps (for example in robotics, where an episode could go on for a longer time)? Also, how did the authors come up with 100 steps? Can it be attributed to the limited context length of the current models? - Can the authors provide an intuition of why there are a lot of illegal actions in chess and crosswords? Could adding the rules of the game in context be helpful in this case? - It would be great if the authors could provide an insight into why the performance degrades with an increasing number of demonstrations in DMC cheetah run. Questions For Authors: It would be great if the authors could address the questions from the previous section and in Weaknesses. I would be happy to raise my score after these questions are addressed. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their careful assessment and constructive feedback. We are pleased that they think our `paper addresses an important problem`, our `paper covers a wide range of tasks`, and our `paper considers a wide variety of frontier models, making for a comprehensive study`. **Could you include more analysis about failure modes and potential improvements?** We agree that one of the most important follow-ups is to analyze why models fail on our benchmarks and how to address these failures. We have already conducted a substantial set of such experiments in Appendix C, including: * Checking whether models can successfully “replay” (or copy) an episode, which most models can (Appendix C.2) * Checking to which degree bad performance can be attributed to the percentage of illegal vs. legal but suboptimal actions (Appendix C.3) * Investigating the fraction of repeated actions in Atari – Phoenix, which reveals that models have a high tendency to repeat the previous actions, which leads to few shots being fired, explaining why the performance is worse than the random action baseline (Figure A14) * Investigating whether failures can be attributed to various hyperparameter settings, e.g., checking whether showing legal actions alleviates the problem of having to infer the correct action format from the demonstrations (Appendix C.4) The trends of these investigations are complex (often model and task-dependent), and definitive answers would require sophisticated further analysis, which we consider outside the scope of our benchmark paper. We have added a compact discussion of this analysis to Section 4 in the updated manuscript. 
**Do the o1 models perform better because they use a larger context length (8192 tokens) than the other models (2048 tokens)?** Section 2.1 states that the o1 models use a larger *output sample length* (i.e., the maximum number of tokens to generate, including the “thinking tokens” for o1), not a larger context length. The (input) context length is the same across all models. We ablated the maximum sample length in Figure A7 and showed that only the o1 models benefit from a larger sample length. This is because the o1 models cannot finish their reasoning traces if they do not have sufficient “thinking tokens” at their disposal. Since our goal is to evaluate the models’ best possible performance, we set the output sample length to 8192 for the o1 models (which is expensive) and 2048 for all other models (since this is cheaper and does not affect their performance). **Do you also provide the rewards from the demonstrations?** No, we only perform in-context imitation learning, so we do not provide reward information to the models. We only use the rewards to compute the model scores for our benchmark results. We consider in-context reinforcement learning an interesting direction for future work. **Are the 100 steps per episode sufficient for most tasks? Can this number be attributed to the limited context length of current models?** Yes, for chess, crossword puzzles, gridworld navigation, and tic-tac-toe, 100 steps are more than sufficient to complete an episode (e.g., for chess, the average number of steps per game is 38 — see Section 2.2). However, for Atari and DM Control, we would ideally want to evaluate more steps (e.g., we only evaluate roughly 6 seconds of Atari play — see Section 2.2), but the context size limitations of current frontier models (32 Atari demonstrations episodes with 100 steps already hit the 1M token limit) do not allow this evaluation. 
We chose a maximum of 100 steps to be consistent across tasks while still providing a meaningful numerical signal (e.g., the models already fail in the early stages of Atari). **Why are there a lot of illegal actions in chess and crosswords? Could adding the rules of the game to the context be helpful?** The action space for these two tasks is more complex than for the other tasks (maybe except for Cheetah Run, which most models seem to be able to match, though). Adding game-specific explanations would probably help increase the performance, but our study is to investigate task-agnostic in-context learning behavior since such game-specific behavior may not always be available for real-world tasks (e.g., videos of human experts performing tasks in dynamic environments). **Why does the performance degrade with increasing demonstrations for Cheetah Run?** We do not know with certainty. Figure A18 shows that it cannot be attributed to an increased number of illegal actions, i.e., the models’ actions are increasingly suboptimal rather than illegal. A definitive answer to what causes in-context interference would require a thorough investigation, which is beyond the scope of this benchmark paper, but we have added a brief discussion of these observations and open questions to the paper. We hope our responses and revisions effectively address the reviewer’s concerns and clarify the paper's contribution.
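As a sanity check on the context-budget figure quoted above (32 Atari demonstration episodes of 100 steps already hitting the 1M-token limit), a quick back-of-envelope calculation gives the implied per-step token cost. This is rough arithmetic from the quoted numbers, not the paper's exact accounting:

```python
# Rough per-step token cost implied by the quoted numbers: a 1M-token context
# filled by 32 demonstration episodes of 100 steps each. Illustrative only.
context_limit = 1_000_000
episodes, steps_per_episode = 32, 100
tokens_per_step = context_limit / (episodes * steps_per_episode)
# => 312.5 tokens per observation-action step on average
```

At hundreds of tokens per step, evaluating substantially longer Atari or DM Control episodes would indeed exceed current context windows.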
Summary: The authors created a benchmark for empirical evaluation of the multimodal in-context imitation learning capabilities of some state-of-the-art LLMs (Claude 3.5 Sonnet, Gemini 1.5 Flash, Gemini 1.5 Pro, Gemini 2.0 Flash Experimental, GPT-4o, o1-mini, o1-preview, and o1) on several interactive decision-making tasks: playing tic-tac-toe, chess, and Atari, navigating grid worlds, solving crosswords, and a DM Control task. The authors show that even when optimizing the prompt (number of demonstrations, chain-of-thought prompting, etc.) for each model and task, frontier LLMs fail to reach expert performance on Atari, chess, and DM Control. Some models approach expert performance on crosswords, grid world, and tic-tac-toe. All models beat the random action baselines except on Atari. The authors vary the number of expert demonstration episodes in the context from 0 up to 512 (the limit depends on the model and the task) and find that performance is mostly independent of the number of demonstrations. In some cases, we observe strong in-context learning. The authors run a control experiment where LLM agents need to replay the single demonstration episode in the context, in which all models except for o1-mini perform well. The authors plan to make their benchmark publicly available. Claims And Evidence: The authors created a benchmark that tested some existing large language models, but existing SOTA models also include QWEN [1] and LLAMA [2]. The benchmark is aimed at studying models on a long context, but the authors did not compare their approach with another existing benchmark that allows studying large language models on long contexts, BABILong [3]. [1] Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., ... & Zhu, T. (2023). Qwen technical report. arXiv preprint arXiv:2309.16609. [2] Grattafiori, A., Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., ... & Vasic, P. (2024). The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.
[3] Kuratov, Y., Bulatov, A., Anokhin, P., Rodkin, I., Sorokin, D., Sorokin, A., & Burtsev, M. (2024). BABILong: Testing the limits of LLMs with long context reasoning-in-a-haystack. Advances in Neural Information Processing Systems, 37, 106519-106554. Methods And Evaluation Criteria: The work is devoted to the creation of a large-scale benchmark; in general, the estimates it allows one to obtain are effective and can be applied to the assessment of large language models in the future. At the same time, the practical usefulness of this benchmark raises questions: even if a model successfully solves the typical game problems given in the benchmark, its portability to controlling complex real intelligent agents operating in a real environment (for example, robotic agents) remains unclear. Theoretical Claims: The work does not contain theoretical novelty; the main provisions of the work are related to obtaining empirical and practical results. Experimental Designs Or Analyses: The soundness and validity of the experimental design are beyond doubt. At the same time, the lack of experiments with some other SOTA models (QWEN, LLAMA, etc.) is a drawback. Supplementary Material: The Supplementary Material is quite detailed, and the authors also included code that they plan to make open source. Relation To Broader Scientific Literature: Key contributions of the paper are related to the broader scientific literature. The authors provided a fairly comprehensive analysis of existing approaches. Essential References Not Discussed: The article lacks references to articles devoted to other SOTA LLMs: QWEN [1], LLAMA [2]. There is no reference to the benchmark for studying models on a long context, BABILong [3]. There is no mention of specialized transformer models specifically developed for similar long-context tasks: RMT [4], RATE [5]. [1] Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., ... & Zhu, T. (2023). Qwen technical report.
arXiv preprint arXiv:2309.16609. [2] Grattafiori, A., Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., ... & Vasic, P. (2024). The Llama 3 herd of models. arXiv preprint arXiv:2407.21783. [3] Kuratov, Y., Bulatov, A., Anokhin, P., Rodkin, I., Sorokin, D., Sorokin, A., & Burtsev, M. (2024). BABILong: Testing the limits of LLMs with long context reasoning-in-a-haystack. Advances in Neural Information Processing Systems, 37, 106519-106554. [4] Bulatov, A., Kuratov, Y., & Burtsev, M. (2022). Recurrent memory transformer. Advances in Neural Information Processing Systems, 35, 11079-11091. [5] Cherepanov, E., Staroverov, A., Yudin, D., Kovalev, A. K., & Panov, A. I. (2023). Recurrent action transformer with memory. arXiv preprint arXiv:2306.09459. Other Strengths And Weaknesses: The originality of the work arises from creative combinations of existing environments for studying problems on a long context. The main weakness of the work is the insufficient completeness of comparison with existing LLMs, as well as the insufficient applicability to real-world problems. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: I would like to know from the authors whether it is possible to test transformer models created for solving long-context tasks, such as RMT [1] and RATE [2], on their benchmark, and what their quality indicators would be. [1] Bulatov, A., Kuratov, Y., & Burtsev, M. (2022). Recurrent memory transformer. Advances in Neural Information Processing Systems, 35, 11079-11091. [2] Cherepanov, E., Staroverov, A., Yudin, D., Kovalev, A. K., & Panov, A. I. (2023). Recurrent action transformer with memory. arXiv preprint arXiv:2306.09459. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review, constructive feedback, and pointers to additional relevant related work. We are pleased that they think that `the soundness and validity of the experimental design are beyond doubt` and that `the originality of the work arises from creative combinations of existing environments to study problems in a long context`. **Why did you not evaluate the two other state-of-the-art models QWEN and Llama 3?** We had the dilemma of choosing a set of models to evaluate and ultimately settled on *8* models but did not mean to imply that any models outside of this set are not state-of-the-art. Unfortunately, we were unable to evaluate Llama 3 due to licensing issues, which is why we decided to focus on closed-weights models (which we explicitly stated in Section 2.1). Nevertheless, we fully agree that both QWEN and Llama 3 would be interesting additions to our benchmark and have discussed them in our revised manuscript. Since the landscape of state-of-the-art models is rapidly evolving, we open-sourced our benchmark to enable the community to evaluate QWEN, Llama 3, and any other future models using our approach. **Why did you not evaluate transformers created for long-context tasks (such as RMT and RATE) on your benchmark?** As the reviewer points out, RMT and RATE are specialized transformer models that were specifically developed for long-context tasks. However, as stated in Section 1, our goal is to evaluate *state-of-the-art LMs* in dynamic environments. Nevertheless, we agree that innovations around augmenting transformers with recurrency (and forms of more explicit retrieval) are very interesting approaches to potentially overcome LMs’ performance limitations in our benchmark, and we have expanded the related work section to discuss these approaches. Similar to QWEN and Llama 3, we are excited to see how the community will leverage the open-source nature of our benchmark to improve their models. 
**What is the practical usefulness of this benchmark?** We thank the reviewer for raising the important point regarding practical usefulness and transferability. While the LMAct environments are simpler than complex real-world scenarios like robotics, they require fundamental agentic capabilities (long-context multimodal understanding, in-context imitation learning) that are often tested in robotics research and likely necessary for real-world success. LMAct serves as a controlled testbed to evaluate these core skills, diagnose current model limitations, and guide research toward more generally capable agents. Although direct transfer is beyond this paper's scope, these findings can inform future work addressing that challenge. We have added a brief discussion clarifying this focus in the revised manuscript. **Why did you not compare your benchmark to BabiLong?** The BabiLong benchmark is relevant in the wider context of our work, and we have added a brief discussion in the related work section. Thanks for the pointer! --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response and I am upgrading my rating to "Accept".
Summary: This paper benchmarks the decision-making ability of several frontier multimodal models in interactive environments through in-context imitation. It investigates whether these models can be effectively prompted with few- or many-shot demonstrations to solve interactive tasks. The overall finding is that most frontier models fail to reach expert-level performance through prompting, and their performance remains independent of the number of demonstrations. Claims And Evidence: All claims are clear and supported with convincing evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: Yes, they are related to in-context imitation of decision making with LLMs. The finding here is in line with the finding in [1] that zero-shot LMs cannot perform vision-based decision-making effectively in interactive environments. [1] BALROG: Benchmarking Agentic LLM and VLM Reasoning on Games Essential References Not Discussed: No. Other Strengths And Weaknesses: While the experiments seem simple, they are well designed and presented. Even though the results are not too surprising, they are an important sanity check of the current capability of frontier models for interactive decision making. Other Comments Or Suggestions: No. Questions For Authors: 1. How do you make sense of more demonstrations having little and sometimes negative impact? 2. How should we change the training of these frontier models so that they are more effective with in-context prompting at solving interactive decision-making tasks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and interesting questions. We are pleased that they think that our `claims are clear and supported with convincing evidence`, that our `experiments are well designed and presented`, and that our `results are an important sanity check of the current capability of frontier models for interactive decision making`. **Why do more demonstrations have little and sometimes negative impact?** At least in theory, our fully observable tasks do not require an optimal policy to attend to information more than one step in the past. Accordingly, an LM’s performance on our tasks must lie somewhere between two extremes: * If the LM has stored the optimal policy in its weights, then any historical information would only inform the model about the task at hand and which of its in-weights policies to select (for which even a single observation might suffice). * If the LM has no suitable pretrained policies in its weights and, therefore, would have to rely purely on distilling the expert behavior in the context, more demonstrations should increase performance. In practice, the LMs’ behavior lies somewhere between these extremes and, as our results show, greatly varies per model and task. Possible “mechanical” explanations for this behavior are that LMs are pretrained to attend sparsely over their context (in which case more observations would only help marginally), have a recency bias (which would make attention beyond the current/previous episode even less likely), and are usually not explicitly trained for in-context learning from examples (although it’s a somewhat emergent capability of pre-training). We mostly observe negative impact, i.e., “in-context interference” for Cheetah Run and gridworld navigation with ASCII observations. 
In both cases, our analysis of the percentage of illegal actions (Figures A18 and A19) suggests the problem is not due to an increased number of illegal actions but, instead, increasingly suboptimal actions. A definitive answer to what causes in-context interference would require a thorough investigation, which is beyond the scope of this paper (which introduces the benchmark), but we have added a brief discussion of these observations and open questions to the paper. Please also see the summarized findings of our investigations of failure modes (mostly appendix results) in our response to reviewer `46g3`, under “Could you include more analysis about failure modes and potential improvement strategies?”. **How should we change the training of these frontier models to be more effective with in-context prompting when solving interactive decision-making tasks?** While we do not have a definite answer, it seems plausible that pretraining or finetuning with data from interactive decision-making tasks, and, in particular, in-context imitation of an expert policy, would be quite effective. Another promising direction could be to focus on models capable of general in-context reinforcement learning, which is a bit different than our in-context imitation setting, but in principle, all our tasks could easily be extended by providing additional reward observations. We consider these very interesting and important directions for future research and have updated our manuscript with the above discussion.
Canonic Signed Spike Coding for Efficient Spiking Neural Networks
Reject
Summary: The paper aims to improve the conversion of Artificial Neural Networks (ANNs) to Spiking Neural Networks (SNNs) by developing a more efficient spike coding scheme, which has improved encoding capacity and reduced computational overhead. Claims And Evidence: Based on a careful review, the claims in the paper are generally well-supported by evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria align with the research problem that reduces the conversion loss between ANN and SNN. Theoretical Claims: The paper's theoretical proofs are mathematically rigorous and provide support for the proposed Canonic Signed Spike (CSS) coding scheme. Experimental Designs Or Analyses: The experimental designs are sound, validating the proposed Canonic Signed Spike coding scheme effectively. Experimental results demonstrate their method's performance and advantages. Supplementary Material: I thoroughly reviewed the entire Appendix sections (A-E) in the document. The appendices provide critical mathematical foundations and supplementary experimental evidence. Relation To Broader Scientific Literature: The paper's key contributions are within the ANN-SNN conversion learning algorithm. More specifically, the authors extend work by Li et al. (2022) and Wang et al. (2022) that uses negative spikes, and then introduces a more systematic approach to negative spike correction. Compared to previous methods, the proposed methods provide more efficient information encoding. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The authors develop a spike coding scheme called CSS that has improved encoding capacity and reduced computational overhead. 2. The paper's theoretical proofs are mathematically rigorous and support the proposed CSS coding scheme. 3. Experimental results demonstrate their method's performance. Weaknesses & Questions: 1. 
Can the paper's method be extended to broader network architectures, such as Spiking Transformers that contain the attention mechanism? 2. Why did the authors only validate their method on simple image classification tasks? Notably, state-of-the-art methods in ANN-SNN conversion like Fast-SNN[1] and SpikeZIP-TF[2] have verified their methods across multiple tasks, such as object detection, semantic segmentation, and natural language understanding. 3. The SyOPs and energy efficiency claims in Table 4 require additional re-examination. Why is the energy consumption calculated for the rate-based and the CSS-based methods lower than that of the single-spike TTFS method? 4. Can the proposed method support conversion on neuromorphic datasets? 5. Why did the authors not compare with the most recent state-of-the-art ANN-to-SNN conversion algorithms[2,3,4]? Refs: [1] Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN [2] SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN [3] Towards High-performance Spiking Transformers from ANN to SNN Conversion [4] A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks Other Comments Or Suggestions: It is recommended that the authors refer to Fast-SNN[1] and SpikeZIP-TF[2] for experimental design. [1] Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN [2] SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN Questions For Authors: See Weaknesses & Questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review. Below, we address some key points of your concerns. --- First, we would like to emphasize that our core contribution lies in the innovation of the ___encoding method___. Our work is not a continuation of Li et al. (2022) and Wang et al. (2022) because our core focus is on weighting the spikes, whereas they still rely on rate coding. Although both approaches use negative spikes, their roles are different, as we explained in Section 4.2. By leveraging spike weighting, we enhance the encoding capacity of spike sequences. Compared to mainstream rate coding or TTFS coding, our approach significantly reduces the required timesteps. __Our method follows the existing ANN-SNN conversion framework, making it straightforward to transition from rate coding to CSS coding__. Since the core objective of ANN-SNN conversion is to accurately represent ANN activations using spike sequences, our approach is not limited to specific network architectures or tasks but rather __provides a broadly applicable optimization__. Our experimental design aims to verify the accuracy of the encoding and the effectiveness of the TSA design. We choose image classification as the primary task because it is the most common benchmark in SNN research. We adopt CNN architectures as they are still the most widely used in ANN-SNN conversion. Additionally, we compare with CNNs of _the same structure_ to highlight the impact of the encoding method, thereby better demonstrating the value of our work. To address your concerns regarding applicability, we have further extended our method to ViTs and object detection tasks, with preliminary results provided below. ### Extended Experiments #### Conversion of Transformer architectures To demonstrate that CSS can also encode activations in Transformers, we converted ViT-S and ViT-B for the ImageNet classification task. 
We used the pre-trained weights provided in SpikeZIP-TF [1], where "32Level" denotes the quantization precision. As shown, our encoding scheme significantly reduces the required timesteps under the same weights. Additionally, we provide the actual runtime for a more intuitive comparison.

Method|Architecture|Param|Encoding Scheme|Timestep|Acc.|Runtime
:-|:-|:-|:-|:-|:-|:-
SpikeZIP-TF|ViT-S-32Level|22.05M|rate|64|81.45%|3492.62s
CSS-SNN|ViT-S-32Level|22.05M|CSS|6|81.51%|325.55s

#### Object detection tasks

To demonstrate that CSS can be applied to object detection tasks, we conducted experiments on the VOC2007 dataset using the same architecture and weights as in Fast-SNN [2]. The results are shown in the table below. As observed, our method not only reduces the required timesteps but also significantly lowers the conversion loss.

Method|Architecture|ANN mAP|Encoding Scheme|Timestep|SNN mAP
:-|:-|:-|:-|:-|:-
Fast-SNN|YOLOv2(ResNet-34-4b)|76.16|rate|15|76.05
CSS-SNN|YOLOv2(ResNet-34-4b)|76.16|CSS|4|76.18
Fast-SNN|YOLOv2(ResNet-34-3b)|75.27|rate|7|73.43
CSS-SNN|YOLOv2(ResNet-34-3b)|75.27|CSS|3|75.20

In addition, we have also included the experiment on the neuromorphic dataset in our response to Reviewer cfNu, which you may find useful for reference. We would like to emphasize once again that __our core contribution lies in the nonlinear encoding scheme, which has broad applicability__. For existing rate-based ANN-SNN frameworks, one only needs to replace the IF neurons with TSA neurons, as we have done in the experiments above.

[1] SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN
[2] Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN

### Energy Estimation

Our energy estimation is based on an open-source codebase, and we have carefully re-examined the code without identifying any issues. Although it may seem counterintuitive, it is entirely plausible that CSS exhibits lower energy consumption than TTFS.
First, CSS operates with _only three timesteps_, meaning each neuron can fire at most three spikes. Second, the CSS encoding scheme applies _a more aggressive quantization to ANN activations_—many small activations in the ANN are encoded as zero. In contrast, TTFS _utilizes more timesteps for fine-grained quantization_, encoding a greater number of "less important" activations. As a result, __although TSA neurons may fire multiple spikes, significantly fewer neurons are activated in CSS coding__. This combined effect leads to CSS achieving lower overall energy consumption. --- If you have any further questions, we would be happy to address them. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I am very grateful to the author for conducting additional experiments to answer my questions, and some of my questions have been resolved, so I will modify my rating. However, I still have the following concerns/questions. 1. Can the proposed method support neuromorphic datasets? 2. The authors have effectively demonstrated the feasibility of their approach in visual processing tasks. As they note, "the core contribution lies in the nonlinear encoding scheme, which has broad applicability. For existing rate-based ANN-SNN frameworks, one only needs to replace the IF neurons with TSA neurons." However, it remains unclear whether this method could be successfully applied to text-based tasks such as NLP or NLU. Furthermore, the potential applicability to speech processing tasks, which inherently contain richer temporal information structures, warrants investigation. How might the proposed encoding scheme perform when extended to these different tasks? 3. The authors state: "our approach follows a nonlinear accumulation process. This key difference allows us to accumulate information more quickly, thereby reducing the required timesteps." Does the proposed method introduce floating-point multiplication operations? 
Does this compromise the important spike-driven advantages of SNNs? 4. Although TTFS uses more time steps for fine-grained quantization, it emits at most one spike across all time steps. With the same network structure, the number of spikes emitted by TTFS encoding should be less than that of the method proposed in this paper. Additionally, how should we understand the authors' explanation that "TTFS encodes more 'less important' activations"? --- Reply to Comment 1.1.1: Comment: Thank you for raising your score and providing further feedback! We are happy to address your concerns.

### Additional Experiments

#### Neuromorphic Datasets

We have already presented results on the _DVS128Gesture_ dataset in our response to Reviewer cfNu. Here, we further include experiments with ResNet-18 on the _CIFAR10-DVS_ and _N-Caltech101_ datasets. We implemented a simple rate coding scheme as the baseline, and the results are presented in the table below. The results demonstrate that our method is __fully compatible with neuromorphic datasets__, significantly reduces the number of timesteps, and further mitigates the conversion loss.

Method|ANN Acc.|Dataset|Coding Scheme|T|SNN Acc.
:-|:-|:-|:-|:-|:-
-|90.94%|DVS128Gesture|rate|128|90.56%
CSS-SNN|90.94%|DVS128Gesture|CSS|6|90.89%
-|83.03%|N-Caltech101|rate|128|82.51%
CSS-SNN|83.03%|N-Caltech101|CSS|8|82.76%
-|78.35%|CIFAR10-DVS|rate|256|77.87%
CSS-SNN|78.35%|CIFAR10-DVS|CSS|8|78.15%

#### Natural Language Processing

We have already demonstrated the effectiveness of our method on the Transformer architecture, __making its application to NLP tasks a natural extension__. We conducted experiments using the RoBERTa model on the _IMDB Movie Review_ and _SST-2_ datasets, using the pretrained ANN provided in SpikeZIP-TF for conversion. The results are presented in the table below, demonstrating the effectiveness of our method on NLP tasks. We additionally report the runtime, which clearly highlights the efficiency of our approach.
Method|Arch|Dataset|Coding Scheme|T|Acc.|Runtime
:-|:-|:-|:-|:-|:-|:-
SpikeZIP-TF|RoBERTa-B-32Lv|SST-2|rate|64|92.32%|169.45s
CSS-SNN|RoBERTa-B-32Lv|SST-2|CSS|5|92.32%|19.68s
SpikeZIP-TF|RoBERTa-B-32Lv|IMDB-MR|rate|64|81.30%|4964.80s
CSS-SNN|RoBERTa-B-32Lv|IMDB-MR|CSS|5|81.36%|489.51s

#### Audio Classification

Our method can also be applied to audio processing. We conduct audio classification tasks using ResNet-18 on the _GTZAN_ and _ESC-50_ datasets. The results, shown in the table below, further demonstrate the strong applicability of our method.

Method|ANN Acc.|Dataset|Coding Scheme|T|SNN Acc.
:-|:-|:-|:-|:-|:-
-|90.62%|GTZAN|rate|256|89.54%
CSS-SNN|90.62%|GTZAN|CSS|8|90.28%
-|75.15%|ESC-50|rate|256|74.97%
CSS-SNN|75.15%|ESC-50|CSS|8|75.00%

We have already demonstrated the applicability of our design across visual, textual, and auditory tasks, as you suggested. We sincerely appreciate your constructive suggestions and believe these additional results further enrich our work. However, we hope you understand that it is impractical to enumerate and evaluate our method on every possible task. We would like to reiterate that __our method is not limited to specific models or tasks__. Instead, it __introduces a general innovation in encoding__. Our proposed CSS coding significantly reduces the number of timesteps while preserving the simplicity of the rate-based conversion process. Wherever ANN-to-SNN conversion is applicable, our method can be readily adopted. This makes our approach a substantial contribution to the SNN community.

### Further Clarifications

#### Efficient Implementation of Spike Weighting

In our method, spike weights are applied by _doubling the membrane potential at each timestep_, which corresponds to a left shift in hardware. Since the shift amount is fixed, _it can be implemented purely through wiring_, eliminating the need for shift registers.
__This design introduces no floating-point operations, preserves the spike-driven nature of SNNs, and incurs negligible hardware cost__. For a more intuitive understanding, we provide an illustration of the reference design at [this URL](https://anonymous.4open.science/r/ICML-2025-Rebuttal-0DE7/README.md). You may also refer to our response to Reviewer MkoZ for more detailed information. #### Reduced Spike Count To address your follow-up question, we offer a more detailed breakdown of the spike counts. Suppose the activation value lies within $[0, x_p]$: - For TTFS coding [2] with $T = 64$, _activations in $[\frac{1}{64}x_p, x_p]$_ will be encoded as exactly _one spike_; - For CSS coding with $T = 3$, _activations in $[\frac{1}{8}x_p, x_p]$_ are encoded into _a spike sequence_. Considering the typical activation distribution in ANNs—where a large proportion of activations are close to zero—TTFS ends up encoding more values. Also note that CSS uses only three timesteps, requiring just around 1.5 spikes per activation. We visualize the activation distribution and report the average spike counts for encoding in each layer at [this URL](https://anonymous.4open.science/r/ICML-2025-Rebuttal-Act-Dist-C538/README.md). --- Finally, we believe our method introduces a meaningful innovation and delivers strong practical effectiveness, contributing to the advancement of ANN-SNN conversion. We sincerely appreciate the time and effort you have dedicated to reviewing our work!
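To make the doubling-based weighting in this thread concrete, the toy Python sketch below encodes a normalized activation into a short spike train by repeatedly doubling a membrane potential (the left-shift operation referred to above) and decodes it as a weighted sum. It deliberately omits the negative-spike (OFC) correction and other TSA details, so it is only a rough approximation of the actual CSS scheme; the function names are illustrative.

```python
def encode(v, T):
    """Encode v in [0, 1) into T binary spikes via membrane doubling.

    Doubling the potential each step (a left shift in hardware) means the
    spike emitted at step t carries an implicit weight of 2**-(t + 1).
    Note: no negative spikes / OFC correction in this toy version.
    """
    u, spikes = v, []
    for _ in range(T):
        u *= 2.0                    # membrane potential amplification
        s = 1 if u >= 1.0 else 0    # threshold comparison (threshold = 1)
        u -= s                      # subtract the threshold on firing
        spikes.append(s)
    return spikes

def decode(spikes):
    """Reconstruct the activation as the weighted sum of the spikes."""
    return sum(s * 2.0 ** -(t + 1) for t, s in enumerate(spikes))
```

With only `T = 3` steps, `decode(encode(0.7, 3))` already reconstructs 0.7 to within 2**-3, which mirrors why nonlinear accumulation needs far fewer timesteps than linear (rate or TTFS) accumulation.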
Summary: The paper proposes the Canonic Signed Spike (CSS) coding scheme, which enhances encoding capacity while maintaining network simplicity. Additionally, the Over-Fire-and-Correct method is introduced to enable efficient computation. The primary contribution lies in minimizing conversion loss when transforming artificial neural networks (ANNs) into spiking neural networks (SNNs). Claims And Evidence: The paper claims to achieve minimal conversion loss from ANN to SNN while preserving computational efficiency. However, while the proposed approach is promising, it shares similarities with existing methods, particularly the work of Stöckl & Maass (2021). Despite this, the proposed method does not appear to surpass existing techniques in terms of flexibility. Methods And Evaluation Criteria: The validation methods used in the paper are correct. The authors apply Leaky Integrate-and-Fire (LIF) neurons for SNN conversion and introduce modulation mechanisms to ensure precise transformation. The methodology is well-grounded in established conversion techniques. Theoretical Claims: The theoretical analysis focuses on conversion error, and no apparent errors are present in the conclusions. The mathematical formulations appear consistent with existing ANN-to-SNN conversion frameworks. Experimental Designs Or Analyses: The experimental evaluation conducted on CIFAR-10 and ImageNet is reasonable and aligns with the standard benchmarks used in SNN research. Supplementary Material: The authors did not provide supplementary materials. Relation To Broader Scientific Literature: The work primarily targets neuromorphic applications, significantly reducing deployment costs by eliminating the need for quantization training. Essential References Not Discussed: The references used in the paper are well-structured and appropriate for the study. 
Other Strengths And Weaknesses: The proposed method does not require model quantization, which reduces training requirements and enhances deployment efficiency. Other Comments Or Suggestions: None. Questions For Authors: Can the proposed method be extended to support activation functions beyond ReLU, similar to the approach in Stöckl & Maass (2021)? The paper claims that the algorithm is hardware-friendly. Could the authors elaborate on how it can be efficiently implemented in hardware? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review. Below, we address some key points of your concerns. --- ### **Reference Implementation in Hardware** Compared to traditional rate coding with IF neurons, our method introduces three additional components: _1. Membrane potential amplification_, _2. Silent period control_, and _3. Handling input and output of negative spikes_. _Silent period control is efficiently managed by a state register_. When a neuron starts computing, the register outputs 0. After $P$ clock cycles (where $P$ is the silent period length), it switches to 1 and remains there until a reset. This control signal is __shared across multiple neurons__, resulting in minimal hardware overhead. _Membrane potential amplification is implemented using a simple shift operation_. Since the shift amount is fixed, this can be achieved simply by _introducing a grounding line (logic 0) at the least significant bit (LSB)_ of the membrane potential input to the adder. Specifically, the [n-2:0] bits are hardwired to the adder's input [n-1:1], while the LSB of the input is tied to 0. By performing threshold comparison, the system ensures that the residual value does not exceed $2^{n-1}$, thus preventing overflow. As a result, there is no additional cost for the amplification operations. _Handling negative spikes is straightforward and requires a two’s complement addition_, which can be efficiently performed by the adder. Specifically, the most significant bit (MSB) of the membrane potential is used to determine its polarity. If the MSB is 1, we take the two’s complement and compare it with the positive threshold in the comparator. This approach effectively compares the absolute value of the membrane potential with the threshold, __maintaining a single comparator__, which incurs minimal hardware overhead. Based on the comparison and the MSB value, we can determine both the presence and the polarity of the spike.
__Thus, the only notable addition is the two’s complement unit__. The silent period control and spike emission modules contribute negligible overhead. We provide an illustration of the reference design at [this URL](https://anonymous.4open.science/r/ICML-2025-Rebuttal-0DE7/README.md). _The amplification operation incurs virtually no overhead and constitutes only a small proportion of the total operations_. For details on the proportion, please refer to our response to Reviewer ujyj. Based on our analysis and experimental results, __our method is hardware-friendly and has minimal impact on energy consumption__. ### Our Main Contributions First, we would like to clarify that we do not use LIF neurons ($\beta<1$); instead, we introduce the novel TSA neuron ($\beta>1$). The difference in $\beta$ leads to distinct weight patterns and using LIF neurons would cause significant conversion errors due to the residual membrane potential at the end. Additionally, we introduce a negative threshold mechanism to further reduce inference latency. For more details, please refer to our response to Reviewer cfNu. Second, we highlight the differences between our work and that of Stöckl & Maass [1]. Their approach relies on complex neuron designs and increased computational latency to ensure conversion accuracy and support GeLU activation. In contrast, while our method does not support GeLU activation, it maintains network simplicity and significantly reduces inference latency. Moreover, ReLU activation remains the most widely adopted target for conversion. For additional experiments related to latency, please refer to our response to ujyj. Lastly, we emphasize that our approach leverages stepwise weighting, which incurs __minimal additional cost__ while delivering substantial benefits. Our method is also __flexible__, _as it adheres to the standard ANN-SNN conversion framework, requiring only the replacement of IF neurons with TSA neurons_. 
This enables our approach to be effectively applied to architectures such as Transformers and tasks like object detection, significantly __reducing the required number of timesteps__. For further details, we have provided additional experiments on these aspects in our response to pam4 for your reference. [1] Optimized Spiking Neurons Classify Images with High Accuracy through Temporal Coding with Two Spikes --- If you have any further questions, we would be happy to address them.
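The single-comparator decision described in this rebuttal (the MSB selects spike polarity; negative potentials are converted via two's complement so one comparator checks the magnitude against the positive threshold) can be mimicked in software. The sketch below is a toy model, not the actual RTL: the bit width `n`, the integer encoding, and the function name are illustrative assumptions.

```python
def spike_decision(u, theta, n=8):
    """Toy model of the single-comparator spike decision.

    u: membrane potential, interpreted as an n-bit two's complement value.
    theta: positive firing threshold.
    Returns +1 for a positive spike, -1 for a negative spike, 0 otherwise.
    """
    mask = (1 << n) - 1
    bits = u & mask                               # n-bit two's complement pattern
    msb = (bits >> (n - 1)) & 1                   # sign bit gives spike polarity
    mag = ((~bits + 1) & mask) if msb else bits   # |u| via two's complement
    if mag < theta:                               # single magnitude comparator
        return 0
    return -1 if msb else 1
```

For example, with `theta = 4` a membrane potential of `-5` produces a negative spike, while `3` and `-3` produce none.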
Summary: In this work, the authors proposed a new neural coding scheme, named canonic signed spike (CSS) coding. For the proposed encoding, they also introduced over-fire-and-correct and threshold optimization methods. The proposed coding method transmits information efficiently as binary spikes, while the accumulated membrane potential expresses information over time. The authors theoretically proved the correctness of information transmission under this coding method. According to the authors' experiments, higher accuracies were achieved on image recognition with CNN models. Claims And Evidence: In order for the proposed method in this paper to be useful, the feasibility of implementation in neuromorphic hardware must be discussed. To implement the proposed method in neuromorphic hardware, synchronization between layers is essential. In addition, the operation of Equation 8 greatly diminishes the advantage of event-based neuromorphic processors. A major disadvantage is that each membrane potential must be increased at every time step even if there are no input spikes. This will not be easy to implement in neuromorphic hardware. Is there a solution for this? Methods And Evaluation Criteria: There is no detailed description or analysis of the proposed method. In addition to theoretical proof, it is necessary to experimentally show what influence each factor has. In addition, ablation studies and overhead analysis for the proposed method are required for evaluation. Does the time step include the silent period? The inference procedure is not clearly explained. If T and the silent period (P) overlap, a dependency occurs between layers, requiring a total time step of T×L. In this case, can we say that the time step is T instead of T×L? There is a lack of detailed explanation about OFC. What would be the performance if there were no negative spikes?
What is the ratio of shift operations to input integrations in the energy consumption analysis? “the optimal threshold ~ accuracy in rate coding.” (lines 30~33) - Isn’t this the optimal threshold for the proposed CSS coding? In addition, detailed explanation and analysis of the optimal threshold are required. Theoretical Claims: There seems to be no major problem with the theoretical claims. Experimental Designs Or Analyses: Additional experiments are required to prove the superiority of the proposed method. It seems to be applicable to other tasks besides image classification. What are the experimental results for tasks such as object detection and segmentation? Also, what if it is applied to transformer models rather than CNNs? What are the experimental results for neuromorphic datasets? Supplementary Material: Yes, I reviewed it along with the manuscript. Relation To Broader Scientific Literature: Yes, I reviewed it along with the manuscript. Essential References Not Discussed: The proposed method is similar to the methods of the papers below in that it expresses information according to the integrated time difference by utilizing temporal information. In addition, it is similar in that it operates by dividing the integration (silent) and firing phases to transfer temporal information between layers. It is necessary to compare and discuss the methods of the papers below. Temporal-Coded Spiking Neural Networks with Dynamic Firing Threshold: Learning with Event-Driven Backpropagation, ICCV-23 T2FSNN: Deep Spiking Neural Networks with Time-to-First-Spike Coding, DAC-20 Other Strengths And Weaknesses: Please refer to the above comments. Other Comments Or Suggestions: I think it would be helpful to have an overall figure of the proposed approach. Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review. Below, we address some key points of your concerns. ---

### Energy Overhead of Spike Weighting

To achieve nonlinear encoding, we double the membrane potential at each time step. First, we would like to emphasize that __this method enhances encoded information with almost no additional cost__. In our design, the shift amount is fixed for each step, allowing it to be implemented purely through wiring. Specifically, the [n-2:0] bits are hardwired to the adder's input [n-1:1], while the LSB of the input is tied to 0. This eliminates the need for shift registers and incurs negligible energy consumption. Furthermore, the amplification is performed independently for each neuron, whereas the majority of operations arise from inter-neuron connections (e.g., convolutional or fully connected layers). Therefore, even if all neurons perform a shift at every time step, the overall cost remains minimal. We provide a breakdown of the amplification operation's contribution to the total _operation count_ in the table below. The experiment was conducted with ResNet-18 on CIFAR-10. It can be observed that the operations for weighting are minimal in number (accounting for just 4% of AC operations). Overall, their impact can be considered negligible.

Timestep|Amp Ops|ACs|MACs
:-|:-|:-|:-
8|4.42M|108.5M|14.72M
4|2.23M|76.89M|7.36M

Additionally, we would like to clarify that our approach is indeed quite __hardware-friendly__. We have provided a detailed reference design in our response to Reviewer MkoZ, which we encourage you to check for further details. We also provide an illustration of the reference design at [this URL](https://anonymous.4open.science/r/ICML-2025-Rebuttal-0DE7/README.md).

### Methodological Details

#### OFC Method

The goal of OFC is to control the residual membrane potential to reduce conversion loss.
This is achieved by lowering the firing threshold (causing Over-Firing) and introducing negative spikes (to Correct the excess firing). This method is also applicable to rate coding, as residual membrane potential impacts conversion loss in that setting as well. We address the question of how much to lower the threshold by deriving an optimal threshold mathematically. Notably, we have already conducted ablation experiments on negative spikes in Section 5.3, and experimental validation of the optimal threshold is provided in Section 5.4.

#### Inference Procedure of TSA

We provide the pseudocode for TSA’s forward propagation in Algo. 1, where the input spans all timesteps for clarity. However, our actual implementation __employs a pipelined inference process__: TSA processes spikes _at each timestep_, adjusting its behavior based on its local phase (e.g., silent periods). Due to the pipelined processing, while the total delay from input to output is $P\times L$, each image only occupies $T+P$ timesteps per layer, where $T$ timesteps are used for primary neural computation. We report $T$ in tables as it represents the actual encoding steps per input. This standard is also applied when comparing TTFS coding and [1], with a similar approach found in [2].

To further illustrate efficiency, we report _actual runtime_ below. Although measured on standard GPUs, these results still reflect the pipeline design in inference. We implemented rate coding as a baseline, setting $P=T$ to simulate [1]. Using four 2080Ti GPUs, we validated 50,000 images on ImageNet with VGG-16. Additionally, we processed 2,000 images _one by one_ and averaged the latency to obtain the _inference latency per image_ (LPI).
Coding Scheme|T|P|Acc.|Runtime|LPI
:-|:-|:-|:-|:-|:-
rate|256|0|70.50%|11816.04s ($19\times$)|1279ms ($9.2\times$)
CSS|8|8|75.18%|886.31s ($1.43\times$)|801ms ($5.76\times$)
CSS|8|1|75.17%|621.15s ($1\times$)|139ms ($1\times$)

[1] Optimized Spiking Neurons Classify Images with High Accuracy through Temporal Coding with Two Spikes

[2] Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes

#### Related Work

The two works you mentioned focus on implementing TTFS coding. Although both approaches involve accumulating information over time, we would like to emphasize that TTFS accumulates information in _a linear manner_ [3], whereas __our approach follows a nonlinear accumulation process__. This key difference allows us to accumulate information more quickly, thereby reducing the required timesteps. In our paper, we acknowledge that the silent period concept has been used in TTFS coding and [1]. However, we do not claim a contribution to the silent period itself; rather, we aim to minimize it to reduce inference latency. By incorporating the OFC method, we reduce $P$ to 1, whereas in [3,5], $P=T$.

[3] Temporal-Coded Spiking Neural Networks with Dynamic Firing Threshold: Learning with Event-Driven Backpropagation

---

If you have any further questions, we would be happy to address them.
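The linear-vs-nonlinear accumulation argument can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: with $\beta=2$ doubling, a binary spike train of length $T$ can distinguish $2^T$ levels, whereas rate coding distinguishes only $T+1$. The helper names `decode_css` and `decode_rate` are our own.

```python
from itertools import product

def decode_css(spikes, beta=2.0):
    """Nonlinear accumulation: doubling the membrane each step means a
    spike at step t (1-indexed) effectively carries weight beta**(T-t)."""
    T = len(spikes)
    return sum(s * beta ** (T - t) for t, s in enumerate(spikes, start=1))

def decode_rate(spikes):
    """Linear accumulation: rate coding averages the spike count."""
    return sum(spikes) / len(spikes)

# With T=3 binary spikes, nonlinear accumulation distinguishes 2^3 = 8
# levels, while rate coding distinguishes only T+1 = 4.
trains = list(product([0, 1], repeat=3))
css_levels = {decode_css(s) for s in trains}
rate_levels = {decode_rate(s) for s in trains}
```

This is why the same output resolution needs exponentially fewer timesteps under the nonlinear scheme.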
Summary: The paper proposes an implicitly weighted spiking mechanism for direct ANN-to-SNN conversion. The weight of the spikes, $\beta^{T-t}$, is determined by the temporal location $t \in [1,2, \cdots ,T]$ of the spikes, where an earlier spike gets a higher weight than spikes that arrive later, as $\beta > 1$. Further, the authors use single-bit signed spikes to reduce the approximation error computed with respect to the ANN activation. The empirical evaluations are performed on CIFAR-10 and ImageNet datasets, where the authors compared the proposed method with existing methods, showing a reduction in temporal latency in direct ANN-to-SNN conversion.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The experimental evaluation could be extended to the CIFAR-100 dataset.
Theoretical Claims: The theoretical claims made in the paper are supported with detailed derivations.
Experimental Designs Or Analyses: The experimental design is sound.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The paper is well referenced.
Essential References Not Discussed: The authors can include the recent publication [1], which uses signed rate encoding to reduce the variance of noise introduced by the input pixels.
[1] Bhaskar Mukhoty, Hilal AlQuabeh, and Bin Gu, Improving Generalization and Robustness in SNNs Through Signed Rate Encoding and Sparse Encoding Attacks, in The Thirteenth International Conference on Learning Representations (2025).
Other Strengths And Weaknesses: Since the ANN-to-SNN methods pre-suppose the existence of an ANN model, it can be difficult to apply such a method to a neuromorphic dataset where no ANN model exists, or ANN models are equally challenging to train due to the inherent temporal dimension of data.
Other Comments Or Suggestions: None.
Questions For Authors: Q1: How do the present neuronal dynamics compare to LIF dynamics?
Q2: Can the experimental evaluations be extended to CIFAR-100?
Q3: What are the challenges to applying the method on neuromorphic datasets, such as N-MNIST, N-Caltech, and DVS-CIFAR-10?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review. Below, we address some key points of your concerns.

---

### Comparison with LIF

The neuron dynamics of TSA and LIF can both be given by the following equation:

$$u_{i}^{l}[t]=\beta u_{i}^{l}[t-1]+z_{i}^{l}[t]-S_{i}^{l}[t]$$

Apart from the difference in handling negative spikes, the key distinction between TSA and LIF lies in the choice of $\beta$. While __both mechanisms serve to weight the input__, TSA sets $\beta>1$, resulting in a weight pattern that __decreases__ over time. This design is primarily motivated by two factors:
1. Enabling _rapid transmission_ of most information.
2. Reducing the weight of the final residual information, which is _crucial for conversion accuracy_.

For comparison, we set $\beta = 0.5$ in the table below and observed a significant increase in conversion error. We conducted experiments using VGG-16 on the CIFAR-10 dataset. It can be observed that when using (Ternary) LIF neurons, it is necessary to extend the length of the silent period to reduce conversion loss, which significantly impacts output latency. This further highlights the importance of adopting a decreasing weight pattern.

Neuron|Timestep|Silent Period|Acc.
:-|:-|:-|:-
TSA|8|1|96.68%
LIF|8|1|84.19%
LIF|8|4|95.32%
LIF|8|8|96.16%

### Additional Experiments

Our main contribution is compressing the timesteps for conversion through a stepwise weighting mechanism, which is both convenient and flexible: it still follows the standard ANN-SNN conversion framework, requiring only the replacement of IF neurons with TSA neurons. Therefore, __our method is applicable to a wide range of network architectures, datasets, and tasks__.

We have included additional experimental results on _CIFAR-100_ in the table below. The experiments were conducted based on the full-precision VGG-16.

Method|ANN Acc.|Coding Scheme|Timestep|SNN Acc.
:-|:-|:-|:-|:-
OPI|76.31%|rate|128|76.25%
SNN Calibration|77.89%|rate|256|77.68%
TSC|71.22%|TSC|1024|70.97%
LC-TTFS|70.28%|TTFS|50|70.15%
CSS-SNN|76.56%|CSS|8|76.51%

Moreover, we have conducted experiments on _object detection tasks_ and applied our encoding method to _Transformer architectures_. The results of these experiments can be found in our response to Reviewer pam4.

Regarding _neuromorphic datasets_, as you pointed out, one of the challenges of applying our method is the absence of an ANN counterpart, which is also a general limitation of ANN-SNN conversion. A possible solution [1] is to integrate temporal information into static features and then train an ANN for classification. We conducted experiments using ResNet18 on the DVS128Gesture dataset, with the results shown in the table below. For comparison, we implemented a simple rate coding as the baseline.

Method|ANN Acc.|Coding Scheme|Timestep|SNN Acc.
:-|:-|:-|:-|:-
-|90.94%|rate|128|90.56%
CSS-SNN|90.94%|CSS|6|90.89%

[1] Masked Spiking Transformer

---

If you have any further questions, we would be happy to address them.
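The shared update rule above is easy to simulate. Below is a minimal sketch (our own illustrative code, with threshold $\theta=1$ assumed and negative spikes omitted) showing that on the same input, an amplifying membrane ($\beta>1$) fires earlier than a leaky one ($\beta<1$):

```python
def simulate(inputs, beta, theta=1.0):
    """Iterate u[t] = beta*u[t-1] + z[t] - theta*s[t], emitting a spike
    s[t] = 1 whenever the membrane potential reaches the threshold."""
    u, spikes = 0.0, []
    for z in inputs:
        u = beta * u + z
        s = 1 if u >= theta else 0
        u -= theta * s
        spikes.append(s)
    return spikes

z = [0.6, 0.6, 0.6]
tsa_like = simulate(z, beta=2.0)  # amplifying membrane (beta > 1): fires at step 2
lif_like = simulate(z, beta=0.5)  # leaky membrane (beta < 1): fires only at step 3
```

With $\beta$ as the single knob, the same loop reproduces both TSA-style and LIF-style behavior, which is the point of the comparison in the table above.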
Beyond Zero Initialization: Investigating the Impact of Non-Zero Initialization on LoRA Fine-Tuning Dynamics
Accept (poster)
Summary: This paper studies how non-zero initialization improves the performance of LoRA, especially the stability.
- The authors define 1) the notion of stability, $BAX = \Theta(1)$ for all LoRA layers when the width is infinity, where $X$ is the input; 2) the notion of efficiency, the linear update term is $\Theta(1)$.
- Based on the above two criteria, the authors derive the requirements on the random Gaussian variance of A and B as well as the step-size. When using SGD, the optimal initialization under these two criteria is neither the classical LoRA initialization nor other variants of zero initialization.
- The authors continue to define the robustness of LoRA and derive similar requirements.
Claims And Evidence: This paper looks good and provides some findings beyond zero initialization. I understand the motivation of using non-zero initialization, but the current claim is weak. Previous non-zero initialization work, e.g., LoRA-GA, has better motivation. For instance, LoRA-GA is designed to ensure that LoRA gradient updates match full gradient updates as much as possible, which is also the spirit of using LoRA. Under theory-guided instructions, this paper gives some non-zero initialization strategies that achieve better performance than LoRA, e.g., a 2x speedup over LoRA. However, the comparison with previous work is limited: only LoRA is compared. I understand that the key idea of this work is to speed up training and obtain other benefits over LoRA. Nevertheless, the experimental comparison is not sufficient. Another significant issue is that the derivation heavily follows previous work, e.g., Hayou et al. When I read it at first, it seemed like it could be a good journal extension, but I'm not sure that it can be regarded as an independent work.
Methods And Evaluation Criteria: The evaluation makes sense but the comparison is limited.
Theoretical Claims: The theoretical claim is ok in terms of the stability and efficiency.
Experimental Designs Or Analyses: The experiments are not sufficient. Only LoRA is compared.
Supplementary Material: Yes. I checked the proofs at a high level, e.g., Appendix B, C. Besides, I also read the experimental setting in Appendix D.
Relation To Broader Scientific Literature: This topic and the obtained findings are interesting to the machine learning community.
Essential References Not Discussed: The essential references are sufficient, but it's true that not all main LoRA-based algorithms are discussed.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Hi Reviewer wxon:**

Thank you for your detailed and insightful comments. Below, we provide responses to each point individually. Additional experimental results can be found in **https://anonymous.4open.science/r/nzlora_rebuttal-7D3E**. To save space, we denote zero initialization as ***ZI*** and non-zero initialization as ***NZI***.

***Q1: "the current claim about motivation is weak"***

**R1:** **Unlike LoRA-GA, which was motivated by intuition, our approach is motivated by the theoretical analysis of LoRA's fine-tuning dynamics.** From the solution set in Eqs.(5-6), we observe that stable and efficient learning imposes stricter constraints on learning rates, whereas the initialization space is more flexible. Traditional *ZI* ($\gamma[B_0]=-\infty$) is merely an extreme case. This motivates us to reconsider the necessity of *ZI* and explore the potential benefits of *NZI*. Based on this insight, we conduct further analysis and evaluation, leading to two key findings:
1. *NZI* can reduce the sensitivity of LoRA to suboptimal (i.e., smaller) learning rates.
2. The purpose of traditional *ZI*, "fine-tuning from a pre-trained model", is not strictly necessary.

Notably, our motivation and claims are not in competition with prior *NZI* methods, such as LoRA-GA and PiSSA. Instead, our findings offer a theoretical foundation for their effectiveness and provide an explanation for the observed performance improvements. Fig.11 in the above link shows that the accuracy gains of LoRA-GA and PiSSA primarily stem from *NZI*. These points will be clarified in the revised version of the paper.

***Q2: "the experimental comparison is not sufficient"***

**R2:** Following your suggestion, we have added additional comparisons and combinations of LoRA-based methods with *NZI*. Specifically, two key aspects are considered: 1.
**Ablation comparison with LoRA-GA and PiSSA.** As shown in Fig.11, a large portion of the accuracy gains in PiSSA and LoRA-GA can be attributed to the use of *NZI*. The remaining gains are due to the fact that initialization values derived from pre-trained weights or gradients are more effective than random noise.
2. **Combination with LoRA+ and HydraLoRA.** We introduced *NZI* for both LoRA+ (using a larger learning rate for the matrix $B$) and HydraLoRA [1] (an asymmetric LoRA that uses one matrix $A$ and multiple matrices $B$). As shown in Figs.12-13, *NZI* enhances the robustness of LoRA+ and HydraLoRA to variations in learning rate and improves accuracy. The relevant settings are detailed in the figure caption.

[1] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning, NeurIPS 2024.

***Q3: "the derivation heavily follows previous work"***

**R3:** The derivation used in this paper, including the notation of $\gamma$ and $\Theta$ and the definitions of stability and efficiency, is widely employed in infinite-width analysis and can be traced back to Yang et al. (NeurIPS 2021; see line 520 of our paper). Hayou et al. used these tools to explore the effects of learning rate (ICML 2024; line 475 in our paper) and initialization (NeurIPS 2024; line 479 in our paper) on zero-initialized LoRA. However, a fundamental question remains unaddressed: Why is *ZI* necessary? In this paper, we extend these general derivations to examine the potential advantages of *NZI*. Notably, **the contribution and innovation of this paper lie not in proposing a new derivation method, but in the following three aspects**:
1. **Motivation for *NZI*.** We provided a comprehensive solution set that ensures LoRA's stable and efficient learning. Building on this, we observe that the solution set includes both zero and *NZI*, prompting us to investigate the role of *NZI* further.
This constitutes a key distinction between our work and previous studies, where the potential of pure *NZI* has often been overlooked. Our research bridges this gap and provides preliminary evidence supporting the feasibility of *NZI*.
2. **A new metric for LoRA's fine-tuning dynamics, "robustness", is proposed.** We compare the fine-tuning dynamics of zero and *NZI*s and define robustness in terms of the sensitivity of these dynamics to the learning rate. The central argument of this paper is that *NZI* exhibits superior robustness compared to *ZI*. We believe that this metric is crucial for LoRA fine-tuning dynamics, offering a significant extension and enhancement to existing theoretical derivations and analyses.
3. **Breaking inherent cognitions.** Our analysis and experiments further show that fine-tuning does not need to strictly start from a pre-trained model. This challenges the default practice in previous studies, such as LoRA, LoRA-GA, and PiSSA.

These contributions provide valuable guidance for understanding LoRA initialization and fine-tuning LLMs. They represent notable advancements and are substantial enough to be considered as independent work.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their response with additional experiments. The current explanation of the motivation looks good to me. I suggest the authors mention it (maybe in a high-level way) in the introduction. I understand the authors' claim that the contribution "lies not in proposing a new derivation method, but in the following three aspects". NZI has been studied in LoRA-GA, LoRA-Pro, as well as (Ponkshe et al., 2024) with some experiment-driven design. This paper claims some theoretical understanding/analysis of NZI. There is one paper posted on arXiv (https://arxiv.org/abs/2502.01235) after the ICML deadline which builds a mathematical analysis framework of LoRA under NZI. I suggest the authors discuss this work in the updated version. Based on the above, I increase my score to 3.
---

Reply to Comment 1.1.1: Comment: Thank you again for reviewing our paper and for your valuable feedback! We're glad that the additional experiments and motivation clarification resolved your concerns. Your comments were extremely helpful, and we truly appreciate you increasing the score based on our rebuttal response. As suggested, we will enhance the introduction (Section 1) with a more detailed discussion of our motivation to improve clarity for readers. Additionally, we sincerely appreciate your suggestion regarding the latest advances in LoRA initialization, particularly the LoRA-One paper on arXiv. We will carefully study these works and incorporate a discussion in our revision to better contextualize our theoretical contributions in relation to these recent developments.
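As a concrete illustration of Init[AB] with the optional correction discussed in the rebuttals, here is a minimal NumPy sketch (a hypothetical helper, not the authors' code): both $A$ and $B$ are drawn from a Gaussian with the $\beta$-scaled Kaiming variance $(\beta\delta_k)^2$, $\delta_k^2=1/n$, and $B_0A_0$ is subtracted from the pretrained weight so the effective layer still starts at the pretrained function.

```python
import numpy as np

def init_lora_nonzero(W, r, beta=1.0, rng=None):
    """Init[AB]: draw A (r x n) and B (m x r) from N(0, (beta*delta_k)^2)
    with delta_k^2 = 1/n, then subtract B0 @ A0 from the pretrained W so
    that W' + B @ A equals W at the start of fine-tuning."""
    rng = rng or np.random.default_rng(0)
    m, n = W.shape
    std = beta * np.sqrt(1.0 / n)
    A = rng.normal(0.0, std, size=(r, n))
    B = rng.normal(0.0, std, size=(m, r))
    W_prime = W - B @ A  # optional: keeps the initial effective weights at W
    return W_prime, A, B

W = np.full((6, 8), 0.3)  # stand-in for a pretrained weight matrix
W_prime, A, B = init_lora_nonzero(W, r=2, beta=4.0)
```

With the subtraction in place, fine-tuning starts exactly from the pretrained model even though neither adapter matrix is zero; dropping it yields the variant whose viability the paper also argues for.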
Summary: This paper investigates the impact of non-zero initialization on the fine-tuning dynamics of LoRA. Traditionally, in LoRA, one of the low-rank matrices, A or B, is initialized to zero to ensure fine-tuning starts from the pretrained model. However, this practice lacks theoretical justification. The authors theoretically analyze the effects of initializing both A and B to non-zero values. Their key findings are: (1) Non-zero initialization improves robustness to suboptimal learning rates; and (2) Fine-tuning does not need to strictly start from the pretrained model. The authors validate these findings through extensive experiments across various models and datasets. These results challenge the conventional practice of zero initialization in LoRA and highlight the benefits of non-zero initialization.
Claims And Evidence: The claims made in the paper are theoretically proven and experimentally verified.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable. This paper studies the initialization problem of LoRA, a common but previously overlooked aspect of fine-tuning LLMs. The authors systematically compare different initialization methods (zero vs. non-zero) using models, datasets, and code based on published work, ensuring reliability.
Theoretical Claims: The reviewer carefully checked the theoretical claims and corresponding proofs in the paper, including all results in Section 3 and the proofs in Appendices B and C. To the reviewer, the claims and proofs are reasonable. Although there are minor typos, for example, $\gamma[A_0]\leq\eta_A$ in Eq (19) in Appendix B should be $\gamma[A_0]\leq\gamma[\eta_A]$, these do not affect the validity of the theoretical results.
Experimental Designs Or Analyses: The reviewer checked the experimental setup, results, and analysis. As described in the paper, the authors conducted experiments on three standard benchmarks.
The experimental setups were based on published work and aligned with general practices in LoRA fine-tuning. The authors primarily analyzed different initialization settings and learning rates, which is consistent with the paper's motivation. The experimental analysis is also reasonable and supports the theoretical findings.
Supplementary Material: The reviewer checked Appendices B and C for proofs, and Appendix D for additional experimental results. No other supplementary material.
Relation To Broader Scientific Literature: Previous work (Hayou et al., 2024b) discussed the difference between initializing A or B to zero but did not explore the rationale behind zero initialization. This paper fills that gap, demonstrating that both A and B can be initialized to non-zero values. These findings provide theoretical support for related LoRA variants, such as PiSSA and LoRAGA, and significantly contribute to LoRA research.
Essential References Not Discussed: To the best of the reviewer's knowledge, all related work on LoRA initialization has been covered.
Other Strengths And Weaknesses:
Strengths: This paper challenges the traditional LoRA initialization method, and studies the significance of non-zero initialization from the perspective of robustness to learning rate. The method is simple yet insightful. It also fundamentally overturns the purpose of traditional zero initialization (fine-tuning from pre-trained models). Experiments show that non-zero initialization with appropriate variance does not affect fine-tuning performance, indicating that fine-tuning does not need to start strictly from a pretrained model.
Weaknesses: A minor shortcoming is the lack of discussion on how the definitions of stability and efficiency in this paper differ from those in previous studies (e.g., LoRA+). The authors are encouraged to clarify this distinction in the appendix.
Other Comments Or Suggestions: Typo in line 62: "raise" should be "raises."
Incorrect reference to Llama 3 in line 799.
Typo in Eq (19): $\gamma[A_0]\leq\eta_A$ should be $\gamma[A_0]\leq\gamma[\eta_A]$.
Questions For Authors: How do the definitions of stability and efficiency in this paper differ from those in previous studies (e.g., LoRA+)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Hi Reviewer rPo6:**

Thank you for your detailed and insightful comments. Below, we provide responses to each point individually. Additional experimental results can be found in **https://anonymous.4open.science/r/nzlora_rebuttal-7D3E**.

***Q1: "typos in lines 62 and 799, and Eq (19)"***

**R1:** Thank you for your thorough review. We will correct the identified typos and carefully re-examine the entire manuscript.

***Q2: "how the definitions of stability and efficiency differ from those in previous studies"***

**R2:** The only difference between our definitions and those in previous studies is that our stability definition is slightly less restrictive. Specifically, we do not consider interval stability, i.e., $Z_A=AZ=\Theta(1)$, where $Z$ is the input of the LoRA layer. Instead, we focus on the stability of the final output of LoRA, $Z_B=BAZ=\Theta(1)$. Eq.(5) outlines the conditions that must be met by the initialization and learning rate when interval stability is excluded. A detailed discussion on interval stability is provided in Appendix B.2. Given that other reviewers have raised concerns regarding interval stability, we summarize the key points related to this topic in **Q3**.

***Q3: "interval stability"***

**R3:** In this paper, stability is defined as $Z_B=BAZ=\Theta(1)$, where $Z$ represents the input to the LoRA layer. The condition $Z_B=\Theta(1)$ ensures the stability of LoRA's final output, while interval stability is defined as $Z_A=AZ=\Theta(1)$, which indicates the stability of LoRA's intermediate results. In Section 3, we present the solution set for stable and efficient learning without considering interval stability, as shown in Eq.(5) or as follows: $\gamma[\eta_A]+\gamma[\eta_B]=-1$, $\gamma[A_0] \leq \gamma[\eta_A]$, and $\gamma[B_0] \leq \gamma[\eta_B]$. When interval stability is considered (i.e., $Z_A=\Theta(1)$), an additional constraint is imposed: $\gamma[\eta_A]=-1$.
Consequently, the solution set of the learning rate and initialization becomes Eq.(21) in Appendix B.2: $\gamma[A_0] \leq \gamma[\eta_A]=-1$ and $\gamma[B_0] \leq \gamma[\eta_B]=0$. Two important points should be noted here:
1. $\gamma[\eta_A]=-1$ and $\gamma[\eta_B]=0$ are the key findings of LoRA+, which suggest using a larger learning rate for the matrix $B$ in practical applications.
2. Regardless of the optimal value for $\gamma[\eta_A]$ and $\gamma[\eta_B]$, the conditions $\gamma[A_0] \leq \gamma[\eta_A]$ and $\gamma[B_0] \leq \gamma[\eta_B]$ must always be satisfied to ensure LoRA's stable and efficient learning. When both "$\leq$" become "$=$", the maximum robustness to the learning rate is achieved.

Therefore, non-zero initialization can also enhance LoRA+'s robustness to the learning rate, as shown in Fig.12 in the above link.
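These exponent conditions are simple inequalities over scaling exponents, so they can be encoded directly. The helper below is an illustrative sketch under our own naming (conditions follow the rebuttal's Eq.(5), with the interval-stability refinement of Eq.(21) as an option):

```python
import math

def stable_efficient(g_A0, g_B0, g_etaA, g_etaB, interval=False):
    """Eq.(5): g[etaA] + g[etaB] = -1, g[A0] <= g[etaA], g[B0] <= g[etaB].
    With interval stability (Eq.(21)), additionally g[etaA] = -1."""
    ok = (g_etaA + g_etaB == -1) and (g_A0 <= g_etaA) and (g_B0 <= g_etaB)
    if interval:
        ok = ok and (g_etaA == -1)
    return ok

# Zero initialization of B (classical LoRA) is the extreme case g[B0] = -inf.
zero_B = -math.inf
```

For example, the LoRA+ exponents with zero-initialized $B$ satisfy the interval-stability variant, while the maximum-robustness assignment makes both "$\leq$" tight.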
Summary: This paper investigates the impact of non-zero initialization in Low-Rank Adaptation (LoRA) fine-tuning, challenging the conventional practice of initializing one of the LoRA matrices (A or B) to zero. Through theoretical analysis and empirical validation, the authors demonstrate that simultaneously initializing A and B to non-zero values (Init[AB]) enhances LoRA’s robustness to suboptimal learning rates, particularly smaller ones, common due to learning rate decay. The study finds that while non-zero initialization introduces slight noise to the pre-trained model, it does not degrade fine-tuning performance as long as appropriate initialization variances are used. Extensive experiments across models and datasets confirm that non-zero initialization improves accuracy, stability, and convergence speed, making it practical for LoRA-based fine-tuning.
Claims And Evidence:
Well-supported claims:
1. Non-zero initialization improves LoRA’s robustness to suboptimal learning rates: This claim is supported by theoretical analysis and empirical proofs.
2. Fine-tuning does not need to strictly start from the pre-trained model: Experiments and theoretical evidence provided.
3. Non-zero initialization achieves superior performance compared to zero initialization, particularly at smaller learning rates: The heatmaps and performance tables demonstrate consistent improvements when using Init[AB] instead of Init[A], especially in low learning rate scenarios.
Claims that need more evidence:
1. Non-zero initialization improves performance in all cases: There is clearly a dependence on the learning rate for different tasks, as can be seen in Tables 2, 3.
2. What are the limits on the variance of noise that can be used in the Init[AB] case?
Methods And Evaluation Criteria: The authors test their method with Llama-3 8B and T5 Models on the GLUE and arithmetic reasoning benchmarks.
It would be interesting to check how their method works for other fine-tuning settings such as instruction tuning. Also, it is not clear how their method works with variants of LoRA such as Asymmetric LoRA, LoRA+, QLoRA, etc.
Theoretical Claims: No
Experimental Designs Or Analyses: Yes, I examined the soundness and validity of the experimental designs and analyses in the paper.
1. The paper evaluates natural language understanding (GLUE benchmark) and natural language generation (commonsense & arithmetic reasoning), ensuring broad applicability.
2. The paper uses multiple model architectures: T5-Base (encoder-decoder) and Llama 3-8B (decoder-only transformer).
3. The study systematically varies the learning rate (η) and initialization variance (β), allowing a detailed exploration of their effects.
4. The heatmaps and accuracy tables provide clear evidence that non-zero initialization (Init[AB]) improves performance, particularly at lower learning rates.
5. The toy model experiment provides intuitive validation that non-zero initialization reduces sensitivity to learning rate choices.
There are areas where the text can improve with additional details:
1. The reported accuracy differences (e.g., between Init[A] and Init[AB]) are sometimes small (e.g., ~1%).
2. No confidence intervals or standard deviations are provided.
3. It’s unclear if different ranks or scaling factors would affect the relative performance of zero vs. non-zero initialization.
4. The initialization variance (β) is tested in discrete steps (e.g., {1, 2, 4, 8, 16}), but there’s no justification for why these values were chosen.
5. How does their method interact with version improvements of LoRA such as LoRA+, Asymmetric LoRA, etc.?
Supplementary Material: Yes, sections A, B, D, E
Relation To Broader Scientific Literature: This paper builds upon existing research in LoRA fine-tuning, neural network scaling, and weight initialization, challenging the conventional zero-initialization approach in LoRA.
While prior work (Hu et al., 2022; Hayou et al., 2024a) focused on optimizing learning rates and rank selection, this study demonstrates that initializing both LoRA matrices (A and B) to non-zero values (Init[AB]) enhances robustness to suboptimal learning rates. Applying infinite-width scaling theory formalizes conditions for stable and efficient fine-tuning, extending insights from Kaiming initialization (He et al., 2015) to LoRA. Unlike recent empirical methods that use quantization errors (LoftQ), SVD (PISSA), or gradient-based initialization (LoRA-GA), this paper provides a theoretical foundation for non-zero initialization. It validates it with experiments across T5-Base, Llama 3-8B, and multiple benchmarks. These findings refine LoRA fine-tuning dynamics without additional computational cost, offering a practical and theoretically justified improvement.
Essential References Not Discussed: The paper focuses on LoRA and the surrounding methods while not shedding light on other PEFT methods, such as BitFit [Zaken et al., 2022] and Adapters [Houlsby et al., 2019]. Adding these papers can help the reader understand the landscape better.
Other Strengths And Weaknesses: See my comments above.
Other Comments Or Suggestions: See my comments above.
Questions For Authors: See my questions in the analysis part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Hi Reviewer zK3d:**

Thank you for your detailed and insightful comments. Below, we provide responses to each point individually. Additional experimental results can be found in **https://anonymous.4open.science/r/nzlora_rebuttal-7D3E**.

***Q1: "accuracy's dependence on the learning rate in Tables 2,3, and accuracy differences are sometimes small"***

**R1:** Our analysis reveals that non-zero initialization can reduce the adverse effects of suboptimal learning rates on LoRA performance. This effect is particularly evident when the learning rate is below its optimal value. However, when the learning rate approaches its optimal value, the performance improvement from non-zero initialization becomes less significant.

***Q2: "no justification for why $\beta\in\\{1,2,4,8,16\\}$"***

**R2:** In our experiments, we set the initialization variance of matrices $A$ and $B$ as $\delta_A^2=\delta_B^2=(\beta \delta_k)^2$, where $\delta_k^2=1/n$ is the variance used in Kaiming initialization (the default setting of LoRA). Notably, $\beta$ does not strictly represent variance, but rather a scaling factor applied to $\delta_k$. Our analysis indicates that robustness improves as $\gamma[A_0]$ and $\gamma[B_0]$ approach $-1/2$. To explore this, we begin with standard Kaiming initialization ($\beta=1$, corresponding to $\gamma[A_0]=\gamma[B_0]=-1$) and systematically increase the variance ($\beta\in\\{2,4,8,16\\}$) to study its effects. Our experimental results (Figs.4 and 6 in the original paper) confirm that, within a certain range, increasing the initialization variance enhances robustness to learning rate variations and leads to better accuracy.

***Q3: "limits on the variance in Init[AB]"***

**R3:** The theoretical limits of initialization variance are $\gamma[A_0] \leq -1/2$ and $\gamma[B_0] \leq -1/2$. However, this condition only describes the asymptotic behavior of the initialization variance as $n \to \infty$, rather than providing a specific value.
To further investigate this, we performed ablation experiments on the variance limits of LLaMA 3-8B and T5-base models. As shown in Fig.16, these limits vary across models or datasets (e.g., Init[AB]-Init[A] with $\beta=4$ is generally less than 0 in the commonsense reasoning task). However, the variance associated with Kaiming initialization (i.e., $\beta = 1$) is generally effective, yielding near-optimal accuracy.

***Q4: "different ranks or scaling factors"***

**R4:** We conducted ablation experiments with varying ranks and scaling factors. As shown in Fig.14 in the above link, adjusting these hyperparameters does not affect the improvement gained through non-zero initialization.

***Q5: "different fine-tuning settings"***

**R5:** Following your suggestion, we conducted experiments using an instruction tuning dataset, databricks-dolly-15k, and evaluated its performance on the MMLU task. As shown in Fig.15, non-zero initialization enhances LoRA's robustness to small learning rates in the instruction tuning task, thereby improving accuracy. Notably, LLama 3-8B exhibits limited accuracy on MMLU, and thus the improvement due to non-zero initialization is less pronounced. However, the trend is still observable.

***Q6: "LoRA variants"***

**R6:** To address this question, we evaluated the impact of non-zero initialization on LoRA+ (using larger learning rates for matrices $B$) and HydraLoRA [1], an asymmetric LoRA variant (using one matrix $A$ with multiple matrices $B$). The results are presented in Figs.12-13 in the above link.
1. We tested LoRA+ on GLUE and Arithmetic reasoning tasks. The results show that appropriately increasing the learning rate of $B$ can indeed improve the model accuracy. Most importantly, for the same learning rate, non-zero initialization significantly enhances the accuracy of LoRA+.
2. We tested HydraLoRA on Arithmetic reasoning tasks.
To ensure that the non-zero initialized $AB$ terms could be subtracted from the pre-trained weights, we use the same initialization for different $B$ matrices within a HydraLoRA layer. As shown in Fig.13, non-zero initialization also improves the robustness of HydraLoRA to the learning rate. [1] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning, NeurIPS 2024. ***Q7: "standard deviations"*** **R7:** The standard deviation for the GLUE dataset ranges from 0.01 to 0.4, while for commonsense and Arithmetic reasoning tasks, it spans from 0.2 to 0.4. In our experiments, smaller learning rates tend to converge less effectively and exhibit higher standard deviations. However, this effect is minimal compared to the performance gains achieved by non-zero initialization. We will include the full standard deviation results in the revised paper. ***Q8: "surrounding methods such as BitFit and Adapters"*** **R8:** Thank you for your suggestion. We will add a discussion of relevant PEFT methods in the revised paper.
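For readers replicating the scheme discussed in R2 and R6 above, here is a minimal NumPy sketch of the β-scaled non-zero initialization with the subtraction step (the shapes, the value of $\alpha$, and the function name are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def nonzero_lora_init(W0, r, alpha=16.0, beta=1.0, seed=0):
    """Draw A and B from N(0, (beta * delta_k)^2) with delta_k = sqrt(1/n),
    the Kaiming std used by default LoRA, then subtract (alpha/r) * A @ B
    from the pre-trained weight so the merged layer starts unchanged."""
    rng = np.random.default_rng(seed)
    n = W0.shape[0]                        # network width
    std = beta * np.sqrt(1.0 / n)          # beta scales the Kaiming std
    A = rng.normal(0.0, std, size=(n, r))
    B = rng.normal(0.0, std, size=(r, W0.shape[1]))
    W0_adj = W0 - (alpha / r) * A @ B      # the subtraction step of Init[AB]
    return W0_adj, A, B

W0 = np.ones((64, 64))                     # stand-in pre-trained weight
W0_adj, A, B = nonzero_lora_init(W0, r=8, beta=4.0)
merged = W0_adj + (16.0 / 8) * A @ B       # effective weight at step t = 0
```

At initialization the merged weight equals the pre-trained weight exactly while both $A_0$ and $B_0$ are non-zero, which is the property the subtraction step is meant to guarantee.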
Summary: This paper considers scaling of hyperparameters for LoRA finetuning from an infinite width perspective following [1, 2]. The key difference compared to past works is that a non-zero random initialization of both the B and A adapter matrices is considered. The initialization can optionally be subtracted from the pretrained weights to ensure the overall layer starts at the pretrained weights. The key observations are that the new initialization scheme allows "robustness" in addition to previous desiderata such as stability and efficiency. Informally robustness refers to a lack of sensitivity in the scaling of certain quantities to learning rate hyperparameters. The authors demonstrate on a variety of tasks the superior performance of the new scheme and improved robustness to suboptimal learning rates. [1] LoRA+: Efficient Low Rank Adaptation of Large Models - Soufiane Hayou, Nikhil Ghosh, Bin Yu [2] The Impact of Initialization on LoRA Finetuning Dynamics - Soufiane Hayou, Nikhil Ghosh, Bin Yu Claims And Evidence: Yes the evidence is clear and convincing. Methods And Evaluation Criteria: Yes the methods and evaluation criteria make sense. Theoretical Claims: Yes the proofs appear correct. Experimental Designs Or Analyses: Yes the experimental designs are solid. Supplementary Material: Just briefly passed over the supplement. Relation To Broader Scientific Literature: The paper is important for understanding the optimal setting of hyperparameters for LoRA finetuning, a popular parameter efficient finetuning method. The paper characterizes the scaling of certain quantities in terms of width and imposes various desiderata for finetuning akin to a variety of works such as [1, 2, 3]. Importantly this work goes beyond previous works by considering a non-zero initialization of LoRA. 
In particular, they show that finetuning can be successful even when the non-zero initialization is not subtracted from the pretrained weights, as long as the initialization variance is not too large, demonstrating robustness of the finetuning procedure to a noisy initialization. Furthermore, the non-zero initialization has certain advantages relative to other initializations, including decreased sensitivity to learning rate hyperparameters and improved empirical performance. [1] LoRA+: Efficient Low Rank Adaptation of Large Models - Soufiane Hayou, Nikhil Ghosh, Bin Yu [2] The Impact of Initialization on LoRA Finetuning Dynamics - Soufiane Hayou, Nikhil Ghosh, Bin Yu [3] Feature Learning in Infinite-Width Neural Networks - Greg Yang, Edward J. Hu Essential References Not Discussed: None. Other Strengths And Weaknesses: The strength of this paper is that it expands the practical consideration of initializations for LoRA and offers evidence for the superiority of a new initialization. Empirically this initialization appears to be non-trivially better than the standard practice and is trivial to implement. Other Comments Or Suggestions: typos: In the Appendix, Section B.2, the Hu et al. reference is incorrect. The last line of Eq. (20) should be $Z_A^{t-1}$, not $Z_B^{t-1}$. The presentation in Sections 3.2 and 3.3 is a bit hard to parse at first, as is the definition and intent of "robustness". I don't think it is about perturbing $\gamma[\eta]$ (which doesn't make much sense) but really perturbing $\eta$, and in certain scalings the perturbation has a dominant quadratic dependence on $\eta$. Also, in the grid sweeps the optimum is in the top right corner. Can you extend beyond that to check that increasing the hyperparameters further does not bring improvements? Questions For Authors: Can the analysis say anything useful about PiSSA? To clarify, is $\eta_A = \eta_B$ needed for maximum robustness? Also, in this case is internal stability not achieved?
If we use LoRA+ and non-zero initialization, will we do even better? Using Init[AB] can reduce learning rate sensitivity, but will it increase initialization variance sensitivity compared to Init[A]? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Hi Reviewer uk4U:** Thank you for your detailed and insightful comments. Below, we provide responses to each point individually. Additional experimental results can be found in **https://anonymous.4open.science/r/nzlora_rebuttal-7D3E**. ***Q1: "typos"*** **R1:** Thanks again for catching these! All typos will be fixed in the revision. ***Q2: "perturbing $\gamma[\eta]$ or $\eta$"*** **R2:** This question is essential for understanding our analysis. To improve clarity, we restate our infinite-width analysis as follows: 1. In this paper, we focus on the asymptotic behavior of the learning rate, $\gamma[\eta]$, as the network width $n \to \infty$, rather than its exact value. The $\gamma$-operator is defined such that $\eta = \Theta(n^{\gamma[\eta]}) \approx c \cdot n^{\gamma[\eta]}$, where $c > 0$ is a constant and lower-order terms are neglected. 2. As $n \to \infty$, the term $n^{\gamma[\eta]}$ dominates, making $\gamma[\eta]$ the key factor determining the asymptotic behavior of $\eta$. While the constant $c$ is important for exact values, it doesn't influence the asymptotic scaling behavior. Ignoring the influence of $c$, perturbations to $\gamma[\eta]$ and $\eta$ are effectively equivalent. 3. Thus, we focus on how perturbations to $\gamma[\eta]$ affect the fine-tuning dynamics, excluding the constant $c$ in $\Theta$. Note that we analyze $\gamma[\eta]$ to guide learning rate and initialization choices, not compute exact values (which depend on $c$). We appreciate the opportunity to clarify our analytical framework and will explicitly incorporate these refinements in the revised paper. ***Q3: "extend beyond the top right corner"*** **R3:** Following your suggestion, we have expanded the upper right corner of the heatmap, and the updated results are presented in Fig.16 in the above link. The results show that increasing the hyperparameters further does not lead to a substantial improvement in accuracy. 
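As a toy numeric illustration of the notation in R2 above (widths and constants chosen arbitrarily for this sketch): the ratio of learning rates across widths is fixed by the exponent $\gamma[\eta]$ alone, so perturbing $\gamma[\eta]$ is what changes the asymptotic behavior, while the constant $c$ drops out.

```python
# Toy illustration of eta = c * n**gamma, ignoring lower-order terms.
def eta(n, gamma, c=1.0):
    return c * n ** gamma

# With gamma[eta] = -1, growing the width 16x shrinks eta 16x,
# for any choice of the constant c:
ratio_c1 = eta(256, -1.0, c=1.0) / eta(4096, -1.0, c=1.0)
ratio_c5 = eta(256, -1.0, c=5.0) / eta(4096, -1.0, c=5.0)
# both ratios equal 16 (up to float rounding)
```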
***Q4: "insights about PiSSA"*** **R4:** Our analysis suggests that PiSSA is more robust to variations in the learning rate, owing to its use of non-zero initialized LoRA, which is achieved through Truncated SVD on pre-trained weights. Fig.11 in the above link shows that a significant portion of the improvement in PiSSA's accuracy can be attributed to non-zero initialization. The remaining improvement is due to the fact that the initialization values derived from the pre-trained weights are more effective than random noise. A similar trend is observed in LoRA-GA, which employs gradients for non-zero initialization. Please refer to Fig.11 for further details. ***Q5: "clarify the need of \eta_A = \eta_B, and the internal stability"*** **R5:** In fact, the condition $\eta_A = \eta_B$ is not necessary for achieving maximum robustness. The solution set in Eq. (5) indicates that stable and efficient learning can be achieved as long as $\gamma[\eta_A] + \gamma[\eta_B] = -1$, $\gamma[A_0] \leq \gamma[\eta_A]$, and $\gamma[B_0] \leq \gamma[\eta_B]$. Under this condition, when both "$\leq$" become "=", maximum robustness is attained. By default, we set $\eta_A = \eta_B$ since fine-tuning typically employs a uniform learning rate for all LoRA weights. However, achieving internal stability further requires $\gamma[\eta_A] = -1$ and $\gamma[\eta_B] = 0$, which are the core propositions of LoRA+. Due to space limitations, further details on internal stability can be found in **R3 from reviewer_Rpo6** or in the analysis presented in Appendix B.2. ***Q6: "LoRA+ with non-zero initialization"*** **R6:** We integrated LoRA+ with non-zero initialization. As shown in Fig.12 in the above link, non-zero initialization also enhances the robustness of LoRA+ to variations in the learning rate, leading to improved model accuracy. 
***Q7: "initialization variance sensitivity of Init[AB]"*** **R7:** Let's first explain the meaning of each initialization method:
- Init[A]: $A_0\sim \mathcal{N}(0, \delta^2)$, $B_0=0$.
- Init[AB]: $A_0\sim \mathcal{N}(0, \delta^2)$, $B_0\sim \mathcal{N}(0, \delta^2)$, with $\frac{\alpha}{r}A_0B_0$ subtracted from the pre-trained weights.
- Init[AB+]: Same as Init[AB], but without the subtraction step.

First, we emphasize that Init[A] itself exhibits sensitivity to variance. As shown in Eq.(6), stable and efficient learning requires the variance of $A_0$ to satisfy $\gamma[A_0] \leq -\frac{1}{2}$. In Init[AB], $B_0$ uses the same variance as $A_0$ and only needs to satisfy the same condition (i.e., $\gamma[B_0]\leq-1/2$). Therefore, Init[AB] does not introduce additional sensitivity to initialization variance but instead enhances the robustness of LoRA to $\eta_B$. Notably, if Init[AB+] is used, a larger initialization variance results in greater noise, which negatively impacts performance. However, this issue arises due to the absence of noise subtraction in Init[AB+], rather than a fundamental limitation of Init[AB].
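To make the Init[AB] vs. Init[AB+] distinction above concrete, here is a small NumPy sketch (dimensions and values are illustrative assumptions) of the initial offset $\frac{\alpha}{r}A_0B_0$ that Init[AB+] leaves in the merged weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, alpha = 256, 8, 16.0

def initial_offset_norm(delta):
    """Frobenius norm of (alpha/r) * A0 @ B0. Init[AB+] keeps this offset
    in the merged weights; Init[AB] subtracts it, so its offset is 0."""
    A0 = rng.normal(0.0, delta, size=(n, r))
    B0 = rng.normal(0.0, delta, size=(r, n))
    return np.linalg.norm((alpha / r) * A0 @ B0)

small = initial_offset_norm(0.05)
large = initial_offset_norm(0.20)
# Larger init variance -> larger un-subtracted noise under Init[AB+],
# consistent with the degradation noted in the answer above.
```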
Topology-aware Neural Flux Prediction Guided by Physics
Accept (poster)
Summary: This paper proposes a topology-aware prediction framework that adopts explicit difference matrices to model directional gradients and incorporates implicit physical constraints into those difference matrices, enhancing consistency with physical laws. Experiments on two real-world datasets demonstrate the effectiveness of this framework. Claims And Evidence: I think most of the claims are clear and convincing. Methods And Evaluation Criteria: The authors’ comparison with extensive baseline models is a strong point. However, using less common datasets like River and Traffic may limit generalizability and reproducibility. I suggest that the authors compare PhyNFP with baselines on mainstream datasets, e.g., CylinderFlow, Airfoil from [1], or Eagle from [2]. [1] Learning Mesh-Based Simulation with Graph Networks. [2] Eagle: Large-Scale Learning of Turbulent Fluid Dynamics with Mesh Transformers. Theoretical Claims: Theoretical claims are sound. Experimental Designs Or Analyses: Important experiment details, such as the number of training steps and the precise learning rate schedule, are missing. Furthermore, implementation details, such as the number of message-passing layers for PhyNFP and the baselines, are missing. These omissions raise doubts about credibility. Supplementary Material: Yes. I have reviewed the Appendix thoroughly. Relation To Broader Scientific Literature: The paper situates itself within the broader literature on learning physics dynamics with graph neural networks (GNNs). It mainly contributes to this area by combining discretized difference matrices with implicit physical laws to address limitations in enforcing global consistency in flow dynamics. However, a more detailed discussion of PhyNFP’s scalability and efficiency compared to existing methods would strengthen its contribution and highlight its practical advantages. Essential References Not Discussed: Most related works are included.
Other Strengths And Weaknesses: **Strengths** - PhyNFP combines discretized difference matrices with physical constraints. **Weakness** - The need to design specific message-passing mechanisms tailored to different physical dynamics, based on their unique constraints, may reduce the model's generality. This limitation could hinder PhyNFP's applicability to broader or more diverse physical systems without significant customization. Other Comments Or Suggestions: N/A Questions For Authors: 1. How does PhyNFP, trained on a known physical system, perform on similar physical systems with different physical constraints? What is the transferability or generalization of this model in this situation? 2. Can you give detailed dataset descriptions such as the number of training sequences and testing sequences? 3. Can you provide detailed implementation of PhyNFP and baseline models? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions. We address them in a Q\&A format as follows. Q1: How does PhyNFP perform on similar physical systems with different constraints? A1: The PDEs adopted by PhyNFP (Saint-Venant (S-V) for hydrodynamics and Aw-Rascle (A-R) for traffic flow) are basic and general formulations applicable to diverse scenarios within river and traffic networks. For instance, the gravitational term in S-V equations, crucial for elevation variations, can be naturally attenuated for flat rivers by adjusting learned weights during training. This enables PhyNFP to adapt across varying physical constraints without structural changes. Moreover, our ablation study shows that PhyNFP is robust even without explicit PDE constraints, which confirms its generalizability by using the difference operators alone. Q2: Can you give detailed dataset descriptions such as the number of training sequences and testing sequences? A2: We have provided dataset descriptions in Section 4 (Datasets, page 5). In particular, for the river dataset derived from LamaH-CE, we used data from the period 2000–2017, where the data from the years of 2016 and 2017 were used as the test set, and the remaining years were used for training. We followed [1] to make this train/test split. In our revised manuscript and forthcoming code repository, we will include all dataset statistics and detailed train/test splits for all datasets used. Q3: Can you provide detailed implementation of PhyNFP and baseline models? A3: The implementation specifics of PhyNFP are presented in Section 3.3, including the construction of difference matrices $D_1$ and $D_2$ and the PDE integration mechanism. In our revised manuscript and subsequent code release, we will further provide complete architectural descriptions, hyperparameter settings, and detailed training settings. 
The baseline models used (GraphSAGE, GCN, GAT, GWN, MP-PDE Solver, MPNN, and GNO) are introduced in Section 4 (Competitors, page 6). Q4: Comparison with Mesh-Based Benchmarks A4: We clarify key differences between mesh-based methods and our graph-based approach: 1) Applicability to sparse, irregular networks. Mesh-based PDE methods require structured grids with regular spacing and clear geometric coordinates to ensure numerical accuracy. This does not match our datasets, which are sparse and irregular (such as river and traffic networks). In contrast, PhyNFP uses topology-based difference matrices that work well on irregular graphs and naturally capture directional flow. 2) Modeling topological structure. Mesh-based methods focus mostly on spatial resolution, but they do not model topological effects like upstream-downstream structure. Our method is designed to capture these topological patterns, which are important for real-world networks. This kind of topological sensitivity is hard to achieve using standard mesh-based solvers. Q5: Efficiency / Scalability A5: Although computational efficiency is not our core research contribution, we add an experiment below to show that the runtime of our method exhibits sub-linear scaling with larger graph size. As shown in the table below, when the number of nodes increases from 31 to 358 (more than 10×), the runtime per epoch only doubles, demonstrating that computational complexity grows significantly more slowly than the graph size.

| Number of Nodes | Average Runtime per Epoch (s) |
|-----------------|-------------------------------|
| 9 | 57.4 |
| 27 | 61.5 |
| 31 | 62.2 |
| 358 | 129.2 |

Table: Average Runtime per Epoch for Different Graph Structures

Reference: [1] Nikolas Kirschstein and Yixuan Sun, The Merit of River Network Topology for Neural Flood Forecasting, ICML 2024.
Summary: The paper proposes a PhyNFP framework that aims to improve GNNs for modeling flow dynamics in directed graphs. The main hypothesis of the paper is that the directional insensitivity of traditional GNNs and their inability to capture high-frequency components arise because GNNs inherently smooth out directional variations. This makes GNNs struggle to distinguish forward and reverse flows. PhyNFP integrates both explicit difference matrices and global physical constraints to overcome these challenges, where the former encodes local directional dependencies and the latter enforces consistency with natural laws. The framework is validated on two real-world directed graph datasets, including a water flux network and an urban traffic flow network, and the empirical results demonstrate its superior performance over standard GNNs and PDE-based solvers. Claims And Evidence: The paper makes three key theoretical claims. First, it asserts that the standard message-passing mechanism in GNNs acts as a low-pass filter, suppressing high-frequency components that are crucial for capturing sharp transitions and localized changes in directed flow dynamics. The paper investigates and supports this claim by formulating an inverse problem, where the task is to predict upstream conditions based on downstream observations. The rationale lies in the fact that this inverse setup presents an ill-posed learning setting, amplifying high-frequency components, which traditional GNNs fail to retain. Second, the proposed PhyNFP introduces discretized difference matrices (DDMs), which approximate spatial gradients and preserve high-frequency information. These matrices modify the adjacency structure of the graph, ensuring that message passing retains fine-grained directional details. 
The paper validates the effectiveness of discretized difference matrices through theoretical analysis. Using the discrete-time Fourier transform, the authors show that the frequency response of the difference matrix operator is $I + \alpha D$, which demonstrates that high-frequency components are selectively preserved, unlike conventional adjacency-based smoothing, which diminishes them. In the empirical study, the authors define Direction Sensitivity (DS) and RDS as the difference metrics in prediction error between the original graph (Forward Flow) and a graph with reversed edge directions (Reverse Flow). The higher DS scores of PhyNFP compared to all baseline models indicate that PhyNFP distinguishes between forward and reverse flows more effectively than standard GNNs, validating that DDMs capture directional dependencies. Third, PhyNFP integrates physical law constraints directly into the GNN training process, such as conservation of momentum (Saint-Venant equations for river networks) and mass conservation (Aw-Rascle equations for traffic networks). This ensures that predictions remain physically consistent, reducing the reliance on purely data-driven patterns and reinforcing the structural priors inherent in real-world flow systems. This claim is validated through various aspects. 1) In theory, the paper formulates domain-specific physical constraints using governing equations (SV or AR), which are both discretized and incorporated as regularization terms into the GNN loss function. 2) In experiment, PhyNFP is compared against purely data-driven GNNs (GCN, GraphSAGE, GAT, GWN, MPNN) and graph-based PDE solvers (MP-PDE Solver, GNO), and the results show that PhyNFP consistently outperforms both categories, confirming that physics constraints enhance model accuracy beyond what data-driven learning can achieve alone. 3) In an ablation study, the paper evaluates how error grows as the prediction horizon (lead time) increases.
While all models exhibit increasing error over longer horizons, PhyNFP grows its error at a significantly lower rate than baselines, indicating that physics constraints improve long-term stability. Methods And Evaluation Criteria: The proposed method is technically sound, with clear notation following domain conventions and well-defined optimization objectives. The use of discretized difference matrices to construct new adjacency operators is novel and aligns with numerical methods for hyperbolic PDEs, ensuring that information flow follows physically meaningful gradients. The evaluation metrics are appropriate, with MSE as the primary metric for predictive accuracy and DS and RDS for assessing directional awareness. The inclusion of baseline comparisons, ablation studies, and robustness evaluations strengthens the validity of the results. Theoretical Claims: The paper makes three key theoretical claims; these claims and the evidence supporting them are as summarized under Claims And Evidence above. Experimental Designs Or Analyses: The experiments are set up in standard and structured ways. Two real directed graphs are employed as benchmarks, which grounds the framework in real data, in contrast to PINN studies that rely on simulation only. The proposed PhyNFP is compared against traditional GNN models as well as graph-based PDE solvers. Supervised node regression to predict flux volume at a future time step is used as the benchmark task, and model accuracy is assessed using MSE. A crucial aspect of the experimental setup is the direction sensitivity analysis. The authors evaluate models in both the original graph setting (Forward Flow) and an inverse setting (Reverse Flow), using DS and RDS to quantify how much the prediction error changes. The positive results of PhyNFP demonstrate its effectiveness and verify the tightness of its technical findings. Supplementary Material: Yes, I read the supplementary material and can attest that it provides additional theoretical justifications, including a Fourier transform analysis of the difference matrix operator, which demonstrates its ability to enhance high-frequency components. It also presents an extended discussion on hyperbolic PDEs and reverse characteristic tracing, illustrating how reversing edge directions in a graph introduces instability and noise amplification, which completes a key motivation for the proposed approach. Relation To Broader Scientific Literature: The paper is likely to reach a broader audience by situating its contributions within GNNs for physical systems. It discusses graph-based PDE solvers, physics-informed neural networks, and spatio-temporal GNNs for flood and traffic prediction.
However, it could provide a stronger comparison with recent works on hybrid GNN-PDE models, such as Neural Operators and Physics-Guided Neural Networks. Essential References Not Discussed: I find the following papers relevant, and they should not be missing: [1] Li, Zongyi, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. "Neural operator: Graph kernel network for partial differential equations." arXiv preprint arXiv:2003.03485 (2020). [2] Karniadakis, George Em, Ioannis G. Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. "Physics-informed machine learning." Nature Reviews Physics 3, no. 6 (2021): 422-440. [3] Dong, Yushun, Kaize Ding, Brian Jalaian, Shuiwang Ji, and Jundong Li. "Adagnn: Graph neural networks with adaptive frequency response filter." In Proceedings of the 30th ACM international conference on information & knowledge management, pp. 392-401. 2021. Other Strengths And Weaknesses: A key strength of the paper is its well-motivated problem formulation and strong empirical validation. The integration of numerical methods (difference matrices) and physics principles is novel and well-executed. Additionally, the direction sensitivity evaluation is an important contribution to the study of GNNs for flow-based systems. However, the paper has a few weaknesses. First, the computational efficiency is not analyzed in detail, raising concerns about its applicability to large-scale graphs. Second, the interpretability of the learned embeddings is not discussed: how do the physics constraints influence the feature representations in the GNN layers? Finally, the framework is evaluated only on node regression tasks; it would be valuable to explore whether it can generalize to link prediction or spatio-temporal forecasting. Other Comments Or Suggestions: Provide a runtime and complexity analysis to evaluate the scalability of PhyNFP.
Conduct feature visualization to analyze how the difference matrices and PDE constraints influence learned representations. Explore applications beyond flux prediction, such as graph-based anomaly detection or turbulent flow modeling. Questions For Authors: 1. How does PhyNFP scale to large networks and what are its computational bottlenecks? 2. Can this framework be applied to link prediction tasks in dynamic graphs? 3. Is PhyNFP robust to incomplete or noisy data? How does it handle missing node attributes? 4. What does the "undirected" in Figure 1 mean and how was it implemented? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions. We address them in a Q&A format as follows. Q1: How does PhyNFP scale to large networks and what are its computational bottlenecks?

| Number of Nodes | Average Runtime per Epoch (s) |
|-----------------|-------------------------------|
| 9 | 57.4 |
| 27 | 61.5 |
| 31 | 62.2 |
| 358 | 129.2 |

Table: Average Runtime per Epoch for Different Graph Structures

A1: We add an experiment to show that the runtime of our method exhibits excellent sub-linear scaling with larger graph size. As shown in the table above, when the number of nodes increases from 31 to 358 (more than 10×), the runtime per epoch only doubles, demonstrating that computational complexity grows significantly more slowly than the graph size. Q2: Can this framework be applied to link prediction tasks in dynamic graphs? A2: Our framework is designed for flux prediction in physical systems, focusing on modeling directional flows governed by physical laws. Link prediction in dynamic graphs is a structurally different task, aiming to infer future or missing edges. Such problems are rarely encountered in our physical settings. We defer the interesting setup of adapting our method for dynamic link prediction to future work. Q3: Is PhyNFP robust to incomplete or noisy data? How does it handle missing node attributes? A3: Yes, our model is robust to both incomplete and noisy data. Our datasets include real-world measurement data, which naturally contain both noisy and missing entries. Our method can mitigate the effect of noise through its physics-guided inductive bias, which acts as implicit regularization to enhance robustness. For missing node attributes, we adopt the preprocessing step used in [1], where nodes with missing features are excluded from the network. Q4: What does the "undirected" in Figure 1 mean and how was it implemented?
A4: In the “undirected” setting we remove edge directions by symmetrizing the adjacency matrix (i.e., using $A_{\text{undirected}} = A + A^\top$), so each node aggregates messages from its upstream and downstream neighbors. In this setting, the model does not differentiate between upstream and downstream connections. In contrast, the forward setting aggregates only from upstream neighbors, and the reverse setting flips all edge directions, so aggregation only considers those originally downstream nodes. [1] Nikolas Kirschstein and Yixuan Sun, The Merit of River Network Topology for Neural Flood Forecasting, ICML 2024.
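A toy NumPy illustration of the three settings described in A4 above (the edge-direction convention $A_{ij}=1$ for an edge $i \to j$ and the example signal are assumptions for this sketch):

```python
import numpy as np

# Directed path graph 0 -> 1 -> 2, i.e. flow runs downstream from node 0.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)   # A[i, j] = 1 iff edge i -> j

x = np.array([3.0, 1.0, 0.0])            # node signal (e.g. flux volume)

forward = A.T @ x           # forward: aggregate from upstream neighbors only
reverse = A @ x             # reverse: flipped edges, aggregate from downstream
undirected = (A + A.T) @ x  # undirected: symmetrized adjacency, both directions
```

Only the undirected variant mixes upstream and downstream information, so a model built on it cannot tell which neighbor is the source of the flow.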
Summary: This paper addresses the challenge of preserving high-frequency components in Graph Neural Networks (GNNs) when applied to directed graphs, which is crucial for accurately modeling flow dynamics. Traditional GNNs often fail to distinguish between forward and reverse graph topologies, leading to information loss. To overcome this, the authors propose a framework that integrates explicit difference matrices for modeling directional gradients and implicit physical constraints to ensure message passing aligns with natural laws. Experiments on real-world datasets, including water flux and urban traffic networks, demonstrate the effectiveness of the proposed approach. Claims And Evidence: Not really. The proposed method's improvement is not as significant as the authors claim. For instance, its performance on the reverse flow is worse than that of some baselines. Additionally, the reported 4.9% performance gain on the river dataset is misleading, as it is calculated by averaging the forward MSE error across all methods rather than making a fair one-to-one comparison. When compared to the second-best model, the actual improvement is only around 0.5%, which is minimal. Furthermore, since the experiments were conducted using only a single seed, this minor performance gain may not hold when averaged over multiple runs. Methods And Evaluation Criteria: The evaluation criteria seem problematic. The authors define the direction sensitivity of a model $M$ as $DS(M) = \ell_M(\text{Reverse}) - \ell_M(\text{Forward})$, where $\ell_M$ denotes the MSE loss of $M$. According to the paper, a higher $DS$ value indicates a better model, but this is misleading. A model with poor performance on $\ell_M(\text{Reverse})$ (i.e., high MSE error) would naturally have a larger $DS$, which does not necessarily reflect improved direction sensitivity. Theoretical Claims: It is unclear how Eq. (12) and Eq. (13) are derived.
Experimental Designs Or Analyses: As mentioned earlier, the way the authors compare the performance of the proposed method is unfair. The metric is also problematic and does not seem able to properly justify or measure direction sensitivity. Moreover, the proposed method has a worse reverse MSE on the traffic network dataset, which is not discussed in the paper. More detailed questions are listed below. Supplementary Material: Yes, reviewed all supplementary materials. Relation To Broader Scientific Literature: The paper enhances GNNs for directed graphs by preserving high-frequency components crucial for flow dynamics modeling. It builds on spectral graph theory and physics-informed learning, introducing directional gradients and physical constraints. Essential References Not Discussed: The authors provide a relatively comprehensive discussion of related work. However, methods like PINN-GNN approaches could be included. This is because the selected graph learning models for physical systems do not explicitly incorporate physical laws. Research on enforcing physical constraints through loss functions (for example) is relevant to the problem and should be considered for comparison with the proposed method. Other Strengths And Weaknesses: The idea of modifying the adjacency matrix and update function to incorporate physical constraints is intriguing. However, the results show that the proposed method does not consistently outperform other approaches on certain datasets. The authors do not provide clear explanations for this, and the proposed evaluation metrics appear problematic in accurately assessing the method's performance. Moreover, it is questionable how generalizable this approach is to other systems that cannot be discretized in a similar format. Other Comments Or Suggestions: Please refer to the question section. Questions For Authors: 1.
It is unclear why, in the reverse problem, small numerical errors in the inference process propagate and amplify, leading to instability and sensitivity in the reconstructed upstream conditions. Could the authors clarify why this issue occurs specifically in the reverse problem and whether it could also be a concern for the forward problem? 2. The generalizability of this approach to systems that cannot be discretized in a similar format is questionable. Could the authors discuss its applicability to broader cases? 3. The derivation of Eq. (12) and Eq. (13) from Eq. (8) and Eq. (11) is unclear. For instance, what happens to the term $\rho^n$ in Eq. (13)? Additional details on the derivation would be helpful. 4. The intuition behind Eq. (4) is not well explained. It seems that $D_1$ and $D_2$ are introduced primarily to facilitate the derivation of Eq. (12) and Eq. (13). Could the authors clarify their role? 5. The text immediately following Eq. (11) does not seem to align with the equation. 6. The authors mention that $\Delta t$ and $\hat g$ are learnable scalars. How is the accuracy or reliability of these learned terms verified? 7. The authors state *"By making $\Delta t$ learnable, GNNs can adjust their sensitivity to real-time traffic conditions, providing a physics-aware approach to traffic prediction"*. Could the authors provide the actual learned values of $\Delta t$? Do these values make sense, and do they adapt to different conditions as expected? Additionally, how would the performance compare if a fixed $\Delta t$ were used instead? 8. How does the model handle input features with 24 time steps? Is it through concatenation or another method? 9. Regarding the concerns raised about the $DS$ metric, could the authors clarify whether there is any misunderstanding? 10. In Figure 2(c), the method is compared with ResGCN, which was not included in previous baseline comparisons.
Additionally, since GCN performs the worst in the forward model, why not compare it with the second-best model (e.g., GWN)? Furthermore, how does the performance differ for evaluation in RQ4 under the traffic network setting? Also, what happens when nodes are perturbed in the reverse model? 11. Can the authors explain why the proposed method underperforms in the reverse model on the traffic network dataset? 12. The paper lacks sufficient details on the model architecture and training settings, making reproduction difficult. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you, and we address your questions in a Q\&A fashion as follows. Q1: Why does the reverse problem amplify numerical errors? Is it a concern for the forward problem? A1: Solving inverse problems is ill-posed, as many different upstream conditions can lead to the same downstream fluxes. When inferring upstream states from downstream inputs, the mapping becomes one-to-many and numerically unstable; small noise at downstream nodes can cause large errors upstream. In our inverse experiment (Q9), we observe that GCN suppresses such noise entirely, showing no upstream variation. This supports our hypothesis that GCN cannot distinguish forward from reverse settings and lacks sensitivity to edge direction. Further details are provided in our response to Q9. This is not a concern in the forward setting because the PDE describing the physical structure can enforce a stable and one-to-one mapping from upstream to downstream nodes. Q2: Can the method generalize to broader systems that cannot be discretized? A2: Our discretization does not rely on any specific PDE; rather, it is based on the topological structure describing the system. In fact, we can construct multiple difference operators [1] from the graph through its spatial or functional adjacency, and our difference matrix derived from local spatial variations and directional flow is one possible instantiation. Q3: How were Eqs. (12) and (13) derived from Eqs. (8) and (11)? A3: We will supplement a step-by-step derivation, as presented in (https://anonymous.4open.science/r/PhyNFP-D88F/Q3.pdf). The main idea is to use independent MLPs that take raw node inputs to learn representations of physical quantities. Difference matrices $D_1$ and $D_2$ are applied to guide the representation learning, so as to align with the original PDE. The intuitions behind $D_1$ and $D_2$ are in our response to Q4. Q4: Are $D_1$ and $D_2$ introduced just to enable the derivation of Eqs. (12) and (13)?
A4: $D_1$ and $D_2$ are not just for the derivation; they capture key physical quantities. $D_1$ models horizontal gradients on the graph, reflecting convection and spatial variation. $D_2$ captures vertical gradients from elevation, tied to gravity-driven flow. These matrices allow our framework to generalize to other PDE systems involving similar terms. Q5: The text following Eq. (11) does not match the equation. A5: We will fix this typo, as also suggested by Reviewer RhQb. Q6: Why is $\Delta t$ (and $\hat{g}$) set as a learnable scalar rather than fixed? A6: The magnitude of $\Delta t$ determines whether a message-passing layer updates the node embeddings by reusing the output from the previous layer (small $\Delta t$) or by incorporating the information for updating the physical status of the PDE (large $\Delta t$). For example, a large $\Delta t$ in Eq. (12) means that the embedding update will mostly rely on the convection and gravity terms derived from Eq. (6). By learning it from data, we make $\Delta t$ adaptable to real conditions. We validate this through two experiments: 1) A larger $\Delta t$ improves $DS$, as shown in Table 1 in the anonymous link (https://anonymous.4open.science/r/PhyNFP-D88F/README.md), although inverse problems typically prefer smaller steps [2]. With fixed layers, a small $\Delta t$ limits each update. A learnable $\Delta t$ adapts to network depth without this concern. 2) Figure 2 in the link plots the variation of $\Delta t$ w.r.t. the number of epochs, where it converges to small values in the reverse setting. Q7: How are input features with 24 time steps handled? A7: Concatenation. Please refer to our response to Q1 of Reviewer RhQb. Q8: Is there a misunderstanding in $DS$? A8: Indeed, a high reverse MSE can produce a high $DS$ value. However, our model enjoys the highest DS = +.0105 by attaining the lowest reverse MSE = .0906. The positive and high $DS$ value indicates better distinguishability of edge directions. Q9: 1) Why is ResGCN not in Table 1?
2) How do perturbations reflect in the reverse setting? A9: The answer to 1) is in Q3 of Reviewer RhQb. To answer 2), we add a local perturbation in the reverse setting (and the Traffic Network), as shown in Fig. 1 (and Fig. 3) in the anonymous link. Q10: Why the performance downgrade on the traffic network? A10: It is mainly due to cycles in the traffic network. Using Johnson's algorithm, we find 80 cycles in it, while the river network has zero. These cycles allow messages to propagate in both directions, making it more challenging for models to distinguish forward and reverse flows. Q11: Details on the model architecture and training settings? A11: We use 19 layers based on the graph diameter in our dataset. In experiments, we observe that our implementation is robust across different numbers of layers. We will publish our code with all hyperparameter settings for reproducibility. [1] Grady, et al. Discrete calculus: Applied analysis on graphs for computational science[M]. London: Springer, 2010. [2] Bertero, et al. The stability of inverse problems[M]. Springer Berlin Heidelberg, 1980.
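A10's cycle argument is easy to check in code. Below is a minimal DFS-based sketch; the rebuttal enumerated cycles with Johnson's algorithm, but a simple three-color DFS suffices to detect whether any cycle exists, and the toy graphs are our own illustration (a tree-like river vs. a looped road network).

```python
def has_cycle(adj):
    """DFS-based detection of directed cycles.
    adj: dict mapping each node to a list of downstream neighbors."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}

    def visit(v):
        color[v] = GRAY                          # v is on the current DFS path
        for w in adj.get(v, []):
            if color.get(w, WHITE) == GRAY:      # back edge -> cycle found
                return True
            if color.get(w, WHITE) == WHITE and visit(w):
                return True
        color[v] = BLACK                         # fully explored
        return False

    return any(color[v] == WHITE and visit(v) for v in adj)

river   = {0: [1], 1: [2], 2: []}    # tree-like: no cycles
traffic = {0: [1], 1: [2], 2: [0]}   # a loop of roads
assert not has_cycle(river)
assert has_cycle(traffic)
```

In a cyclic graph, messages can reach a node from both "directions," which is consistent with the rebuttal's explanation of why forward and reverse flows are harder to distinguish there.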
Summary: The authors proposed PhyNFP, a topology-aware neural flux prediction framework that integrates GNNs with physical principles to improve flow dynamics modeling in directed graphs. Traditional GNNs struggle with directional sensitivity and high-frequency information loss due to their inherent low-pass filtering nature. PhyNFP addresses this limitation by incorporating difference matrices that encode local directional dependencies and global physical constraints that ensure predictions adhere to real-world physics. The authors evaluate PhyNFP on two real-world datasets, where it outperforms traditional GNNs and PDE-based solvers in terms of both accuracy and directional sensitivity. Claims And Evidence: The claims are legitimate and supported with sufficient empirical evidence. Methods And Evaluation Criteria: The proposed method is rigorous yet straightforward, which is good. The presentation makes it easy to follow. The evaluation strategy is standard and thorough. Theoretical Claims: I note two main claims made in this paper: 1) The authors argue that GNNs act as low-pass filters, suppressing high-frequency components critical for flow directionality. This makes them insensitive to forward vs. reverse flows. The claim is supported by Fourier analysis (in the appendix), showing that message passing smooths signals, and an inverse problem formulation, which demonstrates that predicting upstream flux from downstream conditions is ill-posed. The DS and RDS metrics further confirm that standard GNNs fail to capture directionality, while PhyNFP excels. 2) The authors claim that incorporating physical constraints improves model accuracy and stability by embedding Saint-Venant (hydrology) and Aw-Rascle (traffic) equations into the loss function. The theoretical formulation ensures physics-consistent learning, while ablation studies confirm that removing constraints significantly degrades accuracy.
Also, long-horizon experiments show that PhyNFP accumulates less error over time, proving its stability advantage. I find these claims legitimate and likely to contribute to the advancement of physics-informed GNNs for flow-based systems. Experimental Designs Or Analyses: The paper employs a structured and rigorous experimental setup, using two real-world directed graph datasets. PhyNFP is compared against standard GNNs and PDE-based solvers. The competitors chosen represent the state of the art. Standard metrics like MSE are used, as they are widely employed in other node-level regression tasks. Ablation studies confirm that both difference matrices and physics constraints are essential, while removing them leads to higher errors and reduced stability. Long-horizon prediction tests show PhyNFP accumulates less error over time. Supplementary Material: Yes, I confirm its theoretical justifications are supportive. Relation To Broader Scientific Literature: The paper situates its contributions within physics-informed machine learning and graph learning, drawing connections to spatio-temporal GNNs and numerical solvers for flow dynamics. It highlights the limitations of standard GNNs in handling directional dependencies and emphasizes the need for physics-guided regularization. The paper is likely to attract audiences from a diversity of backgrounds. Essential References Not Discussed: I do not find anything significant missing. Other Strengths And Weaknesses: + The study is serious, and its proposed framework is straightforward but can be generalized to various related domains like power grids, epidemiology, and financial transaction networks where edge directions are critical. + The design of using difference matrices and combining them with PDEs to design new message-passing layers is novel and easy to implement. I am in favor of these explicit and deterministic designs over implicit regularizations.
+ The analysis in the appendix follows the standard steps in graph spectral analysis, and the results support its legitimacy. + The evaluation on real datasets rather than those from simulators is extensive and positive. I appreciate more such PINN research being evaluated on real datasets. Other Comments Or Suggestions: Please find my comments above. Questions For Authors: - How exactly was the flux prediction or regression task modeled? Is it autoregressive over the past 24 time steps? If so, what is the base learner? - I believe that the notations in Eq. (11) and their elaborations do not match. Please make them consistent and explain which is which. - What is the main finding of Fig. 2? Why is ResGCN good/bad given its reaction to the "change" (what is this change?) in $v_1$? Why is ResGCN not included as one of the GNN competitors? - How robust is PhyNFP to data incompleteness? Given the two datasets were collected through observatories, there may be missing entries -- how were those handled? - What do you mean by forward vs reverse vs undirected? Ethical Review Concerns: None noted. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions. We address them in a Q&A format as follows. Q1: How was the flux prediction task modeled? Is it autoregressive over the past 24 time steps? A1: The flux prediction task is formulated as a supervised node regression problem rather than an autoregressive one. For each node, a fixed-length temporal window of the past 24 time steps (\(W=24\)) is concatenated into one feature vector, following prior research~[1]. This vector is used to predict the future flux \(y\) at time \(t+6\). The base learner is a graph neural network that integrates discretized difference matrices and physical priors into its message-passing process. Q2: Notation inconsistency in Eq. (11) and its elaboration. A2: This is a typo, and we will make the revision in our camera-ready as follows: $$ \rho_i^{t+1} = \rho_i^t - \alpha \left( \rho_i^t (\hat{D} u^t)_i + u_i^t (\hat{D} \rho^t)_i \right), $$ where $(\hat{D}u^t)_i$ represents velocity differences, and $(\hat{D}\rho^t)_i$ encodes density-driven effects. These terms approximate the spatial derivatives of velocity and density, respectively, using a difference operator $\hat{D}$. The scalar factor $\alpha$ is defined as $\Delta t / \Delta x$. Q3: What is the main finding of Fig. 2? Why is ResGCN good/bad given its reaction to the "change" (what is this change?) in $v_1$? Why is ResGCN not included as one of the GNN competitors? A3: The "change" refers to a manually injected spike in the input flux at node $v_1$, simulating a sudden perturbation in the river network. Fig. 2 examines how this perturbation propagates downstream through the model predictions. The full analysis of the main finding in Fig. 2 is provided in RQ4 of the main text. We will further clarify this in the final version. The result of ResGCN in Figure 2 is poor compared to our method because it fails to capture the correct flow dynamics.
Specifically, when the perturbation is applied to a node, it should cause an increase in its downstream flux, while GCN shows almost no change in downstream nodes. Although ResGCN propagates this perturbation signal to some extent, it produces incorrect trends; for example, predicting a decrease at node \(v_2\) where an increase is expected. We select GCN due to its wider application over its residual variant. In fact, ResGCN exhibits similar performance to standard GCN with shallow layers. Even after careful tuning (e.g., adding more layers), ResGCN only performs on a par with GWN by having MSE (F) = .1114, MSE (R) = .1139, and RDS = -76.2\%. We shall include those results in our camera ready. Q4: How robust is PhyNFP to data incompleteness and how are the missing entries handled? A4: Our datasets include real-world measurement data, which naturally contain both noisy and missing entries. Our method can mitigate the effect of noise through its physics-guided inductive bias, which acts as implicit regularization to enhance robustness. For missing node attributes, we adopt the preprocessing step used in~[1], where nodes with missing features are excluded from the network. Q5: What are the settings of forward vs reverse vs undirected? A5: In our work, these terms describe different ways of using edge directions during graph construction and message passing. In the forward setting, we use the original directed graph where edges follow the real physical flow (i.e., from upstream to downstream). This reflects the correct direction of information propagation. In the reverse setting, all edge directions are flipped (i.e., the adjacency matrix is transposed), so information flows in the opposite direction (i.e., from downstream to upstream), which violates the physical rules. In the undirected setting, edge directions are ignored. Each node exchanges messages with both upstream and downstream neighbors, which is common in standard GNNs. 
Reference: [1] Nikolas Kirschstein and Yixuan Sun, The Merit of River Network Topology for Neural Flood Forecasting, ICML 2024.
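The corrected update rule given in A2 above can be sketched numerically. The forward-difference operator and the toy values below are our own illustration of one discrete step on a simple 3-node chain, assuming a basic finite-difference form of $\hat{D}$.

```python
import numpy as np

def density_update(rho, u, D_hat, alpha):
    """One discrete step of the corrected Eq.(11):
    rho_i^{t+1} = rho_i^t - alpha * (rho_i * (D u)_i + u_i * (D rho)_i),
    a discretized continuity-style equation on the graph,
    with alpha = dt / dx."""
    return rho - alpha * (rho * (D_hat @ u) + u * (D_hat @ rho))

# Toy 3-node chain with a forward-difference operator as D_hat.
D_hat = np.array([[-1.0,  1.0, 0.0],
                  [ 0.0, -1.0, 1.0],
                  [ 0.0,  0.0, 0.0]])
rho = np.array([1.0, 0.8, 0.6])   # node densities at time t
u   = np.array([0.5, 0.5, 0.5])   # uniform node velocities at time t
rho_next = density_update(rho, u, D_hat, alpha=0.1)
# With uniform u, (D u) vanishes and only the u * (D rho) term acts:
# rho_next = [1.01, 0.81, 0.6]
```

With a uniform velocity field, the velocity-difference term vanishes and the update is driven entirely by the density gradient, which makes the roles of the two terms easy to see.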
MPO: An Efficient Post-Processing Framework for Mixing Diverse Preference Alignment
Accept (poster)
Summary: This paper proposes MPO, an efficient post-processing framework for mixing diverse preference alignment. The authors use batch stochastic mirror descent to find the optimal combination coefficients for output combination. ## update after rebuttal Most of the concerns are resolved, so the reviewer raises the score to weak accept. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The reviewer assumes all proofs are correct. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: This paper proposes a post-processing method for diverse preference optimization. From the reviewer's perspective, it is novel compared to previous scientific literature. Essential References Not Discussed: None. Other Strengths And Weaknesses: - Strengths - The proposed method is training-free. Theoretically, the authors can combine several existing well-aligned LLMs to get the desired LLM without any costly training process. - The authors provide theoretical guarantees for the proposed method. - Weaknesses - Lack of comprehensive evaluation of the proposed method. - The reviewer doubts the feasibility of serving multiple LLMs simultaneously in practice. Other Comments Or Suggestions: None. Questions For Authors: - According to Algorithm 1, do the authors simply combine the output logits of different LLMs? From Figure 1, the reviewer thinks the authors conduct something like model merging. - The reviewer thinks the experimental evaluation of the method is not sufficient. - In Table 1, do the authors adopt $\pi_{helpful/harmless/humorous}$ provided by others or train them themselves? If the authors trained these models themselves, the reviewer thinks they should add comparison results with models provided by others. For example, a comparison with PKU-SafeRLHF [1] in terms of helpfulness and harmlessness.
- If we consider the win rate of the reference model as 50%, the aligned model is only marginally better than the baseline. Are such results reasonable? - Following the above point, in Table 1, a large $\beta$ leads to better results; this is quite uncommon because general alignment algorithms like PPO usually use a small constraint. Can we interpret a large $\beta$ as representing a model similar to the original one? If so, what is the role of the alignment method during the process? - In practice, the reviewer thinks it is quite difficult to serve multiple LLMs at the same time, especially when the models are large. If the authors adopt a model merging method, this should not be a problem. [1] https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your constructive and insightful feedback! Here we provide a detailed response to address all of your concerns below. > Confusion of Algorithm 1 and Figure 1. Thank you for the question and apologies for any confusion. As shown in Thm 3.4, we have $$\pi^*(y|x) \propto \prod_{k=1}^K\left(\pi_k(y|x)\right)^{\lambda_k^*},$$ which can be seen as a logit-level aggregation of multiple LLMs. However, our main goal is to compute $\lambda^*$ that maximizes the minimum reward, which balances the trade-offs among different objectives. Based on your comment, we will revise Figure 1 to better highlight the role of $\lambda$ in shaping the final policy. > Evaluation of $\pi_{helpful/harmless/humorous}$ in Table 1. Thank you for your feedback. 1. $\pi_{helpful/harmless/humorous}$ in Table 1 were trained by ourselves, and the results demonstrate a significant surplus in the corresponding rewards—indicating the effectiveness of our single-objective policies. 2. However, our primary goal is to balance multiple objectives rather than to optimize individual objectives. Given the differences in training data, we believe that direct comparisons of single-objective policies may not be entirely fair. 3. Nonetheless, we have added further comparisons between the PKU-SafeRLHF model and MPO under $\beta=0.1$ in Eq. (10). As shown in Figure 4, our $\pi_{MPO}$ mostly relies on $\pi_{helpful}$ and $\pi_{harmless}$, which aligns closely with the objectives considered in PKU-SafeRLHF. The results below show that MPO still achieves the highest minimum win rate. Table: Win rate (%) against the Reference Model | Model | Helpful | Harmless | Humorous | Min | |-|:-:|:-:|:-:|:-:| | $\pi_{PKU}$ | 53.1 | 40.8 | 56.1 | 40.8 | | $\pi_{MPO}$ | 46.3 | 53.1 | 54.1 | $\color{red}{46.3}$ | 4.
Moreover, the normalized reward for $\pi_{MPO}$ and $\pi_{PKU}$ are listed below: |Model | $r_{helpful}$ | $r_{harmless}$ | $r_{humorous}$ | |-:|:-:|:-:|:-:| | $\pi_{MPO}$ | -0.176 | 0.564 | 0.104 | | $\pi_{PKU}$ | 0.150 | -0.05 | 0.150 | Here, larger normalized rewards indicate better alignment with the corresponding objective. These results indicate that the PKU-SafeRLHF model places a greater emphasis on helpfulness, which is likely due to its constrained optimization loss. We will add more discussion with PKU-SafeRLHF in the revision. > Aligned model is only marginally better than the baseline. Are such results reasonable? Thank you for the question. 1. Table 1 demonstrates MPO's optimality among all methods under the max-min setting, with the aligned model outperforming the baseline on all individual objectives. 2. The improvement appears only marginal because of the inherent conflict between objectives such as helpfulness and harmlessness. Multi-objective alignment tasks involve complex trade-offs, and simultaneously improving all conflicting objectives is nearly impossible [1]. This challenge is precisely why we consider the max-min setting in our work. References: [1] Rame, A, et al. Rewarded soups: towards pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. NeurIPS 2023. > What is the role of the $\beta$ during the alignment process? Thank you for your question. 1. While a larger $\beta$ encourages policy to stay close to the reference model, it plays a critical role in balancing conflicting objectives in our multi-objective setting. Stronger regularization helps stabilize aggregation and maintain desirable baseline behaviors while optimizing the minimum reward. 2. $\beta$ is a tunable hyperparameter. As shown in Table 1, $\beta=0.5$ outperforms $\beta=0.1$ and $\beta=\infty$. In particular, the case of $\beta=\infty$ collapsed to the reference policy with a 50% win rate. 
This highlights the importance of tuning $\beta$ for optimal performance. > In practice, the reviewer thinks it is quite difficult for us to serve multiple LLMs at the same time, especially when the model is large. If the authors adopt the model merging method, this should not be a problem. 1. We understand the concern about serving multiple LLMs concurrently. However, the efficiency of MPO primarily comes from the training phase. In Section 4.2 of our experiments, training policies using PPO-based approaches (e.g., MaxMin-RLHF, MORLHF) requires approximately 10 A100 GPU hours, whereas MPO only requires around 2.5 A100 GPU hours, since it avoids the reinforcement learning step. 2. In the inference phase, instead of running full inference on all LLMs simultaneously, we can compute the single-objective policies' output logits in parallel, then aggregate the logits using the MPO framework. 3. Additionally, as shown in Table 1, our method achieves better alignment than parameter-merging baselines like Reward Soups, and future work will focus on further optimizing the inference pipelines to enhance the scalability of MPO in real-world applications.
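The logit-level aggregation discussed in this rebuttal ($\pi^*(y|x) \propto \prod_k \pi_k(y|x)^{\lambda_k}$ from Thm 3.4) can be sketched in a few lines. This is an illustrative toy over a 4-token vocabulary; the function name and the random policies are our own, not the authors' implementation.

```python
import numpy as np

def aggregate_policies(log_probs, lam):
    """Combine K single-objective next-token distributions as
    pi*(y|x) proportional to prod_k pi_k(y|x)^{lam_k}, stably in log space.
    log_probs: (K, V) per-policy log-probabilities over the vocabulary.
    lam: (K,) weights on the probability simplex."""
    combined = lam @ log_probs        # sum_k lam_k * log pi_k(y|x)
    combined -= combined.max()        # shift for numerical stability
    p = np.exp(combined)
    return p / p.sum()                # renormalize over the vocabulary

# Two toy 4-token policies built from random logits, then combined 50/50.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
pi_star = aggregate_policies(log_probs, lam=np.array([0.5, 0.5]))
assert abs(pi_star.sum() - 1.0) < 1e-9
```

As a sanity check, setting `lam = [1, 0]` recovers the first policy exactly, which matches the closed form collapsing to a single objective.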
Summary: This paper studies how to align diverse objectives of human preferences. The authors propose a post-processing approach that combines the optimal policies for each objective without requiring retraining from scratch. Moreover, the authors also study max-min RLHF and show that we can find the optimal policy by adjusting the weights using mirror descent. Experimental results are provided to support their theoretical findings. Claims And Evidence: Yes. The claims are correct, clear, and easy to follow. Methods And Evaluation Criteria: The authors evaluate their algorithms using a classical MORLHF dataset named HH-RLHF, which contains three objectives: Helpful, Harmless, and Humorous. Theoretical Claims: I checked the proof and it is correct to me. Experimental Designs Or Analyses: The experiment compares MPO with previous algorithms such as Reward Soups and Max-Min RLHF, along with baselines like the reference model and single-reward algorithms. My main concern is that the authors should include comparisons with more prior works, such as MOD (Shi et al., 2024), as well as a baseline algorithm that aggregates the rewards and trains the model directly on them. Additionally, since Algorithm 1 has learned the optimal weight $\lambda$, the authors could utilize this weight to implement RS instead of using a uniform weight $[1/3,1/3,1/3]$. Shi R, Chen Y, Hu Y, et al. Decoding-time language model alignment with multiple objectives. NeurIPS 2024. Supplementary Material: I read the proof part and the experiment details. Relation To Broader Scientific Literature: The key contribution of this paper is the proposed algorithm that enables language models to align with diverse objectives by leveraging the optimal policy for each objective, rather than requiring training from scratch. However, the novelty appears to be limited, as the main theorem (Theorem 3.4) has already been studied in Theorem 1 of (Shi et al., 2024).
Could the authors clarify the differences between their main theorem and the one in (Shi et al., 2024)? Additionally, the use of mirror descent for weight adjustment closely resembles the approach in (Ramesh et al., 2024). As a result, it remains unclear whether this paper offers a novel theoretical contribution. Shi R, Chen Y, Hu Y, et al. Decoding-time language model alignment with multiple objectives. NeurIPS 2024. Ramesh S S, Hu Y, Chaimalas I, et al. Group robust preference optimization in reward-free rlhf. NeurIPS 2024. Essential References Not Discussed: Two essential references that are not discussed are (Shi et al., 2024) and (Ramesh et al., 2024). The former presents a result similar to Theorem 3.4 in this paper, while the latter introduces a similar idea of adjusting weights using mirror descent to achieve max-min goal. Other Strengths And Weaknesses: The strengths and weaknesses are provided above. Other Comments Or Suggestions: The author should provide more theoretical comparisons (and if possible, empirical comparisons) between MPO and previous algorithms, and the novelty of MPO. Questions For Authors: 1. In Line 139, the author states that "Balancing multiple, often competing objectives leads to training instability, while the need to train multiple reward models and perform RL updates makes them computationally expensive." However, obtaining the optimal policy for each objective also requires training multiple reward models. Therefore, I do not think that the computational cost of MPO is lower than that of previous algorithms like Reward Soups. Could the authors provide further clarification on why MPO is not computationally expensive? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We greatly appreciate your constructive and insightful feedback! Here we provide a detailed response to address all of your concerns. > Differences between the main theorem and the one in Shi et al., 2024 Thank you for your feedback. Compared to Thm 1 in Shi et al., our approach differs in both objective and applicable setting, with some overlap in a special case: 1. Our Thm 3.4 addresses the max-min setting without explicit preference weights, whereas Shi et al. uses predefined weights. In the special case of linear reward aggregation, Lem 3.9 shows that the optimization leads to a closed-form solution, which indeed reaches the same conclusion as Shi et al. 2. We introduce an auxiliary normalizing operator for rewards, which is crucial for transforming the optimization over $\lambda$ into Eq. (12) (line 199). Without it, the reward function cannot be avoided in the optimization. 3. In terms of the derivation, Shi et al. uses a Legendre transform, converting the problem into $$\max_y \pi_{ref}(y|x) \text{~~such that~} r(y|x)>C,$$ where $C$ is unspecified. In contrast, our proof uses a direct reward–policy mapping, leading to the closed-form expression for $\pi^*$ directly, providing a more interpretable and transparent theoretical derivation. > The use of mirror descent for weight adjustment closely resembles the approach in Ramesh et al., 2024. Indeed, the high-level idea of Ramesh et al. is to perform robust alignment, which is similar to ours. However, we highlight several differences in terms of the resources required, applicable settings, and methodology developed. 1. Ramesh et al. aims to derive a group robust preference optimization objective and conduct robust alignment from scratch, which is computationally expensive and requires extensive hyperparameter tuning. In contrast, we can directly use existing single-objective policies, avoiding full retraining and significantly reducing computational cost. 2.
By reusing pretrained or open-source LLMs, we simplify robust training to only updating preference weights—a lightweight post-processing step that aligns better with practical academic and industry use cases. Ramesh et al.'s method is better suited for settings where LLMs must be trained from scratch. 3. Methodologically, Ramesh et al. uses a gradient descent-mirror ascent method to update the policy and the weight simultaneously, as its objective is to solve a min-max optimization. In addition, it has access to unbiased gradient estimators. In our case, since we do not train policies, weight optimization becomes a conditional stochastic problem without unbiased gradient estimators. To tackle this, we designed a biased mirror descent method and analyzed its convergence. > Could the authors provide further clarification on why MPO is not computationally expensive? Thanks for your question. 1. While standard RLHF requires training multiple reward models, this can be avoided by using DPO, which is mathematically equivalent and was adopted in our experiments for efficiency. 2. For $\dim(\lambda)=3$, training the policy via MORLHF (or MaxMin-RLHF) for a fixed $\lambda$ takes approximately 10 A100 GPU hours, whereas solving for $\lambda$ in MPO only takes 2.5 A100 GPU hours. 3. Unlike standard MORLHF, which re-runs PPO for different $\lambda$, MPO avoids this overhead by efficiently combining logits. While Reward Soups has a similar cost with predefined weights, MPO achieves better alignment (shown in Table 1), highlighting its effectiveness in balancing multiple objectives. > Comparisons with more prior works. Thank you for the suggestion; we have added further comparisons with prior works: 1. We compare our approach with MORLHF, which employs linearly aggregated rewards using PPO, as well as with Reward Soups that utilize learned weights.
As shown in Table 1, our MPO method still achieves the highest minimum win rate, where $\pi_{Weighted RS}$ corresponds to Reward Soups with learned weights.

| Model | Helpful | Harmless | Humorous | Min |
|-|:-:|:-:|:-:|:-:|
| | | $\beta = 0.1$ | | |
| $\pi_{RS}$ | 44.8 | 59.4 | 56.4 | 44.8 |
| $\pi_{Weighted RS}$ | 45.4 | 52.2 | 51.3 | 45.4 |
| $\pi_{MORLHF}$ | 42.9 | 56.7 | 54.5 | 42.9 |
| $\pi_{MPO}$ | 46.3 | 53.1 | 54.1 | $\color{red}{46.3}$ |
| | | $\beta = 0.5$ | | |
| $\pi_{RS}$ | 51.9 | 53.7 | 50.0 | 50.0 |
| $\pi_{Weighted RS}$ | 53.7 | 50.8 | 48.8 | 48.8 |
| $\pi_{MORLHF}$ | 41.7 | 54.4 | 52.9 | 41.7 |
| $\pi_{MPO}$ | 54.9 | 53.1 | 57.1 | $\color{red}{53.1}$ |

Table 1: Win rate (%) against the reference model.

2. While MOD employs a linear combination of logits, in Figure 2(b) we have evaluated $$\pi(y|x) \propto \pi_1(y|x)^{\lambda_1}\pi_2(y|x)^{1-\lambda_1}$$ with different $\lambda_1 \in \Lambda_{grid}= \\{0.0, 0.2, 0.4, 0.6, 0.8, 1.0\\}$. Our results show that the policy obtained via MPO achieves the best objective performance across this grid, outperforming MOD with predefined weights in the max-min setting.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. However, I still think this paper is very similar to the main theorem in [Shi et al. 2024], with a similar weight adjustment in [Ramesh et al. 2024]. Hence, the novelty seems limited. I will keep my score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer FxzW, Thank you for your follow-up comment. We appreciate your continued engagement with our work. However, we respectfully disagree with your assessment and would like to offer a clarification of the key differences between our approach and the references you mentioned.

First, the primary focus of our paper is on multi-objective alignment via a max-min formulation that does not require pre-specified preference weights. This focus is *very different* from that of [Shi et al. 2024], which assumes fixed weights as input.
Yet how to choose such a fixed weight can be highly nontrivial in practice. In contrast, our method is specifically designed to infer a robust solution without relying on explicit prior knowledge of user preferences.

Second, as noted in our rebuttal, we introduce an *auxiliary normalization operator* on the reward functions, which plays a critical role in enabling a closed-form expression for the optimization over $\lambda$ (see Eq. 12). This normalization step changes the analytical landscape and requires a different theoretical treatment than that in [Shi et al. 2024].

Regarding your comment on "a similar weight adjustment" in [Ramesh et al., 2024], we would like to emphasize that 1) from a computational perspective, our method can directly use existing policies and avoid policy retraining, and is thus much cheaper; 2) from a technical point of view, to achieve this computational reduction, we propose a novel *biased mirror descent* to update the weights, which makes it possible to keep the policy fixed. This is fundamentally different from the approach in [Ramesh et al. 2024], which employs a joint min-max optimization framework and thus involves both updating the weights and retraining the policy at each step. In particular, they have access to unbiased gradient estimators of the weight, but at a much higher computational cost. Our biased optimization method is tailored to our setting to avoid updating the policy and to keep computational costs small. In addition, we provide a novel convergence analysis.
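The biased mirror descent over the preference simplex described above can be sketched with an entropic (exponentiated-gradient) update. All names and the toy quadratic objective below are illustrative assumptions for exposition, not the authors' Algorithm 2:

```python
import numpy as np

def mirror_descent_simplex(grad_fn, dim, steps=3000, eta=0.1):
    """Entropic mirror descent (exponentiated gradient) over the probability simplex."""
    lam = np.full(dim, 1.0 / dim)      # uniform initialization
    for _ in range(steps):
        g = grad_fn(lam)               # gradient estimate (possibly biased/stochastic)
        lam = lam * np.exp(-eta * g)   # multiplicative (mirror) update
        lam /= lam.sum()               # renormalizing lands back on the simplex
    return lam

# Toy convex objective: min_lam ||lam - target||^2, whose optimum over the simplex is `target`.
target = np.array([0.7, 0.2, 0.1])
lam_hat = mirror_descent_simplex(lambda l: 2.0 * (l - target), dim=3)
```

Because the update is multiplicative followed by renormalization, the iterate stays on the simplex by construction, which is one reason mirror descent is a natural fit for optimizing preference weights.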
Summary: This work proposes Mixing Preference Optimization (MPO), a post-processing framework for aggregating single-objective policies to mix preference alignment. Specifically, the authors combine two multi-objective RLHF approaches, MORLHF and MaxMin-RLHF, using a post-processing strategy to combine single-objective policies. The overall objective of MPO can be viewed as a max-min game between the policy and the combination factor $\lambda$ of MORLHF.

**After rebuttal:** Thank you to the authors for their feedback. However, my concerns regarding [Theoretical Claims] remain unresolved. In particular, the primary issue is that the correctness of Equation (10) has not been substantiated. This affects the overall confidence in the correctness of the proposed method. I will maintain my original score.

Claims And Evidence: The authors claim that MPO significantly reduces training costs and computational overhead. However, I cannot find any theoretical or empirical analysis of the computational complexity compared with baselines such as MORLHF and MaxMin-RLHF.

Methods And Evaluation Criteria: The proposed method uses a post-processing algorithm to tackle alignment with diverse human preferences. The method itself is easy to follow and the evaluation criteria make sense.

Theoretical Claims: The main concerns are about the correctness of the derivation of the algorithm, specifically:

1. The correctness of Eq. 10 cannot be verified, which is an important premise for deriving the optimization objective in Theorem 3.4 (the main theorem).
2. $\pi^*$ is a global and exact optimum of the policy, but $r^*$ is obtained by stochastic descent in Alg. 2. It is unclear how the estimate of $r^*$ can approximately optimize the objective of Eq. 10.
3. The optimization of $\pi^*$ is with respect to the input-output $(x,y)$. However, the optimization of $r^*$ is input-output agnostic.
This counterintuitive result makes me concerned about the soundness of the proposed method in comparison with MaxMin-RLHF.

Experimental Designs Or Analyses: I have some concerns about the experimental design and analyses, as follows.

1. The size of $\lambda$ is very small (2 or 3) in the experiments. This can weaken the soundness of the proposed algorithm in practical usage.
2. The comparison does not consider other important RLHF baselines such as PPO-based approaches. Existing work has proposed aggregating the reward models of PPO to mix diverse preference alignment.
3. The analysis of training costs is missing from the experimental analyses. Besides, it is unclear whether the proposed method improves computational complexity in the inference phase.

Supplementary Material: I have reviewed the supplementary material.

Relation To Broader Scientific Literature: An improvement on the alignment of diverse human preferences over previous work such as MORLHF and MaxMin-RLHF.

Essential References Not Discussed: Not found.

Other Strengths And Weaknesses: For strengths,

1. the proposed method is easy to follow for mixing diverse preference alignment.
2. The problem of alignment with diverse human preferences is novel and interesting within the area of RLHF.

For weaknesses,

1. the writing needs improvement, e.g., section 3 is hard to follow.
2. the contribution and novelty are marginal to some extent; the proposed algorithm is a straightforward combination of two previous works, MORLHF and MaxMin-RLHF.

Other Comments Or Suggestions: N/A

Questions For Authors:

1. How can the correctness of Eq. 10 be verified?
2. Does the proposed method improve computational complexity in the inference phase?
3. Can we increase the size of $\lambda$ in the experiments?
4. What are the key novelty and soundness compared with MORLHF and MaxMin-RLHF?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We greatly appreciate your constructive and insightful feedback! Here we provide a detailed response to address all of your concerns.

> How can the correctness of Eq. 10 be verified?

Thanks for the question. As we explained after Eq. 10 in our original submission, the result follows directly from Sion's minimax theorem. The objective function is convex in $\lambda$ (with $\pi_{\theta}$ fixed) and concave in $\pi_{\theta}$ (with $\lambda$ fixed), which satisfies the conditions for the theorem and guarantees the interchange of max and min in Eq. 10.

> $\pi^*$ is a global and exact optimum of the policy, but $r^*$ is obtained by stochastic descent in Alg. 2. It is unclear how the estimate of $r^*$ can approximately optimize the objective of Eq. 10.

1. If $r^*$ refers to the reward model, our approach does not involve reward training; instead, we use an auxiliary normalizing operator to directly derive the optimal policy $$\pi^*(y|x) \propto \prod_{k=1}^K\left(\pi_k(y|x)\right)^{\lambda_k^*}.$$
2. If $r^*$ refers to the preference weight $\hat{\lambda}$ as solved by Alg. 2, then Thm 3.8 provides a KL-based error bound between the learned policy $\hat{\pi}$ and $\pi^*$. The theorem formally states that $\hat{\pi}$ closely approximates $\pi^*$ under mild conditions.

> The optimization of $\pi^*$ is with respect to the input-output $(x,y)$. However, the optimization of $r^*$ is input-output agnostic.

Thank you for your question. Again, we believe you are referring to $\lambda$. As shown in Thm 3.4, our main task is to solve for $\lambda^*$ via Eq. 12. Once obtained, the optimal policy is effectively a linear combination of the logits of the single-objective policies. Additionally, when applying Alg. 2, we only need the individual policies and a set of prompts.

> Can we improve the size of $\lambda$ in the experiments?

Thank you for your comment.
We would like to point out that $\dim(\lambda)=3$ is consistent with prior works [1,2,3], which also adopt up to three objectives. This dimensionality has proven effective for capturing the scalability of multi-objective alignment, balancing soundness and computational efficiency.

References:
1. Chakraborty, S, et al. MaxMin-RLHF: Alignment with diverse human preferences. ICML 2024.
2. Yang, R, et al. Rewards-in-context: Multi-objective alignment of foundation models with dynamic preference adjustment. NeurIPS 2024.
3. Zhou, Z, et al. Beyond one-preference-fits-all alignment: Multi-objective direct preference optimization. arXiv preprint arXiv:2310.03708 (2023).

> The comparison does not consider other important RLHF baselines such as PPO-based approaches.

Thank you for the comment. Standard MORLHF approaches [1,2] optimize linearly aggregated reward functions, while MaxMin-RLHF performs PPO to obtain the policy by optimizing: $$\max_{\pi} \min_k E\left[r_{k}(x,y) \right] - \beta D_{\mathrm{KL}}\left[\pi \Vert \pi_{\text{ref}}\right].$$ In light of your comments, we have added more comparisons with such methods. Notably, our MPO method still achieves the highest minimum win rate.

| Model | Helpful | Harmless | Humorous | Min |
|-|:-:|:-:|:-:|:-:|
| | | $\beta = 0.1$ | | |
| $\pi_{Maxmin-RLHF}$ | 44.6 | 56.1 | 51.4 | 44.6 |
| $\pi_{MORLHF}$ | 42.9 | 56.7 | 54.5 | 42.9 |
| $\pi_{MPO}$ | 46.3 | 53.1 | 54.1 | $\color{red}{46.3}$ |
| | | $\beta = 0.5$ | | |
| $\pi_{Maxmin-RLHF}$ | 46.1 | 53.8 | 54.8 | 46.1 |
| $\pi_{MORLHF}$ | 41.7 | 54.4 | 52.9 | 41.7 |
| $\pi_{MPO}$ | 54.9 | 53.1 | 57.1 | $\color{red}{53.1}$ |

References:
1. Ji, J, et al. Beavertails: Towards improved safety alignment of llm via a human-preference dataset. NeurIPS 2023.
2. Wu, Z, et al. Fine-grained human feedback gives better rewards for language model training. NeurIPS 2023.

> The analysis of training costs is missing in the experimental analyses.

Thanks for the suggestion.
For computation cost, training the policy using MORLHF (or MaxMin-RLHF) with aggregated reward models takes approximately 10 A100 GPU hours, since both methods rely on PPO for policy optimization and differ only in how they aggregate reward functions. In contrast, our approach avoids PPO entirely; solving for the preference weights $\lambda$ via Alg. 2 requires only about 2.5 A100 GPU hours, offering a significant reduction in training time while still achieving competitive performance.

> Where are the key novelty and soundness compared with MORLHF and MaxMin-RLHF?

Thanks for the comment. We have stated the key novelty and soundness of MPO compared to MORLHF and MaxMin-RLHF in both the Introduction and Conclusion sections. In summary:

1. MPO connects reward aggregation to policy aggregation, yielding a closed-form aggregation rule.
2. Our approach operates directly on single-objective policies, avoiding extra RL updates and reward model training, thereby significantly reducing computational overhead.
3. The method is backed by rigorous theoretical error bounds that ensure robustness relative to the optimal policy.
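The closed-form aggregation rule above, $\pi^*(y|x) \propto \prod_{k} \pi_k(y|x)^{\lambda_k^*}$, reduces at the token level to a $\lambda$-weighted sum of per-policy log-probabilities followed by renormalization. A minimal sketch under toy assumptions (tiny vocabulary, precomputed log-probs; illustrative names, not the authors' implementation):

```python
import numpy as np

def aggregate_policies(log_probs, lam):
    """Combine K single-objective policies via pi* ∝ prod_k pi_k^{lam_k}.

    log_probs: array of shape (K, V) with per-token log-probabilities
               from each single-objective policy.
    lam:       preference weights on the simplex, shape (K,).
    Returns the aggregated next-token distribution, shape (V,).
    """
    mixed = lam @ log_probs    # weighted sum of log-probs = log of the weighted product
    mixed -= mixed.max()       # stabilize before exponentiation
    p = np.exp(mixed)
    return p / p.sum()         # renormalize over the vocabulary

# Two toy 4-token policies; equal weights yield a (normalized) geometric mean.
logp = np.log(np.array([[0.7, 0.1, 0.1, 0.1],
                        [0.1, 0.7, 0.1, 0.1]]))
pi_star = aggregate_policies(logp, np.array([0.5, 0.5]))
```

The cost of this mixing step grows linearly in the number of policies $K$, which is consistent with the rebuttal's point that no additional PPO runs are needed once the single-objective policies exist.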
Summary: This paper proposes MPO, a framework designed to mix diverse single-objective policies for aligning LLMs with human preferences. Instead of training a costly multi-objective RLHF model from scratch, this paper shows how pre-trained, single-objective policies can be aggregated using a batch stochastic mirror descent (BSMD) algorithm. This paper derives a closed-form solution relating the aggregated policy to the individual policies and provides theoretical guarantees. Experiments on multiple multi-objective preference tasks illustrate that MPO can outperform baseline methods while having lower computation cost.

Claims And Evidence: Yes, most of the claims are supported by the theoretical guarantees and experiment results.

Methods And Evaluation Criteria: The evaluation mainly relies on the judgment of GPT-3.5/4; it would be better to include human validation.

Theoretical Claims: Yes, I carefully checked the key steps of the aggregated policy and convergence guarantees for BSMD, and they appear to be correct.

Experimental Designs Or Analyses: The experimental design is sound in that it considers multiple objectives and compares MPO with representative baselines. One potential issue is that the evaluation relies on GPT-3.5/4 models, which, although common in current research, may be sensitive to prompt design.

Supplementary Material: I reviewed the supplementary sections including proof derivations, implementation details, and detailed results.

Relation To Broader Scientific Literature: This paper makes a meaningful contribution to preference learning and the alignment of LLMs.

Essential References Not Discussed: This paper discusses a wide range of relevant works on RLHF, preference learning, and diverse alignment objectives.

Other Strengths And Weaknesses: **Strengths**

1. This paper proposes an effective method, MPO, that combines existing single-objective policies into a unified one for diverse preference alignment.
The proposed method is well formalized and supported by theoretical analysis.
2. MPO has a clear advantage in efficiency, as it avoids alignment from scratch.
3. Experimental results across a wide range of alignment tasks demonstrate the effectiveness of MPO.

**Weaknesses**

1. It would be better to add more validation of the GPT-based evaluation.
2. It would be better to include a discussion of the scalability of MPO with respect to computational resources when increasing the number of objectives.

Other Comments Or Suggestions: Please see the weaknesses.

Questions For Authors: Please see the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your constructive and insightful feedback! Here we provide a detailed response to address all of your concerns below.

> It would be better to add more validations on the GPT-based evaluation.

Thanks for the suggestion. When utilizing GPT-based evaluations, we have experimented with multiple GPT versions and leveraged prompts similar to those of previous works such as [1] and [2]. We found that while there are some variations in the output, the overall performance trends remain stable, which gives us confidence in the reliability of this evaluation method. We are also open to further validations and comparisons to ensure that our evaluation framework is as comprehensive and robust as possible.

References:
1. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. NeurIPS 2023.
2. Zhou, Z., Liu, J., Shao, J., Yue, X., Yang, C., Ouyang, W. and Qiao, Y., 2023. Beyond one-preference-fits-all alignment: Multi-objective direct preference optimization. arXiv preprint arXiv:2310.03708.

> It would be better to include a discussion on the scalability of MPO with respect to computation resources when increasing the number of objectives.

Thanks for the question. From our observation, computational cost increases approximately linearly with the dimensionality of $\lambda$. Specifically, obtaining an approximately optimal $\lambda$ (using 600 iterations of Algorithm 2) requires:

| $\dim(\lambda)$ | A100 GPU hours |
|:-:|:-:|
| 2 | 1.8 |
| 3 | 2.5 |
| 4 | 3.3 |

This linear scaling indicates that the computational burden remains manageable when increasing the number of objectives.
Free Process Rewards without Process Labels
Accept (poster)
Summary: The paper introduces a method to train Process Reward Models (PRMs) without requiring expensive step-level annotations. By parameterizing outcome rewards as the log-likelihood ratio between a policy model and a reference model, PRMs can be implicitly derived from Outcome Reward Models (ORMs) trained on response-level data alone. This approach, validated on mathematical reasoning tasks, outperforms existing PRM methods (e.g., MCTS-based annotations) with significantly lower computational costs.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: NA

Experimental Designs Or Analyses: Yes

Supplementary Material: No

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: NA

Other Strengths And Weaknesses: Strengths:

1. Derives process rewards as log-likelihood ratios between policy and reference models, unifying preference learning (e.g., DPO) with PRM training.
2. Reduces training FLOPs compared to MCTS-based methods (e.g., Math-Shepherd), making PRMs accessible for resource-limited settings.
3. Compatible with diverse loss functions (DPO, CE, KTO, NCA), demonstrating flexibility beyond a single algorithm.

Weaknesses:

1. The core concepts and theoretical framework have already been established in [1]. Even with the proposed extensions, the theoretical contribution remains marginal.
2. While the authors suggest using cross-entropy (CE) loss to address scenarios with unpaired data, obtaining response-level labels may still pose challenges. Furthermore, the performance of CE loss is generally inferior to that of DPO.
3. Evaluated only on mathematical reasoning; generalizability to code generation or open-ended generation tasks is untested.

[1] Your Language Model is Secretly a Q-Function. https://arxiv.org/abs/2404.12358

Other Comments Or Suggestions: NA

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> 1. The core concepts and theoretical framework have already been established in [1]. Even with the proposed extensions, the theoretical contribution remains marginal.

**A**: We note that the derivation in our work is different from [1] and provides a more general conclusion. [1] is tailored to DPO only and adopts a different reward representation (with a $Z(x)$ baseline), with the implicit reward derived from the entropy-regularized RL framework to ensure the optimality of the trained policy model. We aim for a reward representation that enables a tractable Q value; we do not target and never claim an optimal policy in the entropy-regularized RL framework. Rather, our reward representation is **defined** and **constructed from scratch**, namely any representation is acceptable as long as it gives a tractable way to estimate the Q value and makes Eq. (2) in line 137 hold. Compared to [1], this paper provides a fresh and more general perspective on implicit rewards, which we believe holds significant value. In this way, our paper provides a novel and more general theoretical framework which then leads to empirical benefits, e.g., the viability of a CE loss, which offers an alternative when pairwise data is harder to collect than response-level labels, and in scenarios that are more data-scarce. One may also explore other effective objectives beyond DPO and CE within our theoretical framework. Therefore, we believe our generalization to unpaired losses holds great theoretical and practical merit compared to previous works.

[1] From r to Q∗: Your Language Model is Secretly a Q-Function. Rafailov et al. 2024.

> 2. While the authors suggest using cross-entropy (CE) loss to address scenarios with unpaired data, obtaining response-level labels may still pose challenges. Furthermore, the performance of CE loss is generally inferior to that of DPO.
**A**: We agree that in some cases response-level labels are still difficult to collect; our CE objective provides an alternative when such labels are available, and it can also utilize pairwise labels when those are easier to obtain. Moreover, CE loss has its own advantages over DPO. As shown in Figure 5 and Figure 6, Implicit PRM with CE loss is more data efficient, while showing better performance when integrated with majority vote, making it an appealing alternative in practice, as in many scenarios pairwise data is hard to collect. Also, CE loss only requires one example for forwarding and backwarding, which reduces memory overhead in RL training. Therefore, the generalization to unpaired losses remains valuable compared to pairwise DPO in more data-constrained settings.

[2] Process Reinforcement through Implicit Rewards. Cui et al. 2025.

> 3. Evaluated only on mathematical reasoning; generalizability to code generation or open-ended generation tasks is untested.

**A:** Though we do not include other tasks in this paper due to our limited capacity and the limited space, after the ICML submission deadline, recent works have shown that Implicit PRM is helpful in best-of-N sampling on agent tasks [3], and that adopting Implicit PRM for online RL brings substantial gains on coding [2] and function calling tasks [4].

[3] AgentRM: Enhancing Agent Generalization with Reward Modeling. Xia et al. 2025.
[4] Learning to Generate Structured Output with Schema Reinforcement Learning. Lu et al. 2025.
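A minimal sketch of the implicit process reward idea discussed in this rebuttal: per-step scores fall out as cumulative beta-scaled log-likelihood ratios between the trained model and the reference model, even though training only used response-level labels. The function name and toy numbers below are illustrative assumptions, not the released code:

```python
def implicit_step_rewards(logp_policy, logp_ref, step_ends, beta=1.0):
    """Implicit process rewards as beta-scaled log-likelihood ratios.

    logp_policy / logp_ref: per-token log-probabilities of the response under
        the trained model and the reference model, respectively.
    step_ends: indices (exclusive) marking the end of each reasoning step.
    Returns the cumulative implicit reward at the end of each step; taking the
    minimum over steps is one common way to score the whole response.
    """
    ratios = [beta * (p - r) for p, r in zip(logp_policy, logp_ref)]
    return [sum(ratios[:end]) for end in step_ends]

# Toy response with two "steps" (tokens 0-1 and tokens 2-3):
step_scores = implicit_step_rewards(
    logp_policy=[-1.0, -2.0, -1.0, -3.0],
    logp_ref=[-2.0, -2.0, -2.0, -2.0],
    step_ends=[2, 4],
)
```

This also illustrates the rebuttal's efficiency point: scoring a response requires only one forward pass per model, after which the per-token log-ratios are merely grouped into steps and summed.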
Summary: This paper proposes a new way of training process reward models without expensive fine-grained step-level annotations, by training an ORM (with the reward modeled as the log-likelihood ratio of the policy and the reference model) and using it as an implicit PRM. The authors show that their training of the implicit PRM is more data efficient than baseline PRM training strategies while also achieving good performance with best-of-N sampling on the MATH dataset. Lastly, the paper contains analysis of how their training strategy scales with different hyperparameters and loss functions for training the implicit PRM.

Claims And Evidence:

- Training an implicit PRM is a more data-efficient approach than conventional PRM training recipes from prior work -- the authors show this successfully and convincingly.
- Implicit PRM is performant and outperforms baselines: when it comes to these empirical results, the paper is lacking in the following ways:
  - [W1] *Limited Datasets*: All the results in this paper are based on the MATH dataset. However, a key advantage of any reward model is the ability to score generations from different distributions (even within math reasoning). The results would be more convincing if the best-of-N results were shown on several datasets such as GSM-8K, SVAMP, MMLU, AIME, etc.
  - [W2] *Is PRM better than ORM?*: In Table 1, it appears that the Implicit PRM model is on average only a few points better than the other ORMs, so it is unclear why training the implicit PRM is worthwhile (since training an ORM would be more efficient at inference time).

Methods And Evaluation Criteria: In addition to W1 and W2, it is not clear if the comparison of Implicit PRM and other baselines is fair:

- [W3] *Fairness of Baseline Comparisons*: As pointed out in Table 2, at the 7B model scale, the Implicit PRM involves substantially more compute or GPU hours than the baseline, often by a factor comparable to 1-2x generations.
In that case, when looking at the results in Table 1, it would be fair to compare the pass@4 performance of Implicit PRM with the pass@8 performance of baselines for Llama3 8B and the pass@12 performance for Mistral. Based on the already slim margins by which their method beats baselines, most of the gains could be wiped away in this setting.

Theoretical Claims: Yes, in Appendix A.

Experimental Designs Or Analyses: See W1-3.

Supplementary Material: No

Relation To Broader Scientific Literature: It proposes a new way of training an Implicit PRM without expensive fine-grained step-level annotations, which is more data efficient and performant than baseline PRM training strategies. They also make interesting connections to RL training of LLMs with implicit rewards as used in offline RLHF algorithms like DPO, and show versatility in implicit PRM training objectives for paired and unpaired data.

Essential References Not Discussed:
[1] https://arxiv.org/abs/2402.10963
[2] https://arxiv.org/abs/2403.04642

Other Strengths And Weaknesses: Addressed in W1-3.

Other Comments Or Suggestions: Found several typos in the main text: L 077 & L 384. Also check W1-3.

Questions For Authors:

- Given that PRMs provide step-level supervision, did the authors conduct any experiments to show their implicit PRM is more effective than other PRMs or ORMs at refinement [1] or RL training [2]?
- See the baselines or additional experiments requested in W1-3 above.

[1] https://arxiv.org/abs/2402.10963
[2] https://arxiv.org/abs/2403.04642

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> [W1] Limited Datasets

**A:** Though we do not include other tasks in this paper due to our limited capacity and the limited space, after the ICML submission deadline, recent works have shown that Implicit PRM is helpful in best-of-N sampling on agent tasks [1], and that adopting Implicit PRM for online RL brings substantial gains on coding [2] and function calling tasks [3].

[1] AgentRM: Enhancing Agent Generalization with Reward Modeling. Xia et al. 2025.
[2] Process Reinforcement through Implicit Rewards. Cui et al. 2025.
[3] Learning to Generate Structured Output with Schema Reinforcement Learning. Lu et al. 2025.

Following the reviewer's suggestion, we also test on other math datasets, with results as follows. The policy model is Llama-3-70B-Instruct, and the baseline is Math-Shepherd in our implementation.

| Method | AMC (pass@1=36.1) ||| AIME (pass@1=8.9) |||
|--------|:---:|:---:|:---:|:---:|:---:|:---:|
| | @4 | @16 | @64 | @4 | @16 | @64 |
| Math-Shepherd | 39.8 | 48.2 | 45.8 | 17.8 | 18.9 | 17.8 |
| Implicit PRM (DPO) | 41.0 | 50.6 | 48.2 | 20.0 | 23.3 | 20.0 |

> [W2] Is PRM better than ORM?

**A:** All ORMs in Table 1 are off-the-shelf from HuggingFace and are trained with different data, and are therefore not comparable to ours. They indeed consume much more data. For example, Eurus-RM-7B uses 287K pairs of data, SkyworkRM uses 465K, and ArmoRM uses 1560K, while we only use 263K. Also, the ORMs in Table 1 perform well in weak-to-strong settings, which increases the average performance, but on Mistral-7B (20.4% vs. 28.8% for best-of-64) and Llama-3.1-8B-Instruct (51.8% vs. 57.0%), there are significant performance gaps relative to Implicit PRM. Moreover, Implicit PRM only needs to forward the response once, with the only difference being that log probs are divided into steps and summed up respectively to find the minimum step reward. We acknowledge that Implicit PRM brings extra inference overhead due to the reference model, and we have explored this issue in the Appendix.
As the cost mainly comes from the additional reference model, in Appendix B.3.1 we ablated it in both training and inference and presented results in Table 5. We found that in practice it can be removed, at least in this paper's setup, avoiding the extra compute while maintaining the performance. We also provided explanations in Appendix B.3.1 on why it may be removed. In such cases, Implicit PRM brings negligible overhead during inference compared to an ORM.

> [W3] Fairness of Baseline Comparisons

**A:** Please refer to the above response on how we can reduce the inference overhead of the reference model. Besides, following your suggestion, we present the compute-equivalent comparison between our best-of-4 and their best-of-8/12 below:

| Method | Mistral-7B-Inst-v0.2 || Llama-3.1-8B-Inst ||
|--------|:---:|:---:|:---:|:---:|
| | @12 | @48 | @8 | @32 |
| Math-Shepherd | 22.2 | 25.2 | 51.4 | 52.0 |
| | @4 | @16 | @4 | @16 |
| Implicit PRM (DPO) | 18.6 | 24.4 | 54.0 | 55.4 |

> Given that PRMs provide step-level supervision, did the authors conduct any experiments to show their implicit PRM is more effective than other PRMs or ORMs at refinement [1] or RL training [2]?

**A:** We supplemented a comparison to ORMs trained with the same data as our Implicit PRM and still observed the superiority of Implicit PRM.

| | Mistral-7B-Inst-v0.2 ||| Llama-3.1-8B-Inst ||| Llama-3.1-70B-Inst ||| Avg. |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | @4 | @16 | @64 | @4 | @16 | @64 | @4 | @16 | @64 | |
| ORM | 18.6 | 23.8 | 26.4 | 50.8 | 50.2 | 52.8 | 68.8 | 71.0 | 69.4 | 48.0 |
| Implicit PRM (DPO) | 18.6 | 24.4 | 28.8 | 54.0 | 55.4 | 57.0 | 71.8 | 71.2 | 72.2 | 50.4 |

For RL training, a recent work [3], released after the ICML submission deadline, explored comprehensively how Implicit PRM can improve RL, especially in terms of sample efficiency, compared to using verifiable outcome rewards only, namely the golden ORM.
Results are shown as follows:

| Method | AIME2024 | AMC | MATH-500 | MinervaMath | OlympiadBench | LeetCode | LiveCodeBench | Avg. |
|---|---|---|---|---|---|---|---|---|
| GPT-4o | 9.3 | 45.8 | 76.4 | 36.8 | 43.3 | 58.9 | 48.8 | 45.6 |
| Eurus-2-7B-SFT | 3.3 | 30.1 | 66.2 | 32.7 | 29.8 | 21.7 | 17.8 | 28.8 |
| + RL w/ GT Only | 20.0 | 47.0 | 73.2 | 36.4 | 35.4 | 28.3 | 26.7 | 36.9 |
| + RL w/ GT + Implicit PRM | 26.7 | 57.8 | 79.2 | 38.6 | 42.1 | 33.3 | 28.6 | 43.9 |

[3] Process Reinforcement through Implicit Rewards. Cui et al. 2025.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their effort in the response, but I have decided to keep my overall recommendation unchanged. My decision is primarily influenced by two factors: i) the compute-matched results are not very convincing, since they show the baseline outperforming implicit PRMs on one model and trailing on the rest -- so it is unclear what pattern holds for stronger base models like Qwen-Math; ii) the results of RL training with Implicit PRM are taken from another (concurrent) work; unless the authors are arguing that the two methods are essentially the same implementationally, I am not convinced how it supports their work and would have liked to see a reproduction of this study in the authors' setup.

---

Reply to Comment 1.1.1: Comment: Thanks to the reviewer for the engagement! To address the concerns, we added two experiments:

1. More rigorous compute-matched experiments. We previously presented best-of-4 of Implicit PRM versus best-of-8 of the baseline on Llama-8B, and best-of-12 of the baseline on Mistral-7B, following the reviewer's suggestion. However, according to Table 2, the GPU time cost of Implicit PRM in real-world practice is 301.6/200.9=1.5 times the baseline's cost on Mistral-7B, 241.7/171.1=1.41 times on Llama-8B, and 122.2/111.1=1.1 on Llama-70B. Hence, the fair comparison should be **best-of-4 of Implicit PRM vs. best-of-6 of baselines on Mistral and Llama-8B**.
Moreover, as indicated in Table 5 in Appendix B.3.1, we can remove the reference model at inference to reduce overhead in some cases, reaching the same level of inference efficiency as our baselines. We evaluated the reference-free version of Implicit PRM under the same budget as our baselines. Results are as follows, from which we can see that Implicit PRM, with or without a reference model, is comparable on Mistral-7B and outperforms Math-Shepherd by a large margin on Llama-8B.

| Method | Mistral || Llama ||
|---|:---:|:---:|:---:|:---:|
| | @6 | @24 | @6 | @24 |
| Math-Shepherd (trained on same data) | 20 | 24.6 | 49.8 | 51.6 |
| Implicit PRM (DPO, w/o ref) | 18.8 | 24.8 | 54.6 | 55.2 |
| | @4 | @16 | @4 | @16 |
| Implicit PRM (DPO) | 18.6 | 24.4 | 54 | 55.4 |

2. Following the reviewer's suggestion, we implemented RLVR [1] with PPO, using only ground-truth outcome rewards, and then integrated it with an online-updated Implicit PRM, where the rollouts assessed by golden outcome rewards can be used to train the Implicit PRM as in Eq. (5). We used publicly available instructions on [Huggingface](https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data). We ran both for 160 steps within the discussion period and tested on coding and math benchmarks. Results show that Implicit PRM-augmented PPO is generally better than GT-only PPO across various benchmarks.

| | HumanEval | MBPP | LeetCode | MATH500 | AMC | AIME | Minerva | OlympicBench | Avg. |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| PPO w/ GT Only | 72.0 | 56.1 | 28.3 | 71.8 | 48.2 | 6.7 | 35.3 | 34.5 | 44.1 |
| PPO w/ GT and Implicit PRM | 72.0 | 59.1 | 27.8 | 76.4 | 48.2 | 20.0 | 39.7 | 37.9 | 47.6 |

[1] Tulu 3: Pushing Frontiers in Open Language Model Post-Training. Lambert et al. 2024.
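The best-of-N protocol referenced throughout these comparisons can be sketched as follows; the helper below is an illustrative stand-in, not the authors' evaluation harness. Sample N candidates, score each with the reward model, and keep the argmax:

```python
def best_of_n(candidates, reward_fn, n):
    """Score the first n sampled candidates with a reward model and return the argmax."""
    pool = candidates[:n]
    scores = [reward_fn(c) for c in pool]
    return pool[max(range(len(pool)), key=scores.__getitem__)]

# Stand-in reward for illustration: longer "responses" score higher.
picked = best_of_n(["a", "abc", "ab"], reward_fn=len, n=3)  # → "abc"
```

Under this protocol, a compute-matched comparison amounts to giving the cheaper scorer a larger n for the same GPU budget, which is exactly the best-of-4 vs. best-of-6/8/12 accounting discussed above.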
Summary: This paper shows that a PRM can be obtained implicitly, without additional training, via reward parameterization. Claims And Evidence: From the table, the gain from the proposed PRM is not that large, and on Mistral-7B it is not helping. Besides, it is not clear how much it helps over pass@1; it would be great to show a curve here. Methods And Evaluation Criteria: Looks good. Theoretical Claims: Looks good, but I am not enough of an expert to verify the correctness. Experimental Designs Or Analyses: Looks reasonable to me, with the ORM and PRM as baselines, and sufficient base models as well as reward models to show the ablations and gains. Supplementary Material: Yes. Relation To Broader Scientific Literature: If this generalizes to many other domains, it would be a big gain for efficient RL scaling for LLMs. Essential References Not Discussed: Looks like a comprehensive pile of relevant work. Other Strengths And Weaknesses: The paper is in generally good shape, such that even a non-expert in this domain can follow the flow easily. Both theoretical and empirical results look good to me. Other Comments Or Suggestions: Would like to know more about pass@k from 1 to 16 and what the curve looks like, particularly for pass@1 and how much it helps. Besides, how many runs were used for this metric? It would be great to show mean/std here. Questions For Authors: As in the comments above. The main concern is the empirical gain, and whether it really shows a consistent gain. It would be great if this part could be explained more (particularly, why Mistral-7B shows no gain, and pass@4 for Llama/pass@64 for Llama-70B). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > From the table, the gain from the proposed PRM is not that large, and on Mistral-7B it is not helping. Besides, it is not clear how much it helps over pass@1; would be great to show a curve here. **A:** This might be a misinterpretation of Table 1. In fact, our approach achieves very strong performance. We should compare the best-of-N performance to pass@1 (greedy decoding without using reward models) to evaluate the effectiveness of the reward models. The pass@1, i.e. best-of-1, of each generation model is highlighted in the header and the caption. The first column for each model denotes the best-of-4 accuracy and is already much higher than pass@1 across the board, confirming the strong performance of our approach. Rows in the "open-source reward models" block are trained with different, much larger, and more heavily engineered data, and therefore are not comparable to ours. Those under "our implementations" are reimplementations of previous works controlling for confounding factors such as the base model and data, for fair comparisons. Contrary to the reviewer's observation that "on Mistral-7B it is not helping," the gains on Mistral-7B are actually the largest: the pass@1 is 9.6% while best-of-64 increased to 28.8% for our DPO model, a 19.2-point absolute gain. Improvements are still observed even when we use the 8B-sized PRM to assist 70B-sized generation models, namely in the weak-to-strong setup, evidenced by a best-of-64 of 72.2% compared to a pass@1 of 63.2%. Importantly, our reported performance improvement on MATH is challenging to achieve: contextualizing it within recent impactful literature [1,2], a <3% improvement on best-of-64 and a 2% improvement compared to baselines is usually considered a significant improvement. We think our Implicit PRMs achieve strong performance in our experiments.
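For readers unfamiliar with the protocol discussed in this exchange, best-of-N reranking can be sketched in a few lines. Here `reward_model` is a hypothetical stand-in for any trained scorer (PRM or ORM), not the paper's implementation:

```python
def best_of_n(prompt, candidates, reward_model):
    """Select the candidate response with the highest reward-model score.

    pass@1 (greedy decoding) uses no reward model at all, which is why it
    serves as the baseline for judging how much the reward model helps.
    """
    return max(candidates, key=lambda resp: reward_model(prompt, resp))

# Toy usage with a dummy scorer that prefers longer answers:
toy_scorer = lambda prompt, resp: len(resp)
print(best_of_n("2+2=?", ["4", "four", "it is four"], toy_scorer))  # prints: it is four
```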
We also note that the reason we did not surpass the off-the-shelf baseline RLHFlow-8B-Mistral-Data can be explained by an unfair comparison, due to its advantage of training on on-policy rollouts. Our evidence includes: (1) RLHFlow-8B-Mistral-Data uses **on-policy** rollouts from Mistral-7B while ours uses off-policy rollouts from Llama-3.1-8B-Instruct; (2) the same approach with rollouts from DeepSeek models underperforms ours; (3) our approach outperforms RLHFlow-8B-Mistral-Data on Llama-3.1-8B-Instruct by a large margin. Following the reviewer's suggestion, we plot the curves of pass@N and best-of-N in this [figure](https://ibb.co/Qvt4GQgM) anonymously. From the figure we can see that our method consistently outperforms baselines as the number of candidate responses (N) increases. [1] Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. Wang et al. 2023. [2] Improve mathematical reasoning in language models by automated process supervision. Luo et al. 2024. > Whether this generalizes to many other domains. **A:** Though we do not include other tasks in this paper due to our limited capacity and the limited space, after the ICML submission deadline, recent works have shown that Implicit PRM is helpful in best-of-N sampling on agent tasks [3], and that adopting Implicit PRM for online RL brings substantial gains on coding [4] and function-calling tasks [5]. [3] AgentRM: Enhancing Agent Generalization with Reward Modeling. Xia et al. 2025. [4] Process Reinforcement through Implicit Rewards. Cui et al. 2025. [5] Learning to Generate Structured Output with Schema Reinforcement Learning. Lu et al. 2025. > Would like to know more about pass@k from 1 to 16 and what the curve looks like, particularly for pass@1 and how much it helps. Besides, how many runs were used for this metric? It would be great to show mean/std here. **A:** Thanks for the suggestion. The curves of pass@N and best-of-N can be found [here](https://ibb.co/Qvt4GQgM).
We ran each experiment only once, since our early experiments showed little variance across runs; this has been standard practice, mainly due to the cost of the experiments [1,2,6]. [6] AutoPSV: Automated process-supervised verifier. Lu et al. 2024. > The main concern is the empirical gain, whether it really shows a consistent gain. It would be great if this part could be explained more (particularly, why Mistral-7B shows no gain, and pass@4 for Llama/pass@64 for Llama-70B). **A:** Please see the above responses on empirical gains. In particular, our method has achieved substantial gains (pass@1: 9.6% -> best-of-64: 28.8%) on Mistral-7B rather than no gains, and we only underperform the baseline trained on on-policy Mistral-7B rollouts. Regarding pass@4 for Llama/pass@64 for Llama-70B (actually these are best-of-4 and best-of-64), though we did not achieve the highest accuracy, our performance is very close to the best-performing baseline, with differences of only 0.4% and 0.8%, respectively. We kindly ask the reviewer to reevaluate our submission after these clarifications on the empirical gains. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my concerns! Raised my rating.
Summary: This paper introduces a method to create a process reward model (PRM) without the need for expensive step-by-step annotations. The authors propose that an implicit PRM can be derived by training an ORM using only response-level labels, by parameterizing the outcome reward as a log-likelihood ratio between the policy and reference models. Their experiments on MATH show that this implicit PRM performs better than a strong MCTS-based baseline (Math-Shepherd) while using significantly less training data. The model's performance further improves with majority voting, and scaling up instructions and responses boosts effectiveness, with responses contributing more to improvements. ## update after rebuttal I have read the author responses, and my evaluation remains the same. I feel the proposed dense rewards need to be tested as dense rewards for online RL, or at least in beam search at test time (common in recent literature, as I mention below), to know if they are empirically truly effective in improving search efficiency. Claims And Evidence: The paper claims a 10-30x reduction in the overhead needed to train PRMs if we instead just train ORMs with re-parameterized rewards, as in DPO, and then use the proposed prefix-level scores to evaluate intermediate steps. The experiments on math benchmarks support this claim. Methods And Evaluation Criteria: Yes, the evaluation criteria for BoN follow Lightman et al. But more recent works evaluate PRMs with beam search or as dense rewards in RL (see comments below). The benchmarks and models chosen are indeed standard. Theoretical Claims: Yes, I checked the validity of Proposition 3.1 in Appendix A. Experimental Designs Or Analyses: Yes, the models and benchmarks chosen are valid and sound. Supplementary Material: Yes, for the proof of Proposition 3.1, and Section B.3, which discusses how the trained ORM does not give us a better policy.
Relation To Broader Scientific Literature: The main benefit of the approach proposed in this paper is that it only requires training an ORM with implicit rewards in order to turn it into a PRM that can score partial generations. In contrast, other prior works like Math-Shepherd, OmegaPRM, and PAVs train PRMs to predict value functions, which requires a larger collection of data that roughly scales linearly with the sequence length. However, this paper does not evaluate the trained PRM during beam search or for RL, which is the main way Snell et al. and Setlur et al. evaluated and used PRMs to scale inference-time compute. Essential References Not Discussed: Improve Mathematical Reasoning in Language Models by Automated Process Supervision. Luo et al. Other Strengths And Weaknesses: Strengths - The implicit PRM does not need training or data collection beyond what is needed for an ORM. - The analysis in 5.1 seems to suggest that the trained PRMs are indeed data efficient and that performance improves consistently as training data is scaled. Weaknesses - The paper only evaluates the PRM as an ORM, to re-rank responses during BoN. The true test of a PRM would be to run beam search as in Snell et al. or to use it as dense rewards for online RL, as in Setlur et al. - The gains are small on poorer models like Mistral-7B, and in general, the gains are around 5% when averaged across more performant models on MATH. So, it seems that the PRM is not very useful on hard questions. Other Comments Or Suggestions: - L77 typo, parameterizin ---> parameterizing Questions For Authors: - Can the authors provide some discussion on how to optimally scale instructions and responses when collecting the training data for training PRMs? Also, for this, are the samples sampled IID from the base model or balanced across correct and incorrect samples?
- Does the trained PRM extrapolate, i.e., how well does a PRM trained on Mistral-7B transfer to Llama and vice versa, or, for the same base model, how does it extrapolate from GSM8K --> MATH and vice versa? - Do you think this would also work in domains other than MATH, like PRMs for instruction tuning or coding? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. Here are our responses. > The true test of PRM would be to run beam search or to use it as dense rewards for online RL. **A:** We choose best-of-N as our setup because it is standard practice in recent literature and presents a valuable approach for inference-time scaling [1,2,3]. Testing PRMs in online RL is of course valuable, but it can introduce additional confounding factors and much more overhead. This paper aims to isolate the quality of the reward model, so best-of-N sampling provides relevant and convincing evidence for our claim. That being said, we note that a recent work [4], released after the ICML deadline, closely matches the reviewer's suggestion: it applies our Implicit PRM in online RL and achieves strong performance and sample efficiency across various benchmarks. We think this demonstrates our approach's potential in online RL. We are now running the beam search experiments that the reviewer suggested. They are slow, and we will provide an update as soon as possible. [1] Let's Verify Step by Step. Lightman et al. 2023. [2] Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. Wang et al. 2023. [3] Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters. Snell et al. 2024. [4] Process Reinforcement through Implicit Rewards. Cui et al. 2025. > The gains are small on poorer models like Mistral-7B... It seems that the PRM is not very useful on hard questions. **A:** This might be a misinterpretation of Table 1. In fact, our approach achieves very strong performance on these questions. We note that the pass@1 performance of each generation model is highlighted in the header and the caption, and the first column for each model denotes the best-of-4 accuracy, which is already much higher than pass@1. We should compare the best-of-N performance to pass@1 to see the absolute gain of each model.
Contrary to the reviewer's observation, the gains on Mistral-7B are actually the largest, with pass@1 being 9.6% while best-of-64 increased to 28.8% for our DPO model. Compared to our implemented Math-Shepherd and AutoPSV, we achieved significant average improvements of nearly 3% and 5%. As for the reviewer's second point: a 5% average improvement on the MATH benchmark is challenging to achieve. Contextualizing it within recent impactful literature [2,5], a <3% improvement on best-of-64 and a 2% improvement compared to baselines is usually considered substantial. Therefore, the average performance gains of our Implicit PRM demonstrate its effectiveness on these challenging questions. We also add comparisons to ORMs below; both are trained with the same data. The trend is consistent with that in the paper, and we observe strong performance of our Implicit PRM.

| | Mistral-7B-Inst-v0.2 | | | Llama-3.1-8B-Inst | | | Llama-3.1-70B-Inst | | | Avg. |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | @4 | @16 | @64 | @4 | @16 | @64 | @4 | @16 | @64 | |
| ORM | 18.6 | 23.8 | 26.4 | 50.8 | 50.2 | 52.8 | 68.8 | 71.0 | 69.4 | 48.0 |
| Implicit PRM (DPO) | 18.6 | 24.4 | 28.8 | 54.0 | 55.4 | 57.0 | 71.8 | 71.2 | 72.2 | 50.4 |

[5] Improve mathematical reasoning in language models by automated process supervision. Luo et al. 2024.

> How to optimally scale instructions and responses

**A:** According to our experiments in Section 5.1, scaling up instructions from the same tasks as the downstream tests and scaling up responses are both helpful. We did not test data balancing for the DPO objective, but we did observe benefits from it for the CE loss.

> Does the trained PRM extrapolate

**A:** Yes, Implicit PRM is able to transfer across model families and across tests of the same task. For model transfer, all Implicit PRMs in Table 1 are trained from Llama-3.1-8B-Inst with its on-policy rollouts, and existing results have shown their effectiveness in improving Mistral-7B and Llama-3.1-70B-Inst models.
For task transfer, we add an experiment training our model solely on GSM8K and testing on MATH, as follows. Surprisingly, the results show that the overall performance remains comparable to that of the model trained on all instructions, indicating the superior task transferability of Implicit PRM.

| | Mistral-7B-Inst-v0.2 | | | Llama-3.1-8B-Inst | | | Llama-3.1-70B-Inst | | | Avg. |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | @4 | @16 | @64 | @4 | @16 | @64 | @4 | @16 | @64 | |
| Implicit PRM (DPO) | 18.6 | 24.4 | 28.8 | 54.0 | 55.4 | 57.0 | 71.8 | 71.2 | 72.2 | 50.4 |
| Implicit PRM (DPO on GSM8K) | 18.6 | 24.8 | 28.2 | 54.6 | 54.8 | 57.0 | 71.0 | 71.0 | 72.6 | 50.3 |

> If Implicit PRM works on other domains

**A:** Though we do not include other tasks in this paper due to our limited capacity and the limited space, after the ICML deadline, recent works have shown that Implicit PRM is helpful in best-of-N sampling on agent tasks [6] and in online RL, with substantial gains on coding [4] and function-calling tasks [7].

[6] AgentRM: Enhancing Agent Generalization with Reward Modeling. Xia et al. 2025.
[7] Learning to Generate Structured Output with Schema Reinforcement Learning. Lu et al. 2025.
Summary: Verifiers, such as process reward models (PRMs) and outcome reward models (ORMs), evaluate LLMs' partial or full responses, providing feedback and pushing the boundaries of LLMs' ability to solve complex reasoning tasks. PRMs provide better, fine-grained feedback than ORMs thanks to the nature of their training procedure. However, training PRMs is more challenging than training ORMs because it requires annotating every intermediate step. This paper argues that a strong PRM can be derived at no additional cost from training an ORM. The observation is that by using the closed-form solution of the reward model to learn a reward and policy jointly, similar to the DPO method, a PRM can be automatically learned during training. The paper extends this idea beyond DPO-style algorithms by incorporating cross-entropy (non-binary) objectives. The paper also experiments with three different LLMs, along with several variants of optimization objectives based on DPO, demonstrating the robustness of this approach. The paper also includes several ablation studies highlighting the importance of increasing the number of responses per instruction and scaling the number of instructions. Claims And Evidence: Below are the claims and evidence presented in the paper: - The paper claims that PRMs perform better than ORMs. The authors support this claim by referencing other studies in the literature that demonstrate this advantage and by conducting experiments to compare both PRMs and ORMs. - Another claim is that a PRM can be easily learned by optimizing a DPO-style objective, which leverages the closed-form solution of the reward model. The paper provides empirical evidence showing that their implicit PRM performs well in practice compared to ORMs and is competitive with a traditionally trained PRM. Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria are suitable for the problem and application at hand.
The authors aim to train the PRM more efficiently to address downstream tasks that require complex reasoning, such as math instruction datasets and chat instruction-following tasks. Theoretical Claims: I've reviewed the theoretical claims, and the proof appears to have some issues, but I could be wrong. The authors assert that $E_x[f(x) g] = E_x[f(x)] g$, where $g = E_{y<t+1}$ and $f(x) = E_{y_t < y_t}$. This assertion is problematic because $g$ is not a constant. Additionally, the second step of proof (1) seems to have some flaws as well. Furthermore, the derivation of the cross-entropy loss is missing the normalization constant $Z(x)$. The reason for DPO to minimize the pairwise loss is to cancel the intractable normalization constant. Experimental Designs Or Analyses: The experimental design and analyses are sound. The paper proposes to study the idea across several models to demonstrate its generality. Additionally, the paper provides several key ablation studies to identify where the performance gains of the proposed approach come from. Supplementary Material: N/A Relation To Broader Scientific Literature: The key contribution of the paper relates to the broader scientific literature by focusing on improving the efficiency of training process reward models (PRMs). The authors observe that PRMs outperform outcome reward models (ORMs), but the drawback is that they are costly to train. Their proposed solution aims to reduce the complexity of PRMs, enabling large language models (LLMs) to learn more efficiently, since PRMs provide more informative data. Essential References Not Discussed: None Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: None Questions For Authors: Below is a list of questions that I have: - DPO optimizes pairs to eliminate the intractable computation of the normalization constant. How do you manage the normalization constant in the cross-entropy (CE) update?
If you are making assumptions about the normalization constant, what assumption are you making, and why is it feasible? - The paper concludes that the implicit DPO reward outperforms the other objectives studied. However, the implicit DPO reward was originally proposed in [1], where it was used to learn a PRM and apply it during inference. Given this context, it is unclear what novelty this paper offers, as the best algorithm was introduced in previous work, and the single-sample algorithm has issues with the normalization constant. - In Figure 4, why does increasing the number of instructions hurt performance? [1] TreeBoN: Enhancing inference-time alignment with speculative tree-search and best-of-N sampling. Qiu et al. 2024. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and are glad that you find the method suitable for the problem and the experiments and analysis sound. Here are our responses. > The proof appears to have some issues. Furthermore, the derivation of the cross-entropy loss is missing the normalization constant Z(x). **A**: Thanks for the careful reading, but we believe there are misunderstandings. As the reviewer uses different notations and does not mention the line of the proof, we would like to kindly ask the reviewer for clarification on the specific steps in the proof so that we can better address the concern. Regarding the baseline $Z(X)$ in the cross-entropy objective: please note that our reward representation does not need a baseline $Z(X)$, which is different from the implicit reward defined in [2,3]. The implicit reward in [2,3] is derived from the entropy-regularized RL framework, and a $Z(X)$ must be included to ensure the optimality of the trained policy model. However, this is not the case in our paper. Instead, we aim for a reward representation that enables a tractable Q value; we do not target, and never claim, an optimal policy in the entropy-regularized RL framework. Rather, our reward representation is **defined** and **constructed from scratch**: any representation is acceptable as long as it gives a tractable way to estimate the Q value and makes Eq. (2) in line 137 hold. Therefore, we do not need to follow the restrictions of [2,3], and our reward representation does not necessarily relate to theirs.
Moreover, if a baseline term $Z(X)$ were added, one can prove that **the following equation would no longer hold**, with the proof done by simply substituting the new reward representation (with a $Z(X)$) into the right-hand side of the equation: $$ \sum_{i=1}^{t} \beta \log \frac{\pi_\phi(y_i \mid \mathbf{y}_{<i})}{\pi_\text{ref}(y_i \mid \mathbf{y}_{<i})} = \beta \log \mathbb{E}_{\pi_\text{ref}(\mathbf{y} \mid \mathbf{y}_{\leq t})} \left[ e^{\frac{1}{\beta} r_\phi(\mathbf{y})} \right] $$ That is, Proposition 3.1 would not consistently hold for all ORM objectives if we followed the reward representation of [2,3], and we would not be able to find a tractable Q value. We plan to add this discussion in the next version. > How do you manage the normalization constant in the cross-entropy (CE) update? **A**: Please refer to the response on $Z(X)$ above. As stated in Proposition 3.1, we do not need a baseline in our reward representation. > The paper concludes that the implicit DPO reward outperforms the other objectives studied. Implicit DPO was originally proposed in [1]. Given this context, it is unclear what novelty this paper offers, as the best algorithm was introduced in previous work. **A**: First, [1] directly adopts the theory from [3], which only applies to DPO. However, as discussed above, our work provides a more general proposition with a solid theoretical contribution, applying to any ORM objective, including the CE loss, as long as it uses our reward representation, in contrast to both [1] and [3], which are tailored to DPO. Compared to these previous works, this paper provides a fresh and more general perspective on implicit rewards, which we believe holds significant value. Second, as shown in Figure 5 and Figure 6, Implicit PRM with CE loss is more data efficient than its DPO counterpart while showing better performance when integrated with majority vote, presenting an appealing alternative in practice, as in many scenarios pairwise data is hard to collect.
Also, the CE variant requires only one example per forward and backward pass, while the DPO variant has to consider a pair of examples at the same time. As a result, the CE variant reduces memory overhead in RL training, as observed by a recent work that directly adopts our method (published after the ICML submission deadline). Therefore, we believe our generalization to unpaired losses holds great theoretical and practical merit compared to previous works. > In Figure 4, why does increasing the number of instructions hurt performance? **A:** We'd like to clarify: in most cases, increasing the number of instructions improves model performance. The only exception is using all instructions to train 8B-sized PRMs and testing on Llama-3.1-70B-Inst. As the additional instructions come with Llama-3.1-8B-Instruct-generated responses, we conjecture that the performance drop can be attributed to the PRM overfitting to responses from the small model and being unable to generalize to larger models. This discussion will be added in the revision. [1] TreeBoN: Enhancing inference-time alignment with speculative tree-search and best-of-N sampling. Qiu et al. 2024. [2] Direct Preference Optimization: Your Language Model is Secretly a Reward Model. Rafailov et al. 2023. [3] From r to Q∗: Your Language Model is Secretly a Q-Function. Rafailov et al. 2024.
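For concreteness, the reward representation at the center of this exchange (per-token beta-scaled log-likelihood ratios whose prefix sums serve as Q-value estimates, as in the displayed equation above) can be sketched as follows. This is a minimal illustration assuming per-token log-probabilities from the policy and reference models are already available; it is not the authors' implementation:

```python
def implicit_step_rewards(logp_policy, logp_ref, beta=1.0):
    """Per-token implicit rewards r_t = beta * log(pi(y_t|y_<t) / pi_ref(y_t|y_<t)),
    plus their running (prefix) sums, which act as Q-value estimates for each
    partial response -- no normalization constant Z(x) is involved."""
    step = [beta * (lp - lr) for lp, lr in zip(logp_policy, logp_ref)]
    prefix, total = [], 0.0
    for r in step:
        total += r
        prefix.append(total)
    return step, prefix

# Toy log-probabilities (assumed, not from a real model):
step, prefix = implicit_step_rewards([-1.0, -0.5], [-1.5, -1.0], beta=2.0)
# step = [1.0, 1.0], prefix = [1.0, 2.0]
```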
(How) Do Language Models Track State?
Accept (poster)
Summary: This paper investigates state tracking mechanisms in transformer-based language models. In particular, this paper applies interpretability analyses to Pythia and GPT-2 models as they solve word problems in $S_3$ and $S_5$. The authors find signatures relating to two computational models, PAA and AA, and demonstrate that different training curricula can give rise to models using these two mechanisms. Claims And Evidence: The authors provide strong evidence that the signatures they propose are realized in the transformers they analyze. Methods And Evaluation Criteria: The proposed methods make sense. However, I was a bit confused about the pretraining vs. training distinction. When the models are said to be pretrained (cf. line 282), does this mean that sometimes they were LLMs trained on the entire internet, and other times (Section 5.3) the weights were initialized randomly and the models were pre-trained explicitly on state tracking problems? Theoretical Claims: I did not follow the rationale for the AA and PAA signatures. This is an important issue, because the empirical results hinge on the AA and PAA signatures actually corresponding to the AA and PAA algorithms. I think it's very important that the authors clarify this point. Here are my questions and confusions about the theoretical aspects of this paper in detail. 1. In the algorithm blocks, $n$ is used. What is $n$? Is it the number of tokens? In the description of the word problem (Section 2.2), $t$ is used for the number of tokens. I find this shift in notation very confusing. 2. I do not understand the signatures for AA and PAA as described in this paper. For example, take Figure 1. In the prefix patching signature section for AA, there are four rows of squares, where the first row is entirely blue, and the bottom row only has a blue square on the lower right-hand side. This signature simply does not make sense to me with the definition of prefix patching as described in this paper.
I understand the top row to be the input layer and the bottom row to be the deepest layer, as seems to be suggested by the diagrams of the algorithms in Figure 1. My understanding is that the token index increases from left to right. So, I don't understand how merely patching the very first token in the first layer (which seems to be the upper left) could result in an algorithm with a corrupted input predicting the correct state. I would have thought that this clean input would have been corrupted by running the AA algorithm with states corresponding to an entirely different input sequence. I've thought as hard as I can about this issue and it just doesn't make any sense to me. Unfortunately, I can't recommend accepting this paper while this confusion persists, because everything else about this paper seems like a really solid piece of work. I hope that the authors can provide a crystal clear explanation of the PAA and AA signatures during the rebuttal period. Experimental Designs Or Analyses: I did not check the experiments. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: There is a lot of discussion about whether or not foundation models can track state, and this paper makes a nice contribution by concretely showing mechanisms by which transformers could track state and empirically verifying them on pretrained models.
Essential References Not Discussed: I was a bit surprised to read the second sentence of this paper: > A growing body of work suggests that these models learn to represent the latent state of the world These papers would seem to argue differently: * Merrill and Sabharwal '23, "The Parallelism Tradeoff: Limitations of Log-Precision Transformers", and Merrill et al. '22, "Saturated transformers are constant-depth threshold circuits" * Huet et al. '25, "Episodic Memories Generation and Evaluation Benchmark for LLMs" * Bhattamishra et al. '20, "On the Ability and Limitations of Transformers to Recognize Formal Languages" * Deletang et al. '23 * Sarrof et al. '24 * Strobl et al. '24 A more nuanced discussion was provided in Section 2.1, however. It would be nice to at least have some references accompanying the second sentence of the paper. Also, the authors might consider a more balanced second sentence that acknowledges work suggesting transformer-based LMs can really struggle with tracking state. Other Strengths And Weaknesses: The paper as a whole is very well written. Other Comments Or Suggestions: The extra dotted lines in Figure 1 (for the state parity probe) are a bit confusing until one reads the footnote on the bottom of page 5, but this takes quite a while to get to. Questions For Authors: My essential question is about the correctness of the AA and PAA signatures; see my question under "Theoretical Claims". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are glad the reviewer found this paper to be a “really solid piece of work” and “as a whole [...] very well written”! We are happy to clarify your questions below: ## Pretraining vs training distinction, were pre-trained models pre-trained explicitly on state-tracking problems or the entire internet? When we talk about pre-trained models, we always mean models pre-trained on the entire internet (*not* state-tracking problems), unless otherwise specified. We will clarify this in future versions! ## What is $n$? Is it the number of tokens? Good catch! Yes, we meant $t$ instead of $n$ for the number of tokens in the algorithmic blocks. Apologies for the confusion! We will fix this in future versions of our paper. ## Figure 1: In the prefix patching signature for AA, there are four rows of squares, where the first row is entirely blue, and the bottom row only has a blue square in the lower right-hand side. This is merely a high-level extrapolation of the empirical result shown in Figure 2, which might make more sense with more details. ## I am understanding the top row to be the input layer, and the bottom row to be a deepest layer, the token index is increasing from left to right. This is correct! ## Why does just patching the first token result in an incorrect state? Great question! **We create pairs of prompts by corrupting only the *first* token of the prompt**. This is shown visually in section B of Fig 1 (the two prompts differ on action $a_1$), but we realize how this may be easy to miss and we will make it clearer in the final submission. Corrupting only the first token would result in two prompts with a differing final state. Restoring the activation of the first token at the embedding layer is equivalent to prompting the LM with the correct prompt, and thus is sufficient for producing the final correct state. Having the two prompts differ only at the first token allows us to track the impact of that first token through the network. 
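The patching setup just described can be illustrated on a toy state-tracking "model" whose prefix states are cumulative permutation compositions. This is an editorial sketch for illustration only, not the paper's transformer experiments:

```python
from itertools import accumulate

def compose(p, q):
    """Compose two permutations given as tuples: (p . q)[i] = p[q[i]]."""
    return tuple(p[i] for i in q)

def run_states(actions, identity=(0, 1, 2)):
    """Toy 'model': the prefix state after each action is the cumulative
    composition of all actions so far."""
    return list(accumulate(actions, compose, initial=identity))[1:]

# The two prompts differ ONLY in the first action, as in the rebuttal's setup.
clean     = [(1, 0, 2), (0, 2, 1), (1, 0, 2)]
corrupted = [(2, 1, 0), (0, 2, 1), (1, 0, 2)]

# Corrupting the first token changes the final state...
print(run_states(corrupted)[-1] != run_states(clean)[-1])  # prints: True

# ...and "patching" the clean first token back in (the analogue of restoring
# its activation at the embedding layer) restores the correct final state.
patched = [clean[0]] + corrupted[1:]
print(run_states(patched)[-1] == run_states(clean)[-1])    # prints: True
```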
In early experiments where we corrupt a token near the middle or end of the prompt, we found similar patching patterns, only more vertically compressed (as would be expected). We will add this clarification to Section 2.3 and Section 4.2 in the paper!

## Is the claim “A growing body of work suggests that these models learn to represent the latent state of the world” true in light of other work?

This is a fair point. We want to emphasize the difference between (1) "state representations are decodable" and (2) "models use state tracking procedures that generalize to arbitrarily long inputs." The first has been shown by prior work [2,3], while the second remains underexplored [4] but also is not a necessary condition for (1). We will add the aforementioned references for (1) to the second sentence, and soften the claim of the sentence to the following:

> A growing body of work suggests that the latent state of the world can be decoded from model internals—e.g. situations described by language and results of program execution—to support prediction.

We will also add the suggested sources – thanks for providing them!

## The extra dotted lines in Figure 1 (for the state parity probe) are a bit confusing until one reads the footnote on the bottom of page 5

Thanks for noting this! We will add a line in the caption in Figure 1: “Note the dotted lines indicate two different probing signatures that would both be consistent with this algorithm (see Appendix C.1 for more details).”

[2] Li et al., 2022. Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task. https://arxiv.org/pdf/2210.13382
[3] Li et al., 2021. Implicit Representations of Meaning in Neural Language Models. https://arxiv.org/abs/2106.00737

---

Rebuttal Comment 1.1: Comment: Thank you for your response!

> We create pairs of prompts by corrupting only the *first* token of the prompt.

Thank you for this very helpful clarification. Yes, I think I now understand.
I agree it would be extremely helpful to explicitly include this sentence in the main text (as opposed to having to discern this experimental design choice from a figure caption and an inline equation).

> A growing body of work suggests that the latent state of the world can be decoded from model internals—e.g. situations described by language and results of program execution—to support prediction.

I strongly recommend that you include at least three references cited after the phrase "a growing body of work." Otherwise it's a bit vague and handwavy.

# Some additional thoughts and recommendations

## Abstract is vague

This sentence of the abstract is vague:

> The two mechanisms exhibit markedly different robustness properties, and we show how to steer LMs toward one or the other with intermediate training tasks that encourage or suppress the heuristics.

From reading the abstract, I can't tell if AA or PAA is better. In the conclusion you clearly write

> LMs that learn the AA algorithm tend to generalize better and converge faster.

I would strongly recommend that you replace the vague abstract sentence with this much clearer concluding sentence. Then someone picking up the paper for the first time can see that you are saying AA is more reliable than PAA just from the abstract.

## Figure 1B is hard to read

Thank you for the clarification about Figure 1B. Is there any way to make the figure itself clearer, i.e. that the squares are layers and the top is the input layer and the bottom the deepest layer? With words? It's just so hard to tell what's going on.

I'm raising my score to a 3. Nice job with this paper!
Summary: This paper investigates how language models track dynamic states through systematic analysis of permutation composition problems. The study reveals that both pretrained and fine-tuned Transformer models learn two distinct state-tracking mechanisms: an Associative Algorithm (AA) and a Parity-Augmented Associative Algorithm (PAA). The former solves tasks via hierarchical permutation composition, while the latter first prunes the state space using parity heuristics before refining results with AA. Experimental validation demonstrates the divergence between these mechanisms, and the authors propose intermediate tasks to guide model specialization.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: Yes.

Supplementary Material: N/A

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

### Strengths
1. The permutation composition framework offers theoretical significance and generalizability to practical tasks like finite automata simulation, establishing a concise yet versatile testbed for state tracking research.
2. The work elucidates how architectural choices and initialization parameters influence algorithmic preference, while proposing actionable interventions to optimize state-tracking capabilities.

### Weaknesses
1. Exclusive reliance on artificially designed permutation tasks (S3/S5 groups) leaves unverified whether observed mechanisms generalize to real-world scenarios like natural language inference or code execution. Despite claims of generalizability, the absence of cross-task validation or concrete application cases substantially weakens practical relevance.
2. While noting that initialization and architecture dictate algorithm preference, the paper fails to uncover root causes (e.g., how initial parameters bias computational pathways or how attention implements parity heuristics).
This black-box attribution reduces findings to correlational observations rather than reproducible causal mechanisms.
3. Exclusive focus on final-state accuracy and parity correctness neglects critical dimensions of state tracking, including intermediate state consistency, robustness to long-range dependencies, and error tolerance under adversarial perturbations. Overreliance on parity shortcuts risks overestimating models’ true tracking capabilities.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. The paper mentions that model architecture influences algorithmic preference but lacks concrete analysis of how depth (number of layers) or hidden layer dimensionality (width) modulates the AA/PAA propensity. Do deeper networks inherently favor AA?
2. If applying the mechanisms discovered in permutation tasks to practical scenarios (e.g., tracking dialogue states or program execution traces), would structural modifications or specialized training strategies be required?

Ethical Review Flag: Flag this paper for an ethics review.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for your time and feedback. Below, we address some of your general critiques and specific questions.

## How could these mechanisms be applied to practical scenarios?

To emulate a more practical scenario, we train Pythia models on a version of our task with permutations expressed in *natural language*, e.g. “132” would be “swap positions 2 and 3.” while “312” would be “rotate the last item to the front.” We train LLMs to predict the *final state* (e.g. 231) from the *final period token* of the sequence. For example,

> Swap positions 2 and 3. Rotate the last item to the front. Swap positions 1 and 2.

would map to state “123”. We then conduct probing and activation patching experiments.

**Probing experiments**

We train probes to map from the activation of each layer at the position of the final “.” token to the final state.
* Pretrained Pythia-160M (PAA signature): https://ibb.co/5XwfF5h7
* Non-pretrained Pythia-160M (AA signature): https://ibb.co/R4HfbXhn

As with our results on synthetic data, probing results are consistent with the AA and PAA mechanisms respectively.

**Activation patching experiments**

We patch prefixes up to a fixed token position N. Note that token positions may no longer be aligned in natural language: e.g. “swap” actions have 5 tokens, while “rotate” actions have 7. Thus, we may not be replacing the activations of the *same* number of actions between prompts. Nonetheless, we still believe the activation patching results serve as a good proxy for estimating how information gets propagated through the layers.
* Pretrained Pythia-160M: https://ibb.co/Tqdcfb4N
* Non-pretrained Pythia-160M: https://ibb.co/0VXSNNxP

As above, these resemble AA and PAA signatures from the paper. Interestingly, the pretrained results are significantly more “compressed” over the layers – the LLM computes the state very early on.
We suspect this may be due to the pre-trained LLM taking advantage of its innate natural language understanding (and perhaps pre-trained state tracking abilities!) to quickly solve the task in an early layer.

## Root causes for why models are biased towards one or the other algorithm

The goal of this work wasn’t to provide a definitive account of how these algorithms emerged, but rather to identify them, provide evidence for how they contrast, and factors that influence how they emerge over training. We believe a comprehensive causal account of the emergence of these algorithms is out of the scope of this paper, and may require novel interpretability tools that have not yet been invented. We hope future research can be conducted on this topic, and we will add a point on this in our conclusion!

## Exclusive focus on final-state accuracy neglects intermediate state accuracy, robustness to long-range dependencies, and error tolerance under adversarial perturbations

We respectfully disagree about the first two.
1. We train and evaluate models to predict all intermediate states up to sequence length 100 (Section 4.1). We also use probing techniques to verify whether the model's internal representation aligns with the ground truth (Section 4.3).
2. We also examine long-range dependencies by performing activation patching early in the sequence (the very first token!) and seeing how it affects the state prediction at the *last* token (Section 4.2).

We would appreciate clarification on what kind of experiments would best elucidate “error tolerance under adversarial perturbations,” as we're uncertain which adversarial perturbations would be most relevant in the context of our setup.

## Overreliance on parity shortcuts risks overestimating models’ true tracking capabilities

We’d appreciate some clarification on how our measurements of the degree to which models rely on parity heuristics affect our estimation of LMs’ state tracking capabilities!
For context, our analysis is grounded in whether the model can complete the permutation group task, i.e. their “true tracking capabilities”. We find that models implement an associative scan in both algorithms, and the parity shortcut is simply a heuristic that eliminates incorrect answers in earlier layers for some models (see Sec. 4.2 and Appendix B). In this sense, the parity shortcut cannot implement state tracking on its own and is only complementary to the “true tracking” algorithm. Note that in all our Section 4 analyses (probing, activation patching, generalization, etc.), models are able to achieve 100% accuracy on exponentially longer sequences as we go down the layers, which is impossible if they were relying on a parity heuristic alone.

## Do deeper/wider networks inherently favor AA?

We do this analysis in Section 5.2 (please check out Figure 6). We found that model size isn’t correlated with the algorithm the model chooses to learn; rather, model architectural and initialization differences matter more.
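To make the role of the parity heuristic concrete, here is a small self-contained sketch (our illustration, not the authors' code): in $S_3$, the parity of the composed state is determined by the parities of the individual actions, so a parity shortcut can discard the three candidate states of the wrong parity, but the remaining three can only be distinguished by actually composing the permutations.

```python
from itertools import permutations
from functools import reduce

# All six permutations of S3, as tuples mapping position i -> item.
S3 = list(permutations(range(3)))

def compose(p, q):
    """(p o q)[i] = p[q[i]]: apply q first, then p."""
    return tuple(p[j] for j in q)

def parity(p):
    """0 for even permutations, 1 for odd (inversion count mod 2)."""
    return sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2

actions = [S3[1], S3[4], S3[2], S3[5]]  # an arbitrary action sequence
state = reduce(lambda s, a: compose(a, s), actions, (0, 1, 2))

# The parity of the final state follows from token-level parities alone...
assert parity(state) == sum(parity(a) for a in actions) % 2
# ...which rules out the three states of the wrong parity, but no more:
candidates = [p for p in S3 if parity(p) == parity(state)]
print(len(candidates))  # 3 of the 6 states survive the parity filter
```

This is consistent with the point above: parity is a cheap filter that complements, rather than replaces, the associative state-tracking computation.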
Summary: The paper studies the mechanisms that Transformers learn for performing state tracking - predicting the state after a sequence of operations. Specifically, the model is provided with a sequence of permutations $a_1, \dots, a_t$, and needs to compute the state, which is the composition of the permutations $s_t = a_t \circ \dots \circ a_1$. The authors discuss possible algorithms for performing state tracking, and demonstrate that Transformers learn either an "associative scan" algorithm, which computes in parallel compositions of permutations, or a "parity-associative scan" which uses a computation of the parity of the permutations for ruling out some of the outputs before performing associative scan. The authors discuss the difference between the two mechanisms and how a model can be steered towards performing one mechanism and not the other.

Claims And Evidence: I think the paper is very well-written, and studies a very central question regarding the computational abilities of Transformers and how they learn to solve state tracking problems. The results are novel and interesting; the authors demonstrate that Transformers learn a surprising algorithm for state tracking that leverages the permutation parity computation in order to find the final state. Additionally, the results on steering the model towards one solution instead of the other are very interesting.

Some questions and comments:
1. I think the description of the task is not clear enough:
   - Are the permutations provided as individual tokens (i.e., the size of the vocabulary is the number of permutations)?
   - "the input to the model is a sequence of actions $[a_1, \dots, a_t]$ and the output is a sequence of state predictions $[s_1, \dots, s_t]$": is this a causal language model trained to perform a sequence-to-sequence task (i.e., not next-token prediction)? Or is this an encoder-only sequence-to-sequence model?
   What would be the equivalent next-token prediction variant of this task, and do you think this changes the results/conclusions?
2. Related to the previous point: if I understand correctly, the model is provided with the sequence of states after each step as supervision. This can potentially change the function that the model learns, as it could potentially guide it to generate a particular state at a particular position. Did you try training this end-to-end (i.e., get a sequence of actions and output only the final state)? Would this change the results/conclusions?
3. In Section 3, the description of how the different algorithms are implemented by the Transformer implicitly assumes that everything is implemented starting from the first layer. If the Transformer has more layers than the minimal number of layers required for solving the task, it could potentially pass the tokens through the first layers and start implementing the algorithm deeper in the network, or otherwise skip layers in the middle. Is the claim that the model always learns the most compressed (fewest layers) version of the algorithm starting from the first layer?
4. The experiments are done with data repetition (generating 1 million sequences repeated for 20 epochs). Is there a reason for not training on fresh data? Does this change the results?
5. In Figure 5 - what is the sequence length that the models are trained on?
6. In Section 5.3 - do you have any intuitive explanation for why adding the topic modeling steers the model towards the associative scan algorithm?
7. I think that adding an experiment that connects the results in the paper to a more realistic setting (e.g., word problems that require state tracking) would be a nice addition to the paper.

Minor:
- I think that putting labels on the axes of the matrices in Figure 1 (Layers/Sequence Length) will make it easier to parse.
- Typo in line 302 left and 282 right: "S3 and S3" => "S3 and S5".
Methods And Evaluation Criteria: See above.

Theoretical Claims: See above.

Experimental Designs Or Analyses: See above.

Supplementary Material: No

Relation To Broader Scientific Literature: See above.

Essential References Not Discussed: No

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are thankful for your positive feedback and thoughtful comments. Below, we address some specific questions:

## Are permutations provided as individual tokens (i.e., the size of the vocabulary is the number of permutations)?

Yes, they are individual tokens! We append them as special tokens to the tokenizer, but since none of the original tokens are used, the vocabulary is effectively the number of permutations. See our response to qUEj (point #1) about results with natural language text rather than these tokens.

## Clarify training supervision: Is this a causal LM trained to perform a seq2seq task? Is this NTP? Or an encoder-only seq2seq model? What is the equivalent NTP variant?

You can think of this as a *causal decoder-only LM* trained to perform a sequence classification task – for each prefix of a sequence, it is trained to predict the state from the last token of the prefix. Because any action is valid from any state in the permutation composition task, there would be no way to distinguish between the various states through NTP alone. Thus, we did not run an NTP version of the task, though we concur that this would be an interesting avenue of future work.

## Would training end-to-end to output the final state change results/conclusions?

Great question! We chose our original method for a denser supervision signal from each example. We suspect that training end-to-end won’t make a big difference, but would slow training down.

## Is the claim that the model always learns the most compressed (fewer layers) version of the algorithm starting from the first layer?

These Section 3 descriptions are simply meant as high-level guides and illustrations of general algorithms; in practice, LMs may skip layers or tokens, or even use a combination of algorithms. As shown in Figure 14, LMs appear to implement something much more redundant and complex than the illustrated associative algorithm in Figure 1.

## Why not train on fresh data?
Why data repetition? Does this change results?

This is purely for memory reasons. It is hard to check that each new data point is unique beyond a certain point, due to needing to store all previously-generated data points in memory. (We ensure all our training examples are distinct from each other, and distinct from the evaluation set.) We use a sufficiently big dataset that we don’t think it’ll make a real difference realistically.

## Figure 5: sequence length the model is trained on?

The model is trained up to sequence length 100, see L280 in the main text. We’ll also include this information in the figure caption in future versions of the paper!

## Sec 5.3: Intuitively, why does topic modeling steer towards associative algorithms?

We don’t have a definitive answer to this question, but here’s a hypothesis: when we're training with topic modeling, the model learns a summary representation of the action tokens in place of the parity scheme the model would normally learn otherwise. Training downstream on state tracking, it would be hard for the model to unlearn the topic modeling representation and pick up on the parity heuristic representation, rather than adapting the existing representation to solve the state tracking task.

## Adding an experiment with a more realistic setting

Great question! See our response to qUEj (point #1).

## Minor typos

We will fix, thank you!
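As an aside for readers unfamiliar with the associative scan discussed in this thread: because permutation composition is associative, the prefix states $s_t = a_t \circ \dots \circ a_1$ can be computed in $O(\log t)$ parallel composition rounds instead of $t$ sequential steps. A minimal sketch (ours, not the paper's implementation), using a Hillis-Steele inclusive scan over $S_3$:

```python
import math
import random
from itertools import permutations

def compose(p, q):
    """(p o q)[i] = p[q[i]]: apply q first, then p."""
    return tuple(p[j] for j in q)

def scan_sequential(actions):
    """t sequential steps: s_t = a_t o s_{t-1}."""
    states, s = [], tuple(range(3))
    for a in actions:
        s = compose(a, s)
        states.append(s)
    return states

def scan_parallel(actions):
    """Hillis-Steele inclusive scan: ceil(log2(t)) rounds of pairwise composition."""
    cur, d = list(actions), 1
    while d < len(cur):
        # each position absorbs the segment ending d steps to its left
        cur = [compose(cur[i], cur[i - d]) if i >= d else cur[i]
               for i in range(len(cur))]
        d *= 2
    return cur

random.seed(0)
S3 = list(permutations(range(3)))
actions = [random.choice(S3) for _ in range(100)]
assert scan_parallel(actions) == scan_sequential(actions)
print(math.ceil(math.log2(len(actions))))  # 7 rounds instead of 100 steps
```

The logarithmic round count is what makes a depth-limited Transformer a plausible host for this algorithm over the trained sequence length of 100.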
Summary: It is known that transformers are theoretically able to capture certain formal language tasks of length $n$ with depth $\log(n)$. Empirically, large language models, which are primarily based on transformers, do appear to learn to state track. A full understanding of the mechanism that they learn for state-tracking is still missing. This paper analyses language models trained on permutation word problems (which for a dictionary of greater than 5 is $NC^1$-complete). Four possible mechanisms are hypothesised, along with what patterns would be expected from prefix patching and probing experiments. This paper shows that the language models tested consistently learn one of two state tracking mechanisms for this task. Generalisation properties of these mechanisms are explored, and how to steer the network to converge from one mechanism to the other.

Claims And Evidence: This paper proposes 4 possible schemes that language models can learn to state track and illustrates the expected signatures for the probing and patching experiments. The extensive experimental results match the signatures expected from two of these four schemes. The signatures are necessary for the AA and PAA schemes proposed; however, it wasn't clear whether these are sufficient conditions. Would another similar scheme give similar signatures?

Methods And Evaluation Criteria: For interpretation of how the language models are able to track state, understanding how important the weights/inputs to each layer are is important. The probing and patching experiments seem useful to test the sensitivity of the network outputs.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses: There seems to be extensive evaluation of the probing and patching experiments, with more results seen in the Appendix. This also enables readers to understand how early on in training the network seems to converge to these state tracking methods.
A linear probe is trained, and it would be good to perform a sensitivity analysis to see how varying this probe would affect the results.

Supplementary Material: I looked through the pages in the Appendix.

Relation To Broader Scientific Literature: This paper sits well in the literature of mechanistic understanding and comprehending how transformers may achieve their state tracking abilities. It's interesting to see how one of the mechanisms learned is similar to the associative scan.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Overall this is a relatively well-written paper, which explores the mechanistic understanding of language models and has novel results. Some explanations can be made clearer (see below).

Other Comments Or Suggestions: There is a lot going on in Figure 1, some of which I find difficult to interpret. For example, for the probing signature, what is represented by the two bifurcating dashed lines for the state parity probe for the sequential and AA algorithms?

Minor points:
- Line 12: jylin04 reference seems to have incorrect author name.
- Line 151: $h_{lt}$ -> $h_{t,l}$
- Line 291: lenght -> length
- There is a mix of using $\S$ and "Section" to refer to sections, which can be unified.

Questions For Authors: How sensitive is activation patching across different data samples $x$ or alternative $x'$?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thoughtful review, especially the time you took to examine the full submission, including the Appendix. We’re also grateful for your affirmation of our work’s contribution to the literature. Below, we address some of your specific questions:

## Unclear if signatures are sufficient conditions. Would an alternate scheme give similar signatures?

You raise an important point: the signatures we describe are necessary, but not sufficient, conditions for the proposed algorithms. We tried to make a distinction between theoretical “algorithms” and empirical “mechanisms” in the paper, but to make this distinction clearer, we will add the following clarification at the beginning of Section 4 in the final version:

> It is important to emphasize that the various signatures described above provide necessary, but not sufficient, conditions for implementation of the associated algorithm; the exact mechanism that LMs use in practice is likely complex and dependent on other input features not captured by the algorithms described above.

## Sensitivity analysis for linear probe: how would varying the probe affect results?

We trained the probe across 10 different random data subsets on the S3 AA and PAA models, and measured the standard deviations across these runs.
| Layer | S3 Pythia-160M (PAA) State Std | S3 Pythia-160M (PAA) Parity Std | Pythia-160M (AA) State Std | Pythia-160M (AA) Parity Std |
|-------|-------------------------------|----------------------------------|----------------------------|-----------------------------|
| 0 | 4.16e-06 | 3.03e-06 | 6.33e-06 | 2.71e-06 |
| 1 | 3.36e-06 | 3.31e-06 | 2.62e-06 | 2.82e-06 |
| 2 | 4.52e-06 | 5.45e-06 | 1.46e-06 | 8.97e-07 |
| 3 | 4.31e-05 | 5.59e-04 | 5.00e-06 | 1.38e-06 |
| 4 | 1.58e-05 | 3.72e-05 | 4.16e-06 | 1.11e-06 |
| 5 | 1.70e-05 | 5.38e-06 | 1.97e-06 | 1.11e-06 |
| 6 | 1.87e-05 | 7.02e-06 | 4.73e-06 | 1.59e-06 |
| 7 | 1.29e-05 | 1.34e-05 | 7.79e-06 | 3.05e-06 |
| 8 | 3.18e-05 | 3.08e-05 | 1.18e-05 | 2.64e-06 |
| 9 | 2.96e-04 | 6.85e-05 | 2.36e-05 | 3.53e-06 |
| 10 | 3.10e-04 | 7.98e-05 | 2.60e-05 | 2.96e-06 |
| 11 | 1.86e-04 | 8.22e-05 | 8.83e-06 | 2.77e-06 |
| 12 | 7.05e-05 | 4.01e-05 | 1.20e-06 | 2.75e-06 |

The standard deviations are all quite small, indicating that our results are highly insensitive to randomness in training. Please let us know if this aligns with what you had in mind for “sensitivity analysis,” or something else!

## Sensitivity of activation patching to varying $x$ or $x'$

We performed activation patching experiments across 200 different input pairs $(x, x')$. We plot a heatmap of the standard deviations across various (layer, position) of these pairs in:
* S3 AA: https://ibb.co/5gmqB2Q2
* S3 PAA: https://ibb.co/R4YGwC4z

Note that for the PAA models, the standard deviations are highest in the middle chunk — for half the prompt pairs (when parity is equal), replacing the middle chunk results in the corrected answer, while for the other half, replacing the middle chunk has little to no effect.

## In Figure 1, what is represented by the two bifurcating dashed lines for the state parity probe?

The dashed lines refer to the interesting fact that for models implementing AA, some encode parity linearly, and some do not.
We go into more detail on this in Appendix C.1. We will add the following to the caption of Figure 1 to avoid confusion: “Note the dotted lines indicate two different probing signatures that would both be consistent with this algorithm (see Appendix C.1 for more details).”

## jylin04 seems incorrect

We were referencing a blog post [1] and weren’t sure about the citing format ourselves, but we think the reference is correct given that we don’t know the authors’ full names.

## Other typos / inconsistencies

We will correct, thanks!

[1] OthelloGPT learned a bag of heuristics. https://www.lesswrong.com/posts/gcpNuEZnxAPayaKBY/othellogpt-learned-a-bag-of-heuristics-1

---

Rebuttal Comment 1.1: Comment: Thank you for your response. The sensitivity analysis for the probe seems good. For jylin04, I think, looking at the other posts of this user, that they may be Jennifer Lin (https://www.lesswrong.com/posts/nRu92PXLrdwqdtQmn/more-recent-progress-in-the-theory-of-neural-networks-1), but agreed we can keep it as jylin04 (it just looked slightly odd on the page upon first reading). The suggested additions to the text regarding sufficiency and pointing to Appendix C are useful, thanks! For the sensitivity analysis on $x$ and $x'$, is it possible to see the PAA plot split into the two cases discussed?

---

Reply to Comment 1.1.1: Comment: Yes, here are the standard deviation heatmaps for PAA split into the
* equal parity case: https://ibb.co/6cxM9MR4
* different parity case: https://ibb.co/wq8kFBJ

Note the relatively small magnitudes of the standard deviations in each case. The equal parity stddevs resemble those of the AA algorithm (as the parity complement is computed associatively), while the different parity one appears to have nonzero stddevs throughout the middle chunk, depending on at what point parity gets computed.
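For readers who want to reproduce this style of probe-sensitivity analysis, here is a minimal sketch on *synthetic* activations (the data, dimensions, and ridge probe are our own placeholder choices, not the paper's setup): fit a linear probe on several random data subsets and report the spread of its accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_states = 2000, 64, 6  # hypothetical sample count, hidden width, |S3|

# Synthetic stand-in for layer activations: noisy linear images of a one-hot state.
states = rng.integers(0, n_states, size=n)
W = rng.normal(size=(n_states, d))
X = np.eye(n_states)[states] @ W + 0.1 * rng.normal(size=(n, d))

def probe_accuracy(X, y, seed):
    """Fit a ridge-regression probe to one-hot labels on a random 80% split."""
    idx = np.random.default_rng(seed).permutation(len(y))
    cut = int(0.8 * len(y))
    tr, te = idx[:cut], idx[cut:]
    Y = np.eye(n_states)[y[tr]]
    B = np.linalg.solve(X[tr].T @ X[tr] + 1e-3 * np.eye(d), X[tr].T @ Y)
    return (np.argmax(X[te] @ B, axis=1) == y[te]).mean()

accs = np.array([probe_accuracy(X, states, s) for s in range(10)])
print(accs.mean(), accs.std())  # high accuracy, tiny std across subsets
```

When the state is linearly decodable, as in the rebuttal's experiments, the accuracy spread across subsets stays small, which is the signature of a robust probe.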
BILBO: BILevel Bayesian Optimization
Accept (poster)
Summary: This paper proposes a bi-level Bayesian optimization algorithm for black-box functions. By optimizing both upper- and lower-level problems simultaneously, it improves sample efficiency. Theoretically, BILBO achieves a sublinear regret bound for common kernels. It also demonstrates strong empirical performance on synthetic and real-world problems.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: I checked the key steps for the proof of the main theoretical results, but not all the details. As far as I know, the proofs are correct.

Experimental Designs Or Analyses: Yes. I checked the experimental results, and they are mostly sound to me. One issue is that some of the results shown in the experimental section seem to be incomplete, such as the green curves in Fig. 4 (a, b, c).

Supplementary Material: No.

Relation To Broader Scientific Literature: Confidence-bound algorithms are extended to bi-level constrained problems, which could be of general interest to Bayesian optimization, bi-level optimization, and multi-armed bandits research.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:
1. This work addresses a significant problem and introduces a novel approach to bilevel Bayesian optimization.
2. Unlike existing methods that require full lower-level optimization for each upper-level query, the proposed algorithm optimizes both levels concurrently, significantly improving sample efficiency.
3. It also incorporates an infeasibility declaration method and achieves a theoretical sublinear regret bound, extending the classic bound from single-level to bilevel problems.
4. Experimental results demonstrate its effectiveness.

Other Comments Or Suggestions: This approach maintains a set of nearly optimal solutions for the lower-level problem. While discretization is feasible in low-dimensional cases, scaling to even moderately high-dimensional problems is likely challenging.
Questions For Authors: What if the lower-level problem is a white-box problem that can be solved via a nonlinear programming solver? Can a similar idea work for this case?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review, and for appreciating our novel approach to a significant problem, the sample efficiency of our method, our sublinear regret bound, and our effective experimental results. We would like to clarify the following points.

> One issue is that some of the results shown in the experimental section seem to be non-complete, such as the green curves in the Fig. 4 (a, b, c).

The green curves represent Nested's number of queries over regret. There is an obvious gap at the start of each green curve, as the Nested method requires more initial observations of lower-level functions to optimize the lower-level problems separately before estimations begin, compared to BILBO and TrustedRand.

> What if the lower-level problem is a white-box problem that can be solved via a nonlinear programming solver? Can similar idea work for this case?

White-box problems are often inexpensive to compute. We can still apply our algorithm, evaluate the white-box problem at every step, and reduce the confidence interval to a point, since there are no noisy observations. If a nonlinear programming solver for the lower level is preferred (e.g. for computational efficiency), we can also reduce BILBO to a special case where Bayesian optimization is carried out only on the upper level. However, we would also like to point out that BILBO's advantages are in settings with expensive black-box evaluations and noisy observations, both of which are not present in this scenario.

We hope these clarifications answer your questions and improve your opinion of our paper.
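For intuition, here is a toy single-level sketch of the confidence-bound trusted-set idea that this line of work builds on (our own illustration with a placeholder objective, RBF kernel, and exploration weight β; BILBO itself additionally handles the lower level and the constraints): grid points whose lower confidence bound exceeds the best upper confidence bound cannot be the minimizer and can be excluded from further queries.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.5 * x  # hidden objective (to be minimized)
grid = np.linspace(0.0, 2.0, 200)      # discretized search space

# A handful of noisy observations
Xo = rng.uniform(0.0, 2.0, size=8)
yo = f(Xo) + 0.05 * rng.normal(size=8)

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# Standard GP posterior on the grid (unit prior variance, noise std 0.05)
K = rbf(Xo, Xo) + 0.05**2 * np.eye(len(Xo))
Ks = rbf(grid, Xo)
mu = Ks @ np.linalg.solve(K, yo)
var = np.clip(1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T)), 0.0, None)
beta = 2.0
lcb, ucb = mu - beta * np.sqrt(var), mu + beta * np.sqrt(var)

# Trusted set: grid points that could still be the minimizer.
trusted = grid[lcb <= ucb.min()]
print(len(trusted), "of", len(grid), "grid points remain plausible minimizers")
```

The trusted set always contains the minimizer of the UCB, so queries restricted to it never discard the optimum with high probability; this is also why the set construction scales with the grid, which relates to the reviewer's discretization concern below.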
Summary: This paper introduces BILBO (BILevel Bayesian Optimization), a BO (Bayesian Optimization) algorithm used to address constrained bilevel optimization problems where the constraints, the upper objective, and the lower objective are all black boxes and assumed expensive to evaluate. BILBO distinguishes itself from the state of the art of derivative-free bilevel BO by its ability to simultaneously optimize the objectives (both upper and lower) and discover the unknown constraints. The performance of BILBO is studied both theoretically and empirically.

Claims And Evidence: The claims of the paper are the following:
(i) BILBO simultaneously discovers the constraints and optimizes the upper and lower objectives instead of requiring the optimization of the lower objective at each iteration.
(ii) BILBO has a sublinear cumulative regret.
(iii) BILBO performs empirically better than other bilevel optimization methods *in the particular setting considered in the paper*.

Overall, I think the claims are adequately supported. Claim (i) is verified by the definition of BILBO (see Algorithm 1), Claim (ii) is supported by Theorem 4.9, and Claim (iii) is backed up by numerical experiments on low-dimensional problems (see more comments on this particular point below).

Methods And Evaluation Criteria: Because BILBO addresses bilevel optimization in a very particular setting (expensive black-box objectives, expensive black-box constraints), there is no other bilevel optimization method that could directly compete with BILBO (as far as I know). Nevertheless, the paper introduces two benchmarks:
(i) **TrustedRand**: randomly selects queries from trusted sets defined in the paper and observes all the constraints/objectives at once,
(ii) **Nested**: applies BO to the upper objective only; the lower objective is optimized with Sequential Least-Squares Programming (SLSQP) at each query.
These two methods are compared against BILBO on synthetic and real-world experiments that make sense for the problem considered in this paper. Theoretical Claims: The main theoretical claim is that BILBO has a sublinear cumulative regret. This is backed up by common proof techniques in BO asymptotic analysis, and the bound holds with high probability as long as the search space is compact. Because the trusted sets are built on discretized search spaces, studying the behavior of BILBO given a discretization would have been a nice addition to the paper. Experimental Designs Or Analyses: The experiments are all low-dimensional (at most 5 dimensions for the SMD synthetic experiments, with 2 and 3 dimensions for the upper and lower objectives, respectively) and the continuous search spaces are discretized with a uniform grid. I believe this is a limitation of the approach, since BILBO requires scanning the whole search space for building trusted sets and solving up to two non-trivial optimization problems at each iteration. I don't think such an expensive process could scale nicely with the search space dimensions or with a finer discretization of the search spaces. To better study the limitations of BILBO, I would have liked to see other numerical results and analyses, such as:
* the immediate regret plotted against wall-clock time for the considered methods,
* the computational complexity of BILBO, especially in terms of the dimensionality of the search spaces and/or the granularity of the discretization,
* the performance of BILBO on bilevel optimization tasks where the lower objective is not expensive, to study how it performs against other bilevel BO solutions that make more restrictive assumptions such as [1, 2].

**References** [1] Kieffer, E., Danoy, G., Bouvry, P., and Nagih, A. Bayesian optimization approach of general bi-level problems. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1614–1621, 2017. [2] Wang, B., Singh, H.
K., and Ray, T. Comparing expected improvement and kriging believer for expensive bilevel optimization. In 2021 IEEE Congress on Evolutionary Computation (CEC), pp. 1635–1642. IEEE, 2021. Supplementary Material: The appendices were reviewed; there is no other supplementary material. Relation To Broader Scientific Literature: As far as I know, there is no other bilevel optimization method able to discover and optimize the constraints and the objectives when all these functions are expensive black boxes. Although I have some concerns about its scalability, BILBO is useful for solving a bilevel optimization problem under a minimal set of assumptions. Essential References Not Discussed: The very recent work [1] is discussed by the authors but ignored in the numerical experiments. Although comparing BILBO to this solution in numerical experiments would have been a nice addition to the paper, [1] falls under the ICML Concurrent Work policy. **References** [1] Ekmekcioglu, O., Aydin, N., & Branke, J. (2024). Bayesian Optimization of Bilevel Problems. arXiv preprint arXiv:2412.18518. Other Strengths And Weaknesses: As far as I understand the paper, BILBO is efficient when the objectives and the constraints are expensive black boxes. I have two comments about this:
* Although this is not a concept introduced by the authors, I think considering expensive black-box constraints is of little interest. In most applications, the constraints are known to the user beforehand and are not expensive to evaluate. Moreover, discovering the constraints during the optimization implies that the constraints are soft (i.e., they can be violated by the optimizer). Including unknown hard constraints (i.e., no measurement can be made if at least one constraint is violated) seems nontrivial in your framework.
* Although the sample efficiency of BILBO is definitely one of its strengths, the particular setting within which BILBO is efficient could be seen as a restriction.
In fact, as far as I know, most authors justify the use of BO for the upper objective precisely because they require, at each query, an expensive optimization of an inexpensive-to-evaluate lower objective. Given that a significant fraction of bilevel optimization problems occur in this setting, it should be clearly stated in the paper that BILBO can be less efficient (e.g., when comparing the instantaneous regrets w.r.t. the wall-clock times) than other bilevel BO methods when the lower objective is inexpensive to evaluate. Other Comments Or Suggestions: The paper contains many forward pointers (i.e., it refers to Lemmas or Definitions that are introduced later in the paper). I suggest replacing/removing them wherever possible to make reading the paper more pleasant. Questions For Authors: Here are some questions (labelled for future reference) to spark the discussion with the authors: 1. Have you studied the performance of BILBO when the trusted spaces are built on discretized versions of continuous search spaces? If yes, what additional assumption do you need to ensure a sublinear regret bound? If no, do you have any intuition about it? 2. How does the immediate regret of BILBO evolve as a function of the wall-clock time? 3. What is the computational complexity of BILBO as a function of the dimensionality of the search spaces? As a function of the cardinality of the uniform grid used for discretization? 4. How would you integrate unknown hard constraints in your framework? How would they impact the performance of BILBO? 5. Do you agree that BILBO could be less efficient than other bilevel BO solutions such as [1, 2] if the lower objective is inexpensive to evaluate? If yes, can you comment on that? If no, can you justify? **References** [1] Kieffer, E., Danoy, G., Bouvry, P., and Nagih, A. Bayesian optimization approach of general bi-level problems. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1614–1621, 2017.
[2] Wang, B., Singh, H. K., and Ray, T. Comparing expected improvement and kriging believer for expensive bilevel optimization. In 2021 IEEE Congress on Evolutionary Computation (CEC), pp. 1635–1642. IEEE, 2021. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful review, and for recognizing our sublinear cumulative regret and sample efficiency when both upper- and lower-level objectives are expensive blackboxes. >1. Have you studied the performance of BILBO when the trusted spaces are built on discretized versions of continuous search spaces? If yes, what additional assumption do you need to ensure a sublinear regret bound? If no, do you have any intuition about it? This is an interesting future research direction that we have not studied. One factor affecting the performance of discretized problems will be the difference between the best discretized point and the optimal point in the continuous domain. This difference may be bounded with respect to the granularity of the discretization, with Lipschitz continuity assumptions. As discretization becomes finer, the performance should converge to that of continuous search spaces. >3. What is the computational complexity of BILBO as a function of the dimensionality of the search spaces? As a function of the cardinality of the uniform grid used for discretization? For a discretized implementation, the computational complexity of BILBO is determined by the computational complexity of:
- Gaussian processes: $\mathcal{O}(n^3)$
- Updating trusted sets: $\mathcal{O}(|\mathcal{F}|c)$
- Selecting the function query: $\mathcal{O}(|\mathcal{F}|c)$
- Optimizing the acquisition function: $\mathcal{O}(c)$

where $n$ is the number of observations at an iteration, $c$ is the number of discrete points, and $|\mathcal{F}|$ is the number of blackbox functions. In uniform grid discretization, if each dimension is divided into $m$ points, then the cardinality is $c=m^d$, where $d$ is the number of dimensions. Thus, dimensionality $d$ exponentially affects computational complexity when using uniform grid discretization. Adaptive discretization may be able to mitigate the exponential factor of dimensionality, where effective dimension $d_\text{eff}\ll d$.
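To illustrate the $c = m^d$ blow-up described above, here is a minimal sketch (hypothetical helper, not from the paper) that builds a uniform grid and shows its cardinality growing exponentially with dimension:

```python
import itertools

def uniform_grid(bounds, m):
    # Build a uniform grid with m points per dimension over (lo, hi) bounds;
    # its cardinality is m ** d for d dimensions.
    axes = [
        [lo + (hi - lo) * i / (m - 1) for i in range(m)]
        for lo, hi in bounds
    ]
    return list(itertools.product(*axes))

# Cardinality grows exponentially with dimension d for a fixed granularity m.
grid_2d = uniform_grid([(0.0, 1.0)] * 2, m=10)  # 10**2 = 100 points
grid_5d = uniform_grid([(0.0, 1.0)] * 5, m=10)  # 10**5 = 100,000 points
```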
However, high dimensionality leads to common challenges that go beyond computational complexity (of BILBO). For example, it is also known to reduce the effectiveness of Gaussian processes as observations become increasingly sparse. >4. How would you integrate unknown hard constraints in your framework? How would they impact the performance of BILBO? We can use variational inference to approximate the Gaussian process posteriors while accounting for infeasible observations, e.g. approximating $p(c(x^*)\mid y(\mathbf{x}_1),c(\mathbf{x}_2)<0)$, where $y(\mathbf{x}_1)$ are noisy feasible observations and $\mathbf{x}_2$ are the observed infeasible points. While $c(\mathbf{x}_2)$ is not observed directly, we can condition on the information that $c(\mathbf{x}_2)<0$ since $\mathbf{x}_2$ is infeasible. More observations might be required to model constraint boundaries accurately as infeasible observations become less informative. Nevertheless, we can still apply BILBO with the primary adaptation being in the estimation of the posterior belief, while further investigation is needed to assess the effectiveness for hard constraints in practice. >5. Do you agree that BILBO could be less efficient than other bilevel BO solutions such as [1, 2] if the lower objective is inexpensive to evaluate? If yes, can you comment on that? If no, can you justify? The efficiency (in terms of wall-clock time) would depend on the computational cost of each iteration of BILBO versus the cost of evaluating the lower objective, as well as the optimality of lower-level estimates for nested methods such as [1, 2]. While it is possible for BILBO's computational cost to outweigh an inexpensive lower-level objective evaluation, BILBO still has advantages in scenarios with noisy observations or multiple lower-level solutions, which may be more common. Nested methods only solve for one solution for each lower-level optimization, and it can be suboptimal in these scenarios. 
On the other hand, BILBO manages the uncertainty of lower-level estimates in a principled way and allows for multiple lower-level estimates, possibly providing better lower-level estimates to reduce regret more effectively even if each iteration takes more time. >2. How does the immediate regret of BILBO evolve as a function of the wall-clock time? On a Mac Studio with M2 Ultra, for the 2D BraninHoo+GoldsteinPrice experiment, the average time per BILBO iteration is 0.132s, which is 2x slower than TrustedRand and ~50x slower than Nested. However, while the regret for Nested decreases faster than BILBO in the initial (~5) seconds, Nested's regret quickly plateaus due to suboptimal lower-level estimates of the multimodal lower-level objective. BILBO outperforms Nested subsequently as BILBO converges to a better solution, supporting our point in the previous question above. We hope these discussions on the efficiency, complexity and possible extensions of BILBO addressed your questions and will improve your opinion of our work.
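The effect of conditioning on an infeasible observation $c(\mathbf{x}_2)<0$, mentioned in the hard-constraints answer above, can be checked with a toy Monte Carlo sketch (ours, not the authors' variational scheme; the bivariate-Gaussian setup and helper name are illustrative assumptions):

```python
import math
import random

def mean_given_infeasible(rho, n=50_000, seed=0):
    # Toy rejection-sampling check: draw (c_star, c_obs) from a standard
    # bivariate Gaussian with correlation rho and estimate
    # E[c_star | c_obs < 0], i.e. the updated belief about the constraint
    # at a query point after observing that a correlated point is infeasible.
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        c_obs = z1
        c_star = rho * z1 + math.sqrt(1.0 - rho * rho) * z2
        if c_obs < 0.0:
            kept.append(c_star)
    return sum(kept) / len(kept)
```

Analytically, E[c_star | c_obs < 0] = -rho * sqrt(2/pi), so a positively correlated query point is pulled toward infeasibility by the observation, matching the intuition that infeasible points still carry (weaker) information.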
Summary: The paper proposes a UCB-based method for bilevel Bayesian optimization. The proposed method, called BILBO, selects the next point via a surrogate-model bilevel optimization based on upper confidence bounds. The authors provide a regret analysis following the approach of the well-known GP-UCB analysis. Claims And Evidence: Since the baselines are not well-known methods, efficiency is somewhat difficult to evaluate, though I am not sure whether other appropriate existing methods exist. Methods And Evaluation Criteria: Results on four benchmark functions and two real-world functions are reported. I think the amount of evaluation is sufficient. Theoretical Claims: I couldn't fully follow the proof. Experimental Designs Or Analyses: The experimental settings are seemingly reasonable. Supplementary Material: I partially read the proofs, though couldn't fully follow them. Relation To Broader Scientific Literature: Bilevel optimization seemingly often occurs in various scientific fields, and so, I think the topic has a potential importance. Essential References Not Discussed: I don't have any suggestion. Other Strengths And Weaknesses: S: The topic seemingly has a potential importance, though it has not been widely studied. W: Significance of performance improvement is somewhat difficult to evaluate because the baseline methods are not popular approaches. I guess Thompson sampling might have been a good baseline. W: Most of the theoretical techniques are directly from the well-known Srinivas et al (2010). Other Comments Or Suggestions: The abbreviation 'LL' is used in some figures, but not defined. The paper seems to discuss only the discrete-input setting. It should have been explicitly stated in the main text. Questions For Authors: 1) Many lemmas (e.g., Lemmas 4.4 and 4.6) should be probabilistic bounds because most of them depend on Corollary 4.2, but are often shown as deterministic bounds. Why? Just omitted?
2) Is Corollary 4.2 meant to hold simultaneously for all possible $x$, $z$, $h$, and $t$? If so, it should be explicitly written. 3) In the experiments, $\sum_{h \in \mathcal{F}} r_{h,t}$ is used for the evaluation, but $r_{F,t}$ can be negative according to (3.5). Is this correct? 4) How is the maximization of (4.6) solved? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review and for appreciating the potential of our problem. We would like to address some questions and concerns raised. >1. Many lemmas (e.g., Lemmas 4.4 and 4.6) should be probabilistic bounds because most of them depend on Corollary 4.2, but are often shown as deterministic bounds. Why? Just omitted? The lemmas are all probabilistic bounds; the probabilistic qualifiers were omitted for brevity. We will clarify this in the next version of the paper. >2. Is Corollary 4.2 meant to hold simultaneously for all possible $x$, $z$, $h$, and $t$? If so, it should be explicitly written. Yes, we omitted it for brevity as it was already stated in the directly preceding Definition 4.1. We can add this in the next version of the paper. >3. In the experiments, $\sum_{h \in \mathcal{F}} r_{h,t}$ is used for the evaluation, but $r_{F,t}$ can be negative according to (3.5). Is this correct? Your interpretation is correct and (3.5) should be $r_F(\mathbf{x}_t, \mathbf{z}_t) \triangleq \max(0, F(\mathbf{x}^*, \mathbf{z}^*) - F(\mathbf{x}_t, \mathbf{z}_t))$, which is what we had implemented in our experiments to avoid negating other regret components. Thank you for pointing this out, and we will update (3.5) in the next version of the paper. Lemma C.3 will still hold with minor edits to the proof. >4. How is the maximization of (4.6) solved? In our experiments, we discretized the search space so the arg max over all the discrete points can be easily found. For continuous domains, we can use a continuous representation for the trusted sets, e.g. with hyperrectangles, and use L-BFGS or SLSQP to optimize the acquisition function, similar to BoTorch. >The paper seems to discuss only the discrete-input setting. It should have been explicitly stated in the main text. While we discretized the inputs in our experimental implementation, our theoretical analysis holds for continuous inputs, and it is possible to implement BILBO for continuous inputs in practical settings as discussed above.
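The discretized arg max described above reduces to a scan over candidate points. A minimal sketch with a generic UCB-style score (the exact acquisition in (4.6) differs, and the helper name is ours):

```python
import math

def argmax_acquisition(candidates, mu, sigma, beta):
    # Scan all discrete candidate points and return the one maximizing a
    # UCB-style score mu(x) + sqrt(beta) * sigma(x). With a discretized
    # search space, this exhaustive scan replaces a continuous optimizer.
    return max(candidates, key=lambda x: mu(x) + math.sqrt(beta) * sigma(x))

# Toy posterior means and standard deviations on three discrete points.
mu = {0: 0.1, 1: 0.5, 2: 0.3}.get
sigma = {0: 0.2, 1: 0.1, 2: 0.1}.get
best = argmax_acquisition([0, 1, 2], mu, sigma, beta=1.0)  # -> 1
```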
>W: Most of the theoretical techniques are directly from the well-known Srinivas et al (2010). We would like to clarify that our theoretical techniques are not derived directly from Srinivas et al (2010), but only utilize an adapted confidence bound in Corollary 4.2. We tackle bilevel challenges that differ from the single-objective, unconstrained optimization in Srinivas et al (2010). In particular, we introduced a function query selection strategy with conditional reassignment, to select a function query for the decoupled setting, while accounting for the uncertainty of lower-level estimates. The conditional reassignment is also crucial for the exploration of the lower-level objective without repeated lower-level optimizations. These novel components address challenges associated with the lower-level optimization of bilevel problems, and are integral to the derivation of a sublinear regret bound for BILBO. >The abbreviation 'LL' is used in some figures, but not defined. 'LL' refers to lower-level. We will define this in the next version of the paper. We hope our clarifications have addressed your questions and improved your opinion of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. > our theoretical analysis holds for continuous inputs The theorem depends on the number of candidates in ${\cal X}$ and ${\cal Z}$ (e.g., $\beta_t$ in Theorem 4.9). How does it extend to a continuous space? --- Reply to Comment 1.1.1: Comment: Thank you for your acknowledgment and response. >The theorem depends on the number of candidates in $\mathcal{X}$ and $\mathcal{Z}$ (e.g., $\beta_t$ in Theorem 4.9). How does it extend to a continuous space? Indeed, we would need additional assumptions and an adjusted $\beta_t$ for a continuous space.
Assume $\forall h \in \mathcal{F}$, $h$ lies in RKHS $\mathcal{H}_{k_h}(D)$ and the RKHS norm is bounded, $||h||_{k_h} \leq B_h$, where $k_h$ is the corresponding kernel and $D$ is a compact subset of $\mathbb{R}^{d_\mathcal{X} + d_\mathcal{Z}}$. Also, assume that the noise sequence $\lbrace \epsilon_{h,t} \rbrace_{t \geq 1}$ is conditionally R-sub-Gaussian. Note that these are common assumptions. Adapting from [1], we can set $\beta_t$ as $B_h + R \sqrt{2(\gamma_{h,t-1} + 1 + \ln(|\mathcal{F}|/\delta))}$, where $\gamma_{h,t-1}$ is the maximum information gain, which can be found in our Appendix B.2. The main difference is in the selection of $\beta_t$, while our regret analysis remains valid as Corollary 4.2 holds following Theorem 2 in [1]. We will add this clarification in the next version of the paper. [1] Chowdhury, S. R., & Gopalan, A. (2017, July). On kernelized multi-armed bandits. In International Conference on Machine Learning (pp. 844-853). PMLR.
Summary: This paper introduces BILBO, a Bayesian optimization method for bilevel problems with noisy, constrained black-box objectives. It jointly optimizes both levels using trusted sets from Gaussian process confidence bounds. BILBO provides theoretical regret guarantees and outperforms baselines in empirical evaluations. Claims And Evidence: The results are theoretical in nature, and proofs are provided. The results are sound. Methods And Evaluation Criteria: I believe this is not applicable due to the theoretical nature of the paper. Theoretical Claims: While I did not check all details, the proofs appear generally sound and reasonable. Experimental Designs Or Analyses: Not applicable. Supplementary Material: I briefly checked the proofs. Relation To Broader Scientific Literature: The paper uses existing methods from Bayesian optimization and extends them to bilevel optimization. Essential References Not Discussed: Not aware of any. Other Strengths And Weaknesses: Strengths:
- Addresses an important, practical problem (efficient bilevel Bayesian optimization).
- Provides clear theoretical guarantees (sublinear regret bound), an important first in this setting.
- Trusted set approach effectively reduces expensive lower-level evaluations.
- Good empirical support via both synthetic and real-world experiments.

Weaknesses:
- Limited technical novelty; methods (GP modeling, confidence bounds) are mostly standard.
- Scalability issues due to discretization, especially in higher-dimensional cases.
- Clarity could improve; some definitions and assumptions are initially confusing.

Other Comments Or Suggestions: Overall, the paper is sound and clear. The technical novelty, however, seems quite limited as most technical tools are based on standard GP confidence intervals. The paper would benefit from a discussion on technical novelty.
Questions For Authors: Could you concisely highlight which specific aspects of BILBO are technically novel, beyond standard Bayesian optimization techniques (GP, confidence bounds)? How does BILBO practically handle scalability to higher-dimensional problems, given the reliance on discretization? Ethical Review Concerns: Not applicable! Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review, and for recognizing the importance and practicality of our bilevel problem, clear and first theoretical guarantees in this setting, and good empirical support. We would like to address the questions raised. >Could you concisely highlight which specific aspects of BILBO are technically novel, beyond standard Bayesian optimization techniques (GP, confidence bounds)? BILBO contains the following novel contributions to tackle unique challenges in bilevel Bayesian optimization where the upper-level optimization is constrained by lower-level solutions: 1. Formulation of the trusted set of optimal lower-level solutions $\mathcal{P}^+_t$ based on lower-level estimates, where we showed that points in the set have a bounded lower-level objective regret. This reduces the search space and is a key step towards the derivation of the regret bound for BILBO. 2. Function query selection strategy with conditional reassignment of the query point. Our function query strategy includes a term for the uncertainty of lower-level estimates, which corresponds to the additional difficulty from lower-level optimization that is not present in standard optimization. We also introduced the conditional reassignment as a way to explore the lower-level objective without the repeated lower-level optimization found in many bilevel Bayesian optimization methods. >How does BILBO practically handle scalability to higher-dimensional problems, given the reliance on discretization? One way is via adaptive discretization, which can reduce the effective dimension. Also, while we had discretized the problems in our experiments, BILBO does not require discretization of continuous inputs. A scalable BILBO implementation for continuous problems could contain a continuous representation of the trusted sets, e.g. with hyperrectangles, and use L-BFGS or SLSQP to optimize the acquisition function, similar to BoTorch.
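The trusted-set idea behind $\mathcal{P}^+_t$ can be sketched with a simplified confidence-bound filter (our illustrative variant, not the paper's exact definition; `lcb`/`ucb` stand for per-point lower/upper confidence bounds on the lower-level objective):

```python
def trusted_lower_level_set(candidates, lcb, ucb):
    # Simplified trusted-set filter: keep every candidate whose upper
    # confidence bound on the lower-level objective reaches the best lower
    # confidence bound, so the true lower-level optimum is retained with
    # high probability while clearly suboptimal points are pruned.
    best_lcb = max(lcb(z) for z in candidates)
    return [z for z in candidates if ucb(z) >= best_lcb]

# Toy confidence bounds on three discrete lower-level candidates.
lcb = {0: 0.0, 1: 0.5, 2: 0.2}.get
ucb = {0: 0.4, 1: 0.9, 2: 0.7}.get
trusted_lower_level_set([0, 1, 2], lcb, ucb)  # -> [1, 2]
```

Point 0 is pruned because even its most optimistic value (0.4) cannot beat the guaranteed value (0.5) of point 1, which is the kind of search-space reduction described above.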
We hope our clarifications on the novel aspects of our work and scalability discussions addressed your questions and will improve your opinion of our paper.
TOPLOC: A Locality Sensitive Hashing Scheme for Trustless Verifiable Inference
Accept (poster)
Summary: This paper presents a verifiable inference framework for LLMs served behind APIs that improves on existing techniques via better space-time complexity whilst maintaining robust security guarantees. In their empirical evaluations, they demonstrate near-perfect reliability in terms of verifying proofs under benign circumstances where certain nonessential characteristics of the serving process are perturbed, such as GPU type and attention implementation. In the other cases, they show robustness by failing to verify proofs when simulating undesirable modifications to the serving process such as prompt or model modification. ### Update after rebuttal: See "Rebuttal Comment" Claims And Evidence: The error rates reported are extremely low (e.g., 0), but the experimental evidence appears to back this up. Obviously, the thresholds are tightly tuned to the bounds of the observed extremal matching values in the positive and negative cases, but the use of tuned thresholds is stated clearly by the authors in 5.2. To clarify (authors should chime in), both the abstract and the conclusion restate that the method "can detect unauthorized modifications to models, prompts, or precision with 100% accuracy, achieving no false positives or negatives". Is this true? Precision modification appears to be the weakest setting based on the discussion and it's not clear whether, when the generation and validation model precisions do differ, the match rates are below the threshold for rejection, which the reviewer assumes to be the "desired" outcome for this setting. Methods And Evaluation Criteria: The methodology section is clear and the solution to the proposed problem (threat model) seems clever. The settings they consider for benign and adversarial verification scenarios make sense. However, I am not familiar enough with any relevant work to know for sure whether there exist other techniques that can achieve these types of detection rates under the same threat model.
Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design relies on decision thresholds set by experiment, but does not test their generalizability. Did the authors demonstrate that the thresholds that were "chosen" based on Tables 2 and 5 (as stated in L206C2) would generalize to held-out problems and still achieve the same accuracy and error rate? Relatedly, for binary detection problems where the model has a threshold, it is standard practice to visualize the performance spectrum via ROC curves and summarize with AUC-ROC. This style of analysis helps demonstrate what possible TPRs and FPRs are achievable across all possible thresholds. The reviewer has a guess as to what these would look like given the reported results, but computing them would be helpful when communicating the results to a broader audience, and if/when the approach is compared to other baselines or future modifications. Supplementary Material: Did not review supplementary. Relation To Broader Scientific Literature: The problem as presented by the authors is well motivated. Given the increasing economic value and burden of LLM serving operations, it is realistic to assume that there will be contradictory motivations between service users and serving providers, and thus trustless schemes like the one proposed are worthy of study. Whether or not the algorithms presented are efficient and reliable enough to run in real time on real-world systems remains beyond the scope of this work, though their method is designed with efficiency in mind. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Some clarity issues with interpreting the results: It would help the reader interpret the results better if in S3.1 it could be made very explicit that these modifications to serving _should_ fail to verify (i.e. proof failure constitutes a True Negative) and that those in S3.2 _should not_ fail to verify (i.e. proof failure would constitute a False Negative).
Different terms (i.e., positives) could be used, but the work needs additional clarity when introducing the two classes of adjustments so that the reader's expectations are set correctly. Relatedly, all tables where intersections, matches, and diffs are reported need to be further clarified in their captions with "for this experiment, success of the approach means match rate should be low/should be high". The reviewer believes that the up and down arrows are meant to help but overall it is still confusing to reason about. The prose surrounding each table reference in the main body sections does tend to state whether or not the experiments in a particular table were successful on the whole, which helps, but captions should be self-contained. Other Comments Or Suggestions: 1. Figure 1 is attractive, but the information content is low/not central to the proposal. It is fine to simply state in prose wherever appropriate, e.g., in the contributions list, that verification requires only a "teacher-forced" forward pass of a query sequence that will generate the hidden states to which the proof is supposed to correspond. The figure is not required to illustrate this. 2. What are we supposed to see in Fig 2? The caption should include some succinct takeaway statement, and in L255 can the authors motivate better why we expect more deviation at higher token indices?
The reader was not clear on whether this is because the top-k elements are being compared as a set, or as an ordered sequence, or this is irrelevant to the argument. Can the authors clarify how the topk indices and values are treated more clearly either in the Algorithm defs themselves, or in this section? 4. There are no baselines considered. This is a bit of an issue because the difficulty of any detection problem is directly modulated by the closeness of the expected negatives wrt positives (one can construct test sets where 0% FPR is easy to achieve if there are no tricky negatives, though the reviewer does not suggest the authors did this in any way). Could the authors elaborate on the relative difficulty of each of the types of problems in Tables 2,3,4,5? Some of them are where method success is quantified by proof verification and some are the opposite where success means proof rejection. In order to understand the impact and significance of the empirical results, it feels important to get a sense of how tricky of a discrimination problem each of these is expected to be before interpreting the actual results. 5. One failure of the approach seems to be the precision differentiation. Can the authors elaborate a bit on their explanation for why this is the case? The direct relationship between this particular serving factor and the accuracy of the quantities they are verifying during the proof, does seem to make this setup one of the more challenging test cases. After all bfloat16 was itself developed to try and match fp32 as well as is possible in numerical optimization scenarios at a lower space-time complexity. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are thankful for the thorough reading and review of our paper and appreciate the comments you have written. It is reassuring that you found the problem to be well motivated and are convinced by the experiments of the method's reliability in distinguishing permissible and undesirable modifications.

**Infeasibility of spoofing last hidden activations**

There are a few potential approaches to spoofing last hidden activations, such as pruning layers or training a smaller model. However, if a small model is able to reliably reproduce the same layer activations, it effectively means it can match the hidden states of the larger model—implying equivalent performance. Given the known capability gap between smaller and larger models, this seems unlikely in practice.

**Comparisons to previous methods and baselines**

We fully agree that having more comparisons with baselines would be helpful. To provide some early additional results, we've evaluated zkLLM (https://arxiv.org/pdf/2404.16109), SVIP (https://arxiv.org/pdf/2410.22307) and using raw activations. A summary of our experiments is available below; in short, TopLoc is competitive.

| | zkLLM | SVIP | Raw Activations | TopLoc |
|---|---|---|---|---|
| **Detection model training time** | - | 4h21m | - | - |
| **Commitment size per token** | 11MB | 20KB | 10KB | 8B |
| **Commitment overhead per token** | 986s | 1.7ms | - | 0.26ms |
| **Validation time** | 803s | 5.6ms | 81ms | 81ms |
| **Provider GPU memory overhead per token** | 23.1GB | 980MB | 10KB | 10KB |
| **FPR** | 0% | 3% | 0% | 0% |
| **FNR (Deterministic)** | 0% | 4.41% | 0% | 0% |
| **FNR (Non-deterministic)** | 100% | | 0% | 0% |

**Generalizability of decision thresholds**

We demonstrate initial evidence of generalizability by applying similar thresholds across a diverse set of tasks with different model configurations, architectures and precision.
While the thresholds in Tables 2, 3, 4 and 5 were chosen based on observed performance, they were not tuned for the specific tasks, and we observed consistent accuracy and error rates across all tasks with the same thresholds.

Thanks for the suggestion on ROC curves. We fully agree that AUC-ROC curves would be great for analyzing the achievable TPR and FPR rates, the numerical stability of each modification, and the generalizability of the method. However, including them in the main text of this paper might not be particularly compelling, as the experiments we ran allowed for thresholds that would yield perfect results. Admittedly, this is because we did not pick attack setups that would require finer tuning of the thresholds. Our method is already much better than prior methods and we are encouraged to explore harder attacks in future work.

**Difficulty of the detection problems**

For the permissible modifications, the problem is particularly difficult for cryptographic methods which rely on reproducible deterministic computation without numerical deviations. Detecting small prompt alterations is harder than large ones, but significant changes in output should be easier to catch as they affect the attention mechanism and hidden states. Model changes are easy to detect since the models don't share hidden representations, making their top-k element distributions highly distinct. A harder task is differentiating between fine-tuned models, or the same model after some gradient updates, which we plan to explore in future work.

**Detecting changes in inference precision**

TopLoc is uniquely effective at detecting precision differences between bf16 and fp32 because of the mantissa check. In Table 5, the minimum mantissa difference statistics for the bf16 model are above the thresholds of 256 for mean and 128 for median. This setup is particularly difficult for methods which use a downstream detection model such as SVIP.
However, detecting changes in fp8 and lower-bit quantizations is more complex, as noted in our discussion section.

**Motivation for why more deviations are expected at higher token indices**

KV cache errors compound, causing higher token indices to have larger deviations. We will clarify this in the final version of the paper.

**Providing a subroutine for findInjectiveModulus and interpolateModPolynomial**

We agree that this will be a common question for readers and will add this to our paper.

**Other clarifications**

In Section 5.5, Fig. 3, trends are explained by elements slipping past the top-k cutoff. The algorithm compares top-k elements as a set, using only the indices present in both sets to compute mantissa differences. We will clarify this in the final version.

We agree that the up and down arrows, switching between max and min, are confusing and make the algorithm harder to follow. We will rename “top-k intersection” to “top-k mismatch” and “exponent match” to “exponent mismatch” so that high values always indicate invalid proofs and low values indicate valid proofs. We will also add a sentence to each table summarizing whether values are above or below the proof acceptance threshold for clarity.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' responses to the questions and comments of all the reviewers. The points mentioned regarding spoofing, difficulty of problems, precision swapping as a special problem, as well as all other clarity points regarding how the results and tables are presented, should all be carefully incorporated into the draft. Particularly, the subroutines are _required_ for completeness and clarity; this is something that I wish I could see before a final decision is made on the work, but this review process does not permit an updated draft. As noted in more than one review including my own, the addition of baseline comparisons was/is a very important weakness/improvement to the work.
Proposing a new method is fun, but connecting it to other approaches is of course critical :] Please consider adding the ROC-AUC analysis. I agree that it might look a bit silly in the settings you show, but for example if you increase the hardness of a problem, and/or create slightly shifted train and val sets where a threshold needs to generalize, add class imbalance, and _most importantly,_ if you include the baseline approaches, ROC curves might actually be more informative than expected. Regardless, showing curves that are squeezed up and to the left is a simple visual indicator that your method performs well. Assuming that the authors agree to incorporate as many of these suggestions as possible in their camera ready, I do believe that the work is a good contribution to the literature. With one score already moving upward, and the score of 1 relatively well addressed by the rebuttal itself, I will bump my score as an additional indicator towards acceptance.

---

Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their thoughtful comments and for raising their overall recommendation.

**Subroutines and Clarity**

As mentioned in our [response to Reviewer WCWr](https://openreview.net/forum?id=8PJmKfeDdp&noteId=xY1g1zteuJ), we will include the requested subroutines (`findInjectiveModulus` and `interpolateModPolynomial`) along with analysis in the final version. We will also provide a link to our open-source code for generating and verifying the proofs, which includes efficient implementations of the subroutines. We will also extend the discussions regarding spoofing, difficulty of problems and other points made in our rebuttal, and we thank the reviewer for the constructive discussion on these points.

**ROC-AUC and Baseline Comparisons**

We agree that ROC curves can be quite informative, especially when comparing multiple methods. We will seriously consider presenting our comparisons to baselines with ROC plots.
We expect this will further underscore TopLoc’s advantages over existing methods. **Impact of revisions** The suggested modifications are primarily related to visualizations and discussions, making them relatively straightforward. We are confident these updates can be cleanly integrated into the final version. We appreciate the reviewer’s positive remarks on our contribution and suggestions for improvement. We believe these additions will significantly strengthen the final manuscript and look forward to refining it accordingly.
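As a side note on the ROC-AUC discussion above: ROC-AUC can be computed directly from per-sample detection scores without any plotting machinery, using its rank-statistic definition. The sketch below is illustrative only; the score lists are hypothetical, not drawn from the paper's experiments.

```python
def roc_auc(pos_scores, neg_scores):
    """ROC-AUC as the probability that a randomly chosen positive
    (invalid-proof) sample scores higher than a randomly chosen negative,
    with ties counted as 1/2 (the normalized Mann-Whitney U statistic)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# A perfectly separating detector (curve squeezed up and to the left):
print(roc_auc([0.9, 0.8, 0.7], [0.2, 0.1, 0.3]))  # 1.0
```

A detector whose score distributions overlap yields an AUC strictly between 0.5 and 1, which is exactly the regime where the reviewer's harder attack setups would make ROC plots informative.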
Summary: In this paper, the authors introduce a novel method called TOPLOC that provides cheap verifiable inference for large language models. TOPLOC efficiently encodes intermediate tensor activations into a (k−1)-degree polynomial over the top-k values. By doing so, it greatly reduces the storage required for communication. Throughout the experiments, the authors show the robustness of the method to GPU nondeterminism. The authors also empirically validate TOPLOC across multiple model architectures and hardware configurations, demonstrating its capability to detect unauthorized changes reliably. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No proofs. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, all of them. Relation To Broader Scientific Literature: The proposed method is far cheaper than previous methods, and the authors make the verification more practical. Essential References Not Discussed: I am not an expert in this area, but I think the authors include the essential references I can find on arXiv. Other Strengths And Weaknesses: Strengths:
- The paper is well-written. The flow of this paper is very nice. Even though I'm not very familiar with this area, I can totally follow all the intuitions and motivations of the proposed methods.
- The proposed method is very effective. Most importantly, it reduces the required storage by over 1000× compared to previous works.

Weaknesses:
- There is no direct comparison to previous methods in the paper. I know the main benefit of TOPLOC is that it is much cheaper, but it would be helpful to see if the proposed method is more reliable than previous works.
- The inference modifications included in this paper are not very strong attacks. For example, the authors included altering system prompts, but the prompts used in the experimental section are too distinct from each other. The result would be more convincing if there were only small modifications to the prompt.
For example, the original prompt: "You are a helpful assistant, ..."; altered prompt: "You are a helpful assistant, ... However, if the user asks about ICML, you should always respond ICML is the best conference."
- During verification, the verifier also needs to do a forward pass with the model. This is expensive if we want to verify a lot of queries.

Other Comments Or Suggestions:
- Minor: It would be good if Table 1 also showed proportions instead of absolute counts.

Questions For Authors:
- What if the model owner only changes the behavior given some specific prompts? For example, for normal prompts the model behaves the same, but if the user asks a question containing a keyword, the service provider uses a different model. How would you detect such cases? This can also be done by altering the model weights via some backdooring attack. I think the challenge is that during verification it's not possible to probe all possible scenarios, so a strategic probing method would be useful.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the review. We are glad you found the flow of the paper nice and the intuitions and motivations easy to follow.

**Comparisons to previous methods and baselines**

We fully agree that having more comparisons with baselines would be helpful. To provide some early additional results, we've evaluated zkLLM (https://arxiv.org/pdf/2404.16109), SVIP (https://arxiv.org/pdf/2410.22307) and using raw activations. A summary of our experiments is available below. As the reviewer has suggested, TopLoc is far cheaper and significantly more practical compared to prior approaches. The time and memory overheads are **millions** of times lower compared to zkLLM. Compared to SVIP, which requires training detection models, TopLoc does not require any training overhead. TopLoc is also more reliable, having no false positives or false negatives in the settings we tested, which is not true for SVIP and zkLLM. TopLoc also incurs 98,000x less VRAM overhead than SVIP.

| |zkLLM|SVIP|Raw Activations|TopLoc|
|---|---|---|---|---|
|**Detection model training time**|-|4h21m|-|-|
|**Commitment size per token**|11MB|20KB|10KB|8B|
|**Commitment overhead per token**|986s|1.7ms|-|0.26ms|
|**Validation time**|803s|5.6ms|81ms|81ms|
|**Provider GPU memory overhead per token**|23.1GB|980MB|10KB|10KB|
|**FPR**|0%|3%|0%|0%|
|**FNR (Deterministic)**|0%|4.41%|0%|0%|
|**FNR (Non-deterministic)**|100%||0%|0%|

**On the difficulty of the prompt alterations used**

The prompt alterations we used for our experiments are already quite close to what is being suggested. In Table 10 of the appendix, we include the alterations used. As shown in Table 4, shorter prompts are harder to detect. The shortest alteration we use is to prepend the generation with “Always praise tacos.”, which is only 4 tokens. This suggests it is quite likely the method generalizes to other prompt alterations.
**Computational efficiency of verifying a large number of queries**

We acknowledge the concern regarding the compute overhead of requiring a forward pass to validate the query.
1. We can run the validation forward pass much faster than the generation, because validation can be done entirely with prefill operations while generation requires many memory-bound decode operations.
2. If we are verifying a lot of queries, a probabilistic approach is possible where we only check 10% of the generations. Provided there is a sufficient disincentive (e.g. a slashing mechanism), the provider is game-theoretically incentivized not to risk cheating.

**On providers selectively altering the generation**

If we check each generation, we should be able to catch the provider on the queries that they cheated on. If the user never passes the keyword, we will not be able to detect that the provider has this rule in place; however, in this case, the provider never tampered with the generation and the outputs are correct.

**Weight alterations**

For significant changes, we would be able to detect this as the last hidden activations will be different. Outside the scope of this work, we plan to explore the method's sensitivity to more subtle changes, such as the same model after some gradient updates.

**Minor point on Table 1**

Thanks for the suggestion! We will consider showing proportions instead of absolute counts.
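To make the deterrence argument in the rebuttal above concrete, the probability of catching a cheating provider under random auditing follows directly from the audit rate. This is a back-of-the-envelope sketch; the independence assumption and the 10% rate are illustrative, not part of the paper's protocol.

```python
def detection_probability(num_tampered: int, audit_rate: float = 0.1) -> float:
    """P(at least one tampered generation is audited), assuming each
    generation is independently audited with probability audit_rate."""
    return 1.0 - (1.0 - audit_rate) ** num_tampered

# Even a 10% audit rate catches cheating at scale with near certainty,
# e.g. a provider tampering with 50 generations:
print(detection_probability(50))  # ~0.9948
```

This is why a modest audit fraction plus a slashing penalty can suffice: the detection probability approaches 1 exponentially in the number of tampered generations.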
Summary: This paper proposes TopLoc, a locality-sensitive hashing-based method for verifying that an output generated by an LLM actually comes from the LLM that the LLM serving provider claims to be using. The author claims that traditional methods for verifying LLM output (e.g., cryptographic approaches or testing model intermediate outputs by a third party) are either computationally inefficient or memory inefficient. The proposed TopLoc method leverages hashing methods to significantly accelerate the verification of LLM output while also reducing memory requirements. Experimental results are presented to demonstrate the effectiveness of the proposed TopLoc method. Claims And Evidence: The effectiveness claim of the TopLoc method in the submission is supported by the experimental justification. However, the speed and memory usage of the TopLoc method do not seem to be well justified in the experimental evaluations. Methods And Evaluation Criteria: The evaluation criteria to measure the verification performance of the proposed TopLoc method seem to make sense. However, it is not clear if the proposed method is really as fast and as memory efficient as claimed. Theoretical Claims: There is no theoretical claim in this paper. Experimental Designs Or Analyses: The experimental design to measure the verification performance of the proposed TopLoc method seems to make sense. Again, it is not clear if the proposed method is really as fast and as memory efficient as claimed. Furthermore, the proposed TopLoc method is not appropriately and sufficiently compared to other baselines for LLM output verification (especially state-of-the-art baseline methods). Supplementary Material: I reviewed all the additional details provided in the supplementary material. Relation To Broader Scientific Literature: This paper is related to the broader area of verifiable program output, as well as the trustworthiness of large language models. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper is well-written and well-motivated in general. - Verifying the output of LLMs and checking if it matches the model provider's claim seems to be an interesting research field. - The proposed method is easy to follow. Weaknesses: - I wonder if there is any trivial solution to solve the motivating problem in this paper, e.g., are users sensitive enough to tell if the quality of the LLM's output changes? If so, wouldn't the users simply tell that the model provider used a different model from what has been claimed in the service? If not, does it really matter which model the model service provider actually uses to serve their customers? - As discussed above, it is not clear how fast the proposed method is. It is also not clear what the actual memory savings obtained by the proposed method are during LLM output verification. - The proposed method is not appropriately compared to any prior LLM output verification methods or state-of-the-art baseline methods. Other Comments Or Suggestions: Please see "Other Strengths And Weaknesses" for more details. Questions For Authors: - What is the theoretical computation and memory complexity of the proposed method? - Noting that people tend to chase FP8 precision for LLM pretraining and inference, can TopLoc also work for FP8? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We are glad you found the problem interesting and the proposed method easy to follow.

**Comparisons to previous methods and baselines**

We fully agree that having more comparisons with baselines would be helpful. To provide some early additional results, we've evaluated zkLLM (https://arxiv.org/pdf/2404.16109), SVIP (https://arxiv.org/pdf/2410.22307) and using raw activations. A summary of our experiments is available below. As the reviewer has suggested, TopLoc is far cheaper and significantly more practical compared to prior approaches. The time and memory overheads are **millions** of times lower compared to zkLLM. Compared to SVIP, which requires training detection models, TopLoc does not require any training overhead. TopLoc is also more reliable, having no false positives or false negatives in the settings we tested, which is not true for SVIP and zkLLM. TopLoc also incurs 98,000x less VRAM overhead than SVIP.

| |zkLLM|SVIP|Raw Activations|TopLoc|
|---|---|---|---|---|
|**Detection model training time**|-|4h21m|-|-|
|**Commitment size per token**|11MB|20KB|10KB|8B|
|**Commitment overhead per token**|986s|1.7ms|-|0.26ms|
|**Validation time**|803s|5.6ms|81ms|81ms|
|**Provider GPU memory overhead per token**|23.1GB|980MB|10KB|10KB|
|**FPR**|0%|3%|0%|0%|
|**FNR (Deterministic)**|0%|4.41%|0%|0%|
|**FNR (Non-deterministic)**|100%||0%|0%|

**Memory savings**

The 1000x storage efficiency claim in our paper is from comparing TopLoc to storing all the activations directly. For example, take the smallest model we tested: Llama-3.1-8B-Instruct, which has a hidden size of 4096. If we stored the final hidden activation for every generated token in bf16, we'd need:
```
4096 elements * 2 bytes = 8192 bytes per token
```
With TopLoc, we instead store the top 128 activation values every 32 tokens using a polynomial congruence with 128 coefficients.
Each coefficient takes 2 bytes, so the total is:
```
128 * 2 bytes = 256 bytes for 32 tokens → 8 bytes per token
```
This is a 1000x reduction.

**On TopLoc’s effectiveness in detecting model inference with FP8**

TopLoc is uniquely effective at detecting precision differences between bf16 and fp32 because of the mantissa check. In Table 5, the minimum mantissa difference statistics for the bf16 model are above the thresholds of 256 for mean and 128 for median. This setup is particularly difficult for methods which use a downstream detection model such as SVIP. However, detecting changes in fp8 and lower-bit quantizations (e.g. 4-bit) is more complex (as noted in Section 6.1) and outside the scope of this work, which is already a significant improvement over prior methods.

**The necessity of having algorithms to detect model changes**

> I wonder if there is any trivial solution to solve the motivating problem in this paper, e.g., are users sensitive enough to tell if the quality of the LLM's output changes? If so, wouldn't the users simply tell that the model provider used a different model from what has been claimed in the service? If not, does it really matter which model the model service provider actually uses to serve their customers?

This is a valid question, but detection remains necessary for several reasons:
- Agentic workflows: In many cases, the output may not be directly consumed by a human user but passed into another model or system component. Verifiability must be automated and independent of human judgment to ensure trustworthiness in such pipelines.
- Subtle degradation: Users may not be sensitive to model regressions. A user may accept a suboptimal but satisfactory output without realizing a stronger model would have produced a better response. A user might also wrongly attribute poor performance to the task being too difficult for the model's capabilities, rather than a degraded model.
- Undetectable biases: Shifts in model behavior (e.g.
due to altered prompts or use of finetuned models) can inject subtle biases that are difficult to detect through inspection alone.

**Theoretical Complexity Analysis**

Thanks for bringing this up. We believe it will be a common question for readers, so we will include it in our paper. As mentioned in **Memory savings**, we store 8 bytes per token, which grows linearly with the number of tokens (O(n)). For computation, interpolating a polynomial using Newton’s method has complexity O(k²), where k is the number of top-k values we use in the proof. Finally, finding the injective modulus can be done in expected O(k) time:

```python
def find_injective_modulus(x: list[int]) -> int:
    # Search downward for the largest modulus under which all values in x
    # map to distinct residues; each candidate check is O(k)
    for i in range(65536, 2**15, -1):
        if len(set([j % i for j in x])) == len(x):
            return i
    raise ValueError("No injective modulus found!")
```

Although the theoretical worst-case constant can be quite large, on average the function returns in a few iterations, because the probability of reaching a given iteration decreases exponentially.
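To complement the complexity discussion above, the interpolation step can be realized with Newton's divided differences over a prime field. The sketch below is our illustrative implementation of what the paper calls `interpolateModPolynomial` (the authors' actual subroutine may differ); the nested loop makes the O(k²) coefficient cost visible.

```python
def interpolate_mod_polynomial(xs, ys, p):
    """Newton interpolation over GF(p): O(k^2) to build the divided-difference
    coefficients, O(k) per evaluation. Assumes p is prime and xs are distinct
    mod p. Uses pow(a, -1, p) for modular inverses (Python 3.8+)."""
    k = len(xs)
    coef = list(ys)
    for j in range(1, k):                        # O(k^2) double loop
        for i in range(k - 1, j - 1, -1):
            inv = pow(xs[i] - xs[i - j], -1, p)  # modular inverse
            coef[i] = (coef[i] - coef[i - 1]) * inv % p

    def evaluate(x):
        # Horner-style evaluation of the Newton form
        acc = coef[-1]
        for i in range(k - 2, -1, -1):
            acc = (acc * (x - xs[i]) + coef[i]) % p
        return acc

    return evaluate

# Recover f(x) = x^2 + 1 over GF(65537) from three points:
f = interpolate_mod_polynomial([1, 2, 3], [2, 5, 10], 65537)
print(f(4))  # 17
```

The returned polynomial reproduces the committed values at the original points, which is exactly the congruence property the proof relies on.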
Summary: This paper presents TOPLOC, a method to achieve verifiable LLM inference. It uses locality-sensitive hashing of intermediate activations to detect potential unauthorized modifications during the computation. It uses a polynomial encoding scheme that reduces the memory overhead of proof generation by 1000x.

## update after rebuttal

I have updated my score during the rebuttal. My initial concern was the lack of justification for the 1000x claim (which led me to doubt several main claims in the paper). In the rebuttal, the authors have provided a more detailed calculation, and I am convinced to increase my score to 3.

Claims And Evidence: Yes and no. There are two main claims in the paper: (1) The method can generate accurate proofs; this is justified by the results in Table 1, Figure 3, and Table 2. (2) The method is efficient; it claims to be 1000x more storage efficient, but the reviewer did not observe an extensive explanation of how the reduction is calculated (please kindly correct the reviewer if I understand incorrectly).

Methods And Evaluation Criteria: The paper uses UltraChat and several leading SOTA LLMs (Llama-3.1-8B-Instruct, INTELLECT-1-instruct, Gemma-2-9b-it), which the reviewer believes is a good combination for evaluation.

Theoretical Claims: The reviewer checked the correctness of proof generation and validation (Algorithms 1 and 2).

Experimental Designs Or Analyses: The experiment design is mostly sound (it analyzes the error with the mentioned dataset and models). However, the experiments are all based on distinguishing fp8 and bf16, and the reviewer is not entirely convinced this is sufficient for the overall claim. For instance, how does the method perform when the model provider uses another smaller model, or 4-bit versus 8-bit (e.g. several large models, such as R1, are in fp8 by default; can the method tell if the model provider is using 4-bit)?

Supplementary Material: Yes, the reviewer mainly reviewed Table 6 and Table 9.
Relation To Broader Scientific Literature: One of the closest papers is zkLLM, over which the method seems to make a substantial improvement (if the authors can kindly point me to the 1000x calculation). Essential References Not Discussed: The references are clear. Other Strengths And Weaknesses: Please see the above comments. The paper is good because it addresses an important problem (it is very likely that current model providers will change the computation to save cost). Other Comments Or Suggestions: Please see the above comments. Questions For Authors: Please see the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and for recognizing the importance of the problem our paper addresses.

**Comparison to zkLLM**

To provide some context on the speed and memory claims, we provide some early additional results. Here we evaluated zkLLM (https://arxiv.org/pdf/2404.16109), SVIP (https://arxiv.org/pdf/2410.22307) and using raw activations. A summary of our early experiments is available below. As shown, TopLoc is far cheaper and significantly more practical compared to prior approaches. The time and memory overheads are **millions** of times lower compared to zkLLM. Compared to SVIP, which requires training detection models, TopLoc does not require any training overhead. TopLoc is also more reliable, having no false positives or false negatives in the settings we tested, which is not true for SVIP and zkLLM. TopLoc also incurs 98,000x less VRAM overhead than SVIP.

| |zkLLM|SVIP|Raw Activations|TopLoc|
|---|---|---|---|---|
|**Detection model training time**|-|4h21m|-|-|
|**Commitment size per token**|11MB|20KB|10KB|8B|
|**Commitment overhead per token**|986s|1.7ms|-|0.26ms|
|**Validation time**|803s|5.6ms|81ms|81ms|
|**Provider GPU memory overhead per token**|23.1GB|980MB|10KB|10KB|
|**FPR**|0%|3%|0%|0%|
|**FNR (Deterministic)**|0%|4.41%|0%|0%|
|**FNR (Non-deterministic)**|100%||0%|0%|

**1000x more storage efficient**

As shown in the table above comparing against zkLLM, TopLoc is actually millions of times more storage efficient. The 1000x storage efficiency claim in our paper is from comparing TopLoc to storing all the activations directly. For example, take the smallest model we tested: Llama-3.1-8B-Instruct, which has a hidden size of 4096. If we stored the final hidden activation for every generated token in bf16, we'd need:
```
4096 elements * 2 bytes = 8192 bytes per token
```
With TopLoc, we instead store the top 128 activation values every 32 tokens using a polynomial congruence with 128 coefficients.
Each coefficient takes 2 bytes, so the total is:
```
128 * 2 bytes = 256 bytes for 32 tokens → 8 bytes per token
```
This is a 1000x reduction.

**Detecting changes in inference precision**

TopLoc is uniquely effective at detecting precision differences between bf16 and fp32 because of the mantissa check. In Table 5, the minimum mantissa difference statistics for the bf16 model are above the thresholds of 256 for mean and 128 for median. This setup is particularly difficult for methods which use a downstream detection model such as SVIP. However, detecting changes in fp8 and lower-bit quantizations (e.g. 4-bit) is more complex (as noted in Section 6.1) and outside the scope of this work, which is already a significant improvement over prior methods.

---

Rebuttal Comment 1.1: Comment: Thank you for getting back. This addresses my question on the 1000x claim. I will increase my score to 3 to support the paper. Please add the clarification in the final manuscript!

---

Reply to Comment 1.1.1: Comment: Thanks for the follow-up and for updating your score. We'll make sure to include the clarification in the final manuscript!
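An aside on the mantissa check discussed in the rebuttals above: the bit-level intuition is that bfloat16 keeps only the top 16 bits of the float32 format, so any bf16-produced value widened to fp32 has zero low-order mantissa bits, while genuinely fp32-computed activations generally do not. This sketch is purely illustrative (the paper's actual check aggregates mantissa differences against the 256/128 thresholds, which this toy function does not reproduce).

```python
import struct

def low_mantissa_bits(x: float) -> int:
    """Return the 16 low-order bits of x's IEEE-754 float32 encoding.

    These are exactly the bits that bfloat16 discards, so a value that
    went through bf16 arithmetic has them equal to zero when widened
    back to fp32.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits & 0xFFFF

# bf16-representable value -> zero low bits; generic fp32 value -> nonzero
assert low_mantissa_bits(1.5) == 0
assert low_mantissa_bits(1.0 / 3.0) != 0
```

Large, systematic differences in these low bits between committed and recomputed activations are what make a bf16-vs-fp32 precision swap detectable.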
PARQ: Piecewise-Affine Regularized Quantization
Accept (poster)
Summary: The authors propose a convex piecewise regularizer for quantization-aware training. They utilize an aggregate proximal stochastic gradient method and prove that it has last-iterate convergence. They note that their method is equivalent to the previously proposed ProxConnect method; however, they derive their method in a different way.

## update after rebuttal

I would like to keep my score after the rebuttal period.

Claims And Evidence: Yes. Methods And Evaluation Criteria: It makes sense. Having more benchmark datasets/tasks would have made the evaluation stronger. Theoretical Claims: Didn't check. Experimental Designs Or Analyses: Yes, the experimental design is sound. Supplementary Material: I have skimmed over the proofs. Relation To Broader Scientific Literature: Yes. Quantization-aware training is a hot topic since large language models have become expensive to serve. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths:
* The methodology is sound.
* The paper is well written.

Weaknesses:
* If the authors trained large language models with the proposed method, its impact would be much greater.
* Additional datasets/models/tasks would have shown wider applicability of the method.

Other Comments Or Suggestions: Ablation studies on the choice of $\rho_t^{-1}$ would be interesting to see. Questions For Authors:
* What is behind the choice of the curve in Figure 10 for the $\rho_t^{-1}$ schedule? For example, how does an exponentially decaying schedule compare against the schedule used in the experiments?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the strengths of our paper (sound methodology and clear writing) and for the constructive suggestion to include more benchmark evaluations. We agree that additional empirical evaluations, especially on modern language models, will make the paper stronger. Although we do not have time to finish such experiments during the rebuttal period, we will try to include experiments on some basic language models in the final version to demonstrate its applicability.

We agree that an ablation study on the choice of $\rho_t^{-1}$ is very useful for better understanding the behavior of the algorithm. The particular curve in Figure 10 is of the sigmoid family. Specifically, $\rho_t^{-1} = \frac{1}{1 + \exp(s(t-t_1))}$ where $t_1$ is the transition center and $s$ is the transition steepness. This schedule changes $\rho_t^{-1}$ roughly from 1 to 0 (taking the value $1/2$ at the transition center $t_1$). Essentially this changes the slope $\rho_t$ from 1 to $+\infty$. Here $s>0$ is the steepness parameter: the larger $s$ is, the steeper the transition. For example, $s=0.1$ makes the transition almost linear, and the one shown in Figure 10 uses $s=10$.

Notice that the above sigmoid curve for $\rho_t^{-1}$ is essentially exponentially decaying, as the reviewer suggested. Equivalently, the slope itself, $\rho_t$, increases exponentially.

In our ablation study, we train a 2-bit DeiT-T model with PARQ. We study the choice of transition steepness $s \in \\{0.1, 1, 10, 20, 40, 80\\}$ and transition center $t_1 \in \\{0.25, 0.5, 0.75\\}$, the fraction of training progress at which the transition occurs. This sweep reveals that a shallow $s = 1$ performs best for this model configuration. A later transition of $t_1 = 0.75$ improves QAT performance for shallower steepness values $s \in \\{10, 20\\}$, while higher $s \in \\{40, 80\\}$ are relatively unaffected.
We will add the above description and details of the ablation study to the appendix in the final version of the paper.

| $\boldsymbol{s}$ | $\boldsymbol{t_1}$ | prec1 |
|:---:|:---:|:---:|
| 0.1 | 0.5 | 66.48 |
| 1 | 0.5 | 66.6 |
| 10 | 0.25 | 64.11 |
| 10 | 0.5 | 64.62 |
| 10 | 0.75 | 66.02 |
| 20 | 0.25 | 63.74 |
| 20 | 0.5 | 63.88 |
| 20 | 0.75 | 66.17 |
| 40 | 0.25 | 64.05 |
| 40 | 0.5 | 63.89 |
| 40 | 0.75 | 63.89 |
| 80 | 0.25 | 64.06 |
| 80 | 0.5 | 64.28 |
| 80 | 0.75 | 63.73 |
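The sigmoid schedule described in the rebuttal is straightforward to implement. In this sketch, `t` is taken to be the fraction of training progress in [0, 1], matching the ablation's parameterization of the transition center $t_1$ (the default values here are illustrative, not the paper's recommendation).

```python
import math

def rho_inv_schedule(t: float, t1: float = 0.5, s: float = 10.0) -> float:
    """Sigmoid schedule rho_t^{-1} = 1 / (1 + exp(s * (t - t1))).

    t  -- training progress in [0, 1]
    t1 -- transition center (rho_inv = 1/2 exactly at t = t1)
    s  -- transition steepness (larger s gives a sharper 1 -> 0 transition)
    """
    return 1.0 / (1.0 + math.exp(s * (t - t1)))
```

Because $\rho_t^{-1}$ decays from roughly 1 to roughly 0, the slope $\rho_t$ itself grows exponentially through the transition, which is the sense in which this schedule matches the reviewer's suggested exponentially decaying variant.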
Summary: This paper proposes PARQ, a convex, piecewise-affine regularizer (PAR) for training the weights to cluster to a set of quantization points, along with a practical implementation called PARQ. Overall, this paper has sufficient motivation, clear writing, grounded citations, and sufficient experiments to demonstrate the method's effectiveness.

## update after rebuttal

I don't have any further opinions.

Claims And Evidence: I believe this paper's claims are supported by sufficient evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. These mathematical proofs are beyond my expertise. Thus, I only give my rating based on aspects such as the clarity of the presentation and the logic of the arguments. Experimental Designs Or Analyses: Yes. I believe their experiments are reasonable. Supplementary Material: No. Relation To Broader Scientific Literature: The key contributions of the paper mainly focus on the quantization area. It introduces a novel regularizer to avoid the use of STE. Essential References Not Discussed: I would like to see some comparison/discussion with [1], which also applies a regularizer to circumvent STE.

[1] Towards Accurate Network Quantization with Equivalent Smooth Regularizer, ECCV 2022

Other Strengths And Weaknesses: This paper is beyond my expertise. Thus, my rating is only based on the motivation, writing, logical flow, and experimental results. I would recommend having this paper reviewed by experts specifically in theory and mathematics. Other Comments Or Suggestions: Here, I would like to justify my rating on the following points and sincerely hope the authors can modify their paper accordingly. Motivation and contribution to the field: STE is a widely used method to avoid the non-differentiable rounding operation in quantization. However, it does incur some issues, such as weak convergence, as indicated in this paper.
Thus, developing regularization and proximal gradient methods to avoid the use of STE has been a research direction here. This paper introduces PARQ (Piecewise-Affine Regularized Quantization), which serves as a regularization term that makes the parameters converge to the desirable quantization levels. This provides a new contribution to both theory and practice. Writing and logical flow: In Section 2, this paper first introduces piecewise-affine regularization (PAR), and then demonstrates its optimality conditions, which induce the parameters to converge to quantization levels. In Section 3, this paper introduces the AProx algorithm to solve the PAR objective and reach the optimal values. There are also discussions comparing AProx with other algorithms, as well as a convergence analysis. In Section 4, this paper introduces the practical implementation of the method. Overall: a new regularization term, followed by a new solver algorithm and a practical implementation. These make the logical flow clear. Related to [1]: [1] is also a regularization-term-based method. I consider these two papers to address the same topic. Other than that, these two papers have no similarities in method or theory, according to my understanding. I hope to see some comparison between their performance and the theory behind them. [1] Towards Accurate Network Quantization with Equivalent Smooth Regularizer Experimental results: Their experiments cover ResNet and DeiT, evaluated on the widely used ImageNet. I consider these results to be sufficient. Novelty: The idea of convex piecewise-affine regularization (PAR) and the corresponding AProx solver make this method unique. To the best of my knowledge, this is the first work that proposes such a regularization term from not only the practical view but also the theoretical view. Theory and formula parts: I think there are two main reasons why I find them difficult to understand: one reason is that I lack professional background knowledge. 
Thus, I recommend having this paper reviewed by experts specifically in mathematics. The other reason is that this article lacks guidance. For example, the authors should explain the purpose of each section at its beginning. I believe that adding such guidance will help readers clearly understand the purpose of each section. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the overall positive feedback on our paper, and especially for recognizing our main novelty and contributions. Here we mainly address the reviewer’s question on the regularization approach proposed in the following reference, which we call Ref [1] hereafter. [1] Towards Accurate Network Quantization with Equivalent Smooth Regularizer, ECCV2022 Ref [1] proposes smoother regularizers for inducing quantization, more specifically, of sinusoidal shape. It is clearly based on the intuition that such regularizers can help trap the weights into clusters close to a set of (evenly spaced) discrete values. This is similar to using W-shaped regularizers (which are nonsmooth), but with smooth functions in order to have a unique gradient in optimization. Unfortunately, such functions violate both properties we desire for a good quantization regularizer: nonsmoothness and convexity (see the two paragraphs starting from Line 157 in our paper). More specifically, * Smooth curves like the sinusoid can trap weights into separate clusters, but do not induce quantization (i.e., concentration on discrete values). This is because, locally near its minima, the sinusoid behaves like the squared Euclidean norm: it is flat and thus does not induce exact quantization. In contrast, nonsmooth regularizers such as $L_1$, W-shaped, or PAR are locally sharp and thus can force the weights to concentrate at the local minima. As a result, Ref [1] still needs additional rounding or STE steps in order to obtain discrete quantized values. * Convexity gives better global properties that help to avoid local minima, which explains the popularity of $L_1$ regularization over nonconvex regularizers. Without convexity (as with the W-shaped regularizer), it is very hard to establish any interesting convergence theory as we do in our paper. 
Indeed, at the intersection of nonsmoothness and convexity, the piecewise-affine function appears to be the only sensible choice. Also notice that using proximal updates with nonsmooth regularizers (instead of gradient updates) avoids any problem due to the non-uniqueness of their (sub-)gradients, which is the main issue Ref [1] aims to address with smooth regularizers. Finally, we agree with the reviewer that adding appropriate guidance (explaining the purpose of each section at its beginning) will make the paper easier to read. We will be able to add this in the final paper with the one extra page allowed for the main text.
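To make the sharp-vs-flat contrast above concrete, here is a minimal 1-D sketch (our own illustrative code, not from either paper): the prox map of a nonsmooth absolute-value penalty snaps a weight exactly onto a quantization level once it is close enough, while the prox map of a smooth quadratic penalty only ever moves it partway.

```python
import numpy as np

def prox_sharp(w, q, lam):
    # prox of the nonsmooth penalty lam*|w - q|:
    # shrink toward q, snapping exactly to q once |w - q| <= lam
    d = w - q
    return q + np.sign(d) * max(abs(d) - lam, 0.0)

def prox_smooth(w, q, lam):
    # prox of the smooth penalty (lam/2)*(w - q)^2:
    # pulls w toward q but never reaches it exactly
    return (w + lam * q) / (1.0 + lam)

w, q, lam = 0.48, 0.5, 0.1
print(prox_sharp(w, q, lam))   # exactly 0.5 -- true quantization
print(prox_smooth(w, q, lam))  # only moves partway toward 0.5
```

This mirrors why the sinusoid of Ref [1] still requires a final rounding/STE step, while a locally sharp regularizer concentrates weights exactly at the levels.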
Summary: They contribute a new QAT quantizer; specifically, they optimize PAR-regularized loss functions using an aggregate proximal stochastic gradient method (AProx) and prove that it enjoys last-iterate convergence. Claims And Evidence: Convincing. Methods And Evaluation Criteria: They make sense. Theoretical Claims: Theoretically correct. Experimental Designs Or Analyses: The experiments are sound. Supplementary Material: N/A Relation To Broader Scientific Literature: Lacks discussion of some QAT quantizers. Essential References Not Discussed: Lacks some QAT quantizers. Other Strengths And Weaknesses: This article is interesting enough, but the readability needs improvement. Some formulas lack numbers and need to be supplemented with variable explanations after the formulas. Other Comments Or Suggestions: Since most of the current QAT methods use STE, the current comparative experiments are reasonable, but may not be sufficient. Questions For Authors: 1. Line 160: dist(w, Q^d) needs further explanation. 2. Readability needs to be enhanced, and relevant fundamentals in the field of quantization need to be supplemented, including the previous work that introduced piecewise-affine functions. 3. The regularization overhead needs to be discussed. 4. Although most existing methods use STE, I hope the authors will consider improved versions such as LSQ+. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our main contributions on the PAR regularization, the AProx method, and the proof of its last-iterate convergence. We will work on better readability when revising the paper, as suggested by the reviewer. Here are answers to the reviewer’s questions. 1. Line 160: the definition of $\text{dist}(w, Q^d)$ is $\text{dist}(w, Q^d)=\min_{v\in Q^d}||w-v||_2^2$, which appears in line 153. 2. We will add relevant fundamentals on quantization, especially on previous work using piecewise-affine functions. - ProxQuant introduced the W-shaped regularization, which is piecewise affine but nonconvex, so it is hard to establish interesting convergence properties. - Several other methods (including BinaryRelax) can be reformulated with equivalent piecewise-affine regularizations, but the authors did not recognize the connection. In particular, BinaryRelax equivalently uses the proximal map in Figure 9(b), which corresponds to a nonconvex piecewise-affine regularization. - Dockhorn et al. (2021) focused on piecewise-affine proximal maps and made the connection with piecewise-affine regularizations (PARs). But they did not realize that there exists a class of convex PARs that can lead to much stronger convergence guarantees (which is one of the main contributions of our paper). 3. We appreciate the reviewer’s comment regarding the regularization overhead. In our method, the regularization overhead is negligible compared to the cost of gradient computation. Specifically, in AProx, the additional computation arises from the proximal update described in Equation (11). However, this update leverages an explicit proximal mapping, as shown in Equation (7), which can be evaluated efficiently. In the practical implementation (PARQ), we apply LSBQ to determine quantization values by solving a constrained least-squares problem. 
It is important to note that this step is common across many QAT algorithms and therefore does not introduce any additional overhead specific to our method. 4. We thank the reviewer for suggesting that we incorporate other quantization schemes such as LSQ+. We agree that adaptive schemes such as LSQ (Learned Step Size Quantization) and the LSQ+ extension provide more flexibility by using trainable quantization scale and offset parameters. It is possible to replace the LSBQ used in PARQ with LSQ+ for potential performance improvement, which does not impact the convex PAR theory or the AProx algorithm's convergence properties. We will incorporate some of the above discussions in preparing the final submission.
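As a concrete illustration of point 1 above, the distance term can be computed coordinate-wise when $Q^d$ is the $d$-fold product of a scalar level set $Q$ (a minimal sketch in our own code, not the paper's implementation):

```python
import numpy as np

def dist_to_quant(w, Q):
    # dist(w, Q^d) = min over v in Q^d of ||w - v||_2^2;
    # since Q^d is a product set, each coordinate is minimized independently
    w = np.asarray(w, dtype=float)
    Q = np.asarray(Q, dtype=float)
    nearest = Q[np.argmin(np.abs(w[:, None] - Q[None, :]), axis=1)]
    return float(np.sum((w - nearest) ** 2))

# example with ternary levels {-1, 0, 1}
print(dist_to_quant([0.3, -0.9], [-1.0, 0.0, 1.0]))  # 0.3^2 + 0.1^2, i.e. about 0.1
```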
Summary: The paper proposes PARQ, a convex piecewise-affine regularization method for quantization-aware training. It introduces the AProx algorithm that transitions from soft to hard quantization, interprets STE as its asymptotic case, and proves last-iterate convergence. ## update after rebuttal I confirm that I have read the author response, and would like to keep my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. For more details, please refer to "Other Strengths And Weaknesses". Experimental Designs Or Analyses: Yes. For more details, please refer to "Other Strengths And Weaknesses". Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper's key contributions advance quantization-aware training (QAT) by proposing a principled method using convex, piecewise-affine regularization (PAR). This builds on prior work using regularization for quantization (e.g., L1 regularization for sparsity) and proximal gradient methods. It also provides a new interpretation of the widely used straight-through estimator (STE) as an asymptotic form of their method, bridging gaps between heuristic approaches and theoretical foundations. Essential References Not Discussed: No essential missing citations. Other Strengths And Weaknesses: Strengths: 1. Principled and Theoretical Foundation: Proposes a principled QAT method with convex regularization and proves last-iterate convergence. 2. Practical and Competitive Performance: Achieves competitive results on low-bit quantization and adaptively selects quantization values. Weaknesses: 1. The baselines in the experiment are too old. Why is PARQ not compared with newer methods such as AdaRound[1] or N2UQ[2]? 2. Can PARQ be applied to large language models? For example, the Llama family? 3. Judging from the experimental results, the advantage of PARQ is not that great. Could the authors reiterate the greatest contribution of PARQ? 
[1] Up or Down? Adaptive Rounding for Post-Training Quantization [2] Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation Other Comments Or Suggestions: No. Questions For Authors: See "Strengths And Weaknesses". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our paper’s contributions in advancing QAT by bridging gaps between heuristic approaches and theoretical foundations. We address the reviewer’s comments and questions as follows: 1. “The baselines in the experiment are too old. Why is PARQ not compared with newer methods such as AdaRound[1] or N2UQ[2]?” * The baselines we use are indeed some of the early works on QAT, especially BinaryConnect/STE, which is still the de facto standard in practice. Most recent works are extensions of BinaryConnect in different ways, relying on the fundamental interpretation of the “Straight-Through Estimator.” As we explained in Section 1.1, STE is a misconception we try to correct through the convex PAR framework, for which we provide a more principled interpretation. * As QAT has attracted more attention in recent years, tens of new methods have been proposed and published, including AdaRound and N2UQ. A comprehensive comparison with recent QAT methods is not our intent in this paper, as most recent methods integrate various small additional tweaks beyond the fundamental ideas in order to boost empirical performance, and it can be unfair if we do not equip every method with similar bells and whistles. For example: - N2UQ (Nonuniform-to-Uniform Quantization) introduces a particular nonuniform quantizer; it is an alternative scheme to the LSBQ we use to generate the quantization targets $Q= \\{ q_1,...,q_m \\}$, which is independent of the PAR and AProx algorithm. We can replace LSBQ in PARQ (Algorithm 1) with N2UQ and reapply the rest (PAR and AProx) to test the performance, but the results would not be indicative of what we care about most: the effectiveness of PAR and AProx. - AdaRound is actually a PTQ (post-training quantization) method, in a category quite different from QAT; see our remarks in Section 1 (second paragraph, starting on Line 39). 
* Among the QAT methods beyond BinaryConnect/STE, we choose to compare with BinaryRelax because it follows a similar proximal gradient approach, and its proximal map is also piecewise affine but corresponds to a nonconvex regularization, as shown in Figure 9(b). See also the third bullet point in our response to Reviewer DkcD's comments. 2. “Can PARQ be applied to large language models? For example, the Llama family?” * Yes, PARQ is a generic method that can be applied to train any machine learning model, including LLMs. We will try to include experiments on training a (relatively small) Llama model in the final version to demonstrate its applicability. 3. “Reiterate the greatest contribution of PARQ.” * Our main contributions include: introducing a class of convex, piecewise-affine regularizations (PAR) that can effectively induce weight quantization; developing an aggregate proximal gradient (AProx) method and proving its last-iterate convergence (the first of its kind); and providing a principled interpretation of the widely successful heuristic of STE. In summary, we developed a principled approach for QAT, “bridging the gaps between heuristic approaches and theoretical foundations.” Again, as we state in the paper, the main goal of our experiments is to demonstrate that PARQ has competitive (not necessarily superior) performance against the de facto standard of BinaryConnect/STE. Indeed, they become essentially the same algorithm if run for a long time, thanks to our theoretical connection. A comprehensive performance benchmark against recent QAT methods is beyond the scope of this paper, as it requires taking care of many additional nuances for a fair comparison.
Summary: This paper presents a principled QAT method, PARQ, via convex piecewise-affine regularization (PAR). The authors show that PAR can induce network weights to approach discrete values. Then, the paper proposes an aggregate proximal stochastic gradient method (AProx) and theoretically demonstrates its last-iterate convergence. Experiments conducted on convolutional and transformer-based models show the effectiveness of the proposed method across five bit-widths. ## update after rebuttal I have read the authors' rebuttal and **maintain my assessment and score** for this paper. On one hand, the authors' design of convex regularization is relatively novel, so **I am inclined to accept it**. On the other hand, their experiments suggest that convex regularization may have limited effectiveness for non-convex deep learning loss functions, and thus it still fails to address the generalization issue in high-bit settings. I acknowledge that the performances of STE and PARQ can be similar, as the authors derive a generalized form of the heuristic method through theoretical analysis. However, I hope the authors can include more discussion in the final version and focus on the generalization issues of regularization in their research. Claims And Evidence: The paper's arguments regarding nonsmoothness and convexity are well-founded and reasonable. Methods And Evaluation Criteria: The algorithms proposed in the paper are well-justified, and the evaluation conducted across multiple datasets for two types of models (convolutional and transformer-based) effectively reflects the models' performance. Theoretical Claims: The authors provide a comprehensive proof for Theorems 3.1 and 3.2. I believe the last-iterate convergence of AProx is well-justified. Experimental Designs Or Analyses: 1. In the experimental analyses, the authors' discussion is not comprehensive, as it primarily emphasizes the performance advantages of the 1-bit case. 
However, in my view, the performance of PARQ tends to degrade in experiments with more than 1 bit. For instance, in the case of 4-bit ResNet-20, PARQ exhibits a noticeable decline in accuracy. Moreover, PARQ does not achieve the best performance on more complex networks. Specifically, for 1-bit ResNet-56, PARQ demonstrates the lowest accuracy and the highest standard deviation. 2. The authors lack comparisons with more recent methods, as the current analysis is limited to BinaryConnect (2015) and BinaryRelax (2018). There are several other quantization methods related to the proximal gradient method, such as ProxQuant [1] and BNN++ [2]. While the authors emphasize the superior performance of PARQ in the 1-bit setting, it would be clearer to include comparisons with additional methods. Reference [1] Bai, Yu, Yu-Xiang Wang, and Edo Liberty. "ProxQuant: Quantized neural networks via proximal operators." arXiv preprint arXiv:1810.00861 (2018). [2] Lu, Yiwei, et al. "Understanding neural network binarization with forward and backward proximal quantizers." Advances in Neural Information Processing Systems 36 (2023). Supplementary Material: This paper has no supplementary material. Relation To Broader Scientific Literature: The paper presents an extension of the proximal gradient method, which ensures discreteness and last-iterate convergence by introducing convex piecewise-affine regularization (PAR) and the aggregate proximal stochastic gradient method (AProx). The authors also emphasize that AProx is equivalent to ProxConnect, and that the straight-through estimator (STE) can be regarded as the asymptotic form of PARQ. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper primarily introduces a convex regularizer that ensures discrete values and proposes an aggregate proximal map to guarantee last-iterate convergence. In my opinion, the authors' motivation and analysis for these two innovative contributions are clear and well-articulated. 
They effectively highlight the similarities and differences with ProxConnect, offering a distinct perspective on the problem. Other Comments Or Suggestions: Is there a specific reason why the best results under each setting in Tables 1-3 are not all highlighted in bold? Questions For Authors: 1. Could the authors please clarify and analyze why PARQ exhibits suboptimal performance at higher bit-widths? 2. Could the authors please clarify and analyze why PARQ demonstrates suboptimal performance on more complex networks, such as ResNet-50? 3. In Section 3.1, the title references ProxQuant, but the paragraph lacks any description or comparison related to it. Could the authors revise this section to include relevant details (does ProxQuant utilize Prox-SGD)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our main contributions on convex regularization for inducing quantization and the AProx method with last-iterate convergence. Our response will focus on the experiment results and analysis. We agree that we can make the discussion on experiments more comprehensive, especially with the one extra page allowed for the final version. In particular, we should emphasize that the main goal of our experiments is to demonstrate that PARQ obtains competitive performance, not necessarily always better than existing approaches such as STE or BinaryRelax (for which we provide a novel, principled interpretation). There are several aspects to discuss: * The experiment results presented are TEST accuracies, following the convention of the ML community. We developed PARQ as a rigorous optimization/training framework for QAT, but our work does not yet address the generalization capability of the regularization, which is an interesting topic that we aim to investigate in future work. * The final training losses for the different algorithms are also very close; see Figures 10 and 12. Even for the training loss, it is hard to guarantee that PARQ is always better than the others due to the nonconvex loss functions in deep learning: the results depend on the random initializations and random mini-batches in training. Our convergence guarantees are developed for convex losses, which also imply similar behaviors around a local minimum in the nonconvex case, but in general cannot guarantee a better local minimum. Many relative comparisons in the tables are statistically insignificant, especially given the small number of runs with random seeds. * On the other hand, the similar performance of STE and PARQ is somewhat expected, as STE can be interpreted as the asymptotic form of PARQ. 
We are not contrasting two drastically different approaches, but rather arguing that the one with a sound principle is as good as the widely successful heuristic (and the principled approach enables guided further investigation). Similarly, BinaryRelax uses a prox map similar to PARQ's (see Figure 9), but it does not correspond to a convex PAR (hence it has less theoretical support). * We mainly commented on the very-low-bit cases (1 bit or ternary) due to the observed, relatively large, half-point improvements. The low-bit cases have much less freedom compared with more bits, so it may be relatively easier to reveal differences between different local minima. The gradual evolution of PARQ from piecewise-affine soft quantization to hard quantization may help the training process to be more stable and more likely to converge to better local minima (see our comments in Section 6). Again, there is no guarantee that this happens for every model we try (especially with a small number of trials). We agree that our experimental comparisons are limited. However, it is not our intent to give a comprehensive comparison against many recent QAT methods, as many recent methods integrate small additional tweaks beyond the fundamental ideas in order to boost empirical performance, and it can be unfair if we do not equip every method with similar bells and whistles. We limit our comparison to BinaryConnect (STE) and BinaryRelax because they have direct connections with our method, as explained above. In addition, STE is still the de facto standard in practice despite the many new methods and variations proposed. * On "why the best results under each setting in Tables 1-3 are not all highlighted in bold": we only highlight the best results that appear to be statistically significant, meaning the differences between the means are apparently larger than their standard deviations. Answers to Questions for Authors: * For Questions 1 and 2, please see our itemized explanations/discussions above. 
In addition, we conjecture that larger bit-widths and more complex networks may have the advantage of being over-parametrized, leading to very small differences between the local minima found by different methods. * For Question 3, ProxQuant (Bai et al. 2018) proposed to use the W-shaped regularizer (nonconvex) and indeed uses the Prox-SGD method. As we explain in Section 3, Prox-SGD will NOT produce a meaningful quantization effect as training evolves over time ($t\to\infty$). In order to fix this empirically, as described in their implementation, the regularization parameter $\lambda$ is changed to grow linearly, $\lambda_t = \lambda \cdot t$, without principled/theoretical justification. It turns out that this is consistent with our AProx algorithms, where an increasing regularization strength should be applied according to our theory. Their actually implemented algorithm is then similar to PARQ, except that they use a nonconvex, W-shaped PAR. So in the end, it is closer to BinaryRelax, which we included in our experiments.
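The role of the growing schedule $\lambda_t = \lambda \cdot t$ can be illustrated with a toy 1-D prox computation (a hedged sketch that assumes a $1/t$ step-size decay; the soft-threshold below is a simplified stand-in for the actual updates, not code from either paper):

```python
def prox_abs(w, q, s):
    # prox of s*|w - q|: move w toward the level q by s, snapping once within s
    d = w - q
    return q + (1 if d > 0 else -1) * max(abs(d) - s, 0.0)

q, lam, w = 1.0, 0.1, 0.7
# Prox-SGD with a decaying step size eta_t = 1/t: the effective prox
# strength eta_t * lam vanishes, so the quantization pull disappears
print(prox_abs(w, q, lam / 1000))  # barely moves at t = 1000
# With lam_t = lam * t, the effective strength eta_t * lam_t = lam stays
# constant, so the pull toward q persists at every iteration
print(prox_abs(w, q, lam))         # a full step of size lam toward q
```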
CodeSync: Synchronizing Large Language Models with Dynamic Code Evolution at Scale
Accept (poster)
Summary: This paper introduces CodeSync, a data engine for identifying outdated code patterns and collecting real-time code knowledge updates from Python third-party libraries. Building upon CodeSync, the authors further develop CodeSyncBench, a comprehensive benchmark for assessing LLMs’ ability to stay synchronized with code evolution, which covers real-world updates for 220 APIs from six Python libraries. Extensive experiments on 14 state-of-the-art LLMs reveal that they struggle with dynamic code evolution, even with the support of advanced knowledge updating methods (e.g., DPO, ORPO, and SimPO). This benchmark offers a strong foundation for the development of more effective methods for real-time code knowledge updating in the future. Claims And Evidence: The paper introduces an interesting benchmark to evaluate the synchronized ability of LLMs for dynamic code evolution. The experiments conducted in Section 2.4 provide convincing evidence to support the proposal. Methods And Evaluation Criteria: The pipeline of constructing the dataset and the evaluation criteria of the benchmark seem reasonable. Theoretical Claims: No Experimental Designs Or Analyses: I have checked the experimental designs and analysis Supplementary Material: I have read the Supplementary Material, especially on the experimental setting part. Relation To Broader Scientific Literature: The presented novel dataset and benchmark for dynamic code evolution can serve as a complement to the existing progress on LLMs for code intelligence. Essential References Not Discussed: The related works are sufficiently discussed in this paper. Other Strengths And Weaknesses: Pros: 1. This paper is well presented with a clear structure. 2. The studied problem of synchronizing LLMs with dynamic code evolution is well motivated and interesting. 3. The experiments conducted in this paper are comprehensive, and several interesting findings are delivered in this paper. Cons: 1. 
This paper assumes that LLMs should always be updated to the latest API versions. However, this may not align with real-world development needs, since many projects continue using older versions for stability and compatibility. If CodeSync fine-tunes only on the latest API changes, it might lead to over-specialization. 2. As LLMs generate legacy-updated API pairs, they may introduce errors. If the dataset includes incorrect API updates, this could mislead LLM training and evaluation, compromising the reliability of the benchmark. 3. Library evolution is a continuous process that not only involves modifying existing APIs but also includes introducing new functionality and deprecating outdated APIs. The paper discusses API changes but does not explain how CodeSync deals with APIs that are removed without direct replacements (deprecated APIs). Other Comments Or Suggestions: No Questions For Authors: 1. The paper provides limited details on how the 220 APIs are selected from the 6,036 tracked updates. What specific criteria are used for filtering? More details on building the dataset would be beneficial for assessing the benchmark's representativeness and reliability. 2. Since libraries continue to evolve, how will CodeSyncBench remain up to date? Is there a plan for periodic updates, or will the dataset become outdated as newer API changes emerge? 3. What is the "Parameter Mapping" that is referred to in Appendix B.1.2? Would this method include some APIs that are not updated? 4. Could the authors provide a clear and detailed explanation of the process for identifying valid method API invocations, given that method APIs often involve dynamic binding? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Dear Reviewer Jxkj,** We would like to express our sincere gratitude for your thoughtful and constructive feedback. We have addressed all of the comments and thoroughly presented our most recent experimental findings. --- **Q1:** The paper provides limited details on how 220 APIs are selected from the 6,036 tracked updates. What specific criteria are used for filtering? More details on building the dataset would be beneficial to assess the benchmark's representativeness and reliability. **A1:** As detailed in Appendix B.1, we filtered out APIs with insufficient real-world usage. Specifically, we selected 220 APIs that each had at least 15 retrieved real-world invocation examples. --- **Q2:** Since libraries continue to evolve, how will CodeSyncBench remain up to date? Is there a plan for periodic updates, or will the dataset become outdated as newer API changes emerge? **A2:** Yes, we plan to maintain and periodically update CodeSyncBench to reflect ongoing API changes, ensuring it remains a relevant and reliable benchmark for evaluating LLMs' ability to stay synchronized with evolving API knowledge over time. --- **Q3:** What is the "Parameter Mapping" that is referred to in Appendix B.1.2. Would this method include some APIs that are not updated? **A3:** As described in Appendix B.1.2, parameter mapping is a standard technique in software engineering for detecting API signature changes. For each pair of APIs across versions, we analyze: - Parameter types: positional-only, keyword-only, or both - Parameter attributes: required vs. optional We then construct a mapping from old-version parameters to their corresponding new-version parameters. If no mapping can be established—for example, due to newly added or removed parameters—this indicates an update. Additionally, if mapped parameters differ in type or attribute, we classify the API as changed. We will clarify this in the next version of our manuscript. 
--- **Q4:** Could the authors provide a clear and detailed explanation of the process for identifying valid method API invocations, given that method APIs often involve dynamic binding? **A4:** As stated in Appendix B.1, we initially collected 6,036 updated APIs and retrieved invocation samples for each. The next step is to identify target API invocations, as data retrieval relies on string matching. We then analyze the code context to determine the actual API calls. For method-level APIs, precisely identifying their invocations through static analysis alone is extremely challenging. Therefore, we limit our focus to cases where parameter types can be inferred from the code context. Specifically, we track the dataflow and control flow of each instance, then bind the class type and identify the method calls for each. This method ensures that actual API calls are captured. --- **W5:** This paper assumes that LLMs should always be updated with the latest API versions. However, it may not align with real-world development needs, since many projects continue using older versions for stability and compatibility. If CodeSync only fine-tuned on the latest API changes, it might lead to over-specialization. **A5:** Thanks for your comments. We think there may be a misunderstanding here. Actually, *CodeSyncBench* selected the current time as a snapshot to highlight the importance of API evolution. However, our framework is highly flexible: users can specify any preferred version ranges of a library to generate customized benchmarks. As we have shown below, users can set different start times and deadlines of API evolution for each library. The training set generated by CodeSync helps LLMs synchronize with the target API knowledge. Therefore, in real-world development scenarios, users can design their own training sets and benchmarks and fine-tune models using LoRA methods for further application and exploration.
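The parameter-mapping idea described in A3 can be sketched with Python's standard `inspect` module (a simplified illustration of the general technique, not the actual CodeSync implementation; the toy `api_v*` functions below are hypothetical):

```python
import inspect

def signature_changed(old_fn, new_fn):
    # Map each old-version parameter to its new-version counterpart;
    # report a change when no mapping exists (added/removed parameters)
    # or when a mapped parameter differs in kind (positional/keyword)
    # or in required-vs-optional status.
    old = inspect.signature(old_fn).parameters
    new = inspect.signature(new_fn).parameters
    if set(old) != set(new):
        return True  # parameters added or removed
    for name, p_old in old.items():
        p_new = new[name]
        if p_old.kind != p_new.kind:
            return True  # e.g. became keyword-only
        old_required = p_old.default is inspect.Parameter.empty
        new_required = p_new.default is inspect.Parameter.empty
        if old_required != new_required:
            return True  # required/optional status changed
    return False

def api_v1(x, y): pass
def api_v2(x, y=0): pass      # y changed from required to optional
def api_v3(x, *, y=0): pass   # y changed to keyword-only

print(signature_changed(api_v1, api_v2))  # True
print(signature_changed(api_v2, api_v3))  # True
print(signature_changed(api_v1, api_v1))  # False
```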
Summary: The paper proposes a new benchmark called CODESYNC to address the issue that in the real world, library functions evolve over time while LLM code generation models are not updated. They source the API function updates from 6 real-world repositories: pandas, numpy, scipy, tensorflow, torch, and flask. They collect the API function updates by comparing the function signatures between the latest version and the version from Jan 2023, which is the ChatGPT release date. They also source the invocations of these functions from GitHub and use LLMs to create old-version invocation code from the new-version invocation code. The resulting 1100 legacy-updated pairs are used to create 1100 code completion, error correction, and multiple-choice questions. The other 2200 pairs are used as training data for updating LLMs' library function knowledge. The results are evaluated across many open and closed LLMs. They also finetune open code LLMs with SFT, DPO, ORPO, SimPO, etc. to see if the finetuned models can improve performance on the CODESYNC benchmark. Claims And Evidence: - The paper claims to propose a data engine that systematically collects real-time API updates from various Python third-party libraries, and they show that they can use the method to collect many real-world API updates. - The paper claims to propose a novel benchmark for evaluating the performance of LLM code generation models, which is indeed different from previous API-update benchmarks like CodeUpdateArena. - The paper claims to have comprehensive evaluation, and they did show model evaluations across many open and closed LLMs. Methods And Evaluation Criteria: The paper uses BLEU, ROUGE, and relative edit-distance scores for the code completion and error correction tasks. Not including exact match score or semantic-aware matching score (exact match with respect to parameter ordering) is not ideal. 
Theoretical Claims: N/A Experimental Designs Or Analyses: The design of the benchmark draws heavily from existing real-world repos instead of relying on synthetic data like previous related work such as CodeUpdateArena, which makes it valuable for measuring the real-world impact of API updates on LLM code generation. However, the paper only focuses on function signature changes, whereas API changes may also include changes to function semantics. Another limitation of the benchmark is the fixed legacy version (Jan 2023), which may favor LLMs with later knowledge cutoff dates. Finally, the paper only includes 6 popular Python libraries, which may be a limitation of the benchmark. Supplementary Material: The supplementary material is a zip file containing benchmark and training data. Relation To Broader Scientific Literature: The paper is related to previous work on API update benchmarks like CodeUpdateArena. It also relates to knowledge-update work in the NLP literature. Essential References Not Discussed: There is a large body of knowledge-editing work in NLP that is not discussed. Other Strengths And Weaknesses: The use of real-world API changes and invocations makes this a very valuable benchmark for evaluating the performance of LLMs in the midst of real-world API changes. However, the paper only focuses on function signature changes, whereas API changes may also include changes to function semantics. Another limitation of the benchmark is the fixed legacy version (Jan 2023), which may favor LLMs with later knowledge cutoff dates. Please also see the questions below. Other Comments Or Suggestions: Please see the questions below. Questions For Authors: - Why is the HumanEval score for Qwen2.5-Coder-7B-Instruct so low? The results from the Qwen team show that it achieves 88.4% on HumanEval. - How is the training data related to the benchmark? Does every function change in the benchmark problem have a corresponding data point in the training set? 
- Is it possible to include a RAG + prompting baseline? - How much human effort is needed to create the synthetic legacy-update code from the latest API invocation data? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Dear Reviewer oxwH,** We sincerely appreciate your thoughtful review and your recognition of our work’s contributions to the community. --- **Q1:** Why is the HumanEval score for Qwen2.5-Coder-7B-Instruct so low? The results from the Qwen team show that it achieves 88.4% on HumanEval. **A1:** In fact, the HumanEval score we reported for Qwen2.5-Coder was obtained using the `bigcode-evaluation-harness` framework, which applies stricter evaluation criteria than those used by the Qwen team. Notably, it does not automatically add import statements and it enforces stop tokens, leading to a more rigorous evaluation. To ensure fairness, we have updated our evaluation setup to align more closely with the official criteria and re-evaluated all base and updated models. The revised scores are as follows: |Model|Base|SFT|DPO|SimPO|ORPO| |--- |---|---|---|---|---| |Qwen|65.24|62.80|61.59|63.41|63.41| |Qwen-Coder|82.32|82.32|82.93|82.93|81.71| |Llama|62.20|60.98|58.54|62.20|60.37| |CodeLlama|38.41|36.59|36.59|35.98|35.37| |DeepSeek-Coder|72.56|71.34|70.12|68.29|68.29| The updated results align with trends from the original settings, indicating that our conclusions remain valid. --- **Q2:** How is the training data related to the benchmark? Does every function change in the benchmark problem have a corresponding data point in the training set? **A2:** As stated in lines 255–262, for each API update, we collected 15 legacy-updated invocation pairs—10 for training and 5 for evaluation—ensuring that every API change in the benchmark has corresponding data points in the training set. However, this does not imply data leakage, as the training and benchmark data points use distinct code contexts, as discussed in our response to **Reviewer HLiK (W4)**. --- **Q3:** Is it possible to include a RAG + prompting baseline? **A3:** Thank you for the suggestion. We have included RAG baseline results in our response to **Reviewer HLiK (Q3)**. 
--- **Q4:** How much human effort is needed for creating the synthetic legacy update code from the latest API invocation data? **A4:** As stated in Sec. 2.3, to ensure benchmark quality, two authors manually verified the divergence between legacy and updated API invocations synthesized by DeepSeek-V3. On average, verifying and guiding re-synthesis required \~1 minute per invocation pair. --- **W5:** The paper uses BLEU, ROUGE, and relative edit-distance scores for the code completion and error correction tasks. Not including exact match score or semantic-aware matching score (exact match with respect to parameter ordering) is not ideal. **A5:** Thank you for the suggestion. We have addressed this concern by incorporating AST-based semantic matching into our evaluation; see our response to **Reviewer HLiK (Q1)**. --- **W6:** The paper only focuses on function signature changes, whereas API changes may also include changes to function semantics. **A6:** While API updates that involve purely semantic changes without signature modifications do occur, they are relatively uncommon and undocumented. Thus, our benchmark mainly focuses on API signature changes, as these typically reflect substantial semantic modifications and can be systematically tracked. --- **W7:** Another limitation of the benchmark is the fixed legacy version (Jan 2023), which may favor LLMs with later knowledge cutoff dates. **A7:** While LLMs with later knowledge cutoffs may have greater exposure to recent API updates, this does not contradict the benchmark’s objective—evaluating LLMs' ability to access and adapt to real-time API updates. In fact, this setup reflects a realistic scenario for assessing up-to-date knowledge. Nevertheless, even models with recent cutoffs, such as DeepSeek-R1 (July 2024), achieve low BLEU scores (e.g., 19.32), indicating limited effectiveness in handling evolving APIs. This further supports the need for explicit knowledge updating methods. 
--- **W8:** The paper only includes 6 popular Python libraries, which may be a limitation of the benchmark. **A8:** Although CodeSyncBench currently focuses on six widely-used Python libraries, our proposed CodeSync engine is generalizable. It can track API updates across any Python library over arbitrary time periods. Thus, the benchmark can readily be extended by broadening the version range and including more libraries as needed. We leave the extension to our future work.
Summary: The paper introduces CodeSyncBench, a benchmark to evaluate LLMs’ abilities to invoke the most recent versions of Python APIs. The benchmark shows that LLMs struggle to invoke APIs in the benchmarks correctly as measured by the chosen metrics. Further, the authors use various alignment techniques such as SFT, DPO and SimPO to fine-tune the LLMs to invoke the APIs correctly. Claims And Evidence: The claims are more or less backed by concrete evaluations (see comments below). One concern that stands out for me is the lack of commentary on the leakage of samples from CodeSyncBench into the training data of models being evaluated. Methods And Evaluation Criteria: 1. The metrics chosen for checking the validity of API invocations are dated and should no longer be used. It is well-known that these metrics (string match based criteria) suffer from not capturing the nuances of code syntax and semantics. I recommend the authors to instead use static analysis to verify the correctness of API invocations [1, 2, 3]. Measures like edit distance can be very high despite the model prediction being semantically equivalent to the ground truth. 2. There can typically be multiple ways of invoking the same API to achieve the same outcome, e.g., `np.sum([a, b]) <=> np.sum([b, a])` but the paper does not call these out or comment on how such edge cases are being handled during evaluations. [1] Patil, Shishir G., et al. "Gorilla: Large language model connected with massive apis." Advances in Neural Information Processing Systems 37 (2024): 126544-126565. [2] Ding, Yangruibo, et al. "Crosscodeeval: A diverse and multilingual benchmark for cross-file code completion." Advances in Neural Information Processing Systems 36 (2023): 46701-46723. [3] Jain, Nihal, et al. "On Mitigating Code LLM Hallucinations with API Documentation." arXiv preprint arXiv:2407.09726 (2024). Theoretical Claims: n/a Experimental Designs Or Analyses: 1. 
The evaluations conducted in the paper rely on alignment methods such as SFT, DPO, etc. While different methods improve performance over the baseline, the results among the methods appear largely mixed, and no concrete conclusion can be made about which method helps improve performance the most. 2. I believe that RAG must be considered as a baseline and is missing from the paper. If the experiment is out of scope, the paper should discuss when its methods should be considered over RAG. It is now common knowledge to use RAG to update LLM knowledge, and it’s unclear when the methods discussed in the paper would be adopted solely to update models’ knowledge about a few APIs. Supplementary Material: I did not review the Appendix. Relation To Broader Scientific Literature: This paper is related to generating updated API invocations with large language models. It provides a new benchmark to study this problem and surveys several post-training methods to address this problem. The paper’s finding that post-training methods slightly improve performance on the task is aligned with common knowledge about LLMs. Essential References Not Discussed: The paper does not discuss contemporary works also focusing on API invocations [1, 2] and the benchmarks developed in [3, 4]. Further, the paper does not consider RAG, a popular and simple approach to updating API invocation knowledge [5], in its evaluations. [1] Jain, Nihal, et al. "On Mitigating Code LLM Hallucinations with API Documentation." arXiv preprint arXiv:2407.09726 (2024). [2] Patil, Shishir G., et al. "Gorilla: Large language model connected with massive apis." Advances in Neural Information Processing Systems 37 (2024): 126544-126565. [3] Kuhar, Sachit, et al. "LibEvolutionEval: A benchmark and study for version-specific code generation." *arXiv preprint arXiv:2412.04478* (2024). [4] Islah, Nizar, et al. "GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models." 
*arXiv preprint arXiv:2411.05830* (2024). [5] Asai, Akari, et al. "Self-rag: Learning to retrieve, generate, and critique through self-reflection." *The Twelfth International Conference on Learning Representations*. 2023. Other Strengths And Weaknesses: The text in Section 3 is largely a regurgitation of the results in the tables. The reader is not provided meaningful insights, if any, that can be drawn from the tables in the paper. Finally, the paper says that existing knowledge techniques “show limitations” without diving deep into these. It would be useful if the authors could comment on the “further refinements” that are needed. In fact, this should form the core focus of the paper as other details (such as SFT, DPO, etc. will improve performance somewhat) are well-known facts. Other Comments Or Suggestions: Line 149 - thirt → third Questions For Authors: 1. How do the results of the paper change if you use static analysis methods, such as AST matching, etc. instead of string-matching approaches for evaluation? Incorporating these evaluations in your benchmarking package will improve trust in your benchmark. 2. Why do you think current knowledge updating methods discussed in the paper are limiting? What’s the expected upper-bound on performance using the current evaluations and metrics? What kind of improvements are you suggesting will improve performance further? 3. How does the simple RAG baseline compare with the methods discussed in the paper? When or when not should someone use your method as opposed to RAG to update API invocation knowledge? Code Of Conduct: Affirmed. Overall Recommendation: 2
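The AST-based evaluation the reviewer asks for in Q1 can be approximated with Python's stdlib `ast` module. The following is our own simplified sketch, not the paper's evaluation code: a completion that fails to parse scores zero (the fallback strategy the authors describe in their rebuttal), and otherwise its normalized AST dump is compared against the reference:

```python
import ast

def ast_score(completion: str, reference: str) -> float:
    """Crude structure-aware score: 0.0 if the completion is not valid
    Python; 1.0 if its AST matches the reference exactly; 0.5 otherwise
    (valid syntax but a different structure)."""
    try:
        comp_tree = ast.parse(completion)
    except SyntaxError:
        return 0.0  # fallback: invalid syntax or natural-language output
    ref_tree = ast.parse(reference)
    # ast.dump normalizes formatting, so whitespace differences vanish
    return 1.0 if ast.dump(comp_tree) == ast.dump(ref_tree) else 0.5

print(ast_score("np.sum(x, axis=0)", "np.sum( x , axis = 0 )"))  # 1.0
print(ast_score("Sure! Here is the code:", "np.sum(x)"))         # 0.0
```

A real metric like CodeBLEU additionally weights n-gram, AST-subtree, and dataflow matches; the point here is only that AST comparison is robust to surface-level string differences that BLEU or edit distance would penalize.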
Rebuttal 1: Rebuttal: **Dear Reviewer HLiK,** We sincerely appreciate your suggestions and assessment of our work! Motivated by your feedback, we are committed to improving our manuscript and providing a more comprehensive benchmark and evaluations! --- **Q1:** How do the results of the paper change if you use static analysis methods, such as AST matching, etc. instead of string-matching approaches for evaluation? **A1**: Thanks for the suggestion. We had in fact considered CodeBLEU to measure the effectiveness of API invocation updating from the perspective of AST matching. However, we found that, in practice, many LLM completions contained invalid syntax or natural language, causing them to fail AST parsing (e.g., \~70% of Qwen2.5-Coder completions on our benchmark failed AST parsing). As a result, we compute CodeBLEU using a fallback strategy: if an API invocation passes AST parsing, we compute CodeBLEU against the ground truth; otherwise, we assign a score of zero for that sample. The results for Qwen2.5-Coder are shown below: |Method|CCT BLEU↑|CCT CodeBLEU↑|ECT BLEU↑|ECT CodeBLEU↑| |---|---|---|---|---| |Original|5.89|29.41|11.64|32.68| |SFT|15.44|31.17|19.20|36.03| |DPO|23.36|38.67|55.57|44.95| |ORPO|21.47|37.06|56.92|40.62| |SimPO|23.86|39.39|54.57|45.43| > The experimental results align with trends from the original metrics, indicating that our conclusions remain valid under structure-aware evaluation. (Full results are available at: https://anonymous.4open.science/r/CodeSyncRebuttal-D65F ). --- **Q2:** Why do you think current methods are limiting? What’s the expected upper-bound on performance? What kind of improvements are you suggesting will improve performance further? **A2:** Thanks for your comments. Current knowledge-updating methods are limited in helping LLMs fully internalize and recall updated APIs, as evidenced by suboptimal CodeBLEU scores (<40) in the CCT task. 
The upper bound of performance can be quantified by evaluating LLMs’ ability to invoke legacy APIs. The empirical upper bounds for Qwen2.5 and Qwen2.5-Coder are as follows: |Model|Original|Best Updating Method|Upper Bound| |---|---|---|---| |Qwen|30.21|39.67|**42.05**| |QwenCoder|29.41|40.70|**45.12**| > The results indicate a significant performance gap with the upper bounds. To close it, we recommend exploring hybrid approaches, as discussed in our response to Q3. --- **Q3:** How does the simple RAG baseline compare with the methods discussed in the paper? When, or when not, should someone use your method as opposed to RAG to update API invocation knowledge? **A3:** As stated in lines 104–109, unlike RAG, which adds inference overhead through retrieval for each query, our CodeSyncBench focuses on evaluating and improving LLMs’ ability to internalize API updates. FT methods allow models to be updated without extra cost at inference time, making them preferable in latency-sensitive or offline scenarios. Moreover, RAG's reliance on retrieval quality is problematic, especially with large sets of real-world API updates. In contrast, CodeSync tracks API changes and integrates them into LLMs, enabling more reliable recall of updated API knowledge. To address your concern, we test a RAG baseline with a vector database of 5,056 updated API signatures from the six libraries mentioned in the paper, indexed with OpenAI’s `text-embedding-3-large` model. |Method|CCT CodeBLEU↑|ECT CodeBLEU↑|MCQ P@1↑| |---|---|---|---| |Original|29.41|32.68|32.56| |SFT|31.17|36.03|35.16| |DPO|38.67|44.95|37.00| |RAG|35.17|42.26|34.26| |SFT+RAG|**40.70**|**51.35**|**36.89**| > Results show that RAG performs worse than FT methods (35.17), primarily due to a retrieval success rate of only \~60% (665/1100). However, RAG achieves better results (40.70) when combined with FT methods, highlighting the potential of hybrid approaches. 
In summary, knowledge updating methods outperform RAG in scenarios requiring efficient inference and precise API knowledge tracking. We will incorporate these results and analysis into the revised manuscript. --- **W4:** One concern that stands out for me is the lack of commentary on the leakage of samples from CodeSyncBench into the training data of models being evaluated. **A4:** To demonstrate that there is no data leakage between the training data and the benchmark, we compute the n-gram similarity of all samples associated with each API. Low n-gram similarity scores (<15%) indicate minimal overlap and thus confirm the absence of leakage. |N-gram|N=5|N=10|N=15|N=20| |---|---|---|---|---| |Score|12.62|10.21|9.24|8.52| Specifically, after retrieving data for APIs, we deduplicate all data and then split them into two parts. This approach ensures a clean separation and prevents any potential data leakage. **W5:** The paper does not discuss contemporary works also focusing on API invocations and related benchmarks. **A5:** Thanks for the suggestion. We will include all the mentioned works in the next version of the paper.
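The n-gram leakage check described in A4 above can be reproduced in spirit with a few lines of Python. This is a hedged sketch under our own assumptions (token-level n-grams, one-directional overlap; the paper's exact similarity definition may differ), and the two code strings are hypothetical examples:

```python
def ngrams(tokens, n):
    """Set of contiguous n-grams over a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(code_a: str, code_b: str, n: int = 5) -> float:
    """Fraction of code_a's n-grams that also appear in code_b."""
    a, b = ngrams(code_a.split(), n), ngrams(code_b.split(), n)
    if not a:
        return 0.0
    return len(a & b) / len(a)

# Hypothetical train/benchmark samples invoking the same API
train = "df = pd.read_csv(path) ; out = df.groupby('k').agg('sum')"
bench = "frame = pd.read_csv(src) ; frame.to_parquet(dst)"
print(ngram_overlap(train, bench, n=5))  # low overlap -> little leakage
```

Averaging such scores over all train/benchmark pairs that share an API would yield numbers comparable to the <15% figures reported in the table above.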
Robust Noise Attenuation via Adaptive Pooling of Transformer Outputs
Accept (spotlight poster)
Summary: This paper studies the problem of pooling methods in transformers for non-sequential tasks where only a subset of input tokens (signal) is relevant for downstream decision-making, while the rest (noise) may degrade performance. The authors formulate a theoretical framework that formalizes pooling as a vector quantization problem, leading to error bounds for different pooling methods. Under this view, the authors analyze common pooling methods—average pooling, max pooling, and class token-based pooling—and show that each fails under certain signal-to-noise conditions. They then investigate adaptive attention-based pooling (AdaPool), a technique adapted from Stergiou & Poppe (2023), and demonstrate that it can approximate an optimal pooling function across varying noise conditions. They provide empirical validation for their theoretical findings on synthetic supervised tasks and reinforcement learning (RL) environments (Multi-Particle and BoxWorld), demonstrating AdaPool’s robustness to varying signal-to-noise ratios. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: The proofs and theoretical claims seem sound, however, I did not check all the math in detail. Experimental Designs Or Analyses: - Synthetic dataset experiments are well-structured to explicitly test signal loss across varying SNRs. Supplementary Material: no Relation To Broader Scientific Literature: Connects well with prior work in attention-based pooling, vector quantization, and associative memories. Essential References Not Discussed: I am not very familiar with the related work, as such I cannot answer this question. Other Strengths And Weaknesses: The paper is very well written and nicely presented. Even without deep knowledge of the literature I had no problem following the authors. Other Comments Or Suggestions: \- Questions For Authors: - How does AdaPool’s computational cost compare to avg/max pooling, particularly for large-scale models? 
- How does AdaPool handle cases where some tokens are partially informative but not strictly signal or noise? - Have you considered extending AdaPool to multi-query setups for more robust feature aggregation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your time and feedback. Your questions raise some important points that we should have discussed in the original submission. **1. Computational Complexity**: AvgPool and MaxPool can be implemented with `O(n * d)` algorithms, as both require each of the `d` features of each of the `n` vectors to be visited during aggregation. Self-attention famously experiences quadratic computational costs with respect to the context size `n`. In particular, the attention weight computation is `O(n^2 * d)`, as each vector in the input set computes attention weights for each other vector in the input set, i.e., `n^2` dot products of `d`-dimensional vectors. However, AdaPool uses cross-attention with a single query, requiring the computation of only 1 set of attention weights rather than `n` sets. This reduces the weight computation and pooling time down to `O(n * d)` - the same as Max & Avg. However, AdaPool still retains the overhead of the QKV projections of the input set, which involves standard matrix multiplications for each input vector, running in `O(n * d^2)`. Thus, the overall time complexity of AdaPool is `O(n * d + n * d^2)`. The key takeaway is that it scales linearly with the number of inputs (like max/avg) but quadratically with the number of features (like a standard linear layer). In isolation, AdaPool is slower - the increased expressivity comes at a computational cost. However, in practice, these pooling layers are placed at the end of a multi-layer transformer or similarly sized encoder network, and the compute time added by the pooling layer is marginal. For our particular applications, when using a transformer with 3+ layers, the time difference between AdaPool and Max/AvgPool networks was negligible, regardless of the number of features `d`. Also of note, ClsToken is significantly slower than all other methods. 
This is due to the fact that the learned class embedding must be copied and concatenated along the batch dimension of the input for every inference, which is quite expensive in terms of both time and memory during training. This is a known hindrance investigated by Zhai et al. (https://arxiv.org/abs/2106.04560), which led them to explore the use of AvgPool for efficiency reasons when scaling vision transformers. **2. Handling Ambiguous Inputs:** To address this question, we conducted additional experiments on an image classification task (CIFAR 10/100) where each input vector may contain nebulous amounts of signal or noise, in contrast to the other experiments we presented. Due to character count limits, please see our response to reviewer PXAR above for details about these experiments and their results. In summary, the findings were consistent with our theory and other experiments. **3. Multi-Query Extension:** Using multiple queries or even employing multiple different pooling methods and concatenating the results may result in even better downstream representations. We have not tested this, but it is certainly a viable approach that is worth looking into in the future. Incorporating this additional discussion into our revisions will help make the paper much more robust. Thank you for your review!
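The complexity argument in the rebuttal above can be made concrete with a minimal single-query cross-attention pooling layer. This is a NumPy sketch under our own assumptions, not AdaPool as published (which additionally involves learned adaptive components beyond a plain attention readout):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(X, Wq, Wk, Wv, q):
    """Pool n vectors of dim d into one vector with a single query.
    The QKV projections cost O(n * d^2); the single set of attention
    weights costs O(n * d) -- linear in n, unlike O(n^2 * d)
    self-attention, which computes n sets of weights."""
    K, V = X @ Wk, X @ Wv                            # (n, d) each
    scores = (q @ Wq) @ K.T / np.sqrt(X.shape[1])    # (n,) -- one weight set
    return softmax(scores) @ V                       # (d,) weighted average

rng = np.random.default_rng(0)
n, d = 8, 16
X = rng.normal(size=(n, d))                  # transformer output set
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
q = rng.normal(size=d)                       # the single learned query
pooled = attention_pool(X, Wq, Wk, Wv, q)
print(pooled.shape)  # (16,)
```

Note that with uniform attention weights (e.g., a zero query projection) the output reduces to an average of the value vectors, consistent with the paper's view of AvgPool as a special case of the more general adaptive pooling framework.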
Summary: This paper investigates pooling methods for transformer embeddings in tasks where only a subset of inputs carries signal and the remainder are noise. It shows that standard methods like average and max pooling can collapse in performance as the signal-to-noise ratio fluctuates. The authors propose an attention-based adaptive pooling method that approximates a signal-optimal vector quantizer, providing theoretical error bounds that guarantee robustness across various noise levels. Their approach is validated through supervised experiments on synthetic datasets and reinforcement learning benchmarks, demonstrating superior performance in noisy settings. Overall, the work establishes a framework for enhancing transformer pooling in non-sequential, noise-prone applications. Claims And Evidence: The paper’s primary claims—that standard pooling methods collapse under variable signal-to-noise ratios and that adaptive, attention-based pooling can approximate the signal-optimal vector quantizer—are generally supported by a combination of rigorous theoretical analysis and empirical validation. The authors derive clear error bounds and show mathematically that AvgPool and MaxPool are specific instances of the more general AdaPool method, which is capable of handling noise across any SNR regime. Their experimental results, conducted on both synthetic datasets and complex reinforcement learning benchmarks, provide convincing evidence that AdaPool consistently achieves superior robustness and performance compared to traditional pooling methods. However, some of the theoretical guarantees rely on idealized assumptions regarding the separability of signal and noise in the relation space, which may not always hold in real-world settings and could benefit from further investigation. Methods And Evaluation Criteria: The paper's methods and evaluation criteria are well-suited to the problem of noise attenuation in transformer pooling. 
The authors establish a rigorous theoretical framework using vector quantization and signal loss minimization, which logically underpins their proposed adaptive pooling method. Their experimental design includes a synthetic dataset to isolate the noise impact, followed by evaluations on established reinforcement learning benchmarks (MPE and BoxWorld) that effectively mimic real-world noisy conditions. Together, these methods and benchmarks provide a comprehensive and convincing validation of the approach for non-sequential tasks where noise is a critical factor. Theoretical Claims: Overall, the proofs appear correct within their stated assumptions, but the reliance on these idealized conditions may limit the direct applicability of the theoretical guarantees in real-world settings. - The proof that the signal-optimal pooled vector is the centroid of the signal subset (Corollary 3.3), which is mathematically sound given the mean squared error definition. - The derivations showing that AvgPool and MaxPool are special cases of the more general adaptive pooling framework (Corollaries 3.5, 3.6, and 3.8). These proofs correctly demonstrate the limitations of traditional pooling methods under varying noise conditions. - The error bounds established in Theorem 3.11 for AdaPool's approximation to the signal-optimal quantizer are derived rigorously under the assumption of linearly separable relation neighborhoods. While the derivations are mathematically consistent, they rely on idealized assumptions—such as the clear separation (margin) between signal and noise relations—which might be challenging to guarantee in practical scenarios. Experimental Designs Or Analyses: The experimental designs are generally sound and align well with the paper’s goals. The authors first construct a synthetic dataset where the noise-to-signal ratio is carefully controlled, allowing for a focused evaluation of how each pooling method manages signal loss. 
They then validate their findings on established benchmarks such as the Multi-Particle Environment (MPE) and BoxWorld, which are relevant for multi-agent reinforcement learning and relational reasoning tasks. This multi-tiered approach strengthens the empirical evidence for the proposed adaptive pooling method. One potential issue is that the synthetic dataset, while useful for isolating noise effects, might oversimplify the complexities of real-world noise; however, the inclusion of realistic RL benchmarks helps mitigate this concern. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: The paper’s contributions build on the foundational transformer architecture (Vaswani et al., 2017) by addressing a critical yet underexplored aspect: the pooling of transformer outputs for non-sequential tasks. Its adaptive pooling method extends prior work in vector quantization and selective attention, drawing parallels with research on associative memories such as Hopfield Networks and Dense Associative Memories, which also focus on robustly retrieving relevant information amidst noise. Moreover, the work connects with recent advances in computer vision and reinforcement learning, where attention-based pooling and class token strategies have been successfully employed to handle complex, noisy data. By offering rigorous theoretical error bounds and empirical validation, the paper unifies traditional pooling methods (like AvgPool and MaxPool) with modern attention mechanisms, thereby enriching the broader literature on robust representation learning and relational reasoning. Essential References Not Discussed: While the paper comprehensively cites many foundational works, some recent studies could provide additional context for its key contributions. 
For instance, adaptive pooling strategies in transformer architectures—such as the dynamic pooling method proposed in [Lee et al., 2024] at ICML, which adjusts to varying noise levels—offer alternative approaches that are highly relevant to the paper's focus but are not currently discussed. Additionally, although the authors reference classical vector quantization and associative memory literature, they omit recent work on noise-robust pooling in graph neural networks (e.g., Xu et al., 2022), which addresses similar challenges in aggregating noisy, high-dimensional data. These omitted references highlight complementary ideas and alternative mechanisms for robust aggregation that could enrich the theoretical and empirical framework of the current work. Including a discussion of these studies would situate the paper more firmly within the broader landscape of robust representation learning. Other Strengths And Weaknesses: The paper is notable for its originality in creatively combining vector quantization techniques with attention-based pooling mechanisms, which provides a fresh perspective on mitigating noise in transformer outputs. Its strengths include a solid theoretical framework with detailed proofs that derive meaningful error bounds, and an empirical evaluation that spans both synthetic and realistic RL benchmarks, underscoring the practical relevance of the approach. The work significantly advances our understanding of how traditional pooling methods like AvgPool and MaxPool fail under varying noise conditions and demonstrates the potential of adaptive pooling in overcoming these limitations. However, the reliance on idealized assumptions regarding the separability of signal and noise in relation space may restrict the direct applicability of the theoretical guarantees in all real-world scenarios. 
Additionally, while the paper is generally clear, some of the denser theoretical derivations could benefit from more intuitive explanations to enhance accessibility for a broader audience. Other Comments Or Suggestions: The paper is overall very strong, but a few suggestions might help improve its clarity and accessibility. For instance, some of the theoretical derivations are quite dense—adding more intuitive explanations or illustrative examples could make these parts more digestible for a broader audience. There are also a few minor typos and notation inconsistencies in both the main text and supplementary material that should be corrected. Additionally, while the experimental setup is comprehensive, it might be beneficial to include a discussion of scenarios where the assumptions (e.g., the clear separation between signal and noise) might not hold, and how the proposed method would perform in those cases. Questions For Authors: 1. Could you provide more insight into how robust the adaptive pooling method is when the assumption of clear separability between signal and noise relations (as per Assumption 3.10) is relaxed or violated? A more detailed discussion or empirical analysis in scenarios with overlapping distributions would help assess the method's applicability in real-world settings. 2. In your synthetic experiments, how sensitive is AdaPool to the choice of hyperparameters in the attention mechanism? An ablation study highlighting the impact of key parameters would clarify the robustness and ease of tuning the method. 3. Have you empirically validated the theoretical error bounds derived in Theorem 3.11? If so, could you compare the theoretical predictions with actual observed performance on benchmark datasets? This connection would strengthen the confidence in your theoretical claims. 4. 
Since AvgPool and MaxPool are special cases of AdaPool, can you discuss any potential trade-offs in computational complexity or performance when scaling AdaPool to larger datasets or more complex models? Understanding these trade-offs would inform its practical deployment. 5. Could you elaborate on AdaPool's performance in scenarios where the noise is highly correlated or where the noise and signal distributions significantly overlap? Clarification on these cases would help determine the method's limitations and potential areas for further improvement. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your comprehensive review. We saw the weak reject rating as a great opportunity to improve the quality of the paper, and hope to address the questions you raised regarding performance outside of our theoretical assumptions. **1 & 3. Idealistic Assumptions:** As questions 1 & 3 are related, we address them jointly here. We empirically evaluated the theoretical error bounds from Theorem 3.11 on the KNN-centroid dataset (N=32, d=16) for SNRs from 0.5 to 0.03. We know which vectors are signal and noise by design, so we can save off the relations rₛ and rₙ after computing dot products in the attention mechanism. With these, we derived ϵₛ, ϵₙ, M, and D, computed the error bounds, and checked if the weights for each vector fell between them. We note that this type of analysis is only possible when the signal and noise vectors are known and will typically not be feasible on a real-world dataset where the line between signal and noise is blurry. Surprisingly, the error bounds held for all 1 million samples at each noise level, even when the assumption of linearly separable signal and noise relations was violated (M < 0). This was observed both with trained networks and randomly initialized networks. Upon revisiting the proof of Theorem 3.11, **we realized that Assumption 3.10 was overly restrictive and NOT necessary for the error bounds to hold**. We can instead define M more loosely as M = min(rₛ) – max(rₙ), allowing this quantity to be positive or negative rather than strictly positive as assumed before. Theorem 3.11 thus applies more generally than was claimed in the original submission, which is a pleasant surprise. It is also worth noting that we observed empirically that M is nearly always less than zero on the KNN-centroid dataset, meaning AdaPool still outperformed other methods with some degree of overlap between signal and noise neighborhoods in those experiments.
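A minimal sketch of the looser margin check (illustrative function and variable names, not our experiment code):

```python
import numpy as np

def relation_margin(r_signal, r_noise):
    """Looser margin M = min(r_s) - max(r_n); may be negative when the
    signal and noise relation neighborhoods overlap (M < 0)."""
    return np.min(r_signal) - np.max(r_noise)

def adapool_weights(relations):
    """Softmax over dot-product relation scores, as in attention."""
    e = np.exp(relations - np.max(relations))  # numerically stable softmax
    return e / e.sum()

# Toy example with overlapping signal/noise relation neighborhoods.
rng = np.random.default_rng(0)
r_s = rng.normal(1.0, 0.5, size=8)    # relation scores of signal vectors
r_n = rng.normal(0.6, 0.5, size=24)   # relation scores of noise vectors
M = relation_margin(r_s, r_n)         # typically negative for this overlap
w = adapool_weights(np.concatenate([r_s, r_n]))
```

With the relation scores and the resulting weights `w` in hand, one can check whether each weight falls within the derived bounds regardless of the sign of M.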
It is also worth taking a step back and discussing why we derived and presented the bounds in the first place. The sole purpose was to show that with AdaPool, it is possible to approximate the optimal vector quantizer exactly for any SNR. The bounds are also helpful in illustrating which factors influence the approximation error most and how a signal-rich query can shrink those bounds. Since it is impractical to compute the bounds exactly on most real-world datasets, we did not present them to guarantee modeling performance on a particular problem. Another point we did not discuss is that using multiple attention heads allows the network to pool chunks of the input vectors separately. If the input vectors each contain a mix of signal and noise, the heads can weight partitions of the vector space separately. In fact, when using the same number of heads as features, AdaPool can be viewed as pooling each feature dimension separately. This would enable it to optimally pool such inputs, given a good choice of query, and may be a better modeling choice for real-world data. We experiment on real-world data in response to PXAR above. **2. Hyperparameters:** We performed a number of ablation studies on model hyperparameter choices in the Appendix; please see Tables 3-7. **4. Computational Complexity:** Due to character limits, we ask that you see our response to reviewer GbuG regarding this below. **5. Signal/Noise Overlap:** Suppose the signal and noise relation distributions overlap completely. If variance is low, applying softmax results in relatively uniform weights, yielding something closer to AvgPool. If variance is high, one relation score will dominate the others in the softmax, yielding weights closer to MaxPool. As a worst case, suppose the query is more similar to noise than signal. The dot product relations will be higher for all noise vectors, and the resulting aggregation will suppress signal and accentuate noise. 
However, if such a query vector exists, then it implies that the noise and signal are easily distinguished via dot product. A learned linear transformation (the Q projection) could rotate the query away from the noisy direction and towards the signal, given that dot products largely reflect directional alignment. So, practically speaking, the worst case for AdaPool would instead be when signal and noise are indistinguishable, in which case it reverts towards AvgPool or MaxPool as discussed above. **Related Works:** Can you provide full titles or links to the papers you referenced by Lee et al., 2024 and Xu et al., 2022? We also reviewed the line of work on noise-robust graph neural networks. However, those approaches focus on robust learning in the presence of mislabeled graph classification targets. While the titles and themes are similar, we felt that the actual research questions were not related enough to justify inclusion. As a result of your questions, we were able to identify an oversight and make a significant improvement to our theoretical claims. We really appreciate your review!
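The AvgPool/MaxPool limiting behavior discussed under point 5 can be illustrated with a small numeric sketch (toy values, illustrative only — not the paper's implementation):

```python
import numpy as np

def adapool(vectors, relations):
    """Attention-style pooling: softmax-weighted sum of the input vectors."""
    e = np.exp(relations - np.max(relations))  # numerically stable softmax
    w = e / e.sum()
    return w @ vectors

vecs = np.eye(3)                      # three toy input vectors
r = np.array([0.10, 0.20, 0.15])      # relation scores

low_var = adapool(vecs, r)            # near-uniform weights -> ~AvgPool
high_var = adapool(vecs, 100 * r)     # one score dominates -> ~MaxPool
```

Scaling the same relation scores up (higher variance) pushes the softmax from near-uniform weighting toward selecting the single highest-relation vector.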
Summary: This work analyzes the various pooling methods used in deep neural architectures. They establish a connection between pooling and vector quantization and demonstrate that adaptive pooling is more robust to the signal-to-noise ratio. They provide experimental results with a carefully created synthetic dataset and multi-agent RL environments. Claims And Evidence: The experimental results support the superiority of the adaptive pooling method. The experiments to assess the robustness are thorough and include varying the signal-to-noise ratio, network width, network depth, and data dimension. Methods And Evaluation Criteria: While this work doesn't propose any new method, it systematically analyzes different pooling methods and their outputs. The designed evaluation metric "signal loss" seems reasonable to me. Theoretical Claims: The formal connection established between pooling and vector quantization is pretty interesting. However, I would like to see the case where the margin M vanishes. Experimental Designs Or Analyses: The experiments are well designed and consider different factors of variation. Also, the experiments with multi-agent environments are nicely designed to capture the problem at hand. Supplementary Material: Supplementary material provides proofs of the theorems, detailed results, and hyperparameter values. Relation To Broader Scientific Literature: The authors discussed the relevance of the work to works related to associative memories. Also, it offers a different perspective based on vector quantization. However, adaptive pooling is already known as a robust approach compared to avg. pool or max pool. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper assumes that there is no overlap between the task-relevant vectors and noise vectors. However, in reality, this may not be the case. Some channels may have important information mixed with noise.
Other Comments Or Suggestions: N/A Questions For Authors: While the paper's discussion is for transformer outputs, I am curious whether the same analysis would be valid for CNN outputs. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to read and review our work! Regarding your questions - Upon further empirical analysis and review of our proof of Theorem 3.11, we realized that our assumption that the margin M must be greater than zero (i.e. signal and noise must be linearly separable) was not necessary for the bounds to hold. That is, the bounds are still effective when the neighborhoods of signal and noise relations overlap. We also address the case where there may be an overlap between signal and noise vectors (i.e. a single vector contains both relevant and irrelevant information) by using multiple attention heads. Please see our response to 3DVE below for more details on both of these topics. As to your question about applications to CNNs, we believe this line of work would be highly relevant to CNN-based image retrieval tasks, where global pooling of output feature maps is heavily utilized. We are less certain about the effectiveness of AdaPool in replacing Avg/Max pooling for pixel-level filter downsampling in standard convolutional architectures, but our analysis should be generally relevant to any pooling applications. If you have any further questions, please let us know!
Summary: The paper studies pooling methods for aggregating transformer embeddings—particularly in settings where only a subset of input vectors (signal) is task-relevant while the remainder (noise) may deteriorate performance. The authors reframe pooling as a vector quantization (or lossy compression) problem and show that common methods like Average Pooling (AvgPool) and Max Pooling (MaxPool) can fail under fluctuating signal-to-noise ratios. They introduce an attention-based adaptive pooling method (AdaPool) and provide theoretical error bounds that demonstrate its ability to approximate the signal-optimal compressor across noise regimes. These theoretical results are validated with experiments on synthetic supervised tasks as well as on reinforcement learning benchmarks, where AdaPool consistently exhibits greater robustness to noise. Claims And Evidence: The paper makes two core claims: - AdaPool can approximate the optimal vector quantizer for any signal-to-noise ratio with quantifiable error bounds. - Standard pooling methods (AvgPool, MaxPool, ClsToken) fail or degrade predictably as noise increases, while AdaPool remains robust. The claims seem to be supported by a theoretical framework (with the primary result being Theorem 3.11) and are accompanied by proofs provided in the appendix. On the experimental side, the evidence is drawn from controlled synthetic datasets as well as from RL environments where noise is systematically varied. Methods And Evaluation Criteria: The authors: - Formally define pooling as a differentiable vector quantization task with a focus on minimizing “signal loss”. - Analyze AvgPool and MaxPool as special cases, and then derive AdaPool based on an attention mechanism. - Use both theoretical error bounds and empirical evaluations to compare the pooling methods. The evaluation criteria—especially the use of synthetic datasets where the signal-to-noise ratio can be explicitly controlled—provide a clear measure of robustness. 
In reinforcement learning tasks, using both entity-based and pixel-based observations further tests the method in settings with varying noise levels. Theoretical Claims: The paper’s theoretical contributions include: - A derivation of the optimal pooling strategy as one that minimizes the signal loss. - A series of corollaries showing under what conditions AvgPool and MaxPool become optimal. - Theorem 3.11, which provides error bounds for the proposed adaptive pooling method. While the theory & proofs appear correct I am not familiar enough with the techniques and line of work to check this in detail. Experimental Designs Or Analyses: The experiments are synthetic and controlled in nature and include: - A supervised KNN-centroid task where noise levels are precisely controlled allows for direct measurement of signal loss across varying signal-to-noise ratios. - The authors evaluate performance in both a custom “simple centroid” scenario and a “simple tag” task in the Multi-Particle Environment, demonstrating how increased noise leads to degradation in performance for standard pooling methods. - A vision-based relational reasoning task that further challenges the pooling methods by introducing high-dimensional, pixel-based noise. The analysis of performance degradation is convincing. However, due to the synthetic nature of these experiments, it's difficult to extrapolate whether or not the proposed pooling method improves performance in more realistic tasks and environments. Supplementary Material: I reviewed parts of the supplementary material covering the theoretical claims and additional experimental ablations. Relation To Broader Scientific Literature: Most works in this space don't approach the problem of aggregating representations in sequence models from a signal-to-noise lens. 
Given my unfamiliarity with how the authors approached this problem and how it's somewhat disconnected from the literature in the vision and text space I'm unable to definitively comment on how the paper relates to the broader literature. Essential References Not Discussed: There is one prior approach [1] that also replaced the common CLS token representation with a similar learned pooling mechanism. This seems close enough that it's worth discussing. [1] Marcin Przewiezlikowski, Randall Balestriero, Wojciech Jasinski, Marek Smieja, Bartosz Zielinski. Beyond [cls]: Exploring the true potential of Masked Image Modeling representations. Other Strengths And Weaknesses: Strengths - The paper provides a novel framework that reinterprets pooling as vector quantization. - It offers comprehensive empirical validation across multiple domains. - The clear exposition of failure modes for AvgPool and MaxPool under varying noise regimes is a valuable contribution. Weaknesses - While the experiments are extensive, additional tests in real-world or more diverse noisy environments could further establish generality. - It's unclear how novel the approach is given related work in this space, e.g., Przewiezlikowski et al. Other Comments Or Suggestions: No further comments. Questions For Authors: 1. How sensitive is AdaPool’s performance to the choice of query vector? Have you explored different strategies for selecting the query in settings where the signal vector is not obvious? 2. Is the linear separability assumption (Assumption 3.10) going to hold in practical, real-world problems? 3. What is the computational overhead of AdaPool compared to AvgPool or MaxPool in terms of training time and inference speed? Although it should be negligible compared to the cost of a forward pass through a large model it's still more expensive than a simple reduction. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work and provide thorough feedback. The work from Przewiezlikowski et al. is indeed interesting and highly relevant to our discussion, and we will update our related works to acknowledge it. Their method appears equivalent to AdaPool with a learned embedding as the query. We believe that our analytical framework for analyzing pooling methods and our more general form of AdaPool distinguish our work as novel relative to this concurrent effort, even though the results of our analysis led to a similar solution of weighted pooling. Due to character limits, we address question 3 in our response to GbuG's review below. As to 1 & 2, we realize that by seeking to explicitly control the noise level, our experiments failed to address the following: what happens with real-world data where vectors are not purely signal/noise and the choice of query for AdaPool is not obvious? To address this concern, we conducted additional studies on image classification using the CIFAR 10 & 100 benchmark datasets. We use this experiment to explore various choices of query on real-world, less cleanly defined input vectors. We adopted the Vision Transformer (ViT) approach, partitioning the 32x32 pixel RGB images from CIFAR into 64 separate 4x4 pixel patches which are then flattened and projected to the dimension of the transformer (see Fig 1 in https://arxiv.org/abs/2010.11929). We show the arrangement of those 4x4 pixel patches by their indices below:

```
[ 0,  1,  2,  3,  4,  5,  6,  7]
[ 8,  9, 10, 11, 12, 13, 14, 15]
[16, 17, 18, 19, 20, 21, 22, 23]
[24, 25, 26, 27, 28, 29, 30, 31]
[32, 33, 34, 35, 36, 37, 38, 39]
[40, 41, 42, 43, 44, 45, 46, 47]
[48, 49, 50, 51, 52, 53, 54, 55]
[56, 57, 58, 59, 60, 61, 62, 63]
```

As in our previous experiments, we feed these patches through a transformer and pool the outputs. Most CIFAR images contain the object being classified (i.e. the signal) in the center patches.
We thus study 3 choices of query: (CORNER) - the embedding of patch 0, which appears *less likely* to include signal; (FOCAL) - averaging the embeddings of the center four patches 27, 28, 35, and 36, which appear *highly likely* to contain signal; and (MEAN) - averaging all patch embeddings as the query, as proposed by Stergiou & Poppe (2023). Both experiments use a transformer with layers=6, dim=512, heads=8, & dropout=0.1. We use 5-fold cross-validation, 150 epochs/fold, batch=512, and LR=1e-4. We report the Top 1 accuracy scores on the holdout test set, averaged across the best models from each fold. Our ClsToken scores are consistent with common open-source implementations of ViT trained from scratch on the CIFAR benchmarks.

```
METHODS      CIFAR 10          CIFAR 100
ClsToken   | 0.7863 ±0.0041 | 0.4892 ±0.0018 |
MaxPool    | 0.8301 ±0.0041 | 0.5547 ±0.0044 |
AvgPool    | 0.8265 ±0.0023 | 0.5633 ±0.0070 |
Ada-CORNER | 0.8240 ±0.0016 | 0.5385 ±0.0044 |
Ada-FOCAL  |*0.8325 ±0.0035*|*0.5730 ±0.0051*|
Ada-MEAN   | 0.8317 ±0.0022 |*0.5731 ±0.0073*|
```

We observe that the Ada-FOCAL and -MEAN queries outperform in both cases. Interestingly, Ada-FOCAL and MaxPool perform relatively better on C-10, while Ada-MEAN and AvgPool perform relatively better on C-100. With C-10, there are only 10 highly distinct classes, so the object at the center of the image may be relatively more important for discriminating between classes. For C-100, the data is the same, but there are 100 highly similar classes, and border patches may contain subtle features that are important for discriminating between related classes where they were otherwise distracting. Meanwhile, the Ada-CORNER query underperforms in both experiments, which aligns with the prediction that choosing a query with little to no signal will lead to worse performance.
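For concreteness, the three query strategies can be sketched as follows (illustrative code assuming row-major patch embeddings on the 8x8 grid shown above; not our training code):

```python
import numpy as np

def make_query(patch_emb, strategy):
    """patch_emb: (64, d) array of patch embeddings, row-major on the
    8x8 grid. Returns a (d,) query vector for attention-based pooling."""
    if strategy == "corner":                        # Ada-CORNER: patch 0, likely noise
        return patch_emb[0]
    if strategy == "focal":                         # Ada-FOCAL: center four patches
        return patch_emb[[27, 28, 35, 36]].mean(axis=0)
    if strategy == "mean":                          # Ada-MEAN: average of all patches
        return patch_emb.mean(axis=0)
    raise ValueError(f"unknown strategy: {strategy}")
```

Each strategy trades off how much signal the query is expected to carry, which is exactly what the results above probe.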
Since a noise query would yield higher dot product relations with other noise vectors, this likely results in higher attention weights for irrelevant patches and lower weights for signal-containing patches, a sort of worst-case for AdaPool. This violates our assumption of linearly separable signal and noise relation scores, implying the error from an optimal weighting would not be guaranteed to fall within the bounds we presented *for this choice of query*. It is worth noting that the standard ClsToken approach still underperforms this and all other methods by a significant margin. This experiment demonstrates how one might use intuition, data analysis, and experimentation to find a competitive query. If one cannot be found, Avg/Max may be strong alternatives *if* the noise level of the dataset is constant. While the explicit SNR level in CIFAR is not known, the key takeaways are that the choice of AdaPool’s query (1) can be used to investigate the way signal and noise vectors are distributed, (2) can lead to superior performance on real-world data, and (3) may underperform with a poor choice of query. We really appreciate your comments and insightful questions! --- Rebuttal Comment 1.1: Comment: I thank the reviewers for their in-depth rebuttal and for commenting on Przewiezlikowski et al. and how it relates to this work. In light of the additional clarity from the rebuttal, I will raise my score.
HetSSNet: Spatial-Spectral Heterogeneous Graph Learning Network for Panchromatic and Multispectral Images Fusion
Accept (poster)
Summary: This paper proposes a spatial-spectral heterogeneous graph learning network for panchromatic and multispectral image fusion. Specifically, the authors explore the pansharpening-specific relationships in the heterogeneous graph structure. Then, they extract the multiple relationship patterns by the designed basic relationship pattern aggregation module. Finally, the unified spatial-spectral representation across different relationships is learned from local and global perspectives. ## update after rebuttal My concerns have been addressed and I would like to recommend acceptance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. The arguments and derivations about the heterogeneous graph architecture related to spatial-spectral relationships appear logically sound. Experimental Designs Or Analyses: Yes. The experimental designs including baseline comparisons and ablation studies are coherent. No major flaws or inconsistencies are found in the experimental setup or analyses. Supplementary Material: Yes. I focus on the pansharpening-specific relationship priors and more quantitative and visualization results in the supplementary material. Relation To Broader Scientific Literature: The previous method [1] only uses GCN to model global relationships in Euclidean space. The authors pioneer a spatial-spectral heterogeneous graph structure in non-Euclidean space for remote sensing pansharpening. [1] When pansharpening meets graph convolution network and knowledge distillation. IEEE Transactions on Geoscience and Remote Sensing. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths 1. The authors design the first spatial-spectral heterogeneous graph structure for remote sensing pansharpening. 2. Building on the spatial-spectral heterogeneous graph structure, the authors further develop both local and global relationship pattern aggregation mechanisms. 3. The paper is well-organized and clearly written.
Weaknesses: 1. In the discussion of the graph construction challenge at line 60, the authors highlight the spectral relationship between LR-MS and PAN images. Why do they not also consider the spatial relationship between these images? 2. What are the advantages of the heterogeneous graph learning mechanism designed by the authors compared with the mechanisms used in previous heterogeneous graph learning networks (e.g., MHGCN)? 3. The roles of local- and global-level aggregations warrant further analysis, and the authors should clarify how these two aspects are balanced. 4. Some pansharpening datasets also contain eight-band remote sensing images. Can the authors provide an experimental comparison on eight-band datasets? 5. Graph structures typically demand substantial computational resources when dealing with large-scale graphs or high-dimensional node features. Therefore, the authors should compare the computational cost of their proposed approach with that of existing methods. Other Comments Or Suggestions: There are some inaccurate expressions in the manuscript, particularly around lines 35 and 200. The authors should carefully review and correct these sections. Questions For Authors: Please refer to the Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and encouraging review. The point-to-point answer is provided below. **A1.** We analyze the spatial relationship of a large number of PAN/LR-MS/target HR-MS (GT) image pairs in the Appendix. As can be seen from Fig. 3 in the Appendix, the spatial histogram of each spectral band of LR-MS is extremely different from that of each spectral band of GT. In contrast, the spatial histogram of PAN is very close to that of each spectral band of GT. Therefore, we can obtain a spatial-property-related relationship prior, i.e., modeling only the spatial relationship of PAN images can reconstruct the required spatial property of the target HR-MS images. **A2.** The designed spatial-spectral heterogeneous graph can be regarded as an attributed multiplex heterogeneous graph. Existing methods do not explicitly model multiplex heterogeneous structures, but rather regard them as a linear superposition of relations. For example, MHGCN realizes message passing in multiplex heterogeneous networks by automatically capturing useful relation-aware meta-paths, which amounts to a simple superposition of relations that ignores semantic information. In fact, when learning the spatial-spectral heterogeneous graph, one should not treat it as a linear superposition of individual relations, but should focus on its high-level semantic information. The proposed method can express rich high-level semantic information by extracting basic spatial-spectral relationship patterns and performing multi-level relationship aggregation. The ablation experiments in the manuscript (Tab.3) also demonstrate the effectiveness of our learning mechanism. **A3.** In this paper, we use the contrastive learning loss (InfoNCE loss [1]) to balance the local-wise and global-wise aggregation branches, as denoted in Eq.8.
In Eq.8, the positive sample pairs form the numerator, i.e., $\exp(\boldsymbol{s}(\mathbf{H}_{\boldsymbol{Local},i},\mathbf{H}_{\boldsymbol{Global},i}))$, which enforces the local features to be similar to their corresponding global features; the negative sample pairs form the denominator, i.e., $\sum_{j\in \mathrm{V}}\exp(\boldsymbol{s}(\mathbf{H}_{\boldsymbol{Local},i},\mathbf{H}_{\boldsymbol{Global},j}))$, which contains the global features of all samples ($j\ne i$). Negative sample pairs force the model to learn discriminative local features while requiring the global features to be sufficiently discriminative. The temperature coefficient $\tau$ in Eq.8 controls the relative weight of the local-wise and global-wise branches. In the early stages of training, the model may focus more on local details; as training progresses, global features gradually dominate, guiding the local features toward semantic consistency. **A4.** The experimental results on the eight-band WorldView-3 dataset are shown in the table below. We will add these results in the final version.
| Methods | PSNR ↑ | SSIM ↑ | SCC ↑ | SAM ↓ | ERGAS ↓ |
|------------|--------|--------|-------|-------|---------|
| SFIM | 21.415 | 0.542 | 0.721 | 0.115 | 8.855 |
| BROVEY | 22.506 | 0.547 | 0.733 | 0.571 | 8.233 |
| GS | 28.417 | 0.693 | 0.812 | 0.102 | 6.799 |
| SRPPNN | 30.304 | 0.918 | 0.956 | 0.078 | 3.188 |
| DCFNet | 30.624 | 0.924 | 0.958 | 0.072 | 3.092 |
| CTINN | 31.856 | 0.952 | 0.960 | 0.066 | 2.742 |
| SFIINet | 30.597 | 0.924 | 0.956 | 0.074 | 3.080 |
| Hyperformer | 29.943 | 0.913 | 0.945 | 0.081 | 3.426 |
| MDCUN | 31.299 | 0.953 | 0.956 | 0.066 | 2.930 |
| BiMPan | 30.419 | 0.941 | 0.952 | 0.069 | 2.912 |
| LGTEUN | 32.219 | 0.955 | 0.950 | 0.061 | 2.629 |
| MSDDN | 30.850 | 0.926 | 0.955 | 0.073 | 2.995 |
| FAMENet | 30.990 | 0.929 | 0.954 | 0.070 | 2.953 |
| GPCNet | 30.543 | 0.922 | 0.955 | 0.077 | 3.105 |
| **Ours** | **32.589** | **0.958** | **0.963** | **0.059** | **2.524** |

**A5.** We have calculated the space complexity of our proposed model in the manuscript (Lines 844-851). As shown in the table below, we present the computational cost of our method and all comparison methods.

| Methods | SRPPNN | DCFNet | CTINN | SFIINet | Hyperformer | MDCUN | BiMPan | LGTEUN | MSDDN | FAMENet | GPCNet | Ours |
|---|--------|--------|-------|-------|---------|--------|--------|--------|---------|-------|---------|---------|
| Params | 1.711 | 2.905 | 0.039 | 0.085 | 43.938 | 0.098 | 0.396 | 0.202 | 0.011 | 0.091 | 0.087 | 0.167 |

**A6.** Thank you for pointing it out. We will carefully correct the descriptions in Lines 35 and 200.

[1] Chen T, et al. A simple framework for contrastive learning of visual representations. ICML, 2020.
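As a supplement to A3, the local-global InfoNCE balance of Eq.8 can be sketched as follows (a simplified form with illustrative tensor names, not the exact manuscript implementation; this simplified variant includes the positive pair in the denominator):

```python
import numpy as np

def info_nce(h_local, h_global, tau=0.1):
    """Simplified InfoNCE over node features.
    h_local, h_global: (N, d) L2-normalized feature matrices; matching
    (i, i) pairs are positives, all (i, j) pairs form the denominator."""
    sim = (h_local @ h_global.T) / tau              # s(., .) scaled by temperature
    log_denom = np.log(np.exp(sim).sum(axis=1))     # sum over all global features j
    return float(np.mean(log_denom - np.diag(sim))) # mean negative log-softmax
```

When the local and global features of each node agree, the loss approaches zero; misaligned pairs drive it up, which is the mechanism that pulls the two aggregation branches toward a consistent representation.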
Summary: For pansharpening, recent modeling frameworks are relatively rigid and have limitations when dealing with irregular ground objects in remote sensing images. To address this issue, this paper proposes a spatial-spectral heterogeneous graph learning network named HetSSNet. It constructs a heterogeneous graph structure that explicitly describes the specific relationships between PAN and MS images. A basic relationship pattern generation module is designed to extract multiple fundamental spatial-spectral relationship patterns from the constructed heterogeneous graph. To automatically capture local information and globally relevant information across different complex relationship patterns between nodes, it designs a relationship pattern aggregation module from local and global perspectives. The experimental results demonstrate that the model achieves satisfactory performance on multiple datasets. Claims And Evidence: The methods and experimental design of the paper are clearly presented and well-structured. Methods And Evaluation Criteria: The datasets used are classic in the field of pansharpening, and the full-resolution and reduced-resolution experiments are also commonly conducted in this field. Theoretical Claims: The authors could further explain why CNN/Transformer-based methods are rigid and why heterogeneous-graph-based methods are more flexible. Experimental Designs Or Analyses: The authors conduct ablation experiments to validate the rationality of the Basic Relationship Pattern Generation module and the Relationship Pattern Aggregation module. Supplementary Material: I have examined Section B, which discusses the analysis of relational priors; Section D, which provides supplementary experimental results; and Section E, which focuses on the complexity analysis.
Relation To Broader Scientific Literature: The paper provides examples and introduces CNN-based and Transformer-based methods, such as DCFNet and CTINN, pointing out that their modeling is overly rigid. To address this issue, the paper introduces its innovative approach. Essential References Not Discussed: I have not seen any similar spatial-spectral heterogeneous graph structure for remote sensing pansharpening. The paper is therefore innovative in this regard. Other Strengths And Weaknesses: Strengths: 1. The article is well-structured and logically coherent, progressing systematically from problem statement to method design and experimental validation. 2. The authors analyze pansharpening data in the appendix section, and obtain the priors of task-related spatial-spectral relationships. The paper proposes a novel spatial-spectral heterogeneous graph to model these relationships in a more flexible way. 3. The experimental design is rigorous, utilizing three widely recognized datasets and incorporating both comparative and ablation studies for validation. 4. The proposed method extracts multiple basic relationship matrices based on edge types and aggregates these basic relationships from local and global perspectives to obtain a unified spatial-spectral relationship representation. Weaknesses: 1. The authors say that recent CNN/Transformer methods are rigid in modeling relationships in pansharpening. What does this "rigidity" mean for remote sensing images? Further discussion is recommended. 2. In definition 2, the authors give the definition of the basic relationship pattern. It is recommended that the authors give an example to illustrate the basic relationship pattern in the constructed spatial-spectral heterogeneous graph. 3. The related work needs improvement. In the current version, it should describe the current mechanisms for heterogeneous graph learning. Other Comments Or Suggestions: Please refer to the Weaknesses.
Questions For Authors: Please refer to the Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your thoughtful and encouraging review. The point-to-point answer is provided below. **A1.** The convolution kernel of CNN slides on a regular grid and cannot adapt to curved, broken, radial and other remote sensing image features. The core of Transformer is global self-attention, and fixed-size patch splitting (such as 16$\times$16) will break the structure of irregular features (such as rivers and roads). In fact, the rigidity of CNN/Transformer comes from its regular modeling assumption of Euclidean space, which is inconsistent with the non-Euclidean data characteristics of remote sensing image features (such as mountains, rivers, etc.). **A2.** We give two examples of the basic relationship pattern. For example, $\mathbf{P}\xrightarrow{spectral}\mathbf{L}$ is a basic relationship pattern between PAN and LR-MS nodes, which represents that there is only spectral interaction between the PAN node and the LR-MS node. $\mathbf{P}\xrightarrow{spatial/spectral}\mathbf{L}$ is also a basic relationship pattern, where there are spatial interaction and cross-modal spectral interaction between the PAN node and the LR-MS node. **A3.** We will add related work on heterogeneous graph learning strategies in Sec. 2.2. For example, GATNE [1] learns the representation of nodes under each specific relationship and then aggregates them to obtain the final node representation. DualHGNN [2] introduces two hypergraphs into the representation learning of multiplex heterogeneous networks, and learns node embeddings using spectral hypergraph convolutional networks. FAME [3] realizes message passing in multiplex heterogeneous networks by automatically capturing useful relation-aware meta-paths. [1] CEN Y, et al. Representation Learning for Attributed Multiplex Heterogeneous Network. SIGKDD, 2019. [2] XUE H, et al. Multiplex Bipartite Network Embedding using Dual Hypergraph Convolutional Networks. arXiv, 2021. [3] LIU Z, et al. 
Fast Attributed Multiplex Heterogeneous Network Embedding. CIKM, 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the response. All my comments have been well addressed; I have no further questions. I believe this paper offers valuable insights to the community, with clear contributions, and I therefore decide to upgrade my rating to 5.
Summary: In this paper, the authors propose a spatial-spectral heterogeneous graph learning network, termed HetSSNet. Specifically, it segments each band of the pansharpening data into non-overlapping patches. By leveraging graph structures, it captures intrinsic relationships among these patches. Furthermore, it employs contrastive learning to enrich feature representations and enhance the overall performance of pan-sharpening. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No, there is no theoretical proof in the article. Experimental Designs Or Analyses: Yes, the experiment complies with the general settings. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: The main contribution of the paper is the introduction of the graph neural network structure in the field of pan-sharpening. This provides insights into improving the network structure in related fields. However, there are not many related articles in recent years, and it may be widely explored by the community in the future. Essential References Not Discussed: No, all related pan-sharpening methods based on graph neural networks have been discussed. Other Strengths And Weaknesses: ### Strengths: [+] Based on statistical analysis, the authors constructed a graph neural network incorporating a spatial-spectral heterogeneous graph construction module, a basic relationship pattern generation module, and a relationship pattern aggregation module to enhance the performance of pansharpening. [+] The authors theoretically analyzed the computational complexity of the proposed method. [+] The authors utilize graph structures to capture long-range dependencies within images, which is a relatively novel approach in the field of pansharpening. ### Weaknesses: [-] The authors claim that the method is non-Euclidean but abandon this concept in the implementation, making the theoretical and intuitive justification unconvincing. 
[-] In Section 3.3, the authors mention, "We divide the image into N overlapping patches and transform each patch of the PAN image into a feature vector." However, the details of this transformation are unclear. [-] The ablation experiments in the paper are insufficient. Specifically, in experimental settings (c) and (d), the contrastive loss cannot be applied due to the absence of positive and negative sample pairs. After combining the two modules, the authors do not adequately discuss the role of the contrastive loss, both qualitatively and quantitatively. [-] The running efficiency, number of parameters, and floating-point operations (FLOPs) of the method are not reported. [-] The relationship between the final node representation $H$ and the network output is not clearly explained. [-] The article employs comparative methods that are somewhat outdated, focusing primarily on approaches developed before 2023. Additionally, the references need to be updated to include more recent research and developments in the field. Other Comments Or Suggestions: [-] In Section 3.1, the symbol "U" in the defined graph $G(V,E,U)$ is not defined. [-] In Section 2.2, the description "Non-European" appears inappropriate or unclear. [-] In line 176 and line 182, the width and height of the LRMS image should not be assumed to be equal to those of the PAN image. [-] The distinguishability of the symbols should be improved for clarity. [-] In line 205, $\tilde{A}_l$ lacks a clear definition. Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and encouraging review. The point-to-point answer is provided below. **A1.** The non-Euclidean characteristics of the method are reflected in two aspects: the data structure and relationship modeling. **(a) Spatial-spectral heterogeneous graph construction.** We construct the spatial-spectral heterogeneous graph with two types of nodes (PAN and LR-MS nodes) and heterogeneous edges modeling complex spatial-spectral relationships. The graph topology is dynamically constructed via an adaptive $k$-nearest neighbor algorithm (Line 190-206). Thus, the defined spatial-spectral heterogeneous graph inherently represents a non-Euclidean data structure. **(b) Non-Euclidean relationship modeling.** We design the relationship pattern aggregation module based on a non-Euclidean learning operation (i.e., GCN) (Sec. 3.5). It explicitly respects the graph topology, avoiding the limitations of Euclidean assumptions. **A2.** The convolution operation is applied to the PAN image patches to obtain the corresponding feature vectors. **A3.** Experimental settings (c) and (d) verify the role of local features and global features in contrastive learning. In fact, settings (c) and (d) individually make Eq.8 degenerate into self-contrastive learning with only global/local features. The results show that our cross-view local-global contrastive learning is more effective. We add two sets of ablation experiments: a visual ablation experiment (https://anonymous.4open.science/r/kk3/exp.png), and the ablation experiment on the temperature coefficient $\tau$ in Eq.8 as follows.

|$\tau$|PSNR↑|SAM↓|SSIM↑|QNR↑|$D_{s}$↓|
|--|--|--|--|--|--|
|0.01|48.577|0.025|0.985|0.849|0.118|
|**0.1**|**49.541**|**0.015**|**0.994**|**0.860**|**0.107**|
|0.5|49.311|0.019|0.990|0.856|0.109|
|1.0|48.011|0.029|0.976|0.840|0.125|

**A4.** We present the inference time, number of parameters, and FLOPs of our method. We will add them in the final version. 
| Metric | Time (s/img) | Params (M) | FLOPs (G) |
|--------|--------------|------------|-----------|
| HetSSNet | 0.0327 | 0.167 | 1.927 |

**A5.** The final node representation $\mathbf{H}$ is first mapped to the target dimension through a fully-connected layer, and then refined using a convolutional operation to generate the final HR-MS image. We will add the corresponding description after Eq.9. **A6.** We add three comparison methods published in 2024, i.e., FusionMamba [1], WINet [2] and HFIN [3], as shown in the tables below; these comparison methods will be included in the final version. In addition, we will update the references in Sec.1 and Sec.2.1.

**WV-3:**

|Method|PSNR↑|SSIM↑|SAM↓|ERGAS↓|SCC↑|$D_{\lambda}$↓|$D_{s}$↓|QNR↑|
|-|-|-|-|-|-|-|-|-|
|FusionMamba|38.727|0.872|0.069|5.437|0.934|0.097|0.261|0.752|
|WINet|39.781|0.887|0.056|4.769|0.969|0.132|0.331|0.740|
|HFIN|32.126|0.837|0.083|6.627|0.837|0.142|0.341|0.723|
|**Ours**|40.623|0.905|0.054|4.741|0.972|0.085|0.252|0.792|

**QB:**

|Method|PSNR↑|SSIM↑|SAM↓|ERGAS↓|SCC↑|$D_{\lambda}$↓|$D_{s}$↓|QNR↑|
|-|-|-|-|-|-|-|-|-|
|FusionMamba|34.021|0.881|0.047|2.931|0.904|0.152|0.381|0.697|
|WINet|37.209|0.927|0.032|2.212|0.931|0.142|0.341|0.714|
|HFIN|33.616|0.811|0.050|3.174|0.869|0.211|0.376|0.662|
|**Ours**|37.228|0.931|0.029|2.022|0.951|0.130|0.328|0.742|

**GF-2:**

|Method|PSNR↑|SSIM↑|SAM↓|ERGAS↓|SCC↑|$D_{\lambda}$↓|$D_{s}$↓|QNR↑|
|-|-|-|-|-|-|-|-|-|
|FusionMamba|46.242|0.976|0.027|1.278|0.972|0.074|0.121|0.756|
|WINet|49.405|0.984|0.016|0.968|0.977|0.060|0.108|0.855|
|HFIN|41.938|0.966|0.039|2.534|0.894|0.136|0.158|0.722|
|**Ours**|49.541|0.994|0.015|0.955|0.990|0.061|0.107|0.860|

**A7.** The matrix $\mathrm{U}$ is constructed by stacking the feature vectors of all nodes in the spatial-spectral heterogeneous graph, where each row is the feature vector associated with node $v_i$ (Line 150-152). 
For example, the corresponding feature vectors of PAN/LR-MS image patches are used to form the matrix $\mathrm{U}$ (Line 214). **A8.** "Non-European" is a typo; it should be "Non-Euclidean". We will correct it in Line 135. **A9.** During the implementation of pansharpening models, the LR-MS image needs to be upsampled to the size of the PAN image before fusion [2]. In addition, the dimension definition of the LR-MS image follows published pansharpening work [2]; thus, in the method section, it is reasonable to define it with the same length and width as the PAN image. **A10.** Thanks for your suggestion, we will correct them. **A11.** It is a typo; it should be "$\tilde{\mathbf{A}}_{\boldsymbol{Local}}$" generated from Eq.1. We will correct it in Line 205. [1] Peng S, et al. FusionMamba: Efficient remote sensing image fusion with state space model. TGRS, 2024. [2] Tan J, et al. Revisiting Spatial-Frequency Information Integration from a Hierarchical Perspective for Panchromatic and Multi-Spectral Image Fusion. CVPR, 2024. [3] Zhang J, et al. Pan-Sharpening With Wavelet-Enhanced High-Frequency Information. TGRS, 2024. --- Rebuttal Comment 1.1: Comment: The authors provide detailed responses to each of the raised issues, and incorporate a substantial number of experiments. These address my concerns and simultaneously improve the quality of the paper. Furthermore, taking into consideration the opinions of other reviewers, I will raise my rating.
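As background for the temperature-coefficient ablation (A3) in the rebuttal above, the following is a generic InfoNCE-style cross-view contrastive loss with temperature $\tau$, written in NumPy. This is only an illustrative sketch of how a temperature enters such a loss; the function name, dimensions, and data are invented and it is not the paper's exact Eq.8.

```python
import numpy as np

def info_nce(z_local, z_global, tau=0.1):
    """Generic InfoNCE-style contrastive loss with temperature tau.

    Row i of z_local and row i of z_global form a positive pair;
    all other rows act as negatives. Illustrative sketch only,
    not the paper's Eq.8.
    """
    # L2-normalize both views so similarities are cosine similarities.
    a = z_local / np.linalg.norm(z_local, axis=1, keepdims=True)
    b = z_global / np.linalg.norm(z_global, axis=1, keepdims=True)
    logits = a @ b.T / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

rng = np.random.default_rng(0)
z_local = rng.normal(size=(8, 16))               # hypothetical local features
z_global = z_local + 0.1 * rng.normal(size=(8, 16))  # correlated global view
loss_sharp = info_nce(z_local, z_global, tau=0.1)
loss_flat = info_nce(z_local, z_global, tau=1.0)
```

A smaller $\tau$ sharpens the softmax over negatives, which is consistent with the ablation above showing that performance is sensitive to the choice of $\tau$.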
Summary: Remote sensing pansharpening involves fusing panchromatic (PAN) images with low-resolution multi-spectral (LR-MS) images to produce high-resolution multi-spectral (HR-MS) images. Traditional methods like CNN and Transformer treat images as grids of pixels in Euclidean space, which struggle with irregular ground objects in remote sensing. Graphs offer a more flexible structure but face challenges in modeling spatial-spectral properties: 1) creating a customized graph structure for spatial-spectral relationships, and 2) learning a unified spatial-spectral representation. To overcome these, the authors propose HetSSNet, a spatial-spectral heterogeneous graph learning network. HetSSNet constructs a heterogeneous graph tailored for pansharpening, extracts multiple relationship patterns, and aggregates these patterns to learn a unified spatial-spectral representation from both local and global perspectives. Extensive experiments show that HetSSNet outperforms existing methods and generalizes well. Claims And Evidence: Yes Methods And Evaluation Criteria: In order to solve the rigid modeling of CNN/Transformer, the authors introduce the heterogeneous graph to establish complex spatial-spectral relationships; this approach is interesting and novel. Experiments on three pansharpening datasets demonstrate consistent improvements over CNN-based/Transformer-based methods, proving that the proposed method has the potential to break through the limitation of the current modeling framework. Theoretical Claims: The spatial-spectral relationship priors for the pan-sharpening task are verified and analyzed on a large amount of pansharpening data. Experimental Designs Or Analyses: The experimental design is reasonable and provides competitive results on reduced- and full-resolution datasets. Supplementary Material: There is no supplementary material. 
Relation To Broader Scientific Literature: Recent learning-based methods often rely on the Euclidean space assumption, which may not fully capture the complex geometric structure of remote sensing data. The non-Euclidean framework proposed in this paper draws on the progress of graph learning, which has shown potential in dealing with irregular data. This contribution extends the application of non-Euclidean methods to the field of pansharpening, addressing the urgent need for more flexible and adaptive spatial-spectral modeling. Essential References Not Discussed: It is recommended that the authors discuss the GCN-based panchromatic sharpening method, GPCNet, published in TGRS 2024. Other Strengths And Weaknesses: Strengths: 1. The paper analyzes the two spatial-spectral relationship priors in pansharpening, providing a good basis for subsequent modeling. 2. The paper uses a heterogeneous graph to construct complex and diverse spatial-spectral relationships between LR-MS and PAN images, which is an interesting solution. In addition, to learn a unified spatial-spectral representation, the proposed method extracts basic relationship patterns and learns them from local and global perspectives. 3. Comprehensive experiments on WorldView-3, GaoFen-2 and QuickBird datasets demonstrate consistent improvements over state-of-the-art pansharpening methods, validated by ablation studies analyzing individual HetSSNet components. 4. Unlike other CNN/Transformer-based methods, the proposed method attempts to design a new pansharpening modeling framework in non-Euclidean space. This study shows a commendable beginning. 5. The article is well written and the figures are clear. Weaknesses: 1. Why do CNN and Transformer structures lead to redundancy, and do Mamba or Diffusion structures also suffer from the above drawbacks? 
Please provide a more in-depth explanation of the following point: "Since these ground objects are usually not quadrate whose shape is irregular, the commonly used grid structures in the modeling architecture like CNN and Transformer are redundant and inflexible to process them". 2. In the Local-wise Aggregation, the authors use only a single-layer GCN to learn node representations with interaction information. The motivation for this approach is not clearly explained. 3. The authors should discuss the GCN-based pansharpening method (i.e., GPCNet [1]) in the introduction section to make it clearer how it differs from the proposed method. In addition, it is recommended to add an analysis of recent heterogeneous graph learning strategies (e.g., meta-path sampling) in the related work section. 4. This paper uses the basic relationship modeling strategy. Meta-path sampling is a commonly used heterogeneous graph modeling mechanism. Why not use the meta-path sampling mechanism? What are the advantages of the learning strategy used in this paper? 5. The authors should add experimental results on an eight-band pansharpening dataset. Other Comments Or Suggestions: 1. The table should be expanded to include the sources or origins (provenance) of the different methods used in the comparison. 2. Avoid using abbreviations like "i.e." in the abstract and conclusion sections. 3. Section 3.6 contains a typo ("Optimization " instead of " optimization "). Questions For Authors: In the section on pansharpening-specific relationship priors, the comparison of spectral band distributions is currently limited to neighboring spectra, such as band1 and band2. Could the analysis be extended to explore distribution comparisons across broader spectral ranges, for example, between band1 and band3, or band1 and band4? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and encouraging review. The point-to-point answer is provided below. **A1.** The convolution kernel of CNN assumes that local features are spatially invariant, but the morphology of objects (such as rivers and roads) may change nonlinearly with the terrain. For example, the curvature of mountain roads is different from that of plains. It is difficult for a fixed-size convolution kernel to adaptively adjust the receptive field, and it may perform redundant calculations on irrelevant areas (such as the background). Transformer captures long-range dependencies through self-attention, but the correlation of objects in remote sensing images often has local-global heterogeneity (such as regular local structures of buildings, but random distribution of vegetation). Global calculations introduce noise from irrelevant pixels and are computationally expensive. Mamba and Diffusion also have the problem of rigid modeling. Specifically, Mamba's default scanning path (such as from left to right, from top to bottom) is not suitable for anisotropic objects (such as diagonal roads or radial urban layouts). Diffusion models are rigid because the denoising network (usually a U-Net) still relies on the local inductive bias of convolution. **A2.** We use a two-layer GCN. In fact, the over-smoothing problem prevents the GCN model from stacking more aggregation layers. The purpose of this work is not to solve the over-smoothing problem of GCN, but to use a GCN with a suitable number of layers to learn the designed spatial-spectral heterogeneous graph. Ablation experiments in the manuscript have shown that even a small number of GCN layers can achieve promising performance (Tab. 4). **A3.** GPCNet uses a full-graph modeling strategy, in which all nodes and edges are of the same type and all nodes have relationships. Pansharpening involves the fusion of PAN and LR-MS images, which have heterogeneous characteristics. 
Therefore, the full-graph structure is sub-optimal for pansharpening. The designed spatial-spectral heterogeneous graph can explicitly model the heterogeneity of PAN and LR-MS, flexibly fuse spatial-spectral relationships, and adapt to local and global modeling requirements. In addition, we will add the analysis of recent heterogeneous graph learning strategies in the final version (please refer to **A4**). **A4.** Meta-path sampling requires pre-setting fixed paths and cannot flexibly adapt to the spectral-spatial coupling of different regions; it requires sampling all possible paths, which has high computational complexity (especially for long paths); and each path carries only a single semantic, making it difficult to integrate the multi-objective constraints of spectral fidelity and spatial enhancement. In contrast, we use the basic relationship pattern strategy to model multiple edge types at the same time and dynamically weight the different relationships. The ablation model (Model (a) in Tab.3) in the manuscript verifies this. **A5.** The experimental results on the eight-band WorldView-3 dataset are shown in the table below, which shows that our method achieves promising results on the eight-band dataset. We will add these experimental results in the final revision. 
| Methods | PSNR ↑ | SSIM ↑ | SCC ↑ | SAM ↓ | ERGAS ↓ |
|---------|--------|--------|-------|-------|---------|
| SFIM | 21.415 | 0.542 | 0.721 | 0.115 | 8.855 |
| BROVEY | 22.506 | 0.547 | 0.733 | 0.571 | 8.233 |
| GS | 28.417 | 0.693 | 0.812 | 0.102 | 6.799 |
| SRPPNN | 30.304 | 0.918 | 0.956 | 0.078 | 3.188 |
| DCFNet | 30.624 | 0.924 | 0.958 | 0.072 | 3.092 |
| CTINN | 31.856 | 0.952 | 0.960 | 0.066 | 2.742 |
| SFIINet | 30.597 | 0.924 | 0.956 | 0.074 | 3.080 |
| Hyperformer | 29.943 | 0.913 | 0.945 | 0.081 | 3.426 |
| MDCUN | 31.299 | 0.953 | 0.956 | 0.066 | 2.930 |
| BiMPan | 30.419 | 0.941 | 0.952 | 0.069 | 2.912 |
| LGTEUN | 32.219 | 0.955 | 0.950 | 0.061 | 2.629 |
| MSDDN | 30.850 | 0.926 | 0.955 | 0.073 | 2.995 |
| FAMENet | 30.990 | 0.929 | 0.954 | 0.070 | 2.953 |
| GPCNet | 30.543 | 0.922 | 0.955 | 0.077 | 3.105 |
| **Ours** | **32.589** | **0.958** | **0.963** | **0.059** | **2.524** |

**A6.** Thanks for your suggestion; we will add the sources of the comparison methods in Tab.2 and Tab.6. **A7.** Thanks for your suggestion; we will remove these abbreviations and revise the corresponding descriptions in the final version. **A8.** Thank you for pointing it out, we will correct it. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. The authors have addressed my concerns and I recommend acceptance.
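For readers unfamiliar with the shallow GCN aggregation referenced in A2 above, here is a generic two-layer GCN sketch in NumPy. All dimensions, weights, and the adjacency matrix are made-up illustrations under the standard symmetric-normalization formulation, not the authors' implementation.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One symmetric-normalized GCN layer: relu(D^-1/2 A_hat D^-1/2 H W).

    A_hat = A + I adds self-loops so each node keeps its own features.
    Generic sketch, not the authors' code.
    """
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)                      # degrees (>= 1 after self-loops)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
N, F, F1, F2 = 6, 8, 16, 4                     # hypothetical node/feature sizes
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.maximum(A, A.T)                         # undirected adjacency
H0 = rng.normal(size=(N, F))                   # initial node features
W1, W2 = rng.normal(size=(F, F1)), rng.normal(size=(F1, F2))
# Two layers keep the model shallow, which avoids over-smoothing.
H2 = gcn_layer(A, gcn_layer(A, H0, W1), W2)
```

Stacking only two such layers mirrors the rebuttal's point that a small number of aggregation layers is sufficient while sidestepping over-smoothing.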
Autonomy-of-Experts Models
Accept (poster)
Summary: This work proposes Autonomy-of-Experts (AoE) to replace the traditional MoE model. Instead of employing a router to select the experts, AoE computes the internal activations of all experts and selects the best ones to proceed. The authors conduct pre-training experiments to investigate various properties of AoE and demonstrate promising results compared to the conventional MoE. Claims And Evidence: The claims are generally supported. Methods And Evaluation Criteria: Yes Theoretical Claims: - The paper does not contain any theoretical claims Experimental Designs Or Analyses: The experimental designs are sound. The authors conducted pre-training experiments to train AoE and SMoE from scratch, and evaluated them on several downstream tasks. This is the standard experimental setting. Supplementary Material: Yes, I read the code in the supplementary materials and the appendices. Relation To Broader Scientific Literature: The scope of this work is quite limited as it only focuses on the design of the MoE expert selection mechanism. The paper did not discuss how this idea can impact related research disciplines. Essential References Not Discussed: While the AoE architecture design is relatively new, its expert selection mechanism has been observed in a previous study of CompeteSMoE [1], which also proposed selecting the best experts via their activation norms. [1] Pham, Quang, et al. "CompeteSMoE--Effective Training of Sparse Mixture of Experts via Competition." arXiv preprint arXiv:2402.02526 (2024). Other Strengths And Weaknesses: I have two major concerns regarding AoE - First, while using the expert's activation norm for selection has been explored, AoE instead uses the intermediate activation norm. Could the authors provide more justification for why this design could achieve a better result? Specifically, it is not guaranteed that the expert with the highest intermediate norm will also have the highest output norm. 
- Second, the training pipeline for AoE is not clear. Since AoE requires decomposing the expert weight into up and down projections, how are the parameters updated throughout training, and does AoE decompose the expert weight after each update? Other Comments Or Suggestions: N/A Questions For Authors: - Please clarify the key contribution of the expert selection strategy in AoE compared to previous studies such as CompeteSMoE. - Please clarify the advantages of using the intermediate norm for expert selection and the training dynamics of AoE, as elaborated in the Other Strengths and Weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments! We hope our rebuttal helps address your concerns. If so, we would be grateful if you could consider increasing the overall recommendation. --- &nbsp; # Contribution and Novelty Thank you for listing [Pham et al.]. Our paper fundamentally differs from it. We believe our AoE presents a solid novelty and technical contribution. Here are the differences. ### 1. Motivation, Architecture, and Method - AoE identifies and addresses a widely overlooked issue, i.e., the separation between the router's decision-making and the execution of experts. AoE is ***a novel MoE paradigm where the router is removed, allowing the experts to autonomously select themselves.*** - In contrast, Pham et al. ***aim to enhance the router***. They take the experts' output as additional supervision signals for the router logits, which requires a complex training objective. Additionally, because they sometimes compute **every expert's *final* outputs**, this results in ***dense activation, making the model technically not even an SMoE.*** ### 2. Use of Activation Norms - [1] proposed that activation norms can be used to measure the knowledge of modules. Our approach is inspired by [1] and we cite them in Line 147. Pham et al. do not cite this paper or other related interpretability works. ***This perspective is not Pham et al.'s innovation, nor do we claim it as ours.*** - One of our contributions lies in a novel MoE paradigm that selects experts without routers, using only intermediate activations, along with architecture designs for enhanced efficiency. ***This is not simply a matter of selecting which nodes to use for norm calculations or improving results.*** AoE avoids the need to compute every expert's final output, preserving the sparse activation characteristic of SMoE and making AoE significantly more efficient and practical. ### 3. Efficiency - AoE achieves pre-training throughput similar to that of an SMoE. - Pham et al. 
did not report efficiency results. Since their model is sometimes densely activated, assuming the common 8-select-2 setting, ***AoE can be up to four times more efficient during pretraining.*** ### 4. Effectiveness - Pham et al. developed a complex training pipeline and only tested models up to **36M** parameters, with evaluation limited to BPC and PPL metrics. - AoE supports simple end-to-end training, validated at scale with models up to **4B** parameters and evaluated across various downstream tasks. Additionally, ***Pham et al. publicly acknowledged that their complex router enhancement cannot be reproduced***, making it impossible to use it as a baseline for comparison. This is stated in the first line of their README: https://github.com/giangdip2410/CompeteSMoE We would be truly grateful if you could kindly re-assess our novelty and contribution. ### References [1] Transformer Feed-Forward Layers Are Key-Value Memories, EMNLP 2021 &nbsp; --- &nbsp; # Response to Concern 1 Most of this concern has been addressed in our initial response. Here, we address your question that "the expert with the highest intermediate norm is not guaranteed to have the highest output norm." *A neural network works according to how it is trained. Since AoE is not trained like a traditional MoE, this guarantee is unnecessary.* As mentioned in the last paragraph of Sec.3.1, we train AoE to represent its awareness of its capabilities through the norm of the intermediate node. To support this claim, we present a new experiment: during inference, when using Final Output Norms (FON) to select experts in a pre-trained AoE (Config.10), rather than the intermediate norms used during training, there is a performance drop across all tasks:

||ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG.|
|-|-|-|-|-|-|-|-|-|-|
|AoE|41.16|58.32|36.80|53.04|28.37|32.78|50.61|54.59|44.46|
|AoE (FON)|37.33|56.75|35.41|51.54|27.77|32.00|50.16|54.47|43.18|

This is a valuable question, as many readers who are accustomed to the traditional MoE models may also have similar concerns. We will include this response in the paper. Thank you very much. &nbsp; --- &nbsp; # Response to Concern 2 The word "decompose" in Line 202 might be ambiguous. We mean that we modify the model architecture, replacing the $W_g$ in traditional MoE with two low-rank matrices. The architecture is illustrated in Fig.1(b), which does not require dynamic adjustment during runtime. We will improve the clarity in Line 202. Thank you for your feedback. &nbsp; --- &nbsp; # Response to "Relation To Broader Literature" MoE is the foundation of advanced LLMs, such as DeepSeek-R1, Qwen-2.5-Max, Grok, etc. A deeper understanding of and innovation in MoE could help advance future LLMs. We will highlight the necessity and potential impact of studying MoE in the NLP field. &nbsp; --- If you have further questions, please leave more comments. We are committed to addressing them. Your assessment is very important to us! Thank you very much!
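To make the router-free, norm-based selection discussed in this rebuttal concrete, here is a minimal NumPy sketch. The dimensions, function name, and weight layout are invented for illustration; this is a sketch of the idea (low-rank factorization plus top-k selection by intermediate activation norm), not the authors' actual implementation.

```python
import numpy as np

def aoe_select(x, W_down, W_up, top_k=2):
    """Router-free expert selection sketch (not the authors' code).

    Each expert's gate weight is factorized into W_down @ W_up
    (low-rank). Every expert cheaply computes its low-dimensional
    activation x @ W_down; only the top_k experts with the largest
    activation L2 norms continue the forward pass.
    """
    # Pre-compute every expert's cheap low-rank intermediate activation.
    low = np.einsum("d,edr->er", x, W_down)       # (num_experts, d_low)
    norms = np.linalg.norm(low, axis=-1)          # self-assessed capability
    top = np.argsort(norms)[::-1][:top_k]         # experts select themselves
    # Only winners finish the forward pass by expanding with W_up.
    hidden = {int(e): low[e] @ W_up[e] for e in top}
    return top, norms, hidden

rng = np.random.default_rng(0)
d_model, d_low, d_ff, E = 16, 4, 32, 8            # hypothetical sizes
x = rng.normal(size=d_model)
W_down = rng.normal(size=(E, d_model, d_low))
W_up = rng.normal(size=(E, d_low, d_ff))
top, norms, hidden = aoe_select(x, W_down, W_up, top_k=2)
```

Note how the low-rank factorization keeps the pre-computation over all experts cheap, while the expensive expansion with `W_up` happens only for the selected experts, preserving sparse activation.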
Summary: The authors introduce a new Mixture of Experts (MoE) paradigm called Autonomy-of-Experts (AoE), where experts independently decide whether to process inputs. The foundation of AoE lies in the understanding that an expert can gauge its ability to effectively handle a token by observing its internal activations. In AoE, routers are eliminated; experts pre-calculate internal activations for inputs and are ranked according to their activation norms. Only the highest-ranking experts continue with the forward pass, while the others are terminated. The overhead associated with pre-computing activations is mitigated through low-rank weight factorization. This method of self-assessment followed by comparison with peers leads to enhanced expert selection and effective learning. Claims And Evidence: Yes, the claims appear to be substantiated by the evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical analysis Experimental Designs Or Analyses: Experimental design and analysis appears to be valid. Supplementary Material: Yes, skimmed through the supplementary material and code. Relation To Broader Scientific Literature: The contributions are contextualized properly in the context of broader scientific literature. The idea of using internal activations of experts in an efficient manner for pre-training is novel in the context of Sparse MoEs. Essential References Not Discussed: Related works that are essential to understanding the (context for) key contributions of the paper are discussed. Other Strengths And Weaknesses: Strengths: - The paper is written clearly - The idea of using internal activations of experts in an efficient manner is novel - The pre-training experiments and ablation studies are thorough. 
Weaknesses: - Given there is a 3% difference in throughput between MoE and AoE, it would be interesting to see whether the baselines should be allowed to pre-train for additional steps (for a fair comparison in wall-clock time) and whether some of the gain still holds. - Typically, multiplicative noise is added to the input before computing gate scores, e.g., in the Switch Transformer paper, which can improve performance. Can the authors compare against that version of the gate? Other Comments Or Suggestions: None Questions For Authors: See weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive suggestions and valuable comments! We hope our rebuttal helps address your concerns. If so, we would be grateful if you could consider increasing the overall recommendation. &nbsp; --- &nbsp; # If the baselines should be allowed to pre-train for additional steps We would like to clarify that in our paper, we ensured fairness by using the same model size and training on the same number of tokens. The difference in throughput (tokens processed per second) results in slightly longer training times for AoE models but does not indicate that AoE has learned additional knowledge. If baselines are trained for additional steps, they would acquire knowledge that AoE does not. If you have any further experimental suggestions, we would be happy to discuss or explore them! &nbsp; --- &nbsp; # To train Switch Transformer with multiplicative noise Thank you for your valuable advice. We trained a traditional MoE model with multiplicative jitter noise (0.01), applied to the layer inputs, using 100B tokens with $L_{aux}$. As shown by the Switch Transformer, jitter noise encourages expert exploration, which is beneficial for MoE models, but it still cannot outperform AoE models with $L_{aux}$.

||ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG.|
|-|-|-|-|-|-|-|-|-|-|
|MoE (Config.2)|40.74|58.49|36.13|51.30|28.11|32.67|50.23|51.83|43.68|
|MoE (Config.2 + noise)|40.95|58.43|36.75|51.07|28.34|32.96|50.01|51.72|43.78|
|AoE (Config.10)|41.16|58.32|36.80|53.04|28.37|32.78|50.61|54.59|44.46|

We will include these experiments in our paper. Thank you again for your helpful suggestions!
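As an illustration of the multiplicative jitter noise discussed above (following the Switch Transformer idea of scaling router inputs by a small uniform factor before computing gate scores), the following NumPy sketch uses hypothetical function names and sizes and is not the paper's code:

```python
import numpy as np

def jitter(x, eps=0.01, rng=None):
    """Multiplicative jitter noise (Switch Transformer-style sketch).

    Each element of the router input is scaled by a uniform factor in
    [1 - eps, 1 + eps] before gate scores are computed, encouraging
    exploration across experts during training.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(1.0 - eps, 1.0 + eps, size=x.shape)
    return x * noise

def gate_scores(x, W_router, eps=0.01, rng=None):
    # Softmax over expert logits computed from the jittered input.
    logits = jitter(x, eps, rng) @ W_router
    z = np.exp(logits - logits.max())             # numerically stable softmax
    return z / z.sum()

rng = np.random.default_rng(42)
x = rng.normal(size=8)                            # hypothetical token features
W_router = rng.normal(size=(8, 4))                # hypothetical: 4 experts
p = gate_scores(x, W_router, eps=0.01, rng=rng)
```

At inference time the noise would typically be disabled (eps = 0), so jitter only perturbs routing during training.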
Summary: This paper introduces Autonomy-of-Experts (AoE), a novel approach to Mixture-of-Experts (MoE) models that addresses a critical issue in traditional MoE architectures: the separation between routing decisions and expert execution. In traditional MoE, a router decides which experts process which inputs, creating potential misalignment between routing decisions and expert capabilities. The core innovation of AoE is to eliminate the router entirely and allow experts to autonomously determine whether they should process inputs. I quite like the authors' initial findings on existing pre-trained MoE models that justify the development of AoE, and also the toy illustration of AoE behaviors in the appendix, which is very cute. Overall, the authors justify the claims by extensive pre-training and ablation experiments. Claims And Evidence: The authors' claims are quite clear and supported by convincing evidence. Methods And Evaluation Criteria: The methods proposed in the paper are appropriate for addressing the identified problem: 1. The authors first validate their core insight through preliminary experiments on pre-trained MoE models before developing their full AoE approach. 2. The low-rank factorization to reduce computational overhead is well-motivated and effectively implemented. 3. The evaluation criteria are appropriate. Theoretical Claims: The paper does not present formal theoretical proofs but provides conceptual explanations for why AoE is effective. Experimental Designs Or Analyses: See above. Supplementary Material: The supplementary material includes additional experimental details on alternative expert selection metrics, pre-training with alternative expert selection strategies, and a toy example to provide additional interpretation of AoE's advantages. This material provides useful context and further validates the authors' claims. Relation To Broader Scientific Literature: The paper appropriately situates AoE within the broader MoE literature. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall, I find this paper presents a solid technical improvement, though I have a few remaining questions: 1. The computational overhead analysis focuses on throughput but could discuss training time more explicitly. 2. While AoE outperforms traditional MoE, the performance gains on the 732M parameter models are relatively modest on some tasks. Can you explain this? 3. The paper could benefit from more discussion on whether the insights from AoE could be incorporated into traditional MoE architectures without completely replacing the router. 4. The paper mentions that a smaller d_low may be a lossy approximation when below the true rank of Wg, yet d_low seems to have no effect on the final performance in the 732M model. 5. What about the uncertainty of the accuracies in the tables? Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive suggestions and valuable comments! We hope our rebuttal helps address your concerns. If so, we would be grateful if you could consider increasing the overall recommendation. &nbsp; --- &nbsp; # To discuss training time more explicitly Here are the total machine hours (1 machine = 8 GPUs): ||Machine Hours |-|- |MoE|72.73 |AoE ($d_{low}$=64)|76.15 |AoE ($d_{low}$=128)|76.58 |AoE ($d_{low}$=256)|77.56 |AoE ($d_{low}$=512)|81.34 We will include these results. Thank you for the feedback. &nbsp; --- &nbsp; # Small Model's Performance gains are relatively modest on some tasks There are two key factors to consider: 1. Larger models are stronger. With the same number of tokens, a large AoE shows more noticeable gains over MoE. 2. The "spiral rise" in performance during pre-training. Both AoE and MoE show steady overall improvements, but task-specific performance fluctuates across checkpoints. We show results at 50B, 80B, and 100B tokens (AoE: Config.10, MoE: Config.2) to illustrate this. For example, at 100B tokens, MoE slightly outperforms AoE on PIQA due to a larger gain from 80B to 100B. A similar trend occurs with HELLA from 50B to 80B, though AoE regains the lead at 100B. Despite fluctuations, AoE consistently achieves higher AVG. scores across checkpoints. ### Task Performance over tokens |Tokens|Models|ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG. 
|-|-|-|-|-|-|-|-|-|-|- |50B|MoE|38.76|56.26|35.67|50.12|27.34|33.05|50.23|**50.80**|42.78 ||AoE|**39.27**|**57.13**|**35.98**|**51.70**|**27.40**|**33.32**|**50.87**|50.34|**43.25** |80B|MoE|40.45|57.45|36.39|50.20|**27.93**|33.32|50.28|50.57|43.32 ||AoE|**41.79**|**57.67**|**36.44**|**51.30**|27.83|**34.51**|**50.38**|**51.49**|**43.93** |100B|MoE|40.74|**58.49**|36.13|51.30|28.11|32.67|50.23|51.83|43.68 ||AoE|**41.16**|58.32|**36.80**|**53.04**|**28.37**|**32.78**|**50.61**|**54.59**|**44.46** We'll include detailed task accuracy development plots to highlight the consistent advantages of 732M AoEs. &nbsp; --- &nbsp; # Can AoE insights benefit traditional MoEs? While AoE shows that routers can be removed, we try to use expert norms as labels for router training. The router receives gradient-detached inputs to avoid interfering with AoE learning. During inference, expert selection is performed solely by the router to reduce memory usage. This setup led to a performance drop (50B tokens, $d_{low}$=256, with $L_{aux}$): ### Task Performance |50B tokens|ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG. |-|-|-|-|-|-|-|-|-|- |MoE (Config.2)|38.76|56.26|35.67|50.12|27.34|33.05|50.23|50.80|42.78 |AoE w. router|38.24|56.37|35.88|51.30|27.36|32.32|50.27|50.11|42.73 |AoE (Config.10)|39.27|57.13|35.98|51.70|27.40|33.32|50.87|50.34|43.25 These results suggest that—even with supervision—routers struggle to learn effective expert selection, highlighting the limitation of separating routers and experts. We will discuss this in the paper and hope it inspires further research. Thank you for the thoughtful question. &nbsp; --- &nbsp; # Effects of $d_{low}$ When $d_{low}$ = 64 or 128, AoE with $L_{aux}$ results in lower performance compared to $d_{low}$ = 256 (Config.10). Additionally, as shown in Figure 2, $d_{low}$ = 64 results in slower convergence, similar to traditional MoEs. We also tested an extreme case where $d_{low}$ = 8. 
In this case, the approximation of $W_g$ is too lossy, leading to poor performance and high loss. Due to limited time, we trained with only 50B tokens. We will include this experiment and make our statement more accurate. Thank you for your valuable feedback! ### Task Performance |50B tokens|ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG. |:-:|-|-|-|-|-|-|-|-|- |MoE (Config.2)|38.76|56.26|35.67|50.12|27.34|33.05|50.23|50.80|42.78 |AoE ($d_{low}$=8)|37.67|56.31|34.80|50.36|27.21|32.27|50.08|50.34|42.38 |AoE (Config.10)|39.27|57.13|35.98|51.70|27.40|33.32|50.87|50.34|43.25 ### Loss over Tokens ||MoE|AoE ($d_{low}$=8)|AoE (Config.10) |-|:-:|:-:|:-: |10B|3.59|3.70|3.59 |20B|3.09|3.17|3.08 |30B|2.92|3.02|2.91 |40B|2.84|2.91|2.82 |50B|2.77|2.86|2.75 --- &nbsp; # The uncertainty of the accuracies First, we note that AoE outperforms MoE across various setups presented in our paper, with the testing process using greedy decoding to eliminate generation randomness. For each configuration, we aggregated the outputs from all tasks into a sequence of 1s (correct) and 0s (incorrect), paired with the outputs of traditional MoE (Config. 2), and performed a McNemar test. The improvement of AoE over MoE is significant (p < 0.05) across all configurations in Tables 2, 3, and 4. We will highlight the significance of this improvement.
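The paired significance check described above (per-example 1/0 correctness for AoE paired with MoE, then a McNemar test) can be computed with a standard exact McNemar test; this is a generic sketch, not the authors' evaluation code.

```python
from math import comb

def mcnemar_exact(sys_a, sys_b):
    """Exact two-sided McNemar test on paired 0/1 correctness vectors.
    Only discordant pairs matter; under H0 they split 50/50,
    i.e. Binomial(n01 + n10, 1/2). Returns the p-value."""
    n01 = sum(1 for a, b in zip(sys_a, sys_b) if a == 0 and b == 1)
    n10 = sum(1 for a, b in zip(sys_a, sys_b) if a == 1 and b == 0)
    n = n01 + n10
    if n == 0:
        return 1.0
    tail = sum(comb(n, i) for i in range(min(n01, n10) + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

A system that wins 9 of 10 discordant pairs yields p ≈ 0.021, i.e. significant at the 0.05 level used in the rebuttal.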
Summary: This paper proposes a new scheme to select the expert sublayer in mixture-of-experts (MoE) language models. Rather than employing a router layer to choose the expert that processes an incoming embedding, the method uses a factorized subcomponent of the feed-forward layer to compute an importance score ("norm" in the paper) that is fed to a softmax and top-k to choose the experts to activate. Experiments show that the proposed method outperforms traditional MoE with a router sublayer, with better outcomes when a load-balancing loss is used. ## update after rebuttal: Thank you for your comments on my review! The additional results would be useful for discussing this study in more detail. However, I don't have a strong reason to revise my overall judgment, so I will leave the overall score as it is. Claims And Evidence: The claim of the paper rests on the assumption that the separation between each expert and the router is harmful. The paper says this characteristic is "overlooked", but it contains essentially only empirical results comparing downstream scores of the proposed model, so it is unclear to me whether the feature the paper focuses on is necessary or not. Methods And Evaluation Criteria: The proposed method is very intuitive and should work once a large number of training iterations have been processed (like other MoE models). Factorizing gate layers in the expert layer loses significant information from the expert layer, but it is an acceptable solution for yielding a low-dimensional vector for score calculation if we rely on the references cited at the top-right of p.4. Table 2 also shows its impact is negligible. Theoretical Claims: The proposed method is an empirical method based on the hypothesis that the expert sublayer itself is better at selecting the tokens it should process. Experimental Designs Or Analyses: Evaluations only compare the proposed method with traditional MoE. 
Though the results show better outcomes for the proposed method, it is still unclear whether the proposed method remains better against other strategies for selecting experts (e.g., those cited in the last paragraph of Section 2). Supplementary Material: I briefly checked whether there are comparisons with other methods in the appendices, but none were found. Relation To Broader Scientific Literature: Not sure Essential References Not Discussed: Not sure Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: (1) Please add experiments comparing other expert selection mechanisms. (2) The proposed method introduces factorization of the gate layer in the expert. Could you show some results when another subpart is employed for factorization (especially $W_p$), or other trivial tweaks (e.g., splitting off the first n elements of the intermediate layer to calculate norms)? Code Of Conduct: Affirmed. Overall Recommendation: 3
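The norm-then-top-k selection summarized in this review can be sketched as below. This is illustrative only: the per-expert matrix `W_down`, its shapes, and the function name are assumptions for exposition, not taken from the paper's code.

```python
import numpy as np

def select_experts_by_norm(x, W_down_per_expert, k=2):
    """Router-free expert selection: each expert projects the token
    through its factorized gate matrix (d_model x d_low); the L2 norm
    of that low-rank activation is the expert's score. The top-k
    experts by norm activate themselves; a softmax over the norms
    gives the mixing weights."""
    norms = np.array([np.linalg.norm(x @ W) for W in W_down_per_expert])
    z = np.exp(norms - norms.max())
    weights = z / z.sum()                  # softmax over norms
    topk = np.argsort(-norms)[:k]          # experts with largest norms
    return topk, weights
```

The key contrast with a routed MoE is that no separate router parameters exist: the score comes from each expert's own (factorized) computation.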
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive suggestions and valuable comments! We hope our rebuttal helps address your concerns. If so, we would be grateful if you could consider increasing the overall recommendation. &nbsp; --- &nbsp; # Comparison with more MoE works We trained the Hash-Layer model (Roller et al.) on 100B tokens, using the same hyperparameters and model setup as our other models. Unlike other methods cited in the last paragraph of Section 2, which require domain-labeled data, Hash-Layer does not—hence our decision to include only this method for comparison. Here are the results: ### Task Performance |100B tokens|ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG.| |-|-|-|-|-|-|-|-|-|-| |MoE (Config.2)|40.74|58.49|36.13|51.30|28.11|32.67|50.23|51.83|43.68| |Hash-Layer|41.12|57.62|36.18|50.91|28.82|32.54|50.30|51.03|43.57| |AoE (Config.10)|41.16|58.32|36.80|53.04|28.37|32.78|50.61|54.59|44.46| Our results align with Roller et al. (p.7), who note that "Switch Transformer and Hash perform similarly in multi-layer routing." Hash-Layer's key advantage lies in its balanced load distribution. Our AoE model not only outperforms both Hash-Layer and traditional MoE in downstream tasks, but also demonstrates improved load balancing compared to traditional MoE. &nbsp; --- &nbsp; # Please compare other expert selection mechanisms Thank you for the suggestion. We have added a new baseline—the Hash-Layer—as noted above. In Table 2, we report results using the _top-K token choice_, while Table 3 includes experiments with _top-P token choice_ and _top-K expert choice_ selection mechanisms. We hope these results address your concern. If you have further suggestions for additional baselines, we’d be glad to explore them or discuss more! &nbsp; --- &nbsp; # Try factorizing $W_p$ or other trivial tweaks Thank you for your insightful question. 
We trained new models by factorizing $W_p$ instead of $W_g$ ($d_{low}$ = 256, using 100B tokens with $L_{aux}$), but observed a performance drop. ### Task Performance |100B tokens|ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG.| |-|-|-|-|-|-|-|-|-|-| |MoE (Config.2)|40.74|58.49|36.13|51.30|28.11|32.67|50.23|51.83|43.68| |AoE (Config.10, Factorized $W_p$ )|40.95|58.22|36.54|50.67|28.08|32.97|50.21|53.67|43.91| |AoE (Config.10)|41.16|58.32|36.80|53.04|28.37|32.78|50.61|54.59|44.46| We have a few hypotheses regarding the performance drop: 1) Factorizing $W_p$ might create a bottleneck in this branch, while the activation function in the $W_g$ branch may already act as a bottleneck. Since both pathways in the FFN are constrained in this case, it could lead to the performance drop. If this is the case, we would suggest always factorizing $W_g$ in AoE or encourage future work to further develop expert architectures. 2) Alternatively, the optimal value for $d_{low}$ might change when factorizing $W_p$. Due to time constraints, we cannot provide a definitive explanation at this moment. However, these aspects of model architecture are valuable areas for future research, and we will discuss this point in the paper. We also experimented with computing norms using only the first _n_ elements of the intermediate layer ($n=256$), training with 50B tokens due to time constraints. Compared to standard AoE ($d_{low}=256$) and MoE, this method underperformed because of insufficient activation information for expert selection: ### Task Performance |50B tokens|ARC-E|PIQA|SIQA|WINO|HELLA|MNLI|QNLI|SST2|AVG.| |-|-|-|-|-|-|-|-|-|-| |MoE (Config.2)|38.76|56.26|35.67|50.12|27.34|33.05|50.23|50.80|42.78| |AoE (Split, n=256)|39.14|55.71|36.03|50.75|27.55|33.05|50.27|49.31|42.72| |AoE (Config.10)|39.27|57.13|35.98|51.70|27.40|33.32|50.87|50.34|43.25| We will include these points in the paper, as they offer insights that may be useful for future research. 
Thank you again for your insightful question!
Controlling Neural Collapse Enhances Out-of-Distribution Detection and Transfer Learning
Accept (poster)
Summary: This paper explores the relationship between Neural Collapse (NC), Out-of-Distribution (OOD) detection, and OOD generalization. It shows that strong NC enhances OOD detection but harms generalization, while weaker NC has the opposite effect. To balance these objectives, the authors use a method that controls NC at different neural network layers. Claims And Evidence: The paper claims that NC has an inverse relationship with OOD detection and generalization and that controlling NC at different layers improves both objectives. These claims are supported by empirical evidence demonstrating: - a strong correlation between NC and OOD detection/generalization performance - improved results across multiple datasets when NC is manipulated using entropy regularization and a simplex ETF projector - theoretical justification linking entropy regularization to preventing NC, plus a demonstration of the benefits of an ETF projector for OOD detection Methods And Evaluation Criteria: The proposed method appears to be well-motivated and aligns with the problem at hand. Comparisons with baseline methods are performed on standard OOD benchmarks (e.g., NINCO, CIFAR-100, ImageNet-R). Metrics such as OOD detection error, OOD generalization error, and NC properties (NC1-NC4) are used. A range of architectures (VGG17, ResNet18, ViT) is used to ensure the robustness of the findings across architectures of different natures. Theoretical Claims: The theoretical component of the paper appears to be solid, with well-executed derivations explaining how entropy regularization mitigates NC and how a Simplex ETF projector enforces NC. The proofs seem to be accurate, though further clarity in some steps of the entropy collapse argument could improve readability. Experimental Designs Or Analyses: The experiments appear to be well-structured and executed, with comprehensive comparisons to baselines and ablation studies. 
The encoder with entropy regularization maintains feature diversity for better OOD generalization. The projector with a fixed ETF structure enhances OOD detection by inducing NC, and the projector (_o_ in _f-o-g_) structure appears to be well evaluated. The authors also examine hyperparameter and architectural sensitivity. Supplementary Material: Since the main paper itself appears to be explanatory enough, I only skimmed the appendix and appreciate the additional explanations and experiments. Relation To Broader Scientific Literature: The paper builds on literature in NC, OOD detection, and transfer learning and cites key works and recent advances in ETF-based learning. It extends prior studies by linking NC explicitly to both detection and generalization tasks and proposing a practical mechanism to control NC at different network layers. Essential References Not Discussed: The paper appears to cover most relevant prior work Other Strengths And Weaknesses: - Providing a detailed analysis of computational efficiency and trade-offs could benefit the evaluation of the proposed framework - Discussing potential limitations in scaling the method to deeper architectures with respect to NC could enhance clarity Other Comments Or Suggestions: No comments Questions For Authors: - What is the computational overhead of the proposed method compared to standard classifiers? - Can this approach be extended to other tasks such as detection, i.e., is the proposed approach agnostic to the task at hand? Code Of Conduct: Affirmed. Overall Recommendation: 3
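For reference, the fixed Simplex ETF geometry that the projector discussed in this review is meant to induce can be constructed as below. This is a standard textbook construction (K unit-norm class directions with pairwise cosine −1/(K − 1)), shown in the K-dimensional ambient space; the rotation/embedding into the actual feature dimension is omitted, and this is not the paper's implementation.

```python
import numpy as np

def simplex_etf(num_classes):
    """K-class simplex equiangular tight frame: columns are unit-norm
    and every pair has cosine similarity exactly -1/(K - 1)."""
    k = num_classes
    M = np.sqrt(k / (k - 1)) * (np.eye(k) - np.ones((k, k)) / k)
    return M  # column i is the target direction for class i
```

Freezing such a matrix as the final projector fixes the class geometry in advance, so training can only pull features toward it, which is the sense in which the projector "enforces" NC.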
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We have carefully considered your concerns and tried to address them. Below, we have provided detailed responses to each review separately. # Weaknesses **W1. Is the proposed method computationally efficient?** Yes, the proposed method is computationally efficient. It introduces two additional components compared to the standard DNN architecture and training protocol: 1. Entropy regularization applied at the encoder’s output. 2. A frozen ETF projector (two MLP layers) following the DNN backbone. - For *entropy regularization*, we employ an efficient batch-level nearest neighbor distance computation, which incurs negligible computational overhead during training. - Regarding the *ETF projector*, since it remains frozen and does not undergo gradient updates, it does not introduce any noticeable training costs beyond those of the baseline DNN. To quantify this, when training DNNs (VGG17, ResNet18/34, ViT-T/S) on ImageNet-100 (ID dataset) for 100 epochs using four NVIDIA RTX A5000 GPUs, both our method and the baseline require the same training time. Moreover, in terms of FLOPs (floating-point operations), both our method and the baseline require the same amount of computation. This indicates that our method does not require higher computational costs than standard DNNs. Below, we report the FLOPs and training time: | Model | Compute (FLOPs $\times 10^{15}$) | Time (Hours) | |----------|----------|----------| | VGG17 | 4.96 | 5.13 | | VGG17+Ours | 4.96 | 5.13 | | ResNet18 | 0.46 | 2.28 | | ResNet18+Ours | 0.46 | 2.28 | | ResNet34 | 0.93 | 2.35 | | ResNet34+Ours | 0.93 | 2.35 | | ViT-T | 0.27 | 1.05 | | ViT-T+Ours | 0.27 | 1.05 | | ViT-S | 1.07 | 1.70 | | ViT-S+Ours | 1.07 | 1.70 | We will include this comparison in the final paper and appreciate the reviewer’s suggestion to provide this computational analysis. **W2. 
Does the proposed method scale to deeper architectures?** - Our method is inherently compatible with deeper architectures since the ETF projector (two MLP layers) can be seamlessly integrated into encoders of any depth. Additionally, while deeper DNNs typically exhibit stronger Neural Collapse (NC) in their top layers, our entropy regularizer effectively mitigates NC in encoders of any depth. - As demonstrated in Table 9 (Appendix), our method performs effectively with both ResNet-18 (13M) and ResNet-34 (23M), highlighting its scalability. Also, as shown in Table 10 (Appendix), our method improves OOD detection and generalization performance for both ViT-Tiny (6M) and ViT-Small (23M), demonstrating its scalability. # Questions **Q1. What is the computational overhead of the proposed method compared to standard classifiers?** As detailed in W1, our method does not increase computational overhead compared to standard DNNs. **Q2. Can this approach be extended to other tasks such as detection, i.e., is the proposed approach agnostic to the task at hand?** Yes, the proposed method is agnostic to the task at hand. Although we primarily focus on classification tasks, our method is applicable to other tasks, e.g., object detection or regression tasks. Our method employs an ETF projector after the DNN backbone (encoder). This ETF projector is designed to increase Neural Collapse in the last layer, thereby improving feature representations for downstream tasks, including classification and regression tasks. Our entropy regularization is also task-agnostic and compatible with both classification and regression objectives. We will discuss this in the main text. We thank the reviewer for the insightful questions and suggestions. Please let us know if further clarifications are needed. 
--- Rebuttal Comment 1.1: Comment: I thank the reviewers for addressing my comments and appreciate the additional study showing the computational overhead in terms of wall time. I suggest adding this to the paper with higher point precision, as the overhead is not observable with the current precision. Apart from that, I have no other concerns. I would keep my original score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the constructive feedback. As suggested, we report the computational overhead of our method using higher-precision measurements, presented both in terms of FLOPs and training time. The percentage increase introduced by our method is *minimal* across all architectures, as shown below: --- # FLOPs Comparison | Model | FLOPs | Delta (% Increase) | |---------------|----------------------|---------------------| | VGG17 | 4,955,622,740,132,864 | | | VGG17+Ours | 4,956,972,705,684,480 | **+0.0272%** | | ResNet18 | 461,500,110,825,472 | | | ResNet18+Ours | 462,031,742,464,000 | **+0.1152%** | | ResNet34 | 931,123,885,195,264 | | | ResNet34+Ours | 931,655,516,833,792 | **+0.0571%** | | ViT-T | 271,301,725,913,088 | | | ViT-T+Ours | 271,376,414,539,776 | **+0.0275%** | | ViT-S | 1,068,921,092,308,992 | | | ViT-S+Ours | 1,069,219,652,567,040 | **+0.0279%** | --- # Training Time Comparison | Model | Time (minutes) | Delta (% Increase) | |---------------|----------------|---------------------| | VGG17 | 307.80 | | | VGG17+Ours | 307.98 | **+0.0585%** | | ResNet18 | 136.81 | | | ResNet18+Ours | 137.04 | **+0.1681%** | | ResNet34 | 140.89 | | | ResNet34+Ours | 141.26 | **+0.2626%** | | ViT-T | 63.03 | | | ViT-T+Ours | 63.18 | **+0.2380%** | | ViT-S | 102.02 | | | ViT-S+Ours | 102.24 | **+0.2156%** | --- As shown, the overhead introduced by our method remains below **0.3%** in all cases, which we believe is trivial and well-justified given the observed performance gains. We have included this computational analysis in our revised paper. 
We thank the reviewer for the valuable suggestions, which have helped strengthen our paper. We hope this addresses the remaining concerns. We are grateful for your constructive feedback and would appreciate it if you could reconsider the final score of our submission.
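The batch-level nearest-neighbor distance computation that the rebuttal above describes for its entropy regularization can be sketched with a Kozachenko-Leonenko-style proxy. This is an illustrative stand-in under stated assumptions (mean log nearest-neighbor distance, constants dropped), not the paper's exact regularizer.

```python
import numpy as np

def nn_log_distance_entropy(z, eps=1e-12):
    """Batch-level entropy proxy: mean log distance to each sample's
    nearest neighbor within the batch. Maximizing it spreads features
    apart, counteracting collapse. z: (batch, dim)."""
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # ignore self-distances
    return float(np.log(d.min(axis=1) + eps).mean())
```

Because the term is computed per batch with a single pairwise-distance matrix, its cost is negligible next to a forward/backward pass, consistent with the overhead numbers reported in the rebuttal.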
Summary: This paper empirically shows a trade-off between Out-of-distribution (OOD) detection and OOD generalization on the multi-class classification task in deep neural networks (DNNs) with in-distribution (ID) training data: a network can either impose stronger neural collapse (NC) and improve OOD detection, or weaken NC to enhance OOD generalization. To enhance both aspects, this paper proposes a pipeline that trains the backbone for OOD generalization and adds a projector and linear probing after it for OOD detection; hence the backbone, which encodes the embeddings, only needs to be trained once (see Figure 2 in the paper). To adapt the architecture to the transfer learning setting, this paper uses a novel *entropy regularization*, which mitigates NC, and replaces batch normalization (BN) with group normalization (GN) plus weight standardization (WS). These changes further improve OOD detection and generalization performance. Claims And Evidence: The claims are supported clearly and evidently by various experiments and ablation tests. Methods And Evaluation Criteria: The paper uses 8 commonly used OOD datasets and 3 DNN architectures trained with AdamW using a cosine learning rate scheduler and linear warmup (see Section 5). I think this is rather standard for vision OOD detection/generalization tasks. Theoretical Claims: There is only one theoretical claim, namely Proposition 4.1, with a proof sketch in the main text that promises a detailed proof in Appendix E. However, the proof in Appendix E is still rather sketchy. I still believe the proposition is true, but I hope the proof can be written in a more formal way in the revised edition. Experimental Designs Or Analyses: While I generally acknowledge the validity of the experiments, I would like to point out that: 1. The datasets used are all vision tasks. It would be more complete if the paper could draw similar conclusions on OOD detection/generalization for other tasks with other input forms like audio or text. 2. 
This paper only used AdamW as the optimiser, while previous work (e.g., [1,2,3]) mainly used SGD or Adam to showcase neural collapse. I have personally also experimented with neural collapse (with the same architectures but smaller datasets than those in this paper) and found that AdamW does not show neural collapse. For example, in Table 4, the NC metrics NC2 and NC3 are about 0.5, which is significantly away from zero given that the metric is normalised between 0 and 2. If SGD or Adam is used as the optimiser, the metrics could drop by 1 or 2 orders of magnitude. I still believe the novel findings in this paper, like the entropy regularization and the encoder-projector-classifier pipeline, are significant contributions in the OOD context, but since the title includes the term neural collapse, we should be careful about the experiments and NC metrics. --- [1] Papyan, V., Han, X., and Donoho, D. L. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020. [2] Zhu, Z., Ding, T., Zhou, J., Li, X., You, C., Sulam, J., and Qu, Q. A geometric analysis of neural collapse with unconstrained features. Advances in Neural Information Processing Systems, 34:29820–29834, 2021. [3] Masarczyk, W., Ostaszewski, M., Imani, E., Pascanu, R., Miłoś, P., and Trzciński, T. The tunnel effect: Building data representations in deep neural networks. Advances in Neural Information Processing Systems, 36, 2023. Supplementary Material: There is no submitted supplementary material, hence I cannot check the correctness of the code or the reproducibility of the results. Relation To Broader Scientific Literature: This paper clearly relates to the neural collapse phenomenon introduced by [1]. I am not an expert in OOD and transfer settings so I cannot draw more conclusions on this. --- [1] Papyan, V., Han, X., and Donoho, D. L. Prevalence of neural collapse during the terminal phase of deep learning training. 
Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020. Essential References Not Discussed: I think most of the essential related works are cited. A few more minor but related works that might be worth citing are: [4]: also about OOD and NC. It would be interesting to see which method yields better performance. [5]: contrary to lines 421-422, "How BN or GN impacts neural collapse is unexplored in previous work.", BN has been explored in [5]. ---------- [4] Ammar, Mouïn Ben, et al. "Neco: Neural collapse based out-of-distribution detection." arXiv preprint arXiv:2310.06823 (2023). [5] Pan, Leyan, and Xinyuan Cao. "Towards understanding neural collapse: The effects of batch normalization and weight decay." arXiv preprint arXiv:2309.04644 (2023). Other Strengths And Weaknesses: The contributions of this paper seem original and are delivered in a clear way. The description of the experiments is very detailed. Other Comments Or Suggestions: Just a minor point: I would suggest flipping the axes of Figure 4: the dependent variable is NC1, which is usually shown on the y-axis. Questions For Authors: Regarding my concerns about the experimental designs, I would like to ask: 1. Could the paper include non-vision datasets in the revised edition to show that the contribution is universal among different data types? 2. Could the experiment be repeated with different optimisers like SGD or Adam to see if the effect on OOD performance is intensified, neutralised, or unchanged? --- Regarding Figures 1 and 4, I would also like to know: 3. Why is there a gap in NC1 in the middle of the plots, where no instance achieves NC1 around 0.5? Is there an explanation for this, or could it be a bug in the code? --- Regarding my concern about the theoretical claims: 4. Could the paper include a formal proof of Prop. 4.1? Code Of Conduct: Affirmed. Overall Recommendation: 3
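The NC1 metric this review discusses (within-class versus between-class variability) can be computed in simplified form as below. This is a generic trace-ratio sketch; exact definitions (e.g., tr(Σ_W Σ_B⁺)/K as in Papyan et al.) vary by paper, and this is not the submission's code.

```python
import numpy as np

def nc1_ratio(features, labels):
    """Simplified NC1-style metric: total within-class squared deviation
    divided by total between-class squared deviation. Approaches 0 as
    features collapse to their class means."""
    mu_g = features.mean(axis=0)
    within = between = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        within += ((fc - mu_c) ** 2).sum()
        between += len(fc) * ((mu_c - mu_g) ** 2).sum()
    return within / between
```

Under this convention, stronger collapse (features tight around class means) drives the ratio toward zero, matching the direction of the NC1 numbers quoted in the review.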
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews and constructive feedback. We have carefully considered your concerns and tried to address them. Below, we have provided detailed responses to each review separately. # Clarifications **Non-vision datasets or tasks** Most prior work on OOD detection and/or OOD generalization has focused on vision datasets, and we align with this precedent. However, we recognize the importance of evaluating non-vision tasks for broader applicability. Due to time and computational constraints, we were unable to extend our work to non-vision domains within the rebuttal period. We consider this an important direction for future work and will discuss potential extensions to other modalities, such as audio and text. **Code and reproducibility** At the time of submission, our code was not fully polished, which is why we did not include it. However, we plan to release a cleaned and well-documented version upon acceptance. **Performance with optimizers other than AdamW** We have addressed this concern in Q2. # Additional Suggestions - We have revised the statement "How BN or GN impacts Neural Collapse is unexplored in prior work." since BN has been studied in previous work [1]. We thank the reviewer for pointing this out. - **Essential References Not Discussed:** We have cited the relevant papers [1, 2] suggested by the reviewer. We have included NECO [2] in our related work and discussed its relevance. We thank the reviewer for the valuable feedback. - **References:** 1. Pan, Leyan, and Xinyuan Cao. "Towards understanding neural collapse: The effects of batch normalization and weight decay." arXiv preprint arXiv:2309.04644 (2023). 2. Ammar, Mouïn Ben, et al. "Neco: Neural collapse based out-of-distribution detection." ICLR, 2024. - **Comparison with NECO:** NECO [2] is a SOTA OOD detection method based on Neural Collapse. 
To compare against NECO, we trained various DNNs on ImageNet-100 (ID) and evaluated them on eight OOD datasets. Our method outperforms NECO in average OOD detection error by an absolute 12.72\% for VGG17, 18.43\% for ResNet18, and 2.51\% for ViT-T. More details are given in responses to reviewer BRBv. We have included these results in our paper. We thank the reviewer for the valuable suggestions. - We have improved the clarity of Figure 4 by flipping the axes, as per the reviewer’s suggestion. # Questions **Q1. Could authors include non-vision experiments?** We have addressed this concern in the clarifications section above. We welcome any further thoughts on this. **Q2. Could authors perform experiments with SGD or Adam Optimizer?** Running all experiments with SGD or Adam would be computationally expensive due to our extensive evaluation across five DNN architectures and eight OOD datasets. However, we conducted ablation studies by training VGG17 models with an SGD optimizer and evaluating them on eight OOD datasets. Our SGD results confirm that our method consistently improves OOD detection and generalization performance. - In particular, our method outperforms the baseline by an absolute 6.26\% in OOD generalization and by an absolute 28.88\% in OOD detection. - Also, we observe that our encoder reduces Neural Collapse (NC) and enhances OOD generalization by an absolute 13.86\% compared to the projector. Whereas the projector intensifies NC and improves OOD detection by an absolute 25.34\% compared to the encoder. - While SGD intensifies NC more than AdamW, AdamW achieves better overall performance (Table 16 in Appendix). 
**SGD Optimizer Results:**

| Model | $\mathcal{E}_{\text{ID}}$ | NC1 | NC2 | NC3 | NC4 | $\mathcal{E}_{\text{GEN}}$ | $\mathcal{E}_{\text{DET}}$ |
|----------|----------|----------|----------|----------|----------|----------|----------|
| VGG17 | $13.06$ | $1.02$ | $0.45$ | $0.48$ | $26.46$ | $57.17$ | $89.69$ |
| +Ours | $13.18$ | $0.09$ | $0.47$ | $0.27$ | $0.26$ | $\mathbf{50.91}$ | $\mathbf{60.81}$ |

We have included these results in our paper. We will try to conduct more experiments with SGD/Adam and include the results in the final paper. We would appreciate the reviewer’s feedback on whether this analysis sufficiently addresses the concern.

**Q3. Why is there a gap in correlation figures?** We observe this trend for VGG17 but not for ResNet18 (Fig. 7 in Appendix). We have double-checked our data and code; they seem fine. The potential reason could be the limited number of data points; typically, more data points would show clearer trends. It might be better if we combine Fig. 1 and Fig. 7.

**Q4. Could the paper include a formal proof of Proposition 4.1?** Yes, we have included a detailed formal proof of Proposition 4.1 in Appendix E. We thank the reviewer for the valuable feedback and insightful questions.

We appreciate the reviewer's insightful feedback and suggestions. Please let us know if further clarifications are needed.

---
Rebuttal Comment 1.1: Comment: Thank you for your detailed answer. I have no further questions. I would like to maintain my original scoring.

---
Reply to Comment 1.1.1: Comment: We thank the reviewer for their valuable feedback, which has helped improve this paper. If our responses have adequately addressed your concerns, we humbly seek your support for the paper.
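As context for the NC1-NC4 values discussed in the thread above: NC1 is conventionally a measure of within-class feature variability relative to between-class variability, and stronger collapse drives it toward zero. Below is a minimal, self-contained sketch of an NC1-style computation; note this uses a simple trace-ratio proxy (the literature often uses $\mathrm{tr}(\Sigma_W \Sigma_B^{\dagger})/K$), and it is not the authors' exact implementation.

```python
import numpy as np

def nc1_proxy(features, labels):
    """Lightweight NC1 proxy: total within-class feature variability divided
    by between-class variability. Stronger neural collapse drives this toward
    zero. (Illustrative stand-in for tr(Sigma_W pinv(Sigma_B)) / K.)"""
    global_mean = features.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        class_feats = features[labels == c]
        class_mean = class_feats.mean(axis=0)
        within += ((class_feats - class_mean) ** 2).sum()
        between += len(class_feats) * ((class_mean - global_mean) ** 2).sum()
    return within / between

# Fully collapsed toy features: every sample equals its class mean -> NC1 = 0.
feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
labels = np.array([0, 0, 1, 1])
print(nc1_proxy(feats, labels))  # 0.0
```

On real networks, `features` would be penultimate-layer (encoder) or projector activations on the training set, which is how one would observe the encoder-vs-projector contrast the authors describe.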
Summary: This paper studies the role of neural collapse on the out-of-distribution (OOD) detection and generalization tasks. The authors also propose an entropy regularization technique to control the degree of collapse in intermediate layers and show that stronger collapse in the final layers can aid in OOD detection but can hurt OOD generalization. ## update after rebuttal The authors have addressed my concerns and have also assured that the code will be open-sourced. Overall I appreciate the extra experiments that they did during the rebuttal to further justify their approach (when compared to previous works). I also increased my score from 2 -> 3 after the rebuttal. Claims And Evidence: Experiments and analysis with various model architectures and datasets justify the empirical claims (some concerns are mentioned in the following sections). However, the authors do not justify their claims about a "theoretical framework explaining how a fixed Simplex ETF projector enforces NC for OOD detection". This claim must be revised to state that such an analysis is only experimental. Methods And Evaluation Criteria: The NC and evaluation metrics, datasets, and models are relevant to the analysis. Theoretical Claims: The authors present the proof of their Proposition 4.1 in Appendix E. (on the entropy regularization tending to -inf with stronger collapse in the features). Experimental Designs Or Analyses: The design of the experiments is valid and sound. However, I have some concerns with the reported numbers and would like the authors to clarify the following: 1. Why are the baseline VGG17 results for $\mathcal{E}_{ID}$ in Table 2 (first row) different from the non entropy regularization case in Table 3 (first row)? The captions do not indicate any major difference. 2. 
Next, the second row in Table 2 (i.e., the proposed method with VGG17 backbone + fixed ETFs and entropy regularization) has the same ID and OOD performance as a VGG17 model with entropy regularization in the second row of Table 3. Does the latter also employ the fixed ETFs as projectors? If yes, what is the difference between these two models? How can the ID and OOD performance be the same even though the NC values are different? Supplementary Material: I have reviewed the Appendix for the proof of Proposition 4.1 and the ablation experiments. Relation To Broader Scientific Literature: This paper analyzes the role of neural collapse (NC) on OOD generalization and detection tasks in NNs. In the NC literature there has been extensive work on analyzing the optimality of collapse in intermediate and final layers and the implications on transfer learning and in-domain generalization. However, the study on OOD detection+generalization and NC is still in its nascent stage. This paper is well positioned to present an initial unifying perspective (at least empirically) and encourage future efforts in this direction. Essential References Not Discussed: 1. A relevant previous effort on NC and OOD-Generalization [1] is not discussed in the paper. Can the authors provide some comparisons with this paper? 2. I have noticed that in Section 6.5 (in Appendix), the authors state that "How BN or GN impacts neural collapse is unexplored in previous work". This is not valid as previous works such as [2] have indeed studied this aspect. It would be great if the authors can rectify this statement. [1] Ammar, Mouin Ben, et al. "NECO: Neural Collapse Based Out-of-Distribution Detection." The Twelfth International Conference on Learning Representations. 2024. [2] Ergen, T., et al. "Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization." International Conference on Learning Representations. 2022. 
Other Strengths And Weaknesses: The paper is generally well-written and easy to follow. However, a lack of comparison with previous work is a bit concerning. In particular, for a given OOD detection/generalization task, I am not convinced that a comparison cannot be done with any of the previous work in the literature. I would like to know the authors' justification on this aspect. Other Comments Or Suggestions: N/A Questions For Authors: 1. The entropy regularization applied to a standard VGG17 without any fixed ETF layer weights is shown in Table 12. Is the regularization applied to the last layer in this experiment? If yes, then it is not a fair comparison with the VGG17 + 2-layer ETF architecture, since technically the regularization should be applied to the second- or third-from-last layer (depending on the usage of a classifier head). Can the authors clarify this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews and valuable feedback. We have carefully considered your concerns and tried to address them. Below, we have provided detailed responses to each review separately.

# Clarifications on Tables 2 and 3

- In Table 3, we isolate the effect of entropy regularization while keeping all other components identical. Specifically, we compare a baseline VGG17 with our method, where both models include the ETF projector, and only entropy regularization is either applied or omitted. In contrast, in Table 2, the baseline VGG17 does not include the ETF projector, which explains the differences in ID/OOD performance between the first rows of the two tables.
- The second rows of Tables 2 and 3 correspond to the same model (VGG17 + ETF projector + entropy regularizer). The NC values differ because, in Table 2, we measure NC after the projector, whereas in Table 3, we report NC at the encoder. Since entropy regularization is applied to the encoder, Table 3 explicitly compares its effect on encoder representations. Table 1 reports NC values for both the encoder and the projector (first two rows), which remain consistent with Tables 2 and 3. We will add more details to the table captions to enhance clarity. Please let us know if further clarification is needed.

# Comparison with NECO

We have included NECO [1] in our related work and discussed its relevance. NECO is a *post-hoc OOD detection method* based on Neural Collapse, requiring a pre-trained DNN. It does not focus on OOD generalization and representation learning. In contrast, our method is designed to learn good representations during training to improve both OOD detection and generalization.

Reference: [1] Ammar et al., NECO: Neural collapse based out-of-distribution detection, ICLR, 2024

**Our Method Vs. NECO:** To compare against NECO, we trained various DNNs on ImageNet-100 (ID) and evaluated them on eight OOD datasets.
Our method outperforms NECO in average OOD detection error by an absolute 12.72\% for VGG17, 18.43\% for ResNet18, and 2.51\% for ViT-T. **Below, we report the avg. OOD detection error (\%):**

| Method | VGG17 | ResNet18 | ViT-T |
|----------|----------|----------|----------|
| NECO | $77.82$ | $88.13$ | $85.67$ |
| Ours | $\mathbf{65.10}$ | $\mathbf{69.70}$ | $\mathbf{83.16}$ |

We appreciate the reviewer’s suggestion and have incorporated the results in our paper.

# Additional Suggestions

- We have revised our statement about the theoretical framework explaining how a fixed simplex ETF projector enforces NC for OOD detection. We have stated that this analysis is empirical. We thank the reviewer for the valuable feedback.
- We have revised the statement "How BN or GN impacts neural collapse is unexplored in previous work." since BN has been studied in prior work [2]. We thank the reviewer for pointing this out.
- Reference: [2] Ergen, T., et al. "Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization." ICLR, 2022.

# Additional Weaknesses

**Why is comparison with prior works difficult?**
- Prior works typically focus on either OOD detection or OOD generalization, whereas we aim to improve both objectives. Some recent works address both objectives, but they differ significantly in problem formulation and methodology. As detailed in Sec. 3 and 5, we cannot directly compare against such methods because:
  1. They rely on additional OOD training data, whereas our method works without such data. Adapting these methods for comparison would require substantial modifications, including redefined training objectives and architectures, which are beyond the scope of our work. Therefore, a direct *apples-to-apples* comparison is difficult.
  2. For OOD generalization, prior work *solely* focuses on *covariate-shifted OOD* data that has a similar label space as ID data (e.g., a car in *sunny weather* as ID vs. a car in *rainy weather* as OOD). In contrast, we focus on *semantic-shifted OOD* data that does not overlap with ID labels. Thus, unlike our method, previous methods are not designed to handle semantic OOD data.

We would appreciate the reviewer’s feedback on whether this justification sufficiently addresses the concern.

# Questions

- **Clarifications on Table 12:** Table 12 examines the impact of entropy regularization on the OOD generalization of a regular VGG17 without any projector. The regularization is applied to the last encoder layer (penultimate layer) before the final classifier layer. All models are compared identically. We are not sure we understand the last part of the question relating to the VGG+ETF projector, since we do not compare against it in Table 12.
- Additionally, in Table 11, we compare how entropy regularization impacts OOD generalization when VGG17 models include ETF projectors.

We appreciate the reviewer's insightful feedback and suggestions. Please let us know if further clarifications are needed.

---
Rebuttal Comment 1.1: Comment: Thanks for addressing most of my concerns.
1. I appreciate the authors providing some baseline comparisons with NECO, as this helps readers connect your paper with the broader literature. I also see that the authors have not provided any code for reproducibility. Can they comment on this? Will the code be open-sourced?
2. Based on the authors' response, I referred to [1] (as mentioned in Section 3) and saw that even though they open-sourced the code, it is still a work in progress. On the other hand, [2] also open-sourced their code and seems to be up to date. The important thing to note here is that Table 1 in [2] considers semantic OOD data for OOD generalization experiments. Thus, in the "Differences from prior work" paragraph, I request the authors to carefully update the sentence formation so as to not give a wrong impression that previous works have only considered "covariate-shifted OOD" data.
3.
Most importantly, thanks for acknowledging that the captions must be updated. For instance, it would be better if one could read the caption in Figure 11 and clearly understand that the VGG models include projectors. I do not have any further concerns and have increased the score. Good luck. [1] Zhang, Q., Feng, Q., Zhou, J. T., Bian, Y., Hu, Q., and Zhang, C. The best of both worlds: On the dilemma of out-of-distribution detection. In Advances in Neural Information Processing Systems, 2024. [2] Wang, H. and Li, Y. Bridging ood detection and generalization: A graph-theoretic view. Advances in Neural Information Processing Systems, 2024. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the valuable suggestions and for raising the score. Your feedback has helped improve our paper. - Regarding the code, we will open-source it upon acceptance to ensure full transparency and facilitate future research. The released repository will include scripts to reproduce all key experiments and results reported in the paper. - Regarding prior work, we acknowledge the reviewer’s observation about the types of OOD data used. We will revise the "Differences from prior work" paragraph to incorporate your suggestions. Thank you again for your insightful reviews and helpful feedback.
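As background for the fixed simplex ETF projector discussed throughout this thread: a K-class simplex equiangular tight frame is a set of K unit vectors whose pairwise cosine similarity is exactly $-1/(K-1)$, the geometry that fully collapsed class means converge to. Below is a minimal sketch of the standard construction from the neural-collapse literature; it is illustrative only (the `simplex_etf` helper is hypothetical, not the authors' code, which fixes such a matrix as non-trainable projector weights).

```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    """Standard K-class simplex ETF: K unit vectors in R^dim with pairwise
    cosine exactly -1/(K-1). This construction requires dim >= K."""
    K = num_classes
    rng = np.random.default_rng(seed)
    # Random partial orthogonal matrix P (dim x K) with P^T P = I_K, via QR.
    P, _ = np.linalg.qr(rng.standard_normal((dim, K)))
    # M = sqrt(K/(K-1)) * P (I_K - (1/K) 1 1^T); columns are class directions.
    return np.sqrt(K / (K - 1)) * P @ (np.eye(K) - np.ones((K, K)) / K)

M = simplex_etf(num_classes=4, dim=8)
G = M.T @ M  # Gram matrix: ones on the diagonal, -1/(K-1) = -1/3 off-diagonal.
print(np.round(G, 3))
```

Since $G = \frac{K}{K-1}\,(I_K - \frac{1}{K}\mathbf{1}\mathbf{1}^T)$ regardless of the random seed, any such matrix has the equiangular geometry; fixing it during training is what pulls projector features toward collapse.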
Summary: The paper proposes a novel claim: stronger NC (neural collapse) improves OOD detection but degrades generalization, while weaker NC enhances generalization at the cost of detection. The explanation of the above claim is sufficient, and the experiments support the statement. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I do not check proofs for theoretical claims. Experimental Designs Or Analyses: The experiments are conducted on ImageNet-100 with different architectures. The balance of generalization and detection performance is clear and verifies their statement about the generalization-detection trade-off. Supplementary Material: None Relation To Broader Scientific Literature: None Essential References Not Discussed: The related work is sufficiently listed and discussed. Other Strengths And Weaknesses: The paper is well organized and expressed, and easy to understand. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews and valuable feedback. Please let us know if you have any questions or suggestions. We would be happy to address them.
All-atom Diffusion Transformers: Unified generative modelling of molecules and materials
Accept (poster)
Summary: This paper proposes a new model based on all-atom diffusion with transformers for generating both periodic crystals and non-periodic molecules. Experiments are performed on standard benchmarks for these applications. The authors show how this model helps speed up standard equivariant diffusion models and how it can benefit from scaling. Claims And Evidence: The claims are well supported. Methods And Evaluation Criteria: Using a transformer to build a unified model for molecules and materials makes a lot of sense; moreover, replacing equivariance-based constraints with data augmentation to leverage more expressive architectures is also a recent trend in this field. Theoretical Claims: There were no theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Quick pass over the supplementary material. Relation To Broader Scientific Literature: The paper proposes another approach to generate both materials and molecules; as such, the idea of learning a model across modalities is interesting and fairly new. Essential References Not Discussed: Given that the authors emphasize the benefits of not having equivariance in their model, the authors could cite other work that also removed equivariance constraints in generative models for molecules / materials, e.g. "Language models can generate molecules, materials, and protein binding sites directly in three dimensions as xyz, cif, and pdb files" Flam-Shepherd et al. 2023, or other work for unconditional molecule generation, e.g. "3D molecule generation by denoising voxel grids" Pinheiro et al 2023. This discussion of work without equivariance is missing in the related work section, yet the authors did mention AlphaFold3 and Wang et al 2024, which are also relevant work on this front, though not explicitly on generation. 
Other Strengths And Weaknesses: Overall, I found the paper well executed, with a nice idea of unifying different generative tasks; in particular, the choices on the ML front follow many recent trends in generative models, which make a lot of sense. Yet the paper is only focused on extremely small systems (up to 10 atoms as mentioned in the conclusion); arguably this is a simple setting. I would recommend the authors include some experiments on more challenging tasks; in particular, staying with small molecule generation, GEOM-drugs is a more challenging setting for which comparison to other baselines would be more convincing. Moreover, I recommend the authors extend the metrics on the small molecule front, e.g. following the framework of MiDi: Mixed Graph and 3D Denoising Diffusion for Molecule Generation, against which several models are compared. Other Comments Or Suggestions: see above Questions For Authors: please address the limitations on the scale of the datasets and metrics of small molecule generation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > small systems - simple setting…include experiments on more challenging tasks

---
**[In our response to Reviewer WqZ3](https://openreview.net/forum?id=89QPmZjIhv&noteId=RrA8d9t9eq), we presented results for GEOM based on your suggestion. ADiT continues to show strong results, generating physically realistic molecules and outperforming baselines.**
---

We agree that systems from QM9 and MP20 are small, but they are complex and challenging. Doing well on them requires models to learn something about the underlying physics of atomic interactions. Let’s take MP20 as an example:
- MP20 is restricted to up to 20 atoms in a unit cell because this is practically most relevant and largely covers the Materials Project distribution of known materials. Each atom's local environment is complex due to periodicity.
- The S.U.N. rate metric for crystals/MP20 is a very challenging and practically relevant metric for materials discovery. ADiT has **improved S.U.N. rate from 4-5% of previous works up to 6.5%** (see Table 4). ADiT significantly outperformed the recently published MatterGen (Nature, 2024). Previous works showing smaller improvements have been published at top conferences (most recent example: FlowLLM at NeurIPS, 2024). There is a large community of researchers who care about S.U.N. rate on MP20.

Ultimately, the goal of this paper was to introduce the **first unified architecture** for generative modelling of 3D atomic systems, and to demonstrate **transfer learning** across periodic crystals and non-periodic molecular systems. We believe this will make a novel and valuable contribution to the ICML community.

Also note that our best models are based on pure Transformers - which have been shown to be extremely scalable to large-scale inputs - so there is nothing inherent to our methodology that would prevent scaling up to larger systems or more samples.
> extremely small systems - up to 10 atoms

Just to clarify the details here:
- MP20: **20 atoms** in a periodically repeating unit cell. This means that each atom can **interact with a far greater number of atoms** than the size of the unit cell due to periodicity.
- QM9: 9 heavy atoms, which doesn’t include hydrogen. When including hydrogen (which is how our models are trained), systems go up to **20-30 atoms on average**. That is why we wrote “tens of atoms”, not 10 atoms.

> extend metrics for small molecules, following MiDi

Our evaluation followed **[Symphony](https://arxiv.org/abs/2311.16199)**, the most recently accepted small molecule generation paper at ICLR 2024. We believe that **Symphony's evaluation metrics are an improvement upon MiDi's**. Here are our justifications:
- We have included the validity and uniqueness metrics from MiDi/EDM. We found ADiT to outperform EDM and GeoLDM, both of which do better than MiDi.
- Instead of the atom and molecule stability metrics from MiDi/EDM, which are based on simple valency heuristics, Symphony used the **PoseBusters** metrics suite, which provides information about the physical realism of generated molecules. PoseBusters provides a more holistic set of metrics using force field relaxations and physics-based checks instead of relying on heuristics.
- PoseBusters also includes metrics for bond distances, angles, rings, and geometries, which supersede MiDi's histogram metrics and are more interpretable (in our opinion).
- PoseBusters is now widely known and used in academia and industry, e.g. AlphaFold3 used PoseBusters.

> "the paper proposes another approach to generate both material and molecules; as such the idea of learning a model across modalities is interesting and fairly new"

The goal of this paper was to introduce the **first unified architecture** for generative modelling of 3D atomic systems, and to demonstrate **transfer learning** across modalities.
We disagree with your characterisation that our work is just “another” approach, as we are not aware of any other generative architecture that jointly generates periodic and non-periodic systems. We believe this will make a novel and valuable contribution to the ICML community, and would kindly request you to reconsider your position on our novelty.

> citing Flam-Shepherd et al. 2023 and Pinheiro et al 2023

We will definitely cite and discuss both papers in the Related Work - thanks for the pointers. We will also cite and discuss the MiDi paper and metrics in our revision. (Note that this year, ICML does not allow us to upload an updated PDF.)

---
Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and for the additional experiment on GEOM-drugs; for this dataset, the results would be more convincing if they included more recent baselines; EDM is far from the state-of-the-art on this dataset, and most approaches now significantly outperform it. Including the PoseBusters metrics for MiDi would be more convincing than a comparison to EDM. Nevertheless, I will increase my score, but this point still remains.

---
Reply to Comment 1.1.1: Comment:

> posebusters metrics for MiDi

Unfortunately, the MiDi author Clement Vignac's official checkpoints were deleted from his university's Google Drive when he moved to industry. The README for their codebase states this: "Update (July 2024): My drive account has unfortunately been deleted, and I have lost access some checkpoints. If you happen to have a downloaded checkpoint stored locally, I would be glad if you could send me an email at vignac.clement@gmail.com or raise a Github issue."

As a result, checkpoints trained independently by another researcher, Ian Dunn, are now being shared by Clement on the MiDi GitHub repo. However, we obtained worse results than the numbers reported in Clement's paper when running evaluation (using the MiDi codebase as well as our codebase) on samples generated by Ian's checkpoint.
A similar claim was made in [this paper](https://arxiv.org/pdf/2309.17296#page=18.44) (Appendix A.6): "We could not reproduce the results reported in the paper. We also re-evaluated the checkpoint given on GitHub and again could not confirm the reported results."

This is why we chose to report the (subset of) metrics directly from the MiDi paper, and not the ones we re-computed.

> include more recent baseline

We used MiDi as you specifically mentioned MiDi in your review. We found two other methods which claim to outperform MiDi, but could not find their weights to re-evaluate their claims:
- EquiGAT-Diff - "Navigating the Design Space of Equivariant Diffusion-Based Generative Models for De Novo 3D Molecule Generation" - https://arxiv.org/pdf/2309.17296
  - They have provided inference code, but they only provide model checkpoints upon request. They state: "Currently, we provide trained model weights only upon request. Please reach out to ... if you are interested."
  - We have contacted them but have yet to receive the weights.
- "SemlaFlow – Efficient 3D Molecular Generation with Latent Attention and Equivariant Flow Matching" - https://arxiv.org/pdf/2406.07266
  - There's no GitHub link or codebase included in this paper.

EquiGAT-Diff obtained a reported GEOM validity rate of 94.6%, and SemlaFlow obtained 93.9%. Our ADiT-S model, at a validity rate of 93.4%, is able to reach roughly the same level of performance as MiDi and related equivariant models. (Note that ADiT-S is our smallest model and the result was obtained on one GPU during the brief rebuttal period, without any hyperparameter optimization at all.)

We think that these results demonstrate that the ADiT architecture can scale to larger datasets and larger system sizes, which addresses your original concern.
Summary: The authors introduce the All-atom Diffusion Transformer (ADiT), a framework aimed at unifying the latent diffusion approach across different spatial molecular structure modalities. The proposed method focuses specifically on small molecules and crystals. ADiT leverages a combination of joint variational autoencoder and latent diffusion transformer models to generate these molecular structures. The latent diffusion model constructs atom-wise latent codes, which the decoder then maps into atom descriptions, positions, and, optionally, crystal lattice parameters. The authors evaluate their approach on unconditional small molecule and crystal generation to benchmark its effectiveness. Claims And Evidence: The overall claims in the paper appear valid and reasonable. However, I recommend avoiding unjustified claims of primacy, particularly in lines 103-105: “our work is the first to develop unified generative models for both periodic and non-periodic atomic systems”. A preliminary approach [a] has proposed a unified language model for small molecule and crystal generation, as well as protein pocket generation. a. Language models can generate molecules, materials, and protein binding sites directly in three dimensions as XYZ, CIF, and PDB files, Flam-Shepherd et al., 2023 Methods And Evaluation Criteria: The chosen evaluation metrics and baselines are appropriate and reasonable, and incorporating DFT- and PoseBusters-derived metrics introduces novelty into the evaluation of generated spatial small molecule structures. However, the choice of datasets raises concerns. The QM9 and MP20 datasets contain relatively small structures (up to 9 heavy atoms in QM9 and up to 20 heavy atoms in MP20). In contrast, state-of-the-art generative models [a, b] for unconditional generation of spatial small molecule structures typically benchmark against GEOM-DRUGS [c], which includes structures with up to 91 heavy atoms. 
For crystal generation, prior work [d] has used more complex datasets, such as MPTS-52 (up to 52 heavy atoms), and [e] used custom subsets with up to 30 heavy atoms, in addition to MP20. While using QM9 and MP20 facilitates direct comparison with a broader range of baselines, incorporating larger and more complex datasets would provide deeper insights into the model’s capabilities and generalizability. The integration of larger-structure datasets would further enhance the value of the proposed approach and allow comparison with state-of-the-art approaches on harder tasks. a. MolDiff: Addressing the atom-bond inconsistency problem in 3D molecule diffusion generation, Peng et al., 2023 b. BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning, Zholus et al., 2024 c. GEOM, energy-annotated molecular conformations for property prediction and molecular generation, Axelrod & Gomez-Bombarelli, 2022 d. FlowMM: Generating Materials with Riemannian Flow Matching, Miller et al., 2024 e. Fine-Tuned Language Models Generate Stable Inorganic Materials as Text, Gruver et al., 2024 Theoretical Claims: N/A Experimental Designs Or Analyses: As discussed in “Methods and Evaluation Criteria”, the unconditional generation setups for molecules, crystals, and MOFs are reasonable and valid. The results from single-dataset and jointly trained models demonstrate the necessity of multidomain training. Additionally, the Ablation Study in the Appendix provides a justification for the chosen architectural design. However, unconditional generation primarily serves as a proof-of-concept to demonstrate model capability, rather than its applicability to real-world tasks. A key question remains: Can the proposed approach be effectively adapted for conditional downstream tasks? For example, conformation generation [a], pocket-conditional generation [b], text description-conditioned generation [c], and MOF property optimization [d]. 
While the authors acknowledge the downstream tasks as future work in the Discussion section, the absence of at least one conditional setup or case study significantly reduces the paper’s impact. Including an experiment on conditional generation would strengthen the evaluation and provide clearer evidence of the model’s broader applicability. a. Torsional Diffusion for Molecular Conformer Generation, Jing et al., 2022 b. 3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction, Guan et al., 2023 c. Fine-Tuned Language Models Generate Stable Inorganic Materials as Text, Gruver et al., 2024 d. MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design, Fu et al., 2023 Supplementary Material: I reviewed the supplementary material to find the examples of generated structures, the distribution of generated structure properties, and the ablation studies Relation To Broader Scientific Literature: The idea of applying the latent diffusion framework for generating chemical structures has been explored in prior work for 2D [a] and 3D molecules [b], as well as 3D proteins [c]. While the proposed modification to integrate crystal lattice parameters is relatively straightforward, I consider it to be a novel contribution of the paper. a. 3M-Diffusion: Latent Multi-Modal Diffusion for Language-Guided Molecular Structure Generation, Zhu et al., 2024 b. Geometric Latent Diffusion Models for 3D Molecule Generation, Xu et al., 2023 c. A Latent Diffusion Model for Protein Structure Generation, Fu et al., 2023 Essential References Not Discussed: As mentioned in the Claims and Evidence section, preliminary work [a] on multidomain spatial structure generation was not cited or compared against in the paper. a. Language models can generate molecules, materials, and protein binding sites directly in three dimensions as XYZ, CIF, and PDB files, Flam-Shepherd et al., 2023 Other Strengths And Weaknesses: The paper is well-written and easy to follow. 
The metrics effectively cover different aspects of spatial structure generation. The proposed modification to the latent diffusion framework is straightforward and focuses on small molecules and crystals, which, while limited, still represents a novel contribution. Additionally, the inference time efficiency of the approach, especially in comparison to pure diffusion models, is a strength of the method. A minor weakness of the approach is the high number of lambda coefficients in the autoencoder loss. There is a lack of intuition on how to choose these coefficients and whether they are dataset-dependent. Other Comments Or Suggestions: 1. I would recommend making Figure 2 smaller or moving it to the Appendix. Instead, it would be more useful to include examples of generated structures in the main text, as visual inspection of results would be beneficial for readers. 2. In Table 2, Validity results, I assume that one of the QM9-only ADiT entries should be marked with an asterisk *. Questions For Authors: My concerns are addressed in the previous sections. The key issues are the lack of downstream tasks beyond unconditional generation and the use of relatively small datasets for small molecule and crystal generation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:
Rebuttal:
> relatively small structures - benchmark against GEOM

---
**[In our response to Reviewer WqZ3](https://openreview.net/forum?id=89QPmZjIhv&noteId=RrA8d9t9eq), we presented results for GEOM based on your suggestion. ADiT continues to show strong results, generating physically realistic molecules and outperforming baselines.**
---

We agree that systems from QM9 and MP20 are small, but they are complex and challenging. Doing well on them requires models to learn something about the underlying physics of atomic interactions. Let’s take MP20 as an example:
- MP20 is restricted to up to 20 atoms in a unit cell because this is practically most relevant and largely covers the Materials Project distribution of known materials. Each atom's local environment is complex due to periodicity.
- The S.U.N. rate metric for crystals/MP20 is a very challenging and practically relevant metric for materials discovery. ADiT has **improved S.U.N. rate from 4-5% of previous works up to 6.5%** (see Table 4). ADiT significantly outperformed the recently published MatterGen (Nature, 2024). Previous works showing smaller improvements have been published at top conferences (most recent example: FlowLLM at NeurIPS, 2024). There is a large community of researchers who care about the S.U.N. rate on MP20.

Ultimately, the goal of this paper was to introduce the **first unified architecture** for generative modelling of 3D atomic systems, and to demonstrate **transfer learning** across periodic crystals and non-periodic molecular systems. We believe this will make a novel and valuable contribution to the ICML community. Also note that our best models are based on **pure Transformers** - which have been shown to be extremely scalable to large-scale inputs - so there is nothing inherent to our methodology that would prevent scaling up to larger systems or more samples.
> MPTS-52

Not applicable, as MPTS is meant for evaluating **structure prediction**, where TS = temporal split, which is different from our de novo generation task.

> unjustified claims of primacy - Flam-Shepherd

To the best of our knowledge, ADiT is the first **unified, jointly trained** generative model for periodic and non-periodic systems to demonstrate **transfer learning**. Flam-Shepherd trains 3 independent models on 3 different datasets with a different tokenisation strategy for each. They don't discuss how/whether their method can be applied for one unified model. Also, our results on MP20 are better than FlowLLM, another language model which outperforms Flam-Shepherd. We will discuss their paper in Related Work, as well as all other references you shared.

> latent diffusion for generating chemical structures has been explored prior

Our novelty is about **how** we used latent diffusion: for unification of system types (periodic and non-periodic systems together), as well as unification of multi-modal data (categorical, numerical, floating point). We think this is new - **nobody has used latent diffusion in this unified manner** - and will be of interest to the ICML community.
- [a] - Not directly related to 3D structure
- [b] - They technically do latent diffusion, but the latents are still multi-modal (four latent scalars & one latent vector) - thus GeoLDM still uses equivariant diffusion, which is slow, and GeoLDM is not applicable beyond small molecules. Additionally, there are **several discrepancies** between the implementation reported in the paper and released on GitHub (most concerning: https://github.com/MinkaiXu/GeoLDM/issues/6). ADiT’s latent diffusion formulation is unified and faster, as well as outperforms GeoLDM.
- [c] - Highly specific to proteins as latent representations are created by aggregating along the sequence (We will cite [a] and [c])

> high number of lambda coefficients, lack of intuition

The coefficients are needed for each of the different data types that constitute a 3D atomic system. The coefficients are **not dataset dependent**. The intuition for choosing them is simple: **“balance the relative magnitudes of the various losses”**. This is stated in the paper and is standard practice for supervised learning tasks like the VAE reconstruction used here.

> conditional downstream tasks

The focus of this paper is on introducing the architecture and demonstrating transfer learning. Efforts for extensions to conditional tasks such as protein-pocket conditioned molecule generation (i.e. SBDD) as well as property conditioned crystal generation (similar to MatterGen) are underway as independent papers. From a methods standpoint, adding conditioning is straightforward in Diffusion Transformers via classifier-free guidance tokens. We don’t feel that property conditioned small molecule generation, as benchmarked in papers like EDM/GeoLDM, is very practically relevant. We feel that it is unlikely that generating molecules with specific HOMO-LUMO gaps or dipole moments is relevant for drug discovery.

> Suggestions

We will implement both.

---

Rebuttal Comment 1.1:
Comment: I’m grateful to the authors for their responses to my questions and, in particular, for conducting the experiments on GEOM. However, I remain convinced that, despite the promising direction, the current form of the approach has limited applicability. The authors have only demonstrated results in the unconditional setting, whereas in practice, it is often crucial to control the properties of generated molecules and materials. Including one or two case studies on conditional generation would significantly strengthen the quality of the paper. Therefore, I have decided to keep my score unchanged.
---

Reply to Comment 1.1.1:
Comment: We've worked hard to address as many reviewer concerns in the rebuttal period as we possibly could.

As we mentioned, this is an architecture focussed paper. It introduces a new architecture which can jointly generate periodic and non-periodic chemical systems. This is new and not possible with previous techniques. And we think this will be well received by the ICML and broader machine learning community.

We've not made any claims about conditional design and practical applicability. We acknowledged this as a limitation in the Discussion section. And we want to address this limitation in future work. Adding more conditional experiments would not change the main contributions and claims of this paper (which is about introducing a new architecture, not its practical application yet).

You already stated that the paper is well written, our claims are well supported, the methods and evaluations are appropriate, experiments are convincing, and the approach is overall promising. To us, as authors, it sounds like you support this paper based on the claims it makes, and do want to accept it - even though you think adding conditional experiments will further strengthen the paper. Please would you consider changing your vote from borderline to an accept if that is the case?

---

P.S. Here are notable examples of papers on new molecular or crystal generative modelling architectures that were published at top conferences **without** showing experiments on conditional tasks:
- FlowLLM - NeurIPS 2024 - https://arxiv.org/abs/2410.23405
- Symphony - ICLR 2024 - https://arxiv.org/abs/2311.16199
- MiDi - ECML 2023 - https://arxiv.org/abs/2302.09048
- DiffCSP - NeurIPS 2023 - https://arxiv.org/abs/2309.04475

Each of these papers brings new architectural ideas to the table and catalyzes further research into both the architectures as well as conditioning them for downstream practical applications.
Summary: The paper introduces a unified framework called All-atom Diffusion Transformer (ADiT) for generating periodic (crystals) and non-periodic (molecules) atomic systems. ADiT employs a two-stage approach: a Variational Autoencoder that maps atomic systems into a shared latent space, and a Diffusion Transformer that generates latent samples, which are decoded into valid structures. Evaluations on QM9 and MP20 benchmarks showed the effectiveness of the proposed approach.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: The authors built their approach on several works in the literature, like the Variational Autoencoder, Diffusion Transformer, and classifier-free guidance technique.
Essential References Not Discussed: No.
Other Strengths And Weaknesses:
**Strengths**:
- Unified framework: I think the idea of using a joint generative model for molecules and crystals is novel, avoiding domain-specific methods.
- Empirical performance: The paper showed strong empirical results outperforming domain-specific methods for crystal and molecule generation.
**Weaknesses**:
- Dataset limitations: The method is trained on relatively small datasets (QM9 ~ 130K molecules; MP20 ~ 45K crystals). As already mentioned by the authors, scalability on larger datasets is unexplored.
- Theoretical gaps: The paper lacks a theoretical justification for why a shared latent space works. More discussion on the latent space properties would be helpful.
Other Comments Or Suggestions: No.
Questions For Authors:
- Can you explain in more detail why joint training on MOFs reduces validity?
- The comparison to the equivariant baseline focuses on speed, but are there trade-offs in terms of equivariance properties? For example, does this affect the model's ability to respect the physical symmetries of crystals/molecules?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal:
> trained on relatively small datasets - scalability on larger datasets

---
**[In our response to Reviewer WqZ3](https://openreview.net/forum?id=89QPmZjIhv&noteId=RrA8d9t9eq), we presented results for the larger GEOM dataset of small molecules with up to 180 atoms. ADiT continues to show strong results, generating physically realistic molecules and outperforming baselines.**
---

We agree that systems from QM9 and MP20 are small, but they are complex and challenging. Doing well on them requires models to learn something about the underlying physics of atomic interactions, and cannot be solved by simple approaches. Let’s take MP20 as an example:
- MP20 is restricted to up to 20 atoms in a unit cell because this is practically most relevant and largely covers the Materials Project distribution of known materials. Each atom's local environment is complex due to periodicity.
- The S.U.N. rate metric for crystals/MP20 is a very challenging and practically relevant metric for materials discovery. ADiT has **improved S.U.N. rate from 4-5% of previous works up to 6.5%** (see Table 4). ADiT significantly outperformed the recently published MatterGen (Nature, 2024). Previous works showing smaller improvements have been published at top conferences (most recent example: FlowLLM at NeurIPS, 2024). There is a large community of researchers who care about the S.U.N. rate on MP20.

Ultimately, the goal of this paper was to introduce the **first unified architecture** for generative modelling of 3D atomic systems, and to demonstrate **transfer learning** across periodic crystals and non-periodic molecular systems. We believe this will make a novel and valuable contribution to the ICML community. Also note that our best models are based on **pure Transformers** - which have been shown to be extremely scalable to large-scale inputs - so there is nothing inherent to our methodology that would prevent scaling up to larger systems or more samples.
> lack of theoretical justification for why a shared latent space works

There are strong theoretical justifications for our choices:
- **The underlying physics of atomic interactions is the same across all 3D atomic systems.** The interatomic distance between a carbon atom double bonded to an oxygen atom will be the same, whether they are part of a molecule or a crystal structure. Thus, a shared/unified latent space enables the model to learn shared principles of interatomic interactions.
- Using a shared latent space **unifies system modalities** (both periodic and non-periodic atomic systems embedded together), as well as **unifies multi-modal data** (categorical, numerical, floating point embedded together). This makes it very easy to train a simple Gaussian diffusion model on the latent representations, instead of more complex equivariant diffusion formulations.
- The advantage of jointly embedding periodic & non-periodic systems for ML force fields, aka. *universal interatomic potentials*, was shown by recent works like JMP and MACE. More broadly, all of AI research is moving towards **joint training of large models on multiple datasets**, i.e. learning unified latent representations.

> trade-offs of equivariance? ability to respect physical symmetries?

When developing ADiT, we ablated the impact of enforcing equivariance on the model -- **see Appendix D**. We tried to be pragmatic about whether or not to use equivariance. The non-equivariant version **improved generative performance** and **physical realism** of the crystals/molecules, as judged by evaluation metrics explicitly focussed on physics-based tests such as the PoseBusters suite for molecules, and the DFT-based rate for crystals. Overall, we and others have found that equivariance is not strictly necessary for training a good generative model.
Perhaps it can even be an advantage: if you start with two rotated versions of a noisy Gaussian and non-equivariant denoising leads to two different (but equally valid) molecules/crystals, that gives you a more diverse generative model without sacrificing performance!

> joint training on MOFs reduces validity?

Simple answer: training did not converge during the time period that we were allocated compute resources on a large enough GPU cluster for this experiment. Note that joint training led to very strong validity for both crystals (91%) and molecules (95%), while reducing validity for MOFs. When looking at training dynamics (right side plot in Table 3), the model first learns crystals, then molecules, and finally starts learning MOFs later in training. We believe training for longer is likely to close the gap and potentially improve beyond the MOF-only variant of ADiT.

Ultimately, our goal with including preliminary MOF results was to encourage the community to work further on MOFs - hybrid organic-inorganic materials for carbon capture and sustainability. We will release **open source code** and **datasets** which make it easy for others to build upon our initial experiments.
Summary: The authors note that current generative models for atomic systems - such as molecules and crystals - are fragmented and overly specialized to the specific type of system they model. They propose the All-atom Diffusion Transformer (ADiT), which uses a two-step latent diffusion framework in which, first, mixed categorical and numerical data describing an atomic system are embedded into a shared latent space by training a VAE, and next, a diffusion transformer model is trained to model the latent distribution and generate new samples. Experimental results show that the proposed method effectively transfers knowledge between various types of atomic systems, outperforms baselines in various metrics, and scales in performance with increasing model capacity.

#### Update after rebuttal period
My review score remains the same.

Claims And Evidence: The claims are supported.
Methods And Evaluation Criteria: The methods are valid.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments appear well-designed, and the analysis sound.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: Generative models for atomic systems have thus far been very specialized for the specific application areas they model. This paper represents a unification of atomic system models based on the idea that the underlying physics in these systems should hold constant and generalize between seemingly different domains.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
### Strengths
- The paper is well-written, and explains just enough of the physics and chemistry background needed for someone who is unfamiliar.
- The implementation details are very thorough, and clearly describe the steps needed to reproduce the proposed method.
- The proposed method is a promising approach not just for atomic systems, but also for other scientific domains that may have related data from heterogeneous sources with mixed categorical and numerical traits.
- Empirical results are extensive and back up the major claims made by the authors.
### Weaknesses
- The authors refer to this point in their discussion, but it bears repeating that transfer of performance between the two related atomic systems that the authors mainly test does not necessarily indicate that the model has learned the underlying physics or that it can generalize well to very large-scale data.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thank you for your positive comments and excellent summary of the work! Thanks for appreciating that the latent diffusion idea can be further applicable to multi-modality data in other scientific domains, too.

> does not necessarily indicate that the model has learned the underlying physics

ADiT obtains very strong results on both MP20 and QM9, exceeding the current state-of-the-art. These systems are small but they are complex and challenging. **Doing well on them requires models to learn something about the underlying physics of atomic interactions,** and cannot be solved by simple co-occurrence/statistical approaches. Let’s take MP20 as an example:
- MP20 is restricted to up to 20 atoms in a unit cell because this is practically most relevant and largely covers the Materials Project distribution of known materials. Each atom's local environment is highly complex due to infinite periodic tiling of a unit cell.
- The S.U.N. rate metric for crystals/MP20 is a very challenging and practically relevant metric for materials discovery. ADiT has **improved S.U.N. rate from 4-5% of previous works up to 6.5%** (see Table 4). ADiT significantly outperformed the recently published MatterGen (Nature, 2024). Previous works showing smaller improvements have been published at top conferences (most recent example: FlowLLM at NeurIPS, 2024). There is a large community of researchers who care about the S.U.N. rate on MP20.

> or that it can generalize well to very large-scale data

We agree, but the goal of this paper was to introduce the **first unified architecture** for generative modelling of 3D atomic systems, and to demonstrate **transfer learning** across periodic crystals and non-periodic molecular systems. We believe this will make a novel and valuable contribution to the ICML community. Below, we present results for the larger GEOM dataset of small molecules with up to 180 atoms.
ADiT continues to show strong results, generating physically realistic molecules and outperforming baselines. Also note that our best models are based on **pure Transformers** - which have been shown to be extremely scalable to large-scale inputs - so there is nothing inherent to our methodology that would prevent scaling up to larger systems or more samples.

---
# **TO ALL REVIEWERS - PLEASE READ BELOW - RESULTS ON GEOM**
---

To demonstrate the scalability of ADiT to larger systems, we have run experiments on GEOM, as suggested by Reviewers KRy3 and 7DhL. GEOM includes 430,000 unique molecules and consists of larger systems than QM9, up to 180 atoms. Our experiments and evaluation follow the EDM/MiDi paper, with additional PoseBusters physics-based tests. Model setup and hyperparameters used are exactly as described in the paper.

| Metric | EDM | MiDi | ADiT-S |
| --- | --- | --- | --- |
| Validity | 87.8 | 77.8 | 93.4 |
| Uniqueness | 99.9 | 100.0 | 100.0 |
| Atoms connected | 41.4 | 90.0 | 94.9 |
| Bond angles | 91.8 | - | 96.2 |
| Bond lengths | 90.2 | - | 96.8 |
| Ring flat | 99.0 | - | 100.0 |
| Double bond flat | 98.2 | - | 99.9 |
| Internal energy | 89.2 | - | 94.2 |
| No steric clash | 85.2 | - | 93.5 |

ADiT-S (32M) outperforms or matches EDM and MiDi across all metrics. ADiT generates physically realistic molecules based on all PoseBusters criteria. Notably, molecules generated by ADiT are significantly more likely than EDM/MiDi to be connected, as measured by the 'Atoms connected' score (measures % of molecules where there exists a path along bonds between any two atoms). We have not trained the larger ADiT-B and ADiT-L models due to resource constraints in the short rebuttal period. We expect larger models to further improve performance based on the scaling trends we have seen on QM9 and MP20.

---

Rebuttal Comment 1.1:
Comment: Many thanks to the authors for addressing my main concern regarding the generalization of the proposed model.
After reading your explanation and some of the related works you linked in your rebuttal, I now agree that your model appears to learn the underlying physics across heterogeneous atomic systems.

---

Reply to Comment 1.1.1:
Comment: Thank you for championing this paper.
Flexibility-conditioned protein structure design with flow matching
Accept (poster)
Summary: This paper introduces a novel framework for flexibility-conditioned protein structure design. The authors present BackFlip, an SE(3)-equivariant neural network that predicts per-residue flexibility from protein backbone structures. Using BackFlip, the authors propose GAFL-Flex, a flow matching-based generative model that conditions protein backbone generation on per-residue flexibility. The authors evaluate both BackFlip and GAFL-Flex using molecular dynamics (MD), demonstrating that BackFlip accurately predicts structural flexibility, while GAFL-Flex successfully generates protein structures that exhibit dynamic behavior in accordance with target flexibility profiles.
Claims And Evidence:
1. BackFlip can predict per-residue flexibility from backbone structures without sequence information.
While BackFlip demonstrates strong performance, its evaluation is limited in scope. The authors primarily compare it against pLDDT, a confidence score from AlphaFold, and B-factors, which measure atomic displacement in X-ray crystallography and reflect thermal motion rather than intrinsic flexibility. Although pLDDT and B-factors correlate with flexibility, they are not standard flexibility benchmarks, raising concerns about the completeness of the validation. A more comprehensive evaluation could include alternative methods, such as using ProteinMPNN to generate multiple sequences for a given backbone, refolding them, and analyzing structural variability to see if BackFlip correctly predicts flexibility across different sequence designs. Additionally, the authors could apply BackFlip to NMR datasets, where proteins naturally exist in multiple conformations, providing a real-world test of its ability to capture experimentally observed flexibility.
2. GAFL-Flex can generate protein backbones that match desired flexibility profiles.
While the results indicate that GAFL-Flex can generate structures with desired flexibility, the paper does not compare its performance against established baseline models in protein structure generation. RFdiffusion [1], FrameDiff [2], and other generative models have been shown to produce diverse and designable protein backbones, yet the paper does not examine whether GAFL-Flex actually generates more flexible structures than these existing approaches. A direct comparison with these models would provide stronger evidence that GAFL-Flex improves flexibility-aware design rather than simply being another generative method. The authors could conduct MD simulations on protein backbones generated by different models to assess whether GAFL-Flex produces statistically more flexible structures. Additionally, analyzing flexibility variations across multiple backbones from various generative models would further support the claim that GAFL-Flex introduces a meaningful improvement in flexibility control.
[1] Watson, Joseph L., et al. "De novo design of protein structure and function with RFdiffusion." Nature 620.7976 (2023): 1089-1100.
[2] Yim, Jason, et al. "SE(3) diffusion model with application to protein backbone generation." arXiv preprint arXiv:2302.02277 (2023).
Methods And Evaluation Criteria: The proposed methods and evaluation criteria do not fully align with the standard benchmarks used in protein flexibility prediction and generative protein structure modeling. While the authors demonstrate that BackFlip accurately predicts flexibility and that GAFL-Flex generates structures conditioned on flexibility, the evaluation remains incomplete due to the choice of baseline comparisons and the absence of key benchmark models. For BackFlip, the authors validate its flexibility predictions using MD-derived local RMSF and compare it against AlphaFold’s pLDDT and B-factors from experimental data.
However, while these metrics correlate with protein flexibility, they are not standard benchmarks for flexibility prediction. Although it is challenging to establish a proper baseline for a novel task—predicting flexibility solely from backbone structure—the authors should still include alternative baseline methods to provide a fair comparison. Additionally, a better validation would involve using NMR-derived ensembles, which capture experimentally observed conformational heterogeneity. NMR data provide multiple conformations of the same protein in solution, offering a real-world test of whether BackFlip can accurately predict backbone flexibility across different dynamic states. Similarly, GAFL-Flex lacks a direct baseline comparison for protein structure generation. While the method is novel, structure generation is a well-established task, and comparing GAFL-Flex against existing models such as RFdiffusion and FrameFlow would provide stronger evidence of its effectiveness. The authors could also perform partial structure generation on NMR datasets, where they condition GAFL-Flex on rigid regions and generate flexible segments. This would allow them to test whether flexibility-conditioned generation outperforms naïve generative models in producing realistic flexible regions observed in experimental structures. Theoretical Claims: The paper does not focus on formal theoretical claims or proofs but builds on established principles from flow matching, geometric deep learning, and SE(3)-equivariant networks. Experimental Designs Or Analyses: As mentioned above, while the experiments are well-structured, they lack comprehensive baseline comparisons and real-world flexibility benchmarks. The evaluation of BackFlip is limited to comparisons with pLDDT and B-factors, which are correlated with flexibility but not standard benchmarks. A stronger validation would include NMR-derived ensembles to assess flexibility across experimentally observed conformations. 
Similarly, GAFL-Flex is not compared against existing generative models like RFdiffusion or FrameFlow, making it unclear whether flexibility conditioning provides a meaningful advantage. Expanding the evaluation with NMR datasets and alternative generative models would significantly strengthen the findings. Supplementary Material: The authors provide additional experimental results, detailed dataset descriptions, and implementation details in the supplementary material. Relation To Broader Scientific Literature: This paper presents a well-motivated approach to rapidly evaluating protein dynamics and generating flexible protein structures, which addresses an important gap in the field of protein design. Current state-of-the-art protein design methods, such as those used for binder design, have achieved significant success when targeting static structures. However, for flexible targets, the success rate is notably lower, as existing generative models do not explicitly account for conformational dynamics. By introducing flexibility-aware protein structure generation, this work has the potential to impact areas such as antibody design, enzyme engineering, and intrinsically disordered protein modeling. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: The paper tackles an important and underexplored challenge in protein design—integrating flexibility into protein structure generation. This is crucial for designing proteins that target dynamic systems, such as antibodies and enzymes, where flexibility is essential for function. Weaknesses: The study lacks baseline comparisons, making it unclear how GAFL-Flex compares to existing structure generation models. BackFlip's flexibility predictions are only evaluated against pLDDT and B-factors, which are not standard flexibility benchmarks. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive and helpful review. We are happy the reviewer finds the problem of flexibility-conditioned design important. Due to character constraints, we try to focus on the most important concerns below.
- We note that we retrained GAFL-Flex on the larger PDB dataset and observe enhanced performance at the original benchmark (answer to 3jah, section 'Evaluation of the model trained on the PDB').
- We also note that we introduce a novel BackFlip-guidance approach for conditional generation that we evaluate on longer proteins (answer to 3jah, sections 'General response' and 'Experiments on longer proteins').

i. **BackFlip can predict per-residue flexibility without sequence information**

**General comment on the scope of BackFlip as MD-flexibility-emulator**
We first want to emphasize that BackFlip serves as a speed-up for MD simulations, which are prohibitively expensive for dataset annotations, and can thus be seen as an MD-flexibility-emulator. Thus, by learning to predict flexibility derived from MD simulations, BackFlip inherits the biases of and correlations between ground-truth MD-derived flexibility and any experiment-derived flexibility.

**Scope of the comparison of BackFlip is limited to B-factors and pLDDT**
Crystallographic B-factors and pLDDT are widely regarded as metrics of flexibility, and we regard showing that they do not correlate well to MD as important. We include another recent flexibility prediction model, FlexPert [1], which combines embeddings from a large protein language model (pLM) with structural features, as a baseline. Similar to BackFlip, FlexPert is trained on the ATLAS dataset. We retrained BackFlip on the global RMSF metric and the dataset split used in FlexPert.
**For the results, we refer to Tables R3, R4 in the response to reviewer VDWk.**
BackFlip without sequence embedding is better or on-par with the sequence-informed, larger model FlexPert on the ATLAS test set and outperforms it on the set of MD simulations of 100 de novo proteins. We conclude that BackFlip is the current SOTA model for predicting MD-derived flexibility.

**Comparison of BackFlip to NMR-derived flexibility**
We thank the reviewer for their suggestion. As discussed above, BackFlip solves the task of predicting MD-derived flexibility, and predicting NMR-derived flexibility is out of scope. We expect BackFlip to inherit the upsides and downsides of MD, i.e. also the correlation of MD- to NMR-derived flexibility. However, we did apply BackFlip to 500 randomly selected NMR ensembles from the BMRB database [2] and compared it to other baselines, where it achieves SOTA performance as well and outperforms FlexPert, in particular. Table R5 summarizes the results.

We find it important to note that, while NMR structures are commonly deposited as conformational ensembles, these often represent averages over heterogeneous states over broad time spans and cannot be seen as true statistical ensembles [3]. Even the (very) recent micro-millisecond dynamics predictor Dyna-1 [4], trained directly on spin relaxation times observed in NMR, correlates very poorly with the flexibility of PDB-deposited NMR ensembles (Table R5, Dyna-1).

**Table R5: Performance of BackFlip on RMSF prediction of 500 randomly selected NMR ensembles.**

|**Model**|**Global *r* (↑)**|**Global MAE (↓)**|
|-|-|-|
|Negative pLDDT|0.45| -|
|FlexPert|0.43|2.1|
|Dyna-1| 0.16| -|
|**BackFlip**|**0.65**|**2.0**|

ii. **Comparison of GAFL-Flex with SOTA structure generative models**

We followed the reviewer's suggestion to compare GAFL-Flex with other baselines. We chose RFdiffusion and FoldFlow2 [5] as baselines, as these two models demonstrate SOTA performance for unconditional generation.
We retrained GAFL-Flex on a BackFlip-annotated PDB dataset. We compare this new model (GAFL-Flex\*) with the baselines and find that conditional sampling results in proteins that, on average, more closely follow the desired flexibility profile and that are significantly more flexible than proteins sampled unconditionally using the baselines. The results and more details on the experiments can be found in the answer to 3jah, section 'Experiments on longer proteins'. --- **References** [1] Kouba, Petr, et al. "Learning to engineer protein flexibility." arXiv:2412.18275 (2024). [2] Hoch, Jeffrey C., et al. "Biological magnetic resonance data bank." Nucleic acids research 51.D1 (2023): D368-D376. [3] Bonomi, Massimiliano, et al. "Principles of protein structural ensemble determination." Current opinion in structural biology 42 (2017). [4] Wayment-Steele, Hannah K., et al. "Learning millisecond protein dynamics from what is missing in NMR spectra." bioRxiv (2025). [5] Huguet, Guillaume, et al. "Sequence-augmented SE(3)-flow matching for conditional protein backbone generation." arXiv:2405.20313 (2024).
Summary: This paper takes a step towards overcoming a key limitation of current structure generative models by proposing a framework to condition structure generation on flexibility, which is crucial for key functionalities such as catalysis or molecular recognition. The authors first introduce BackFlip, an equivariant neural network for predicting per-residue flexibility from an input backbone structure. Relying on BackFlip, the authors propose GAFL-Flex, an SE(3)-equivariant conditional flow matching model that solves the inverse problem, that is, generating backbones that display a target flexibility profile. Claims And Evidence: This paper introduces a generative model for protein structure design conditioned on per-residue flexibilities using flow matching. The experiments demonstrate that flexibility-conditioning leads to the generation of diverse and novel backbones that indeed display the respective target flexibility profile in Molecular Dynamics simulations. Methods And Evaluation Criteria: The proposed flexibility-conditioning framework relies on the structure-based flexibility prediction model BackFlip, enabling large-scale flexibility annotation of proteins, a novel flexibility auxiliary loss, and a flexibility screening procedure to find protein backbones that best display a flexibility profile of interest. This paper also proposes a generalization of RMSF, Local RMSF, in which the fluctuations of a residue are measured with respect to its local surroundings instead of the whole protein. The paper evaluates flexibility with MD. Theoretical Claims: There are no significant theoretical claims. Experimental Designs Or Analyses: The experiments demonstrate that flexibility-conditioning leads to the generation of diverse and novel backbones that indeed display the respective target flexibility profile in Molecular Dynamics simulations.
Supplementary Material: The authors provide more training, architecture, and experimental details. Relation To Broader Scientific Literature: This is a direct application of Riemannian Flow Matching to protein design. Essential References Not Discussed: The most related work, Dynamics-Informed Protein Design with Structure Conditioning (ICLR 2024), needs a discussion, and a comparison would be appreciated. Other Strengths And Weaknesses: Conditioning on protein dynamics is a promising direction; further elaborations on this (e.g., a case study of an application) would be appreciated. Other Comments Or Suggestions: More advanced conditioning techniques for steering (protein) dynamics would be appreciated. Questions For Authors: Could these per-residue dynamics reflect specific patterns of protein motifs, as in Dynamics-Informed Protein Design with Structure Conditioning (ICLR 2024)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time invested in reading the paper and for their constructive feedback. We are happy that the reviewer agrees with our claims and evidence and appreciates the methods and evaluation criteria and our experimental design. Below we discuss the comments line by line. - We note that we retrained GAFL-Flex on the larger PDB dataset and observe enhanced performance at the original benchmark (answer to 3jah, section 'Evaluation of the model trained on the PDB'). - We also note that we introduce a novel BackFlip-guidance approach for conditional generation that we evaluate on longer proteins (answer to 3jah, sections 'General response' and 'Experiments on longer proteins'). i. **Dynamics-Informed Protein Design with Structure Conditioning** We thank the reviewer for pointing us toward this work [1]. We have added a discussion in the related work section: '(U. Komorowska et al., 2024) propose conditioning a pre-trained diffusion model on normal modes—i.e., Hessians of the potential energy predicted by a force field that approximate local movements around an equilibrium state. Conditioning is achieved via inference-time guidance, using gradients computed from an analytical normal mode loss.' While the approach may appear similar to the flexibility-conditioning proposed in our paper, both the task and the method are fundamentally different: 1. We propose training a *conditional* model: akin to classifier-free guidance [2], we pass the condition as input and learn a conditional flow. In contrast, [1] uses an *unconditional* diffusion model guided during inference by gradients from an analytical scoring function, following a classifier guidance scheme. 2. The conditioned quantity differs: we use flexibility derived from Molecular Dynamics (MD) simulations, while [1] relies on Normal Mode Analysis (NMA)—a simulation-free approximation that assumes a harmonic potential and is only valid near equilibrium. 3.
The guidance approach in [1] requires an *analytical* condition to compute gradients. In contrast, our flexibility-conditioning approach can handle *non-analytical* conditions (e.g., MD-derived flexibility) because we train the model to approximate a conditional flow, rather than apply analytical gradient guidance. Since the models condition on different quantities (analytical NMA vs. MD-derived flexibility), they solve different tasks and are not directly comparable. While GAFL-Flex can accept any flexibility profile (e.g., derived via NMA), making it a general method for dynamics-informed design, the approach in [1] is limited by its reliance on the harmonic assumption in NMA, which may not always hold. **Q: Could these per-residue dynamics reflect specific patterns of protein motifs?** Since the lowest non-trivial modes in [1] are computed for the entire protein structure, and motifs are subsequently sampled as sub-regions of these structures, it can indeed be expected that MD-derived flexibilities will correlate with the amplitudes of the lowest non-trivial modes of oscillation from NMA. ii. **More advanced conditioning techniques for steering dynamics** We retrained GAFL-Flex on a BackFlip-annotated PDB dataset of 22,977 monomeric structures and developed BackFlip guidance in analogy to classifier guidance. With BackFlip-guidance, we achieve the same flexibility similarity as with the conditional model on the four new flexibility profiles for longer proteins; however, it is around 20 percent slower and requires more memory. Training on the larger dataset improves the conditioning performance and yields more novel backbones. For more details we refer to the answer to 3jah, sections 'General response' and 'Experiments on longer proteins'. iii. **Further elaborations on flexibility-conditioned design** We also regard this as a promising direction and plan to use the framework introduced in this paper in more application-related work in the future.
For instance, structural flexibility is believed to be important for the functionality of protein assemblies [3], for enzymatic catalysis [4] and for binding events of flexible receptor and respective ligand-proteins [5]. **Note: We also evaluated another recent flexibility prediction model and observe that BackFlip outperforms it (see answer VDWk, Table R3, R4, Section i).** --- **References** [1] U. Komorowska et al. Dynamics-Informed Protein Design with Structure Conditioning. ICLR 2024 [2] Ho & Salimans, 2022 – Classifier-Free Guidance [3] Khmelinskaia, Alena, et al. "Local structural flexibility drives oligomorphism in computationally designed protein assemblies." Nature Structural & Molecular Biology (2025): 1-11. [4] Matsuo, Takashi, et al. "Global structural flexibility of metalloproteins regulates reactivity..." Chemistry–A European Journal 24.11 (2018): 2767-2775. [5] Craveur, Pierrick, et al. "Protein flexibility in the light of structural alphabets." Frontiers in molecular biosciences 2 (2015): 20.
Summary: This paper introduces a novel framework for de novo protein design that explicitly incorporates residue-level flexibility—a dynamic property critical for biological function—into the generative process. Current methods prioritize static structural features (e.g., motifs, symmetry), limiting their ability to engineer proteins for dynamic processes like catalysis. The authors address this gap with two key innovations: BackFlip, an SE(3)-equivariant network predicting flexibility from backbone structures, and GAFL-Flex, a conditional flow model that inversely generates backbones conditioned on target flexibility profiles. Extensive experiments show that BackFlip can accurately predict the flexibility with a high Pearson correlation coefficient on unseen data and GAFL-Flex can generate plausible proteins conditioned on target flexibility profile. The paper is overall well-written. Claims And Evidence: This paper claims that "Back-Flip is a backbone flexibility predictor which is entirely independent of sequence information". However, flexibility is intrinsically coupled with sequence as side-chain type and conformation are keys to protein thermostability and flexibility. Therefore, my concern is that flexibility can not be accurately predicted without any sequence information as an input. I kindly suggest authors can elaborate on this point. Methods And Evaluation Criteria: The proposed method of training a backbone flexibility predictor followed by conditioned generation model makes sense to me. Theoretical Claims: This paper does not claim theoretical contributions, so there are no theoretical claims. I have checked the formulas used in this paper. They are correct and understandable. Experimental Designs Or Analyses: The experimental design raises concerns regarding the role of side chains in evaluating protein dynamics. 
While protein flexibility depends on both backbone and side-chain interactions, the proposed approach relies solely on backbone MD simulations to validate the results in Table 2, which introduces uncertainty. A direct comparison of flexibility between backbone-only and full-atom MD (e.g., using the ATLAS dataset) is necessary to assess whether backbone-only simulations sufficiently capture dynamic behavior or if side-chain contributions are critical. Additionally, there is a potential inconsistency in dataset usage—the BackFlip model is trained on the ATLAS dataset, which includes side-chain information, yet the generative module is validated using backbone-only MD simulations. If BackFlip is trained properly, it predicts all-atom flexibility, which should not be directly compared to de novo protein backbone flexibility. Addressing these discrepancies through additional controlled experiments would strengthen the validity of the approach. Supplementary Material: I reviewed the appendix and have no further questions. Relation To Broader Scientific Literature: Traditional protein modeling tools such as Rosetta MotifGraft (Alford et al., 2017) with BackRub sampling (Lauck et al., 2010), Modeller (Šali & Blundell, 1993), and LoopGrafter (Planas-Iglesias et al., 2022) enable flexibility engineering in specific regions through structure-guided iterative sampling using empirical energy functions, but their applicability is constrained by reliance on predefined input structures and high computational costs. While hybrid approaches combining classical tools and deep learning have successfully designed allosteric proteins (Pillai et al., 2024), pH-responsive complexes (Shen et al., 2024), and fold-switching systems (Guo et al., 2024), current deep learning methods primarily focus on structure prediction rather than generative target structure design and lack explicit flexibility conditioning. In concurrent work, Kouba et al.
(2024) propose FlexPert-3D, a sequence-based pipeline using molecular dynamics-derived flexibility to fine-tune ProteinMPNN (Dauparas et al., 2022) via evolutionary priors from protein language models. However, their framework operates solely in sequence space and depends on evolutionary information, fundamentally contrasting with the paper's structure-centric generative approach that directly encodes flexibility into structural design. Essential References Not Discussed: I did not find any. Other Strengths And Weaknesses: No other strengths and weaknesses. Other Comments Or Suggestions: I have no further questions for the authors. I will keep a positive rating if my concern can be well addressed. Questions For Authors: ## Update after rebuttal I thank the authors for their detailed responses and I would like to keep my positive rating to accept this paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We cordially thank the reviewer for their time invested in reading the paper and for their constructive review. We discuss questions and concerns below line by line. - We note that we retrained GAFL-Flex on the larger PDB dataset and observe enhanced performance at the original benchmark (answer to 3jah, section 'Evaluation of the model trained on the PDB'). - We also note that we introduce a novel BackFlip-guidance approach for conditional generation that we evaluate on longer proteins (answer to 3jah, sections 'General response' and 'Experiments on longer proteins'). i. **Backbone flexibility prediction in absence of sequence information** We compared the performance of BackFlip to FlexPert [1], another backbone flexibility predictor that combines embeddings from a large protein language model (pLM) with structural features. Similar to BackFlip, FlexPert is trained on the ATLAS dataset. We retrained BackFlip on the global RMSF metric and dataset split used in FlexPert, both without and with one-hot encoded sequence embeddings, to assess their effect. Table R3 reports inference results on the ATLAS test set. Without any sequence embedding, BackFlip outperforms FlexPert on both global and per-target Pearson correlation (as reported in the FlexPert paper), while performing slightly worse in terms of MAE. Indeed, one-hot encoded sequence information improves the performance of BackFlip, but it is clearly possible to estimate flexibility from the structure alone. Since we utilize BackFlip in the auxiliary loss during training of GAFL-Flex and in the screening of de novo generated backbones, where the sequence is not defined, we find it advantageous that BackFlip demonstrates such strong performance without requiring any sequence as input. We regard this as an important contribution since it contrasts with the paradigm of relying solely on evolutionary or sequence information for predicting dynamical properties.
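To make the two correlation metrics concrete, here is a minimal sketch of one common way to compute them (the exact aggregation used in the tables is an assumption): global *r* pools residues from all proteins into one correlation, while per-target *r* averages per-protein correlations.

```python
import numpy as np

def pearson_r(a, b):
    # Plain Pearson correlation between two 1-D arrays.
    return np.corrcoef(a, b)[0, 1]

def global_and_per_target_r(preds, targets):
    # preds/targets: lists of per-residue flexibility arrays, one per protein.
    # Global r: correlation over all residues pooled across proteins.
    # Per-target r: mean of the per-protein correlations.
    global_r = pearson_r(np.concatenate(preds), np.concatenate(targets))
    per_target_r = np.mean([pearson_r(p, t) for p, t in zip(preds, targets)])
    return global_r, per_target_r
```

Note that the two can diverge: a model can rank residues well within each protein (high per-target *r*) while misjudging the overall flexibility scale across proteins (lower global *r*).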
We also compared BackFlip and FlexPert on a set of de novo proteins (from Table 1 in the main text of the submission), with results shown in Table R4. BackFlip significantly outperforms FlexPert on all metrics. We think the reason for this might be that pLM embeddings are not informative for these proteins, as there is no evolutionary information available. These results support our hypothesis that the geometry of a backbone and the secondary structure composition of a well-folded, globular protein are sufficient to infer short-range nanosecond backbone flexibility without sequence information. However, we agree with the reviewer that sequence information is more important when it comes to long-range protein flexibility. We expect that BackFlip will generalize worse to highly dynamical systems, such as intrinsically disordered proteins, where dynamics is in the range of micro- or milliseconds and is dictated to a large extent by the sequence [2]. ii. **On MD simulations of generated de novo backbones** We apologize for any lack of clarity on the MD procedure. We will clarify this in the final version. We conduct all MD simulations following the protocol published in the ATLAS paper, that is, all-atom simulations for 300 ns conducted as 3 replicas with explicit TIP3P water as solvent. The input structure to the MD simulation is the ESMFold-refolded protein (the sequence is designed by ProteinMPNN, see Section A.7 in the paper) with the lowest scRMSD to the backbone generated by GAFL-Flex. We report all metrics (r and MAE) on Cα RMSF profiles, but these are extracted from the all-atom trajectories. Indeed, BackFlip is trained to predict Cα RMSF from the ATLAS dataset structures, which are all-atom. Accordingly, all evaluations report Cα RMSF metrics. BackFlip only sees [N, CA, C] backbone atoms during training, thus it only implicitly predicts effects of side-chain atom interactions, like AlphaFold2.
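For reference, the per-residue Cα RMSF underlying all of these metrics can be sketched as follows (a minimal illustration assuming the trajectory frames are already aligned to a common reference; not the exact ATLAS tooling):

```python
import numpy as np

def ca_rmsf(coords):
    # coords: (T, N, 3) array of Cα positions over T aligned trajectory frames.
    # RMSF_i = sqrt( mean_t || x_i(t) - <x_i> ||^2 ), one value per residue.
    mean_pos = coords.mean(axis=0)                    # (N, 3) time-averaged positions
    sq_dev = ((coords - mean_pos) ** 2).sum(axis=-1)  # (T, N) squared deviations
    return np.sqrt(sq_dev.mean(axis=0))               # (N,) per-residue RMSF
```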
**Table R3: Performance of BackFlip retrained with or without sequence embedding on the global RMSF metric on the ATLAS dataset split of FlexPert [1].** | Model|Global *r* (↑)|MAE [Å] (↓)|Per-target *r* (↑)|Per-target MAE (↓)| |-|-|-|-|-| |BackFlip *|0.78|0.61|0.88|0.73| |BackFlip †|0.81|0.56|0.88|0.72| |FlexPert ‡|0.74|0.44|0.83|0.47| * No sequence embedding † One-hot sequence embedding ‡ pLM sequence embedding **Table R4: Performance of BackFlip without sequence embedding, retrained on the global RMSF metric on the ATLAS dataset split of FlexPert [1], on MD simulations of 100 de novo proteins sampled with RFdiffusion or FrameFlow.** |Model| Global *r* (↑)|MAE [Å] (↓)|Per-target *r* (↑)|Per-target MAE [Å] (↓)| |-|-|-|-|-| |BackFlip *|0.63|0.49|0.85|0.48| |FlexPert ‡|0.51|0.62|0.63|0.60| * No sequence embedding ‡ pLM sequence embedding --- **References** [1] Kouba, Petr, et al. "Learning to engineer protein flexibility." arXiv preprint arXiv:2412.18275 (2024). [2] Radivojac, Predrag, et al. "Protein flexibility and intrinsic disorder." Protein Science 13.1 (2004): 71-80. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed responses and I would like to keep my positive rating to accept this paper. --- Reply to Comment 1.1.1: Comment: We are glad the reviewer has a positive view of the paper and recommends its acceptance. We are happy to answer any further open questions!
Summary: This paper proposes a framework for conditional structure generation conditioned on desired flexibility, a key characteristic in catalytic interactions and molecular recognition. The authors develop BackFlip, a backbone flexibility predictor that can be used for large-scale flexibility annotation, and combine it with a Geometric Algebra Flow Matching model to achieve flexibility-conditioned generation. They show that GAFL-Flex can generate novel protein backbones with the desired flexibility, verified by Molecular Dynamics (MD) simulations. Claims And Evidence: The claim that BackFlip can accurately predict the flexibility profile (as measured by MD RMSF) is supported by experimental results on the ATLAS test dataset, although the true flexibility might differ from the MD simulation. The flexibility-conditioned generation performance is supported by some evidence from small proteins, but its capability to generalize to bigger proteins (that are likely to contain more flexible regions) is undetermined. Methods And Evaluation Criteria: The authors propose to use local RMSF instead of global RMSF to quantify flexibility and generate 10 conformations from MD to produce ground-truth training data. The conditional flow matching model is trained using techniques similar to classifier-free guidance that balance conditional and unconditional training, and uses an auxiliary loss that penalizes deviating (predicted) flexibility of the generated structure. The methodology overall makes sense, although the training data seems to focus only on short proteins of length 60-128 (3673 structures) and might be too small to reliably learn complex geometries of protein backbones. Since the BackFlip score is differentiable, another baseline they should compare is directly using BackFlip for test-time guidance of pre-trained large-scale backbone generative models such as RFdiffusion.
Theoretical Claims: The paper is mostly method development and empirical evaluation and therefore does not have theoretical results to assess. Experimental Designs Or Analyses: As mentioned in the Methods and Evaluation Criteria, the experimental designs are valid. It would be nice if a comparison with strong unconditional models, as well as classifier-guidance results, could be added (e.g., using DPS with the BackFlip prediction as objective). In addition, experiments on longer proteins (>128 aa) are needed to prove that the model can be scaled to more complex protein backbones (likely to contain more coils). Supplementary Material: I reviewed the supplementary material. Relation To Broader Scientific Literature: The paper is one of the first methods this reviewer knows of for flexibility-conditioned design. The task can be generally related to molecular dynamics generation of proteins, such as MD trajectory learning, conformation sampling, and structure prediction. Essential References Not Discussed: The references are covered quite comprehensively. Other Strengths And Weaknesses: It seems the comparison on pLDDT and RMSD is omitted in the evaluation, as the authors thought that more flexible proteins will lead to worse RMSD/pLDDT. However, since both metrics are still important in current de novo design, how should we measure the designability/foldability of flexibility-conditioned designs if these two metrics lose their meaning? Other Comments Or Suggestions: No other comments. Questions For Authors: Does the model scale to larger data and longer proteins? Do we know if the model is not solely recognizing alpha-helices and beta-sheets? Are there quantitative measures of how well the model differentiates flexibilities in non-loop regions? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the time they invested in reading the paper and their helpful suggestions! i. **General response** We retrained GAFL-Flex on the BackFlip-annotated PDB dataset of 22977 monomeric protein structures, filtered by (i) length between 60 and 512 residues and (ii) absence of breaks in the structure, and conducted a series of new experiments based on the review. The resulting model generates protein backbones that better match the desired flexibility profiles, demonstrates improved novelty, and succeeds at designing larger proteins for challenging flexibility profiles. We developed a BackFlip-guidance (BG) approach similar to classifier guidance that performs well but is slower than the original conditional model. Due to time constraints, we applied it with our unconditional model but will extend it to RFdiffusion in the final submission. We discuss the new experiments below. **Evaluation of the model trained on the PDB** We re-ran the experiment from the main text of our submission reported in Table 2 and found that GAFL-Flex trained on the PDB performs better than GAFL-Flex trained on SCOPe in terms of correlation and yields more novel backbones (Table R1). **Table R1: Performance of GAFL-Flex trained on the PDB (GAFL-Flex\*) at the benchmark reported in Table 2 of the original submission.** ||r (↑)|MAE [Å] (↓)|Novelty (↓)| |-|-|-|-| |**10 existing profiles**|||| |GAFL-Flex\*|**0.52 (0.00)**|0.20 (0.00)|**0.64 (0.02)**| |GAFL-Flex|0.45 (0.00)|**0.17 (0.00)**|0.68 (0.02)| |GAFL-uncond.|0.20 (0.00)|0.20 (0.00)|0.73 (0.02)| |SCOPe proteins|0.19 (0.00)|0.25 (0.00)|1.0 (-)| |**10 arbitrary profiles**|||| |GAFL-Flex\*|**0.56 (0.00)**|0.44 (0.00)|**0.64 (0.02)**| |GAFL-Flex|0.47 (0.00)|**0.43 (0.00)**|0.68 (0.03)| |GAFL-uncond.|0.09 (0.00)|0.48 (0.00)|0.72 (0.02)| |SCOPe proteins|0.09 (0.01)|0.48 (0.00)|1.0 (-)| ii.
**Experiments on longer proteins** We defined 4 new target flexibility profiles suitable for longer proteins (given by 3 to 5 rectangular peaks with widths of 10 to 20 residues and amplitudes of 1 to 2.5 Å). We sampled 100 protein backbones for each length in [200, 250, 300]. We also included the SOTA unconditional structure generative models RFdiffusion and FoldFlow2 [1]. Table R2 reports the results of the experiment. Conditional generation yields protein backbones that closely follow the respective target profiles. Unconditional sampling, on the contrary, produces samples that do not reflect desired profiles. Similar to the experiment from our submission, we observe that conditioning improves novelty of sampled backbones. Average flexibility is elevated compared to unconditional generation. Remarkably, BackFlip-guidance (GAFL-BG\*) achieves the same performance as the conditional model (GAFL-Flex\*). GAFL-BG is about 20% slower. **Table R2: Performance of GAFL-Flex for longer proteins. We evaluated 4 new target flexibility profiles. Sampled lengths L ∈ [200, 250, 300], each 100 backbones. Metrics are evaluated using BackFlip on all samples.** | Method| r (↑)| MAE [Å] (↓)| Novelty (↓)| Avg. Flex [Å]| |-|-|-|-|-| | **4 arbitrary profiles**||||| | GAFL-Flex\*|**0.757**|**0.268**|0.569| 0.730| | GAFL-BG\*|**0.757**|**0.267**|0.566|0.730| | GAFL-uncond.\*|-0.03|0.33| 0.67|0.56| | FoldFlow2|0.00|0.33|**0.48**|0.46| | RFdiffusion|-0.01|0.32| 0.58|0.55| iii. **Designability of flexibility-conditioned generated backbones** We observe that the more novel and flexible even natural protein backbones are, the higher the self-consistency RMSD (scRMSD) computed in the refolding pipeline becomes (Figure 4 in the main text of the submission). Designability depends on the target flexibility profile (Figure A.8), which is not surprising as flexibility introduces fundamental uncertainty to scRMSD. 
We regard our finding as an important contribution to rethinking the designability definition for flexible protein design. We think approaches for alternative definitions could include making the refolding-RMSD threshold dependent on novelty and flexibility, or only considering stiff parts for calculating scRMSD. If one relied on the well-established cut-offs for designability (e.g. scRMSD < 2.0, pLDDT > 70), one would inevitably introduce a bias towards selecting rigid proteins. It is important to note that these cutoff values were conceived with a static protein representation in mind. iv. **Distinction between flexibilities in structured regions** Due to time constraints during the rebuttal phase, we were not able to conduct an experiment in this regard. However, we believe this is a great suggestion, and we think we can answer this question by computing the Pearson correlation and MAE to the ground-truth MD flexibility while masking loops during the computation of the metrics. We will include this analysis in the final version of the paper. **Note: We also evaluated another recent flexibility prediction model and observe that BackFlip outperforms it (see answer VDWk, Table R3, R4, Section i).**
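The BackFlip-guidance idea discussed above can be sketched in the spirit of classifier guidance (a toy illustration only; `velocity` and `flex_grad` are hypothetical callables standing in for the learned flow field and the gradient of a flexibility-matching loss, not the actual implementation): each Euler step of the flow integration is combined with a gradient step that pulls the predicted flexibility toward the target.

```python
import numpy as np

def guided_flow_sample(x0, velocity, flex_grad, n_steps=100, scale=0.1):
    # Toy sketch of guidance: integrate the learned flow with Euler steps,
    # nudging each step down the gradient of a flexibility-matching loss
    # L(x) = 0.5 * ||flex(x) - target||^2, supplied here by `flex_grad`.
    x = np.asarray(x0, dtype=float).copy()
    dt = 1.0 / n_steps
    for step in range(n_steps):
        t = step * dt
        x = x + dt * velocity(x, t) - scale * flex_grad(x)
    return x
```

With zero flow velocity and an identity "flexibility predictor", the guidance term alone drives the sample toward the target profile, which is the behavior the gradient correction is meant to add on top of the flow.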
Fully Heteroscedastic Count Regression with Deep Double Poisson Networks
Accept (poster)
Summary: The paper introduces the Deep Double Poisson Network (DDPN), a novel neural network model for count regression that provides accurate input-conditional uncertainty quantification. The main conceptual idea is that DDPN extends deep ensembles to count regression by using the Double Poisson distribution, which allows for heteroscedastic variance in count data. This flexibility enables improved estimation of aleatoric uncertainty (inherent variability in data) and, consequently, better epistemic uncertainty (model uncertainty) estimation. The paper proves that DDPN exhibits properties similar to heteroscedastic Gaussian models. The authors introduce a loss modification to control the learnable loss attenuation mechanism, allowing for more precise uncertainty calibration. Experiments on diverse datasets show that DDPN outperforms existing count regression baselines in accuracy, calibration, and out-of-distribution detection. ## update after rebuttal I acknowledge that the authors have improved the proofs (my point 1.) but do not provide a strong argument for point 2. I will increase my score to 2, but still think the work does not reach the acceptance bar. Claims And Evidence: Overall, the claims made in the submission are supported by clear evidence, but I found that some theoretical claims are not supported by convincing mathematical arguments (see below). Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the theoretical claims and found significant flaws. Some may be fixable, but others appear more critical. - One issue concerns Proposition 3.3: the stated convergence to 0 does not seem valid under the proposed definition of the learnable attenuation loss function (Definition 3.2). A monotonically increasing function does not necessarily tend to infinity, just as a monotonically decreasing function does not necessarily tend to 0—both cases can have a constant asymptote. This flaw undermines the argument in the proof of Proposition 3.3. 
- Another issue concerns the derivation of the DDPN objective. The derivation in Appendix A.1 omits the normalizing constant of the Double Poisson (DP) distribution, denoted $c(\mu, \gamma)$ at the beginning of Section 3. Since $c(\mu, \gamma)$ is not constant with respect to $(\mu, \gamma)$, the stated objective does not properly learn the parameters of a DDPN. - Additionally, most equations in Appendix A.1 fail to hold because the maxima and minima do not align due to omitted constants between successive lines. Using $\arg\max$ and $\arg\min$ would provide more precision. Also, the parameterization of the network $f_\Theta(x_i)$ is inconsistent: while a log link function is used in the main text, this log transformation is omitted in the first line of Appendix A.1. - Finally, it is unclear why the DDPN objective loss in Equation (1) of Section 3.1 does not include a summation over all training examples $i = 1, \dots, N$. Experimental Designs Or Analyses: Owing to the previous flaws, I did not check the soundness of the experimental analyses. Supplementary Material: I did thoroughly review Sections A and C of the supplementary material. Relation To Broader Scientific Literature: The paper builds on prior work in deep ensembles for uncertainty estimation in regression, extending heteroscedastic modeling from Gaussian outputs to count data using the Double Poisson distribution, addressing a gap in discrete uncertainty modeling and improving epistemic uncertainty quantification. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: I identify a weakness in Proposition 3.1, which claims that DDPN regressors are fully heteroscedastic. In reality, the proposition is derived under the moment approximations proposed by Efron (1986), where the first two moments are approximated as $\mu$ and $\mu / \gamma$. Given this approximation, full heteroscedasticity is unsurprising.
It would be more meaningful to establish this result without relying on Efron’s approximation. I suspect that the exact Double Poisson distribution is inherently fully heteroscedastic, and proving this directly would be a more valuable contribution. Other Comments Or Suggestions: - The second line of the displayed equation in app C.6 is hard to parse: adding parentheses to the rhs 2nd sum would help. - There are a few (math) typos that could easily be fixed. Questions For Authors: Given the identified flaws and weaknesses, I would likely revise my evaluation if the authors: 1. Provide convincing proofs that correct the identified issues. 2. Establish a stronger version of full heteroscedasticity for Double Poisson regressors without relying on moment approximations. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback.

## Monotonicity vs tend to infinity

We agree with this remark and propose to change Def. 3.2 to:

> where $\lim_{\hat{\phi} \to \infty} d(\hat{\phi}) = \infty$ and $\lim_{\hat{\phi} \to \infty} a(\hat{\phi}) = 0$

Fortunately, the proof in Appendix C.3 holds under this new definition, as these limits are in fact used to show that the residual error tends towards 0 (lines 1108-1109). With this change, our proof of Prop. 3.4 also remains valid (since $\log x$ tends to infinity and $1/x$ tends to zero).

## Normalizing constant

To simplify our objective, we followed previous work (see below) and assumed $c(\mu, \gamma) = 1$. This facilitates easier differentiation. We will make this more clear in App. A.1.

- See Fact 1 (Eqn. 2.10) of [Efron, B. "Double exponential families and their use in generalized linear regression." Journal of the American Statistical Association. 1986]
- Follow-up work has also set $c(\mu, \gamma)=1$ [Chow, N., and David Steenhard. 2009. "A flexible count data regression model using SAS Proc nlmixed"]

## Max/Min in Appendix A.1

We propose two changes to the derivation of our objective to increase clarity and align with convention:

1. Replace max/min with argmax/argmin
2. Clarify that we maximize over $N$ training examples and derive the per-instance loss defined in Equation 1 (see discussion below)

The NLL becomes: $\arg\min_{\mu_i, \gamma_i} [ -\sum_{i=1}^N \log p(y_i | \mu_i, \gamma_i)]$.

## Lack of Log link in Appendix A.1

In App. A.1 we derive the training objective. In contrast, Section 3 describes how it can be used to train DDPN. The connection between the two is trivial (exponentiate the log link to evaluate Eq. 1) and is stated in Footnote 1 (pg. 4).

## Lack of Summation in Equation 1

Eq. 1 expresses the loss for a single training example, $x_i$.
We state in lines 183-184 that the loss is:

> averaged across all prediction / target tuples…in the dataset

To be more explicit, we propose to change $\mathcal{L}$ to $\mathcal{L}_i$.

## Prop. 3.1: DDPN regressors and full heteroscedasticity

In line with prior work, we use Efron's approximations for the first two moments in the proof of Prop. 3.1. We propose to revise this proposition: *With mild assumptions*, DDPN regressors are fully heteroscedastic (where we assume that Efron's approximations hold). How good are Efron's approximations? We introduce the concept of moment-deviation functions (MDFs) to assess this theoretically: Let $Q$ be a family of distributions parametrized by $\psi\in\mathbb{R}^d$. Suppose we are given $\hat{\psi_n}:\mathbb{R}^n\to\mathbb{R}^d$, which outputs $\hat{\psi_n}(\boldsymbol{\mu})$ s.t. the first $n$ moments of $Z\sim Q_{\hat{\psi_n}(\boldsymbol{\mu})}$ are nearly equal to target moments $\boldsymbol{\mu}=(\mu_1,...,\mu_n)$. Then for any pair $(Q,\hat{\psi_n})$, $\{\varepsilon_i:\mathbb{R}^n\rightarrow\mathbb{R}\}$ for $i=1,...,n$ are moment-deviation functions if, for any valid $\boldsymbol{\mu}$ and $Z\sim Q_{\hat{\psi_n}(\boldsymbol{\mu})}$, we have $|\langle Z^i\rangle -\mu_i|\leq\varepsilon_i(\boldsymbol{\mu})$ for all $1\leq i\leq n$. We focus on $n=2$ (error for the mean/variance). If we can pick $\hat{\psi}$ s.t. $\varepsilon_1,\varepsilon_2$ are small, $Q$ is flexible, as there are parameters that can roughly achieve any mean and variance. In the Gaussian case ($\mathcal{N},\mathbb{I}$), we have $\varepsilon_1 =\varepsilon_2=0$.

### Proposition

Let $DP$ denote the Double Poisson family. Set $\hat{\psi_2}(\mu_0, \sigma_0^2) = (\mu_0,\frac{\mu_0}{\sigma_0^2})$.
Letting $\gamma_0=\frac{\mu_0}{\sigma_0^2}$, the MDFs for $(DP,\hat{\psi_2})$ are: \begin{align*} \varepsilon_1(\mu_0,\sigma_0^2) &= \left|\frac{\sum_{y=0}^{\infty}s(\mu_0, \gamma_0, y)(y - \mu_0)}{\sum_{y=0}^{\infty}s(\mu_0,\gamma_0,y)}\right| \\\\ \varepsilon_2(\mu_0, \sigma_0^2)&=\left|\frac{d(\mu_0,\gamma_0)\gamma_0^{\frac{1}{2}}\sum_{y=0}^{\infty}s(\mu_0,\gamma_0,y)-\gamma_0(\sum_{y=0}^{\infty}s(\mu_0, \gamma_0,y)(y-\mu_0))^2}{\gamma_0(\sum_{y=0}^{\infty}s(\mu_0,\gamma_0,y))^2}\right| \end{align*} where: $h(z)=\frac{e^{-z} z^z}{z!}, r(\mu,\gamma,z)=\gamma(z-\mu +z\log\mu -z\log z), s(\mu,\gamma,z)=h(z)\exp(r(\mu,\gamma,z)),$ and $d(\mu,\gamma)=\gamma^{-1/2}\left[\sum_{y=0}^{\infty}s(\mu, \gamma, y)(\gamma(y-\mu)^2-y)+\sum_{y=0}^{\infty}s(\mu,\gamma,y)(y-\mu)\right]$. If desired, we can provide the proof of this proposition in the follow-up response. We plot the error incurred via Efron’s estimates on a grid of target means and variances, using 100th partial sums (https://anonymous.4open.science/r/ddpn-651F/deep_uncertainty/figures/artifacts/epsilon_1.png). To see epsilon_2, change the filepath to epsilon_2.png. Except for the case of small μ, high σ², the error is essentially zero. Thus, in most settings we can treat DDPN as fully heteroscedastic. Empirically, DDPN produces flexible, well-fit distributions (Fig. 3/4, Table 2).

---

Rebuttal Comment 1.1: Comment:

### Monotonicity vs tend to infinity

I agree that the proposed change to Def. 3.2 makes the proof in Appendix C.3 possible.

### Normalizing constant

The authors replied:

> assumed $c(\mu, \gamma) = 1$.

I do not think this assumption was made clear anywhere in the submitted paper; it could only be discovered by checking the earlier work by Efron.

### Max/Min in Appendix A.1

> We propose two changes to the derivation of our objective to increase clarity and align with convention:

Replacing max/min with argmax/argmin is not just a question of clarity or convention: the proof is simply wrong without it.
### Lack of Log link in Appendix A.1 I understand that the connection with and without log link is trivial. But as I noticed, the parameterization of the network $f_\Theta(x_i)$ is inconsistent between main text and supplementary. ### Lack of Summation in Equation 1 I agree that changing $\mathcal{L}$ to $\mathcal{L}_i$ helps with clarity. ### Prop. 3.1: DDPN regressors and full heteroscedasticity Introducing moment-deviation functions to assess the error of Efron’s approximation is a nice idea. However, I still believe that demonstrating full heteroscedasticity for the *exact* Double Poisson distribution should be the primary goal. After all, full heteroscedasticity simply means that, for any fixed mean, the variance can span the entire interval $(0, \infty)$. I genuinely think this is an attainable property for the *exact* Double Poisson distribution. ## Score revision I acknowledge that the authors have improved the proofs (my point 1.) but do not provide a strong argument for point 2. I will increase my score to 2, but still think the work does not reach the acceptance bar. --- Reply to Comment 1.1.1: Comment: We appreciate the additional thoughtful comments from the reviewer. As discussed, we will make all of the proposed improvements to the proofs (point 1) in the camera ready manuscript. With respect to point 2, we will include a discussion of moment-deviation functions (and the quality of Efron's approximations) in the appendix.
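As a numerical companion to the moment-deviation discussion in the thread above, the sketch below evaluates the exact mean and variance of a Double Poisson distribution via truncated sums (in the spirit of the rebuttal's partial-sum computation) and compares them against Efron's approximations $\mu$ and $\mu/\gamma$. It assumes Efron's unnormalized density with $c(\mu, \gamma) = 1$, as adopted in the rebuttal; the function names are illustrative, not from the paper's code:

```python
import math

def dp_logpmf_unnorm(y, mu, gamma):
    # Unnormalized Double Poisson log-density (Efron 1986), taking c(mu, gamma) = 1:
    # f(y) ∝ gamma^(1/2) * exp(-gamma*mu) * (exp(-y) * y^y / y!) * (e*mu / y)^(gamma*y)
    base = 0.5 * math.log(gamma) - gamma * mu
    if y == 0:
        return base  # the (e*mu/y)^(gamma*y) factor tends to 1 as y -> 0
    return (base - y + y * math.log(y) - math.lgamma(y + 1)
            + gamma * y * (1.0 + math.log(mu) - math.log(y)))

def dp_moments(mu, gamma, y_max=500):
    # Mean/variance of the truncated, renormalized Double Poisson.
    weights = [math.exp(dp_logpmf_unnorm(y, mu, gamma)) for y in range(y_max + 1)]
    z = sum(weights)
    mean = sum(y * w for y, w in enumerate(weights)) / z
    var = sum((y - mean) ** 2 * w for y, w in enumerate(weights)) / z
    return mean, var

mean, var = dp_moments(mu=10.0, gamma=2.0)
# Efron's approximations predict mean ≈ mu = 10 and variance ≈ mu/gamma = 5;
# for this (mu, sigma^2) regime the exact truncated moments land close to both.
```

For small $\mu$ and large $\sigma^2$ the discrepancy grows, matching the error plots referenced in the rebuttal.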
Summary: The work introduces deep double Poisson networks for the count regression problem. The proposed approach can quantify both aleatoric and epistemic uncertainty with an ensemble. Also, the double Poisson network allows unrestricted variance to model discrete count data, and shows robustness to outliers. The authors carry out experiments where the approach performs better in terms of calibration, out-of-distribution detection, and accuracy. Claims And Evidence: - The authors claim that the proposed deep double Poisson network can perform well on the count regression task. The claims are empirically validated through experiments on benchmark datasets and some baseline methods. Methods And Evaluation Criteria: The methods and evaluation criteria look reasonable. The authors consider the discrete count regression problem, and look at different metrics, evaluating the method along different dimensions including OOD detection, calibration, and accuracy. Theoretical Claims: - The authors present some theoretical claims, but these seem to be derived from standard double Poisson networks. Experimental Designs Or Analyses: The experimental design looks sound. Supplementary Material: The supplementary material shows the loss function for the double Poisson networks, the role of the hyperparameter beta, experimental details, evaluation metrics used, training details, some additional details, and results. I briefly went through the supplementary materials. Relation To Broader Scientific Literature: The work is likely to have limited impact within a narrow subfield of the scientific community.
Essential References Not Discussed: The work Natural Posterior Network: Deep Bayesian Uncertainty for Exponential Family Distributions [Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, Stephan Günnemann] introduces a general evidential approach that can be effective for a wide range of problems including uncertainty-aware classification, uncertainty-aware regression, and uncertainty-aware count regression. Discussion and comparison with this work could be beneficial. Other Strengths And Weaknesses: Strengths - The paper is easy to follow and I found it to be a pleasant read. - The work introduces the deep double Poisson network, which seems to be effective in discrete count regression based on the experimental results. The authors show the robustness to outliers of the proposed approach. Also, the approach performs well in terms of accuracy, calibration, and OOD detection. Weaknesses: - Beyond ensembling, there are other approaches (e.g., Bayesian neural networks, evidential approaches, dropout-based uncertainties). While the authors compare with standard Poisson, NB, and Gaussian-based heteroscedastic networks, a thorough comparison could help better illustrate the effectiveness of the approach. Other Comments Or Suggestions: - Figures, labels and captions can be better presented. Many labels/legend texts are too small and not clearly legible. Also, the captions are too long and could be shortened for a better read. Questions For Authors: - How does the work perform compared to natural posterior networks and the presented baselines on the benchmark Bike Sharing dataset (Natural Posterior Network: Deep Bayesian Uncertainty for Exponential Family Distributions [Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, Stephan Günnemann])? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful comments and helpful feedback.

## Comparison to the Natural Posterior Network

We followed the official repository to download the `bike-sharing` dataset file and pre-processed it exactly as in the paper the reviewer mentioned. For training, the paper reports a grid search over the learning-rate space `[1e−2, 5e−4]`; since the search step is not specified, we took log-scale steps as below: [0.01, 0.005623, 0.003162, 0.001778, 0.001, 0.000708, 0.0005], and found the best learning rate for our model’s configuration to be 0.003162. Following the exact same settings, we conducted 5 rounds of training and report the mean and standard deviation of our model’s RMSE; the results are:

| Method | RMSE |
|----------------|--------------|
| Dropout-N | 70.20 ± 1.30 |
| Ensemble-N | 48.02 ± 2.78 |
| EvReg-N | 49.58 ± 1.51 |
| NatPN-N | 49.85 ± 1.38 |
| Dropout-Poi | 66.57 ± 4.61 |
| Ensemble-Poi | 48.22 ± 2.06 |
| NatPN-Poi | 51.79 ± 0.78 |
| **DDPN (ours)**| **47.87 ± 0.42** |

## How does DDPN compare to other uncertainty methods?

In short, the objective function proposed in Equation 1 enables the network to capture aleatoric uncertainty over count data. We show how this can be combined with Deep Ensembles to better capture epistemic uncertainty. Table 2 shows this combination is effective. DDPN is presented in our paper in terms of maximum likelihood + ensembles for 1) simplicity, 2) effectiveness, and 3) likelihood of community adoption. DDPN can easily be combined with other UQ methods.

### Bayesian Neural Networks

One could put a prior over the weights of the network (ideally an isotropic Gaussian prior). Let $\theta$ denote the neural network parameters of $f_\Theta(x_i)$, and $\mathcal{D}$ denote the training dataset.
The log posterior is: $\log p(\theta | \mathcal{D}) = \log \frac{1}{Z} + \log p(\mathcal{D} | \theta) + \log p(\theta)$, where $\frac{1}{Z}$ is the normalizing partition function and is usually dropped during inference. Fortunately, the negative log likelihood, $-\log p(\mathcal{D} | \theta)$, is already defined in Equation 1 of our paper, and the log Gaussian prior is easy to compute, $\log p(\theta) = -\frac{1}{2} \log (2\pi \sigma_0^2) - \frac{1}{2\sigma_0^2} (\theta - \mu_0)^2$, where the prior hyperparameters are $\mu_0$ and $\sigma_0^2$. Then one could choose the preferred inference algorithm (MAP, HMC, SGLD, etc.) and estimate the posterior. Empirically, Bayesian neural networks can outperform Deep Ensembles when using high-fidelity inference algorithms such as Hamiltonian Monte Carlo. However, in practice MCMC-based inference is impractical, and DEs often outperform less exact inference methods (e.g., SGLD, Variational Inference) [Izmailov et al. What Are Bayesian Neural Network Posteriors Really Like? ICML’21]. We suspect the same results hold for DDPNs. Moreover, many recent works have directly connected Deep Ensembles to Bayesian inference by showing that DEs are a coarse approximation of the posterior, sampled at multiple modes with no local uncertainty [Fort et al. Deep Ensembles: A Loss Landscape Perspective. 2019][Wilson and Izmailov. Bayesian Deep Learning and a Probabilistic Perspective of Generalization. NeurIPS’20]. The effectiveness, simplicity and attractive theoretical properties of DEs motivated our decision to use them in our experiments.

### Evidential Approaches

DDPN could also easily be applied with evidential regression techniques [Amini et al. Deep Evidential Regression. NeurIPS’20]. Because DDPN uses a likelihood function during training, one would simply have to specify evidential priors over the parameters of the DDPN, $p(\mu)$ for the mean and $p(\gamma)$ for the inverse dispersion.
Then, the network would be trained to predict the parameters of the higher-order evidential distribution. ### Laplace Approximation (LA) This is perhaps the easiest since LA is typically a post-hoc technique. One would train the DDPN in the standard way described in our paper. Then, one could apply any of the post-hoc second-order, covariance approximation methods described in [Daxberger et al. Laplace Redux – Effortless Bayesian Deep Learning. NeurIPS’21] ### Dropout-based uncertainty Monte Carlo dropout estimates epistemic uncertainty by randomly dropping out weights at test time and approximates the Bayesian posterior. DDPN can easily be combined with MC dropout by 1) training the single-member DDPN to convergence, and 2) applying the MC dropout procedure with $T$ different forward passes through the dropped out model [Gal and Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML’16] However, [Lakshminarayanan et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. NeurIPS’17] show that MC dropout is clearly inferior to DEs. --- Rebuttal Comment 1.1: Comment: The authors have addressed my comment and I vote to keep my original score of weak accept.
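The MAP variant of the Bayesian neural network combination described in the rebuttal above reduces to adding an isotropic Gaussian log-prior penalty to the DDPN negative log-likelihood. A minimal sketch (the function name is illustrative, and the scalar `nll` stands in for the Double Poisson NLL from Equation 1 of the paper):

```python
import math

def neg_log_posterior(nll, theta, mu0=0.0, sigma0=1.0):
    # -log p(theta | D), up to the constant log Z:
    # DDPN negative log-likelihood minus the Gaussian log-prior of each weight.
    log_prior = sum(
        -0.5 * math.log(2 * math.pi * sigma0 ** 2)
        - (t - mu0) ** 2 / (2 * sigma0 ** 2)
        for t in theta
    )
    return nll - log_prior
```

With `mu0 = 0`, the quadratic part of the penalty is standard L2 weight decay with coefficient $1/(2\sigma_0^2)$, which is one reason MAP is the cheapest of the inference options listed above.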
Summary: In this paper, the authors consider the problem of estimating heteroscedastic uncertainty within the context of counting tasks, where the final outputs should represent positive integer numbers. While many successful solutions have been proposed for heteroscedastic uncertainty in general (real-valued) regression tasks, this is not the case for counting, as it requires a different parametrization of the output distribution. Earlier solutions for the counting setting, such as the Poisson distribution, suffer from restricted heteroscedastic variance, meaning that the parameter defining the mean value of the distribution significantly restricts the possible predicted variance. In this paper, the authors propose using the Double Poisson distribution for counting tasks and prove that it resolves the issue of the former method, namely, it has unrestricted variance. Additionally, they demonstrate that the proposed loss has the property of adaptive loss attenuation, which lowers the impact of outlier points during training. Finally, they propose a way to make this attenuation controllable through $\beta$-DDPN. The effectiveness of the proposed method is demonstrated on several datasets from different domains, showing that the proposed parametrization outperforms other parametrizations for counting tasks in terms of uncertainty quality (calibration) and accuracy. Claims And Evidence: The authors present their claims and contributions in a clear manner while also supporting them with both theoretical (for example, proving that the proposed DDPN regressors are fully heteroscedastic) and experimental results. Methods And Evaluation Criteria: The authors primarily compare against other loss-based heteroscedastic approaches on various counting tasks, clearly demonstrating the effectiveness of the proposed approaches in the discussed counting setups. 
Theoretical Claims: The main theoretical contributions of the paper could be considered Propositions 2.3 and 3.1, which prove that previously proposed parameterizations, such as Poisson and Negative Binomial, are not fully heteroscedastic, while DDPN is. The proposed proof appears to be correct and valid, with no observable issues. Experimental Designs Or Analyses: The experimental design and analysis are adequate and rigorous, with no issues. Supplementary Material: Additional experiments and the full proofs of the main propositions are provided in the supplementary material, both serving as a valuable extension of the results discussed in the main body. Relation To Broader Scientific Literature: The paper clearly positions itself within the existing literature by thoroughly discussing prior work on heteroscedastic uncertainty estimation, particularly in regression and counting tasks. It provides sufficient detail on previous parameterizations, such as Poisson and Negative Binomial, highlighting their limitations and demonstrating how the proposed DDPN framework overcomes these constraints. Essential References Not Discussed: No, there are no critical references missing in the paper. Other Strengths And Weaknesses: In short, the major Strengths of the paper are: * The paper clearly discusses the problem of heteroscedastic uncertainty estimation in the counting context, its existing problems, and solutions. * The proposed method is clear, easy to implement, and demonstrates good performance on a number of different tasks. * In contrast to many other uncertainty methods, it does not require a significant increase in computation/memory during inference while still producing high-quality uncertainty estimations. One of the potential Weaknesses: * The paper mostly compares the method against other loss-based uncertainty methods. Introducing additional uncertainty approaches, such as ensembling methods (Deep Ensembles, Batch Ensembles, etc.), could be beneficial. 
Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful comments. ## Introducing additional uncertainty approaches, such as ensembling methods (Deep Ensembles, Batch Ensembles, etc.), could be beneficial. An important aspect of our work is the interplay between DDPN and Deep Ensembles. We demonstrate this connection throughout the paper. In Section 3.4 we show how individual DDPNs can be combined as ensembles. Then, in the bottom half of Table 2 we present results with ensemble DDPNs. Table 2 demonstrates that learning ensembles of DDPNs improves accuracy **and** the quality of predictive uncertainty. We suspect that similar results will hold for BatchEnsembles, as this type of ensemble just changes how the weights of each member are derived: combining slow shared weights and fast, independent rank one weights. The principles we propose in this paper could easily be applied to other types of ensembles. We leave the validation of this hypothesis to future work. Finally, in the discussion with Reviewer 68NH, we discuss how DDPN can be combined with other UQ methods such as Bayesian Neural Nets, Evidential Methods, Laplace Approximation and Monte Carlo Dropout.
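As a small numerical companion to the ensemble discussion above: one standard way to aggregate $K$ independently trained members into a single predictive distribution is via mixture moments. The sketch below assumes, as a simplification, that each member's mean and variance follow Efron's approximations $\mu_k$ and $\mu_k/\gamma_k$; it is a generic mixture-moment computation, not necessarily the paper's exact aggregation rule:

```python
def ensemble_moments(params):
    # params: list of (mu_k, gamma_k) pairs, one per ensemble member.
    # Member mean ≈ mu_k, member variance ≈ mu_k / gamma_k (Efron's approximations);
    # mixture variance = average E[Y^2] over members minus squared mixture mean.
    k = len(params)
    mean = sum(mu for mu, _ in params) / k
    second_moment = sum(mu / g + mu * mu for mu, g in params) / k
    return mean, second_moment - mean * mean

m, v = ensemble_moments([(10.0, 2.0), (12.0, 3.0)])  # m = 11.0, v = 5.5
```

The mixture variance exceeds the average member variance whenever the member means disagree, which is how an ensemble surfaces epistemic uncertainty on top of each member's aleatoric estimate.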
Summary: The paper focuses on outputting distributions for non-negative integer predictions (i.e., count data). To do so, the paper has a model output the parameters for a Double Poisson distribution, which admits separate mean and variance parameterizations. Then the paper further utilizes ensembles to include epistemic model uncertainty. Results comparing against other predictive distributions, such as a typical Gaussian, show that the proposed predictive distribution outperforms on count tasks. Claims And Evidence: Yes, the paper is quite clear on the proposed approach, the different uncertainties involved (e.g., the difference between aleatoric and epistemic, which is often confused or muddled), and the experimental setup. The reasoning for using the Double Poisson instead of a regular Poisson is clear and backed by both theory and empirical evidence. Methods And Evaluation Criteria: Yes, the proposed Double Poisson makes sense for count data and for the goal of heteroscedastic variance (i.e., per-example variance controlled by the model). The evaluation datasets are fine and varied, and the baselines being other predictive distributions is appropriate and expected. Theoretical Claims: The formal definitions of the distributions (e.g., 2.1, 2.2, 3.2) and the propositions appear to be correct. In general, the approach is straightforward: have a model output the parameters of the Double Poisson distribution and then optimize that distribution's NLL; this is generally the same type of approach used in modern models, just with a different distributional family. Experimental Designs Or Analyses: Yes, the experimental setup, including datasets, metrics, and baselines, is appropriate and expected for the type of approach being proposed. E.g., accuracy and a proper scoring rule is an ideal combination (often, the latter is missed), and the baselines are appropriately other predictive distributions and do not conflate that with other modeling choices.
Supplementary Material: I skimmed the appendix, particularly for the base models used per task. I did not rigorously check the derivations in the appendix, e.g., A.1. Relation To Broader Scientific Literature: This fits well within the broader literature on uncertainty quantification. Accordingly, the appropriate papers are generally referenced. Most existing work has focused on categorical or continuous problems. This paper's novelty is in focusing on count data and using the less common Double Poisson for full heteroscedasticity. Essential References Not Discussed: No Other Strengths And Weaknesses: This paper's novelty is in focusing on count data and using the less common Double Poisson for full heteroscedasticity. It's not a surprising result and is fairly straightforward, but it's a useful paper to have in the literature. In particular, I'm pleased by the way in which the paper is very clear on the various uncertainty concepts and does not confuse or conflate any terms. I am assigning a "4: Accept" to mean that it's a solid paper; a 5 would be for an exceptionally exciting result, such as showing that this pushes on SoTA in some current frontier model. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the recognition of our work.
OmiAD: One-Step Adaptive Masked Diffusion Model for Multi-class Anomaly Detection via Adversarial Distillation
Accept (poster)
Summary: The paper introduces OmiAD, a one-step adaptive masked diffusion model for multi-class anomaly detection (MUAD). The authors propose the Adaptive Masking Diffusion Model (AMDM) to mitigate "identical shortcut" issues by dynamically adjusting mask ratios based on noise levels, and Adversarial Score Distillation (ASD) to compress multi-step diffusion processes into a single inference step.

## Update after rebuttal

The authors have addressed most of my concerns. I remain at "weak accept". It is worth pointing out that inference time is determined by the algorithm and GPU, and is independent of the dataset. I understand that you ran the speed test on several datasets, but you only need to report their average; reporting the inference time for each dataset is weird.

Claims And Evidence: CV methodology paper. No claims apart from the claim of superiority. Methods And Evaluation Criteria: The proposed method makes sense. Theoretical Claims: None. Experimental Designs Or Analyses: The experiments are sound. The setting follows the common MUAD setting. Supplementary Material: Yes Relation To Broader Scientific Literature: Related to diffusion-based UAD methods, such as DiAD. Essential References Not Discussed: Some recent MUAD methods are not compared, including MambaAD (NeurIPS 2024), ViTAD (arXiv 2024), ReContrast (NeurIPS 2023), etc. Other Strengths And Weaknesses: Strengths: 1. OmiAD reduces inference time drastically compared to other diffusion-based UAD methods, making it viable for real-time industrial applications. 2. Extensive experiments; works well across different datasets. Weaknesses: 1. Several recent Multi-class Unsupervised Anomaly Detection (MUAD) methods appear to be missing from the comparison, including MambaAD (NeurIPS 2024), ViTAD (arXiv 2024), and ReContrast (NeurIPS 2023). Their performances are relatively comparable to the results of this work. 2. In Table 2, why does SimpleNet spend 18 seconds on a batch? It seems extremely unreasonable.
SimpleNet only consists of a CNN backbone and a lightweight head; it should be faster than RD4AD. Furthermore, why do different datasets have different inference speeds? They should be the same. It is also not clear whether the time is for one image or one batch. 3. A figure depicting the overall method would be preferable; Figure 1 only presents ASD, which is just one part of the proposed method. 4. The results of single-class UAD should be presented as a reference. 5. This article is application-oriented and has a relatively narrow domain. I am not sure if it aligns with ICML's interests. Other Comments Or Suggestions: NO Questions For Authors: NO Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for the time you spent reviewing our paper in detail. Your insightful comments have been extremely helpful, and we deeply appreciate your input. **Q1: Comparison with Recent Multi-class UAD Methods** A1:We have compared our results on the MVTEC and VISA datasets with MambaAD, ViTAD, and ReContrast, as summarized in the table below. All results are sourced from the original publications. As shown, OmiAD consistently achieves superior performance across datasets. | Method|**MVTec**||**VisA**|| |-|-|-|-|-| ||Image AUROC|Pixel AUROC|Image AUROC|Pixel AUROC| |ReContrast|98.2|–|95.1|–| |MambaAD|98.6|**97.7**|94.3|98.5| |ViTAD|98.3|**97.7**|90.5|98.2| |**OURS**|**98.8**|**97.7**|**95.3**|**98.9**| **Q2: Speed of SimpleNet** A2: We revisited the official SimpleNet implementation and found that it upsamples both the anomaly score map and feature map to a shape of batch × 1536 × 288 × 288. The resulting tensor is then transferred from GPU to CPU and stored as a NumPy array (as seen in line 113 of the source code in common.py). This operation accounts for the majority of the inference time, resulting in significantly slower inference. Since our evaluation was based on the official public code, we observed these slower inference times accordingly. Based on your valuable suggestion, we optimized the implementation by removing the unnecessary upsampling step and the conversion to NumPy arrays, and streamlined the overall code. This optimization resulted in inference speeds that are consistent with the expected performance of the embedding-based structure and are lower than most methods reported in Table 2 of our paper. The updated inference times on the four datasets are shown below. As observed, the inference time remains longer than that of OmiAD, and thus does not affect the conclusions presented in our paper. 
|Dataset|MVTEC|VISA|MPDD|REALIAD| |-|-|-|-|-| |Inference Time (s)|0.0292|0.0296|0.0281|0.0277| **Q3: Overall method** A3: Should the paper be accepted, we will include a comprehensive illustration of the overall method in the camera-ready version, covering all components of OmiAD, including both AMDM and ASD modules. **Q4:Single-class UAD results** A4: We present the single-class anomaly detection performance on the MVTec and MPDD datasets as follows. These results will be detailed in the appendix for further reference. The single-class results on VisA and Real-IAD are currently under evaluation and will be included in the camera-ready version. **Single class UAD results for MVTec dataset:** |Metric|bottle|cable|capsule|hazelnut|metal_nut|pill|screw|toothbrush|transistor|zipper|carpet|grid|leather|tile|wood|**mean**| |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-| |Image AUROC|100.0|99.2|96.8|100.0|99.4|97.0|94.7|99.4|99.9|99.7|100.0|100.0|100.0|100.0|98.6|**99.0**| |Pixel AUROC|98.6|98.3|98.9|98.8|97.0|97.2|99.4|98.8|98.9|98.2|98.8|98.6|99.3|92.7|94.2|**97.8**| **Single-class UAD results for MPDD dataset:** |Metric|bracket_black|bracket_brown|bracket_white|connector|metal_plate|tubes|**mean**| |-|-|-|-|-|-|-|-| |Image AUROC|96.3|99.1|93.2|96.7|100.0|94.8|**96.7**| |Pixel AUROC|98.8|98.3|98.6|98.9|98.5|98.8|**98.7**| **Q5: Application-oriented focus and alignment with ICML's interests** A5: Thank you for your comment. While this work is focused on multi-class anomaly detection, we did not make innovations in areas like anomaly score calculation or anomaly classification, which are task-specific. Instead, our innovation lies in enhancing generative capabilities and inference speed, which are more general improvements that are highly relevant in the current machine learning field. In the context of anomaly detection, the proposed method improves detection performance by enhancing the completion of anomalies with the powerful generator. 
Moreover, the core contribution of our work is a novel one-step adaptive masked diffusion model, which integrates a random masking strategy into the one-step diffusion process to enhance generation quality and efficiency. This design offers broader applicability to tasks demanding efficient and high-quality generation. Furthermore, the rapid and efficient generation characteristics of the proposed method are closely aligned with the challenges currently faced in industrial anomaly detection, so we have applied and validated the capabilities of our method on this task. Our work aligns closely with ICML's focus on advancing machine learning techniques with practical applicability. Finally, anomaly detection is an important direction in machine learning, closely related to unsupervised and statistical machine learning, and many related papers have been published at ICML in recent years. **Due to character limitations, we would be happy to discuss and address any remaining questions during the discussion phase.**
Summary: To address the slow inference speed due to the iterative denoising nature of the diffusion model, this paper proposes a one-step masked diffusion model for multi-class anomaly detection, OmiAD, which uses a multi-step Adaptive Masked Diffusion Model (ADM) with compression using ASD. State-of-the-art performance is achieved on all seven metrics for four different datasets, along with a significant improvement in inference speed. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The authors provide extensive experimental results on four diverse datasets, demonstrating the effectiveness of OmiAD in terms of both anomaly detection and localization. The proposed method shows significant improvements over existing approaches, which strongly supports the claims of enhanced performance and efficiency. The ablation studies further validate the contributions of individual components like the adaptive masking strategy and the adversarial score distillation. Methods And Evaluation Criteria: The proposed OmiAD method makes sense for the problem of multi-class anomaly detection, where efficiency and accuracy are crucial. The use of a diffusion model with an adaptive masking strategy and adversarial score distillation is appropriate for addressing the challenges of shortcut learning and slow inference. The evaluation criteria, including AU-ROC, AP, F1 max, and AU-PRO, are standard and suitable for assessing the performance of anomaly detection methods. Theoretical Claims: The theoretical claims regarding the adaptive masking strategy and adversarial score distillation are plausible and well-founded. Experimental Designs Or Analyses: The experimental designs are sound and comprehensive. The authors conducted experiments on four diverse datasets, comparing OmiAD with several baseline methods. The results demonstrate the superiority of OmiAD in terms of both detection performance and inference speed. 
The ablation studies provide insights into the contributions of different components of the proposed method. The visualizations of reconstruction results and anomaly maps further support the effectiveness of OmiAD in localizing anomalies. Supplementary Material: The authors provide a number of visualisations and illustrations in the appendix to aid better understanding. Relation To Broader Scientific Literature: The key contributions of the paper are well-related to the broader scientific literature. The authors discuss how OmiAD advances the field of anomaly detection by addressing the limitations of existing diffusion-based methods. They connect their work to previous research on diffusion models, anomaly detection, and distillation techniques, showing how OmiAD builds upon and improves these approaches. The method's emphasis on reducing shortcut learning and improving inference efficiency aligns with current trends in developing more robust and practical machine learning models. Essential References Not Discussed: The paper cites relevant previous work on diffusion models, anomaly detection, and distillation methods. Other Strengths And Weaknesses: Strengths: The proposed OmiAD method effectively addresses the challenges of shortcut learning and slow inference in diffusion-based anomaly detection. The extensive experimental results on multiple datasets demonstrate the robustness and versatility of OmiAD. The adversarial score distillation approach offers a novel way to compress multi-step diffusion processes into a single step, significantly improving inference efficiency. Weaknesses: The assumption that the adaptive masking strategy will generalize well to all types of anomalies might be overly optimistic, as some anomalies could still rely on local features. The computational complexity of training the OmiAD model, especially with the adversarial distillation component, could be a limitation for practical applications with limited resources. 
Other Comments Or Suggestions: The paper is well-written and well-structured, making it easy to follow the methodology and experimental results. However, providing more details on the implementation of the adaptive masking strategy and the adversarial distillation process would be beneficial for readers interested in reproducing the results. Questions For Authors: 1. How would you address scenarios where anomalies are highly dependent on local features, potentially limiting the effectiveness of the adaptive masking strategy? 2. Could you provide more details on the computational overhead of the adversarial distillation process during training, and how it compares to the inference speed improvements? This is not my area of expertise, so I will be looking closely at other people's comments to adjust the score. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We truly value the time and effort you invested in carefully reading our paper. Your thoughtful and constructive feedback is highly appreciated. **Q1: Effectiveness of Adaptive Masking for Localized Anomalies** A1: Thank you for raising this important point. We agree that anomalies heavily dependent on local features may pose a challenge for global masking strategies. To address this, our adaptive masking mechanism varies the mask ratio with the diffusion step, which allows the model to preserve fine-grained local information at earlier stages and gradually increases difficulty during training. Furthermore, since our model operates on feature-level inputs extracted by EfficientNet, it benefits from strong localized representations. Additionally, the Real-IAD dataset contains many small-scale anomalies. OmiAD demonstrates excellent performance on this dataset, further validating its effectiveness in handling fine-grained, localized anomalies. **Q2: Computational overhead of the adversarial distillation** A2: The computational overhead of adversarial distillation during training is comparable to that of training the teacher model once. This efficiency stems from the fact that the One-step Generator $g_\theta$ directly inherits the architecture of the AMDM U-Net (as described in Algorithm 1), and the discriminator shares weights with the one-step generator's encoder, resulting in minimal additional cost. Moreover, the distillation phase trains the One-step Generator for only 150 epochs, substantially fewer than the 1000 epochs required for AMDM. At inference time, OmiAD reduces the number of sampling steps from 100 (in AMDM) to just one, leading to a significant speedup. In summary, adversarial distillation introduces minimal training overhead while enabling substantial inference acceleration through a drastic reduction in sampling steps.
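As a minimal illustration of the adaptive masking mechanism described in A1 (a mask ratio that stays small at early diffusion steps and grows with the timestep), the following sketch assumes a linear schedule from $p_{min}=0.1$ to $p_{max}=0.4$; both the schedule form and the helper names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mask_ratio(t, T=1000, p_min=0.1, p_max=0.4):
    """Assumed linear schedule: little masking at early (low-noise) steps,
    more masking as t approaches T, gradually increasing training difficulty."""
    return p_min + (p_max - p_min) * (t / T)

def apply_random_mask(x, t, T=1000, p_min=0.1, p_max=0.4, rng=None):
    """Zero out a random subset of feature positions with probability p(t)."""
    if rng is None:
        rng = np.random.default_rng(0)
    keep = rng.random(x.shape) >= mask_ratio(t, T, p_min, p_max)
    return x * keep, keep  # masked features and the boolean keep-mask
```

Under this sketch, a feature map at timestep t = 500 would have roughly 25% of its positions masked, so the model must rely on global context to reconstruct the missing regions.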
Summary: The paper proposes a new multi-class anomaly detection method named OmiAD based on diffusion models. First, a diffusion model is trained. Different from standard diffusion models, the images are additionally partially masked to force the model to learn the global context. The trained diffusion model is then used as the teacher to distill reconstruction knowledge (using a novel adversarial distillation technique) into a single-step generation model. It is unclear how the anomaly map is produced, but most likely through a reconstruction difference. OmiAD is then evaluated on four different datasets, achieving SOTA results. The proposed method is also exceptionally fast. ## update after rebuttal The authors have addressed most of my concerns so I have increased my score to 4. Claims And Evidence: The paper makes two claims: - Current diffusion-based anomaly detection models overlook the “identical shortcut” problem of reconstruction-based models, meaning the anomalous regions are not reconstructed to a normal look. - Current diffusion-based anomaly detection models require multiple steps, which leads to slow inference, which is suboptimal for real-world scenarios. The paper hypothesizes that a possible solution to the first problem is adaptive masking during the training of the diffusion model. The masking does indeed significantly improve the performance of their model, suggesting that this problem is at least partially mitigated. While I know from experience that this is a problem with diffusion-based models, some evidence verifying this would help. For example, by showing the distribution of anomaly scores for previous diffusion-based methods: some anomalous images should have low scores if that is the case. Something similar could be done to showcase that adaptive masking solves this. The second claim is addressed by distilling the diffusion model into a single-step model. The achieved inference time is low enough for real-world use, so this claim is verified. 
Methods And Evaluation Criteria: The method is mostly clear, with a few minor details missing: how the anomaly masks are produced (this is the biggest missing detail), what the architecture of the diffusion model is (UNet, DiT?), and what the input to the one-step generation model is - the EfficientNet features or the input image. The paper follows the standard evaluation protocol for multi-class anomaly detection methods and uses the standard evaluation metrics. The evaluation protocol, therefore, correctly evaluates anomaly detection performance. Theoretical Claims: The paper has derived the distillation loss. I have checked the derivation, and not all of the steps are entirely clear. For example, in Eq. 25, it is not clear how the final step is achieved and where the minus sign comes from, putting into question whether the loss has been correctly derived. Additionally, the paper's definitions of $\alpha$ and $\bar{\alpha}$ inside diffusion models are the opposite of those used in the original paper [1]. I would suggest using the original ones, as this decreases the clarity for readers who are well-versed in diffusion models. [1] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in neural information processing systems, 33, 6840-6851. Experimental Designs Or Analyses: The experiments verifying the performance in anomaly detection are thorough and adequate. The choice of compared methods is sufficient. However, I lack a more detailed ablation of some parameters of the model. More specifically, I am interested in how robust the model is to the choice of $p_{min}$, $p_{max}$ and $t_{init}$. I am especially interested in $t_{init}$, as it is set to 960 (at least this value appears in Algorithm 1), which seems incredibly high, meaning the input to the single-step generator is practically noise and very little signal. Additionally, the choice of the feature extractor (EfficientNet) was not ablated. 
Other parameters are sufficiently ablated. The inference speed reported for SimpleNet, however, looks suspiciously high. From my experience, the model is quite fast and should achieve a significantly lower inference time. Additionally, it would help to have the inference speed of the base diffusion model (AMDM) to see the speed improvement brought by distillation. Supplementary Material: The supplementary material contains the algorithm for distillation, the proof for the distillation loss, additional and more detailed implementation details, and additional qualitative results. Apart from the problem with the previously mentioned proof, the experiment with the false positive rates is not entirely clear. What threshold is used in the calculation? The threshold for the optimal $F_1$? Otherwise, the other parts of the supplementary material do not contain any problems and bring a lot of additional information. Relation To Broader Scientific Literature: The paper improves upon previous multi-class diffusion-based models in two aspects: performance and speed. The most important contribution is the speed, making the use of diffusion-based models inside actual industrial scenarios feasible. As shown in Table 2, the model heavily outspeeds current SOTA multi-class diffusion-based methods, DDAD [2] and DiAD [3]. To my knowledge, it is the first method to apply adversarial distillation for diffusion models in the field of anomaly detection. In terms of performance, OmiAD achieves better results than other SOTA methods, such as HVQ-Trans [4]. [2] Mousakhan, A., Brox, T., & Tayyub, J. (2023). Anomaly detection with conditioned denoising diffusion models. arXiv preprint arXiv:2305.15956. [3] He, H., Zhang, J., Chen, H., Chen, X., Li, Z., Chen, X., ... & Xie, L. (2024, March). A diffusion-based framework for multi-class anomaly detection. In Proceedings of the AAAI conference on artificial intelligence (Vol. 38, No. 8, pp. 8472-8480). 
[4] Lu, R., Wu, Y., Tian, L., Wang, D., Chen, B., Liu, X., & Hu, R. (2023). Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. Advances in Neural Information Processing Systems, 36, 8487-8500. Essential References Not Discussed: The paper fails to mention the first approaches to anomaly detection with diffusion models, such as AnoDDPM [5] and DiffAD [6]. While not a big weakness, these methods should at least be mentioned to give a better idea of the development of such methods. Masking the image has also been done to alleviate the “identical shortcut” problem. While it has not been as successful, I would at least mention previous methods [7] trying this. [5] Wyatt, J., Leach, A., Schmon, S. M., & Willcocks, C. G. (2022). Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 650-656). [6] Zhang, X., Li, N., Li, J., Dai, T., Jiang, Y., & Xia, S. T. (2023). Unsupervised surface anomaly detection with diffusion probabilistic model. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6782-6791). [7] Zavrtanik, V., Kristan, M., & Skočaj, D. (2021). Reconstruction by inpainting for visual anomaly detection. Pattern Recognition, 112, 107706. Other Strengths And Weaknesses: The paper is nicely structured and easy to read. The claims are clearly written and discussed in the paper. Other Comments Or Suggestions: There are a few typos: Line 358 should probably be Qualitative Results and not Quantitative, Line 257 “distributionclosely” -> “distribution closely”. I would perhaps also add a speed comparison to TransFusion [8] as it requires fewer steps (20) than most diffusion-based models. While I expect the improvement to still be 50× or 100×, it would be a fairer comparison. Extensive experiment analysis is not a contribution but a scientific standard. 
I would move it out of the contributions. [8] Fučka, M., Zavrtanik, V., & Skočaj, D. (2024, September). TransFusion–a transparency-based diffusion model for anomaly detection. In European conference on computer vision (pp. 91-108). Cham: Springer Nature Switzerland. Questions For Authors: I have listed the questions in order of importance, from most important to least important. - How are the anomaly maps produced? - How robust is the model to the choice of $t_{init}$? Is there a reason why it is set to 960? - How is the last step in Eq. 25 of the derivation of the adversarial loss obtained? - How important is EfficientNet for the model? What happens if you exchange it for some other feature extractor? - What is the input to the single-step generation model (EfficientNet features or the image)? - What architecture is used for the diffusion model and the single-step generation model? Is it a UNet, DiT, etc.? - How is the FPR calculated in the supplementary material? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you invested in carefully reviewing our paper. Your insightful and constructive comments are greatly valued and have helped us improve the clarity and rigor of our work. **Q1: Inference stage and Anomaly Score Computation** A1: In the inference stage, we process both normal and anomalous data, following established methods like UniAD and HVQ-Trans for generating anomaly score maps. The pseudocode below outlines this process: 1. **Input**: $img$: Input image, $g_\theta$: Trained one-step generator, $EfficientNet$: Feature extractor, $t_{init}$: Initial timestep. 2. **Output**: $S$: Pixel-wise anomaly score map 3. **Procedure**: 1. **Feature Extraction**: $x_0=EfficientNet(img)$ 2. **Noising**: $x_t=\sqrt{\bar\alpha_t} \cdot x_0+\sqrt{1 - \bar\alpha_t}\cdot\epsilon, \quad t=t_{\text{init}}, \epsilon\sim\mathcal{N}(0, I)$ 3. **One-step Reconstruction**: $\hat{x}_0=g_\theta(x_t)$ 4. **Anomaly Score Computation**: $S=\|x_0-\hat{x}_0\|_2^2$ **Q2: Ablation study for $t_{init}$ and the rationale for choosing $t_{init}$ = 960** A2: We conducted ablations with different $t_{init}$ values and found the model remains robust between 800 and 960. We recommend $t_{init} = 960$ for the best trade-off between semantic preservation and anomaly detection. |$t_{init}$|600|700|800|940|960|980|1000| |-|-|-|-|-|-|-|-| |Image AUROC|97.7|98.1|98.3|98.7|98.8|95.9|73.5| |Pixel AUROC|96.9|97.1|97.4|97.7|97.7|97.3|80.0| We attribute the effective reconstruction at $t_{init} = 960$ to the high similarity among images within the same category. Using a pre-trained EfficientNet for feature extraction improved the model's ability to accurately reconstruct images. **Q3: Formula 25 derivation** A3: Thank you very much for your careful derivation. After reviewing it, we found that there was a typographical error. When substituting equation (22) into equation (21), we mistakenly wrote $\bar\beta$ as $\beta$. 
We have corrected and updated the derivation process, and equation (25) is now as follows: $$\mathbb{E}_{q(x\_t\mid x\_g, t)\,p\_\theta(x\_0)}\Big[\Big\langle \epsilon\_\phi(x\_t, t)-\epsilon\_\psi(x\_t, t),\bar{\beta}\_t \nabla\_{x\_t}\log p\_\theta(x\_t)\Big\rangle\Big]$$ $$=-\mathbb{E}_{x\_g \sim p\_\theta(x\_0),\, x\_t\sim p(x\_t \mid x\_g)} \Big[\Big\langle \epsilon\_\phi(x\_t, t)-\epsilon\_\psi(x\_t, t),\frac{x\_t-\bar{\alpha}\_t x\_g}{\bar{\beta}\_t} \Big\rangle\Big]$$ $$=-\mathbb{E}_{x\_g \sim p\_\theta(x\_0),\, z,\boldsymbol\epsilon \sim \mathcal{N}(0,I)} \Big[\Big\langle \epsilon\_\phi(x\_t, t)-\epsilon\_\psi(x\_t, t),\epsilon \Big\rangle\Big]$$ This typographical error does not affect the subsequent loss calculation. **Q4: Importance of EfficientNet** A4: ResNet and EfficientNet are commonly used pre-trained feature extractors for anomaly detection. Replacing EfficientNet with ResNet34 on MVTec led to a performance drop (Image/Pixel AUROC: 93.7/96.5). Similar trends were observed in UniAD, where EfficientNet consistently outperformed ResNet. We therefore recommend using EfficientNet for better performance. **Q5: Input to the One-step generator** A5: The input to the one-step generator is the feature map extracted by EfficientNet. **Q6: The diffusion model architecture** A6: We use a U-Net architecture as the backbone for the diffusion model. To evaluate the generality of our approach, we replaced the U-Net in OmiAD with DiT and U-ViT [1]. The results demonstrate that OmiAD maintains strong performance across different architectures, indicating that our method is architecture-agnostic. |Model|DiT|U-ViT|UNet| |-|-|-|-| |Image AUROC|98.2|98.4|98.8| |Pixel AUROC|97.5|97.4|97.7| **Q7: Method for Calculating FPR** A7: In our experiments, the threshold is set based on the number of anomaly samples. We rank the samples by anomaly score and set the threshold to the score ranked at the position matching the number of anomaly samples. 
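A7's threshold rule (rank all samples by anomaly score and cut at the position equal to the number of anomalous samples) can be written as a short sketch; the function name and the tie-handling via `>=` are our assumptions, not the authors' code:

```python
import numpy as np

def fpr_at_count_threshold(scores, labels):
    """Set the threshold to the k-th highest score, where k is the number of
    anomalous samples (per A7), then compute the FPR on the normal samples."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)  # 1 = anomaly, 0 = normal
    k = int(labels.sum())
    thresh = np.sort(scores)[::-1][k - 1]   # score ranked at position k
    pred = scores >= thresh                  # flagged as anomalous
    fp = int(np.sum(pred & (labels == 0)))
    tn = int(np.sum(~pred & (labels == 0)))
    return fp / (fp + tn)
```

For example, with scores [0.9, 0.8, 0.5, 0.1] and labels [1, 0, 1, 0], the threshold becomes 0.8, one of the two normal samples is flagged, and the FPR is 0.5.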
**Q8: More ablation study for $p_{min}$ and $p_{max}$** A8: We conducted ablation experiments to address this point. For further details, please refer to our response to **reviewer QjD4 Question 4**. **Q9: Speed of SimpleNet** A9: We identified that the slow speed was due to the official code transferring data from GPU to CPU and storing it as a numpy array. For further details, please refer to our response to **reviewer 96ub Question 2**. **Q10: Comparison of speed with TransFusion** A10: We added a time comparison with TransFusion. Despite fewer steps, TransFusion predicts masks and anomalies at each step and operates at the pixel level rather than in the latent space, both of which limit its inference speed. |Dataset|MVTec-AD|VisA|MPDD|Real-IAD| |-|-|-|-|-| |Inference Time|15.904|15.975|16.002|16.051| **Due to character limits, we welcome further discussion and are happy to address any remaining questions during the discussion phase.** [1] Bao F, et al. All are worth words: A ViT backbone for diffusion models. --- Rebuttal Comment 1.1: Comment: The authors have satisfactorily answered almost all of my questions, and only one remains: 1. If the one-step generator generates reconstructed EfficientNet features (this assumption is based on the anomaly map generation algorithm), how did you achieve pixel-level reconstructions, as seen in Figure 2? As this is my only concern besides the extreme importance of EfficientNet, I will raise my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the positive feedback and for acknowledging our responses. Since EfficientNet and ResNet, which are widely used feature extractors in anomaly detection, are both CNN-based architectures, their convolutional structure enables local feature extraction through kernels while preserving the global spatial relationships of the input. As a result, the relative positions of anomalies in the input image are preserved in the feature space. 
In other words, the anomaly locations in the feature maps are spatially aligned with their actual positions in the original image. This spatial alignment ensures that anomaly maps generated in the feature space can be reliably used for both anomaly detection and localization. This design choice is consistent with existing works such as UniAD [1] and HVQ-Trans [2]. Regarding the pixel-level reconstructions shown in Figure 2, we additionally train a decoder to project the EfficientNet features back into the image space. This reconstruction is used solely for visualization purposes and does not participate in the anomaly scoring process. This practice is also consistent with visualization strategies adopted by prior methods such as UniAD[1] and HVQ-Trans[2]. We are sincerely grateful for your thoughtful review and the time you dedicated to evaluating our work. Your feedback has been invaluable in improving the quality of our paper. [1]You, Z, et al. A unified model for multi-class anomaly detection. [2]Lu, R, et al. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection.
Summary: This paper presents OmiAD, a one-step adaptive masked diffusion model designed for multi-class anomaly detection with enhanced inference efficiency. ## Paper contributions: - The paper introduces an innovative Adaptive Masking Diffusion Model (AMDM) strategy that dynamically adjusts masking patterns based on noise levels. AMDM is proposed to strengthen global context modeling and avoid shortcut reconstruction of anomalous pixels. - The paper utilizes Adversarial Score Distillation (ASD) to compress multi-step diffusion into single-step inference for test-time efficiency. - The experimental results show the proposed OmiAD achieves a speed-up over diffusion-based and transformer-based methods on anomaly detection benchmarks (MVTecAD, VisA, MPDD, Real-IAD). Claims And Evidence: - Details of F-Mask and effectiveness of A-Mask: In Table 3, the authors conduct an ablation study for F-Mask (Fixed Mask) and A-Mask (Adaptive Mask) to demonstrate the effectiveness of A-Mask. However, there is no introduction to the fixed mask strategy in the methodology section (3.2). How is the mask fixed? Is it fixed over the diffusion timesteps? What is the choice of probability $p(t)$ for F-Mask? Besides, the improvement of A-Mask over F-Mask seems incremental. Methods And Evaluation Criteria: The paper conducts extensive experiments on MVTecAD, VisA, MPDD, and Real-IAD. Well-established metrics like image-level & pixel-level AUROC, F1 max, and AUPRO are used for evaluation. Theoretical Claims: I am concerned about the theoretical correctness of the diffusion process, since the authors perform two operations, i) Gaussian noising and ii) masking, on the features; see Eq. (10). The reconstruction of feature $x_0$ seems okay in the one-step generator, but it is not clear how the diffusion process is affected for the teacher diffusion model (AMDM) training. Especially for Eq. (10), why can $x_0$ be estimated from masked features $x_m^t$ with noise? 
I hope the authors can provide a detailed explanation. Experimental Designs Or Analyses: The paper lacks a discussion of some important hyperparameter tuning, such as the choice of $p_{min}, p_{max}$ for the Adaptive Mask and the initial timestep $t_{init}$ for the one-step generator. Are they sensitive to different types of anomalies? Supplementary Material: I have reviewed the supplementary materials and have some questions regarding the qualitative analysis. Specifically, I am curious about how OmiAD handles large-area anomalies, such as an entirely missing pill or a missing transistor in the MVTecAD dataset. Since the model is multi-class, for two pure-white images, would the reconstructions belong to two different classes? Relation To Broader Scientific Literature: It addresses key limitations of prior diffusion-based works, such as multi-class AD, slow inference speed, and shortcut learning. It lacks comparisons with some classical AD methods; see the "Essential References Not Discussed" section. Essential References Not Discussed: I understand the paper focuses on multi-class AD, but I recommend the authors cite some classical anomaly detection papers: i) training-free methods: SPADE [Niv Cohen et al.], PaDiM [Thomas Defard et al.], ii) flow-based methods: CFlow-AD [Denis Gudovskiy et al.], and iii) some diffusion-based AD methods. In particular, the training-free method PaDiM is efficient (>20 fps) and can be directly applied to multi-class cases. Other Strengths And Weaknesses: - The paper does not distinguish between the training stage and the inference stage. - The writing of the adversarial diffusion distillation part is confusing and not friendly to readers who are not familiar with diffusion distillation. Other Comments Or Suggestions: Null Questions For Authors: In summary, the most important questions: 1. Computation of the anomaly score 2. Eq. (10): why can $x_0$ be estimated from the masked features $x_m^t$? 3. 
The paper does not explicitly distinguish between the training stage (only normal data) and the inference stage (inputs can be normal/abnormal). Most of the method section discusses only how to train the different modules. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your thorough review of our paper. Your valuable feedback and constructive suggestions have provided us with a clearer direction for improvement. **Q1: Inference stage and Anomaly Score Computation** A1: In the inference stage, we process both normal and anomalous data. The methodology for generating anomaly score maps follows established approaches, such as those outlined in UniAD [1] and HVQ-Trans [2]. The pseudocode below outlines the process for the inference stage, including the generation of anomaly score maps: 1. **Input**: $img$: Original input image, $g_\theta$: Trained one-step generator, $EfficientNet$: Feature extractor, $t_{init}$: Initial timestep. 2. **Output**: $S$: Pixel-wise anomaly score map 3. **Procedure**: 1. **Feature Extraction**: $x_0=EfficientNet(img)$ 2. **Noising**: $x_t=\sqrt{\bar\alpha_t} \cdot x_0+\sqrt{1 - \bar\alpha_t}\cdot\epsilon,\quad t=t_{\text{init}}, \epsilon\sim\mathcal{N}(0, I)$ 3. **One-step Reconstruction**: $\hat{x}_0=g_\theta(x_t)$ 4. **Anomaly Score Computation**: $S=\|x_0-\hat{x}_0\|_2^2$ **Q2: Justification for estimating $x_0$ using masked features $x_m^t$** A2: Anomaly detection requires leveraging global information to reconstruct the anomalous part into the normal modality. We use Equation (10) to predict $x_0$, and apply Equation (11) as a constraint. This formulation allows $\epsilon_\theta$ to compensate for the anomalous part and generate the normal part. We conducted experiments where we replaced $x_m^t$ with $x^t$ in Equation (10). The resulting performance, with Image/Pixel AUROC scores of 90.1/92.9, is inferior to that of AMDM, which achieved 98.4/97.5, thereby further validating our conclusions. In addition, some works, such as DiffMAE [3] and MaskDiT [4], utilize unmasked regions to predict $\hat{x}_0$. **Q3: Fixed Mask Strategy** A3: Unlike A-Mask, where the mask probability varies with each time step, the mask probability in F-Mask remains constant. 
We set four different mask probabilities [0.1, 0.2, 0.3, 0.4] and conducted experiments, selecting the best-performing setting as the F-Mask strategy for comparison. **Q4: More ablation study for $p_{min}$, $p_{max}$ and $t_{init}$** A4: We fixed $p_{min}$=0.1, with varying $p_{max}$, AMDM performance: |$p_{max}$|0.3|0.4|0.5| |-|-|-|-| |Image AUROC|98.1|98.4|98.2| |Pixel AUROC|97.4|97.5|97.1| We fixed $p_{max}$=0.4, with varying $p_{min}$, AMDM performance: |$p_{min}$|0|0.1|0.2| |-|-|-|-| |Image AUROC|98.2|98.4|98.1| |Pixel AUROC|97.5|97.5|97.3| The results show that AMDM maintains stable and strong performance across different $p_{\text{min}}$ and $p_{\text{max}}$ settings. The best performance is observed when $p_{\text{min}}=0.1$ and $p_{\text{max}}=0.4$. **Ablation study for $t_{init}$** We conducted ablation experiments on OmiAD at different $t_{init}$. The experimental results show that the model maintains performance robustness within a wide range of initial time steps (800 to 960). However, when $t_{init}$ = 1000, performance drops significantly because the noisy input resembles pure noise, which hinders image recovery and reduces anomaly detection ability. We recommend using $t_{init}$ = 960, as it provides optimal performance at both the image and pixel levels. | $t_{init}$ |600|700|800|940|960|980|1000| |-|-|-|-|-|-|-|-| |Image AUROC|97.7|98.1|98.3|98.7|98.8|95.9|73.5| |Pixel AUROC|96.9|97.1|97.4|97.7|97.7|97.3|80.0| **Q5: Handling Large Anomalies** A5: Thanks to its multi-step generation process and global modeling capability, AMDM is capable of progressively reconstructing missing regions. We visualized AMDM's inference process on samples with missing transistors: before step 860, the model focuses on recovering the missing component, while after step 860, it shifts attention to refining fine-grained image details. Since OmiAD is distilled from AMDM, it inherits this reconstruction ability and achieves normal restoration from anomalous inputs in a single step. 
For pure white or pure black images, where no valid semantic information is available, the model produces meaningless outputs. **Q6: Essential References Not Discussed** A6: Thank you for the suggestion. We understand the importance of citing classical anomaly detection methods and will include references to SPADE, PaDiM, CFlow-AD, and other diffusion-based AD methods in the revised version of the paper. We also release our single-class anomaly detection results as a public benchmark. For detailed performance, please refer to **Reviewer 96ub, Question 3.** **Due to character limitations, we would be happy to discuss and address any remaining questions during the discussion phase.** [1] You, Z., et al. A unified model for multi-class anomaly detection. [2] Lu, R., et al. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. [3] Wei, C., et al. Diffusion models as masked autoencoders. [4] Zheng, H., et al. Fast training of diffusion models with masked transformers.
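The inference procedure given in A1 of this rebuttal (extract features, noise them to $t_{init}$, reconstruct in one step, score by squared error) can be condensed into a minimal sketch. The `generator` argument is a stand-in for the trained $g_\theta$, the scalar `alpha_bar_t` corresponds to $\bar\alpha_{t_{init}}$, and the feature extractor is omitted; these substitutions are assumptions for illustration:

```python
import numpy as np

def one_step_anomaly_map(x0, generator, alpha_bar_t, rng=None):
    """Noise the feature map to t_init, reconstruct in a single step,
    and score each position by the squared reconstruction error."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    x0_hat = generator(x_t)        # stand-in for the trained one-step g_theta
    return (x0 - x0_hat) ** 2      # pixel-wise anomaly score map S
```

A generator that perfectly reproduces the normal features yields a zero score map, while any reconstruction error shows up directly as a localized anomaly score.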
Disentangled Graph Spectral Domain Adaptation
Accept (poster)
Summary: To break away from the entanglement of attributes and topology in unsupervised graph domain adaptation (UGDA), this paper introduces a novel method, DGSDA, which directly aligns complicated graph spectral filters. The paper conducts experiments on various types of graph datasets to demonstrate the effectiveness of DGSDA. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence from both experimental and theoretical aspects. Methods And Evaluation Criteria: The proposed DGSDA is well aligned with the problem of UGDA. The disentanglement of attribute and topology alignments, the use of spectral filter alignment, and the comprehensive experiments on diverse datasets collectively demonstrate its effectiveness. Theoretical Claims: Yes, I have checked the correctness of the proofs for the theoretical claims presented in the paper. Specifically, I have verified the proofs for Theorems 4.3, 4.4, and 4.5, which are central to the theoretical analysis of DGSDA. Experimental Designs Or Analyses: Yes, I have checked the soundness of the experimental designs (including the compared methods and experimental setups) and analyses. Supplementary Material: Yes, I reviewed the supplementary material, which includes the proofs for the theorems, detailed dataset statistics, and additional experimental results. The supplementary material provides comprehensive and detailed support for the claims and results presented in the main paper. Relation To Broader Scientific Literature: The key contributions are directly related to the broader literature by proposing disentanglement techniques to solve the problems in traditional UGDA. It leverages recent advancements in spectral GNNs and builds on theoretical foundations of Lipschitz continuity and model alignment. Essential References Not Discussed: No, there are no essential related works missing in the paper that need further discussion. 
Other Strengths And Weaknesses: **Strengths** 1) The paper is well-written and easy to follow. 2) Experimental results on benchmark datasets are provided. **Weaknesses** The authors fail to clearly attribute the source of performance improvement in their method. First, DGSDA employs Bernstein polynomials, which are not commonly used in comparison methods. This choice alone may contribute to the performance gains, making it unclear how much of the improvement is due to the disentanglement strategy itself. Further empirical results are needed to isolate the specific role of disentanglement. Additionally, given the lack of labels in the target domain, it is unclear how the model ensures that the target domain parameters accurately capture the topological patterns. Other Comments Or Suggestions: 1) Some equations, such as Eq. (4) and Eq. (9), as well as Theorem 4.4, appear to be slightly misaligned. It is recommended to adjust the line breaks for better formatting. 2) On line 182: It possesses the following **three** advantages instead of **two**. 3) The name "PairAlign" is misspelled as "PariAlign" in Tables 1 and 2. Questions For Authors: Refer to Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> Q1. The authors fail to clearly attribute the source of performance improvement in their method. Further empirical results are needed to isolate the specific role of disentanglement.

R1. To address your concerns, we have conducted an additional experiment to clearly identify the source of performance improvement. This experiment introduces a variant of DGSDA that uses Bernstein polynomials and directly aligns node representations instead of separately aligning attributes and topology. The variant model's performance is consistently worse than that of our full DGSDA, as shown in the following table. This indicates that disentanglement plays a crucial role in enhancing the performance of our model.

| | A→C | C→A | A→D | D→A | C→D | D→C |
| --- | --- | --- | --- | --- | --- | --- |
| DGSDA | 83.57$\pm$0.22 | 75.54$\pm$0.28 | 76.90$\pm$0.51 | 74.07$\pm$0.56 | 78.38$\pm$0.28 | 82.92$\pm$0.15 |
| variant model | 81.01$\pm$0.32 | 73.25$\pm$0.22 | 73.15$\pm$0.19 | 72.03$\pm$0.24 | 76.32$\pm$0.17 | 80.25$\pm$0.21 |

---

> Q2. Given the lack of labels in the target domain, it is unclear how the model ensures that the target domain parameters accurately capture the topological patterns.

R2. We have conducted two additional experiments to demonstrate that the unsupervised loss can provide effects similar to supervised loss in terms of model parameter optimization. In the experiment, the supervised variant of DGSDA employs the supervised loss from 10% labeled data in the target domain, replacing the unsupervised losses: the spectral alignment loss and the entropy loss. The results are shown below. 
| | A→C | C→A | A→D | D→A | C→D | D→C |
| --- | --- | --- | --- | --- | --- | --- |
| DGSDA | 83.57$\pm$0.22 | 75.54$\pm$0.28 | 76.90$\pm$0.51 | 74.07$\pm$0.56 | 78.38$\pm$0.28 | 82.92$\pm$0.15 |
| DGSDA (supervised) | 83.20$\pm$0.52 | 76.37$\pm$2.75 | 79.49$\pm$0.56 | 77.10$\pm$0.83 | 80.16$\pm$0.65 | 83.03$\pm$0.48 |

The results indicate that the unsupervised DGSDA achieves comparable performance to the supervised version, highlighting the effectiveness of the unsupervised losses. This can be attributed to two key factors. First, by regularizing the coefficients of Bernstein polynomials (in Eq. 6), the method explicitly aligns the spectral filters across different domains. This alignment enables the target filters to inherit topology-aware patterns from the source domain, even without labels. Second, the entropy loss sharpens cluster assignments, which implicitly encourages the model to learn more discriminative topological features.

In addition, we have compared the learned filter curves in both labeled and unlabeled target domains. The results can be found at https://anonymous.4open.science/r/DGSDA/figure/DC.png. In the supervised learning case, the model parameters capture the homophily topological pattern, characterized by increasing low-frequency information and suppressing high-frequency information. Similarly, in the unsupervised learning case, the target domain parameters can capture the same topological pattern.

---

> Q3. Some equations, such as Eq. (4) and Eq. (9), as well as Theorem 4.4, appear to be slightly misaligned. It is recommended to adjust the line breaks for better formatting.

R3. We will adjust the line breaks in Eq. (4), Eq. (9), and Theorem 4.4 to ensure proper alignment and improve the overall formatting.

---

> Q4. Expression error on line 182 and spelling mistake of model "PairAlign".

R4. We will perform a thorough review of the manuscript to correct any expression errors and spelling mistakes. 
--- Rebuttal Comment 1.1: Comment: The authors' rebuttal addresses all my concerns. After checking the comments from other reviewers, I raise my score.
Summary: This study addresses the challenge of unsupervised graph domain adaptation in scenarios involving distribution shifts and missing labels by proposing a novel solution that disentangles the distribution shift. Specifically, the method DGSDA refines the topology alignment into GNN alignment and incorporates spectral filter alignment loss. Claims And Evidence: This paper provides a comprehensive evaluation of the DGSDA model, supported by both theoretical analysis and extensive experiments, thereby effectively demonstrating its efficacy. Thus, the claims made in the paper are well-substantiated by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method involves (1) disentangling embedding alignment into topology and attribute alignments and (2) exploiting alignments of filter parameters to flexibly implement topology alignment, both of which make sense for the graph-domain adaptation problem. Theoretical Claims: After a detailed examination of the proof, I have essentially confirmed its correctness. Experimental Designs Or Analyses: I examined all the experimental designs and analyses in Section 5 and Section B, and believe they effectively demonstrate the properties of the proposed model. Supplementary Material: Driven by my interest in this topic, I have thoroughly reviewed the supplementary material, including the theoretical proofs and additional experimental details. Relation To Broader Scientific Literature: Current graph domain adaptation works focus on proposing topology alignment strategies, including aligning the edge distributions of two domains using the CSBM. DGSDA utilizes a new filter alignment to improve flexibility. Essential References Not Discussed: No other related works that are essential to understanding the (context for) key contributions need to be discussed or cited. Other Strengths And Weaknesses: **Strengths** 1) The idea of GNN alignment is interesting. 
2) The method is simple yet has solid theoretical support. **Weaknesses** 1) The title appears to be somewhat ambiguous. The title does not reflect the focus on the **unsupervised** problem in graph domain adaptation, which is a key aspect of the study. 2) The notation $T$ used in the paper is not clear. For example, $\mathbf{X}^{T}$ represents the node attribute matrix of the target domain, but it can also be interpreted as the transpose of the node attribute matrix. 3) The description of the proposed method lacks clarity. While $L_{source}$ and $L_{mmd}$ are described in words, they are not accompanied by clear, formal formulations. Other Comments Or Suggestions: 1) In Section 4.1 Distribution Shift Disentanglement, in the "Topology alignment" part, the statement "the graph data shift can be simplified from $P^S(A, X|Y) \neq P^T (A, X|Y)$ to $P^S(A|X, Y) = P^T (A|X, Y)$" seems to be a misstatement. The correct statement should be $P^S(A|X, Y) \neq P^T (A|X, Y)$. Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> Q1. The title appears to be somewhat ambiguous. The title does not reflect the focus on the **unsupervised** problem in graph domain adaptation, which is a key aspect of the study.

R1. Thank you for pointing this out. The primary focus of this paper is indeed on the unsupervised problem in graph domain adaptation. While our architecture can also accommodate target labels when available, the unsupervised scenario remains our main emphasis. We will consider revising the title to better reflect this focus.

---

> Q2. The notation T used in the paper is not clear. For example, X^T represents the node attribute matrix of the target domain, but it can also be interpreted as the transpose of the node attribute matrix.

R2. Thanks for your careful check. We will correct $X^T$ to $X^{\top}$ to denote the transpose of the matrix.

---

> Q3. The description of the proposed method lacks clarity. While $L_{source}$ and $L_{mmd}$ are described in words, they are not accompanied by clear, formal formulations.

R3. The formal formulations of $L_{source}$ and $L_{mmd}$ are presented as follows:

$\mathcal{L}\_{source} = -\frac{1}{N^{{S}}} \sum\_{i=1}^{N^{{S}}} \sum\_{c=1}^{C} y\_{i,c} \log p\_{i,c}$

$\mathcal{L}\_{mmd} = \frac{1}{\left(N^{{S}}\right)^2} \sum\_{i=1}^{N^{{S}}} \sum\_{j=1}^{N^{{S}}} k\left(H\_i^{{S}}, H\_j^{{S}}\right) + \frac{1}{\left(N^{{T}}\right)^2} \sum\_{i=1}^{N^{T}} \sum\_{j=1}^{N^{T}} k\left(H\_i^{T}, H\_j^{T}\right) - \frac{2}{N^{{S}} N^{T}} \sum\_{i=1}^{N^{{S}}} \sum\_{j=1}^{N^{T}} k\left(H\_i^{{S}}, H\_j^{T}\right)$

where $k(·,·)$ represents the kernel function. We will add them to the appendix to enhance the clarity of the manuscript.

---

> Q4. In Section 4.1 Distribution Shift Disentanglement, "Topology alignment in the graph data shift can be simplified from $P^{S}(A,X| Y) \neq P^{T}(A,X| Y)$ to $P^{S}(A| X,Y) = P^{T}(A| X,Y)$" seems to be a misstatement. The correct statement should be $P^{S}(A| X,Y) \neq P^{T}(A| X,Y)$.

R4. 
Thanks for pointing out this important detail. We will correct this formula in the revised manuscript and ensure that all related discussions are consistent with this accurate representation.
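A minimal numerical sketch of the $\mathcal{L}_{source}$ and $\mathcal{L}_{mmd}$ formulations given in R3 above, assuming an RBF kernel for $k(\cdot,\cdot)$ (the kernel choice, bandwidth `gamma`, and array shapes here are illustrative, not the authors' exact implementation):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), evaluated for all pairs of rows
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def l_mmd(H_s, H_t, gamma=1.0):
    # Biased MMD^2 estimate between source and target node representations
    return (rbf_kernel(H_s, H_s, gamma).mean()
            + rbf_kernel(H_t, H_t, gamma).mean()
            - 2.0 * rbf_kernel(H_s, H_t, gamma).mean())

def l_source(P, Y):
    # Cross-entropy over the N^S labeled source nodes
    # P: (N^S, C) predicted class probabilities, Y: (N^S, C) one-hot labels
    return -(Y * np.log(P + 1e-12)).sum(axis=1).mean()
```

As expected, `l_mmd` is zero when the two sets of representations coincide and grows as the domains drift apart.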
Summary: This paper introduces a novel pipeline for unsupervised graph domain adaptation by disentangling attribute and topology alignments by considering that attribute alignment has been widely investigated. Based on the aligned node attribute, the topology alignment is converted to the model alignment by taking into consideration the widely developed GNN models. Then, the Bernstein polynomial is employed as the backbone for its approximation property and spectral perspective. Theoretical analysis and experimental evaluations justify the pipeline and proposed models. Claims And Evidence: The correctness of the introduced pipeline is verified by the derivation from the Bayesian theorem. The replacement of topology alignment with model alignment makes sense due to the connection between topology and GNN models. Theoretical analysis and experiments demonstrate the statements. Methods And Evaluation Criteria: As shown in the previous section, I think both the pipeline and the proposed GNN alignment make sense. Besides, the employment of these two strategies reduces the requirement of the pseudo label in the topology alignment, which often relies on the accurate estimation of the node membership. Theoretical Claims: I cursorily examined the proof of the theorems in the appendix and believe they are correct. Experimental Designs Or Analyses: The experiments are extensive, including quantitative and qualitative analyses. The setting is widely used in this field, and the baselines are recently proposed competitive ones. Thus, the experimental evaluations are convincing. Supplementary Material: I've had a general look at what's in the appendix, especially the proof. Relation To Broader Scientific Literature: Unsupervised graph domain adaptation is a critical topic in graph learning for the graph foundation model design. 
Although there exist GNN-based methods, as reviewed in the related work section, this paper gives a novel methodology by both decomposition and model parameter alignment. This is more powerful and efficient compared to the existing ones by considering the spectral perspective. Essential References Not Discussed: Sufficient. It covers most existing competitive SOTA and important milestones. Other Strengths And Weaknesses: This paper possesses high originality, which may inspire follow-up variants on model alignment. The claims are justified with a rigorous theory investigation and extensive experiments. The main weakness is the lack of source code. Since the proposed method is flexible and complicated with four terms as the objective function, it is necessary to provide source code to make it easy for readers to grasp the implementation details. Other Comments Or Suggestions: None Questions For Authors: Although the authors claim the use of model alignment avoids the requirements of pseudo-labels, I wonder whether pseudo-labels also benefit the model alignment since the polynomial coefficients can also be learned from labels as supervision. Can the predicted pseudo-labels on the target domain be employed? The theoretical analysis focuses on the Bernstein spectral GNN alignment. Does the decomposition strategy possess solid theoretical findings? Unsupervised graph domain adaptation is often composed of multiple terms as the objective function, and thus it is difficult during training to balance the impacts of the terms. Can these terms be unified to facilitate the training? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> Q1. The main weakness is the lack of source code.

R1. The source code has been made available at https://anonymous.4open.science/r/DGSDA for verification purposes. We promise to make the code public once this paper is accepted.

---

> Q2. Can the predicted pseudo-labels on the target domain be employed?

R2. To answer your valuable question, we have conducted experiments to verify the feasibility of using predicted pseudo-labels on the target domain. This experiment introduces a variant model named DGSDA+PL, which combines pseudo-labels of the target domain. The compared results reveal that pseudo-labels consistently lead to performance degradation in all domain adaptation scenarios, demonstrating the infeasibility of the mentioned scheme.

| | A→C | C→A | A→D | D→A | C→D | D→C |
| --- | --- | --- | --- | --- | --- | --- |
| DGSDA | 83.57$\pm$0.22 | 75.54$\pm$0.28 | 76.90$\pm$0.51 | 74.07$\pm$0.56 | 78.38$\pm$0.28 | 82.92$\pm$0.15 |
| DGSDA+PL | 81.23$\pm$2.52 | 74.40$\pm$2.22 | 75.36$\pm$2.37 | 71.16$\pm$1.33 | 77.03$\pm$1.04 | 79.45$\pm$1.49 |

This is primarily due to the low reliability of the pseudo-labels generated in the early stages of training, which can cause error accumulation in learning processes, and the noise amplification effect in graph neural networks, where erroneous pseudo-labels propagate through message-passing mechanisms. This is also the reason why the proposed method outperforms topology alignment with pseudo-labels.

---

> Q3. Does the decomposition strategy possess solid theoretical findings?

R3. We acknowledge that the current analysis focuses on demonstrating the **feasibility** of disentanglement without providing a precise error bound between the entangled and disentangled representations. This is a common limitation in the graph disentanglement field, where theoretical guarantees are still lacking. We will strive to address this in future work.

---

> Q4. 
Unsupervised graph domain adaptation is often composed of multiple terms as the objective function, and thus it is difficult during training to balance the impacts of the terms. Can these terms be unified to facilitate the training?

R4. We understand your concern for the stability of training. Unfortunately, these terms cannot be unified, as each of the loss terms focuses on a different objective, as in other GDA methods. To be specific, $L_{source}$ targets minimizing prediction error in the source domain, ensuring effective training on labeled source data. $L_{align}$ focuses on aligning spectral coefficients between the source and target domains. $L_{mmd}$ aims to align feature representations to reduce distribution differences. $L_{target}$ promotes model adaptation to the target domain through unsupervised learning. Moreover, the experiments in the hyper-parameter analysis demonstrated that our model is relatively robust to the hyper-parameters used for weighting these terms. We will explore integrating these loss terms in future work to facilitate easier balancing.
Summary: This paper proposes Disentangled Graph Spectral Domain Adaptation (DGSDA) to alleviate the inaccuracies of pseudo-labels and the limited expressive ability of graph encoders to capture rich topology information. It decomposes the attribute and topology alignments and replaces the topology alignment with the powerful model alignment. To harness the parameter efficiency of spectral GNNs, the Bernstein polynomial is employed, and the polynomial coefficients are aligned. Theoretical analysis shows its rationality and superiority compared to existing ones. Experiments also justify the claims. Claims And Evidence: The rationality and superiority of the proposed DGSDA are supported by both theoretical and experimental evidence. It is clear and convincing. Methods And Evaluation Criteria: The proposed DGSDA makes sense and is novel by decomposing attribute and topology alignments. The experimental evaluations are reasonable with widely-used criteria. Theoretical Claims: The correctness of theorems is checked, as well as their proofs in the appendix. However, the symbols are very complex. Experimental Designs Or Analyses: The soundness of the experiments is checked. The design is based on widely-employed datasets and criteria. The performances are verified on varying datasets. The ablation study and hyper-parameter analysis are conducted. Supplementary Material: The proof and experimental details in the appendix are checked. Relation To Broader Scientific Literature: UGDA is an important topic in the graph machine learning field. Previous work focuses on the employment of DA methods in i.i.d. data. This paper is along the line of topology alignment. It alleviates the issue of pseudo-label inaccuracy by adopting spectral model alignment beyond the topology one. Therefore, it is novel. Besides, the decomposition of topology and attribute alignment is also novel and interesting. Essential References Not Discussed: The references are sufficient. 
Other Strengths And Weaknesses: **Strengths** The motivations are interesting and make sense. The proposed method is novel and solid. The theoretical justification is rigorous. The experimental evaluations are convincing. **Weakness** The symbols, especially in the theory part, are too complex to read. Other Comments Or Suggestions: 1. Some explanations should clarify the theoretical differences from [You et al., 2023]. 2. It is better to provide the source code to enhance reproducibility. Questions For Authors: 1. Why is the performance of the proposed method lower than that of JHGDA on the traffic network dataset? 2. The legend in Figure 2 is not clear. What are the differences between source/target and A/C? 3. Why are the results in Figure 2 continuous curves? I think they should be discrete values. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> Q1. The symbols, especially the theory part, are too complex to read.

R1. Thanks for your feedback. We will thoroughly review and modify all the symbols to make them easier to read.

---

> Q2. Some explanations should clarify the theoretical differences from [You et al., 2023].

R2. The key theoretical difference between this paper and the mentioned work [You et al., 2023] lies in **Polynomial Choice & Lipschitz Properties**: The proposed DGSDA adopts Bernstein polynomials, whose Lipschitz constant $C_{\lambda}$ is determined by the ground-truth function (in Theorem 4.3), rather than being restricted by the basic polynomial coefficients as in the work [You et al., 2023]. This allows more flexible and accurate spectral domain adaptation.

---

> Q3. It is better to provide the source code to enhance reproducibility.

R3. According to your suggestion, the source code has been made available at https://anonymous.4open.science/r/DGSDA for verification purposes.

---

> Q4. Why is the performance of the proposed method lower than that of JHGDA on the traffic network dataset?

R4. The proposed method generally outperforms JHGDA on most tasks in the traffic network dataset and is less effective than JHGDA on the $B → E$ and $E → B$ tasks. The performance weakness may be attributed to overfitting due to limited training data. The Brazil and Europe datasets contain a relatively small number of nodes, which makes the proposed models with multiple constraints more prone to over-capturing the patterns of individual hub nodes. This, in turn, makes it difficult to effectively generalize to the overall structure of the target domain. Nonetheless, the extensive results illustrate the effectiveness of the proposed method.

---

> Q5. The legend in Figure 2 is not clear. What are the differences between source/target and A/C?

R5. 
The solid lines represent the filter curves trained on the domain adaptation tasks, while the dashed lines represent the filter curves obtained by training BernNet only on the corresponding datasets. Thus, "source" and "target" in the legend mean the filter curves of the source domain encoder and target domain encoder trained on the domain adaptation tasks; "A" and "C" in the legend denote the filter curves of BernNet on the A and C datasets, respectively. We will change them to "Training on A" or "Training on C" in the revised manuscript to enhance readability.

---

> Q6. Why are the results in Figure 2 continuous curves?

R6. Figure 2 shows the learned polynomials instead of their coefficients, and thus contains continuous curves. The x-axis in Figure 2 denotes the normalized graph signal frequency $\lambda$, and the y-axis represents the filter gain $h(\lambda)$. They are independent of the polynomial order $K$ and coefficients $\theta_k$. The Bernstein polynomial is defined as $h(\lambda)=\sum_{k=0}^K\theta_kb^K_k(\frac{\lambda}{2})$, where $b^K_k(t)$ is the Bernstein basis function. For any given $\lambda$, it is first mapped to the [0, 1] interval (i.e., $\frac{\lambda}{2}$), and $h(\lambda)$ is calculated using the learned coefficients $\theta_k$ and the corresponding Bernstein basis functions $b^K_k$. Thus, by sampling sufficiently many $\lambda$ values, we can plot the continuous curves shown in Figure 2.
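A minimal sketch of how such a curve can be sampled, following the definition $h(\lambda)=\sum_{k=0}^K\theta_k b^K_k(\frac{\lambda}{2})$ above (the coefficients below are hypothetical, chosen to mimic a low-pass filter):

```python
from math import comb

def bernstein_filter(theta, lam):
    # h(lambda) = sum_k theta_k * b^K_k(lambda / 2), with lambda in [0, 2]
    K = len(theta) - 1
    t = lam / 2.0
    return sum(theta[k] * comb(K, k) * t ** k * (1.0 - t) ** (K - k)
               for k in range(K + 1))

# Hypothetical low-pass coefficients; sampling many lambdas traces a continuous curve
theta = [1.0, 0.8, 0.5, 0.2, 0.0]
curve = [bernstein_filter(theta, 2.0 * i / 100) for i in range(101)]
```

Because the Bernstein basis forms a partition of unity, setting all $\theta_k$ equal yields a flat filter, and the endpoint gains $h(0)$ and $h(2)$ equal $\theta_0$ and $\theta_K$.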
Scaling Laws for Floating-Point Quantization Training
Accept (poster)
Summary: This paper constructs scaling laws for floating point quantized training based on curve fitting to many small to medium scale LLM training experiments. Claims And Evidence: This paper seems to be mostly based on empirical curve fitting, as most scaling law papers are. Methods And Evaluation Criteria: This paper seems to make reasonable choices in the context of existing LLM scaling law papers. Theoretical Claims: This is a mostly empirical paper. Experimental Designs Or Analyses: See above. Supplementary Material: I read through the supplementary. Relation To Broader Scientific Literature: In my opinion, scaling law papers are hard to evaluate in general. On one hand, the results are probably useful, but there does not appear to be a lot of theory behind why models scale as such. The models studied in this work are also small (<1B params), making it hard to judge how much extrapolation is possible with the derived scaling curves. The authors do run validation on a 1.2B parameter model, but 1.2B parameters is still much smaller than production language models that are trained on trillions of tokens (vs. O(100B tokens) in this paper). I don't fault the authors for this since training enough LLMs to fit curves is extremely expensive, but at the same time the authors chose to write a scaling laws paper. Essential References Not Discussed: The key related work seems to be https://arxiv.org/abs/2411.04330 and the authors do discuss it. Other Strengths And Weaknesses: As mentioned above, I think it is hard to evaluate scaling law papers in general. Regarding this specific paper, integer quantization is a special case of EeMm where e = 0. Do your scaling laws reduce to existing integer training scaling laws when e = 0? Likewise, the SNR of EeMm datatypes is relatively constant ignoring over/underflow. Would it be more reasonable to fit a curve to SNR or datatype distortion instead of E and M separately? 
Figure 5 and the corresponding part of the scaling law suggest that the optimal setup for EeMm is when e == m. How much of this is dependent on activation and weight distributions? Some PTQ and QAT papers have proposed using random orthogonal transformations to transform the activation and weight distributions prior to quantization -- would that change the scaling law? Other Comments Or Suggestions: I'm not sure if Comic Sans is against the style guide rules but you clearly should have used Papyrus instead https://www.youtube.com/watch?v=jVhlJNJopOQ. Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive suggestions and valuable comments! We hope our rebuttal could help address your concerns, and we would be grateful if you could consider increasing the overall recommendation of our work.

## Q1: Extension to larger models.

A1: Thanks for the suggestion. We agree with the reviewer that experiments on larger models could further verify the effectiveness of our scaling law. We attempt to answer from the following aspects:

(1) The scaling law exploration of LLMs is essential but extremely expensive in both GPUs and running time. To thoroughly explore the relationships between the loss and different floating-point quantization training factors (e.g., N, D, E, M, B), we have trained 366 models with different settings (model sizes from 41M to 679M) to draw our scaling laws, and successfully validate them on 1.2B models with 8 different settings in Fig. 4. **The adopted model sizes are comparable with those in other scaling law works** (e.g., Scaling law for precision [1], one of the most related works, adopts model sizes up to 1.7B parameters for validation).

(2) We conduct several additional experiments with model sizes larger than 1.2B. Precisely, **we successfully predict the loss of 7B and 70B LLMs (with different settings) based on our scaling law**. The detailed model settings, actual losses and predicted losses are shown as follows:

| N | D | B | E | M | $L_{\text{actual}}$ | $L_{\text{predict}}$ | $\Delta L$ |
| -: | -: | -: | -: | -: | -: | -: | -: |
| 1.2B | 100B | 512 | 4 | 3 | 2.50 | 2.54 | -0.04 |
| 7B | 10B | 64 | 4 | 3 | 2.65 | 2.70 | -0.05 |
| 7B | 100B | 64 | 4 | 3 | 2.38 | 2.38 | 0.00 |
| 70B | 10B | 64 | 4 | 3 | 2.60 | 2.56 | 0.04 |
| 70B | 20B | 64 | 4 | 3 | 2.44 | 2.42 | 0.02 |

(The results of 1.2B models have already been included in Fig. 4 of our paper)

The actual losses of 7B/70B models are very close to the theoretical values calculated by our scaling law. Therefore, we could confidently claim that our scaling law could extend to larger model sizes. 
These additional results will also be given in the revision.

## Q2: Do the scaling laws reduce to existing integer training scaling laws when e = 0?

A2: We thank the reviewer for this insight. Mathematically, our scaling laws approximately reduce to integer quantization formulations when e = 0. However, there exist fundamental differences in hardware implementation between integer (fixed-point) and floating-point arithmetic: integer operations rely on dedicated fixed-point units and two's complement representation, while floating-point architectures require separate processing of sign, exponent, and mantissa bits through distinct computational mechanisms. Such hardware-level disparities suggest that theoretical simplifications may not directly translate to empirical training scenarios. Our current work focuses on floating-point quantization scaling laws ($e \neq 0$), while the formal alignment of the special e = 0 integer case with existing theories will be explored as a future direction. This requires joint optimization analysis incorporating hardware instruction sets and numerical representation properties for rigorous empirical validation. We will add this discussion in our revision.

## Q3: Would it be more reasonable to fit a curve to SNR or datatype distortion instead of E and M separately? How much of this is dependent on activation and weight distributions? Would some PTQ and QAT papers with random orthogonal transformations change the scaling law?

A3: (1) This is a valuable suggestion. As shown in Eq. 37 of [1], using the Signal-to-Quantization-Noise Ratio (SQNR):

$$\text{SQNR} = 10 \log_{10}\left( \frac{ \mathbb{E}[\mathbf{W}^2]\mathbb{E}[\mathbf{X}^2] }{ \mathbb{E}[(\mathbf{W}\mathbf{X} - \mathcal{Q}(\mathbf{W})\mathcal{Q}(\mathbf{X}))^2] } \right)$$

for GEMM operations could unify the effects of exponent (E) and mantissa (M) precision while accounting for input distribution modifications like random orthogonal transformations. 
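As a rough illustration of this quantity, the sketch below simulates a round-to-nearest EeMm quantizer and measures how GEMM quantization noise shrinks as mantissa bits grow. This is a simplified toy, not the exact estimator of [1]: it ignores subnormals and infinities, and it uses the exact GEMM output power in the numerator rather than $\mathbb{E}[\mathbf{W}^2]\mathbb{E}[\mathbf{X}^2]$.

```python
import numpy as np

def fp_quantize(x, e_bits, m_bits):
    # Toy EeMm minifloat: round the mantissa to m_bits and clamp the exponent
    # range (no subnormals, no special values) -- an illustrative assumption
    bias = 2 ** (e_bits - 1) - 1
    mant, expo = np.frexp(x)                       # x = mant * 2**expo, |mant| in [0.5, 1)
    mant = np.round(mant * 2 ** (m_bits + 1)) / 2 ** (m_bits + 1)
    expo = np.clip(expo, -bias + 1, bias + 1)
    return mant * 2.0 ** expo

def sqnr_db(W, X, e_bits, m_bits):
    # Exact GEMM signal power over quantization-induced noise power, in dB
    ref = W @ X
    noisy = fp_quantize(W, e_bits, m_bits) @ fp_quantize(X, e_bits, m_bits)
    return 10.0 * np.log10((ref ** 2).mean() / ((ref - noisy) ** 2).mean())
```

With more mantissa bits the rounding error shrinks, so for typical Gaussian tensors `sqnr_db(W, X, 5, 6)` should exceed `sqnr_db(W, X, 5, 2)`.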
However, SQNR calculation inherently depends on tensor distributions, which may limit its practical applicability. Therefore, we directly select the raw exponent and mantissa bits rather than the SNR as essential factors in our scaling law for more precise prediction ability.

(2) Notably, our theory explicitly models dataset size (D) as a variable, and since weight/activation distributions evolve during training, we believe that our formulation exhibits partial robustness to such distributional shifts. The interaction between PTQ/QAT methods and scaling laws remains an open question requiring systematic analysis of how orthogonal transformations alter quantization noise dynamics. We will add our discussions in revision.

## Q4: Format and references.

A4: Thanks for your suggestion. We will fix them in revision.

## References:

[1] Kuzmin, Andrey et al. FP8 Quantization: The Power of the Exponent.

---

Rebuttal Comment 1.1: Comment: Thank you for your response and additional experiments. I will keep my score for now.
Summary: This paper proposes a scaling law for LLM performance prediction according to model size, dataset size, exponent bit, and mantissa bit while training LLMs under FP quantization. Based on previous research, the paper tries to predict LLM performance more precisely. To achieve this objective, the paper proposes the following. 1. Exponent bit is of greater importance than mantissa bit. 2. While training LLMs under low precision conditions, the excessive size of a dataset which is larger than its critical size can degrade the performance of the model. 3. The optimal balance between cost and performance is among 4~8 bits. Claims And Evidence: - Exponent bit is more important than mantissa bit. - The paper supports this claim with various experiments and provides optimized exponent bit and mantissa bit settings per total number of bits. - There are already several works that contend the importance of exponent bits, so it is not a novel idea. - There is a critical size of a dataset according to the size of a model. - Figure 6 is depicted with predicted loss, not actual loss. However, there isn’t any actual performance validation with benchmark datasets. It is not appropriate to draw conclusions based solely on predicted loss values, without considering validation performance with real data. Methods And Evaluation Criteria: - The proposed scaling law - The authors show that the prediction of the proposed scaling law is more accurate compared to previous works with several experiments. - However, the paper doesn’t clearly provide how the constants in the scaling law equations are determined. - Also, there aren’t any experiments with benchmark datasets. - Propose optimal float layout per bit-width - The authors find optimal values with the proposed scaling law. - The grounds for asserting that the proposed layout is optimal are weak because the authors don’t provide experiments with any benchmark datasets. 
Theoretical Claims: The authors explain the proposed scaling law with mathematical formulas. However, the authors just assume that those formulas are right, and there is no rationale for how they are derived. Experimental Designs Or Analyses: The authors provide graphs about the correlation between predicted loss and actual loss to verify how well the proposed scaling law regresses the performance of language models. The proposed scaling law shows better results compared to the previous works. However, they don’t validate the scaling law with the target models and practical validation datasets. It would be better to compare the correlation between the outputs of the scaling law and validation datasets. Likewise, verifying the claims with predicted loss only is not persuasive, thus additional support is needed. Supplementary Material: The authors provide a detailed explanation of the proposed scaling law and graphs depicting the relationship between predicted loss and actual loss in supplementary material. They strengthen the theoretical support of the proposal, but they are not enough to resolve the problems mentioned above. Relation To Broader Scientific Literature: It has already been addressed in previous works that the exponent bit is important during FP quantization. Thus their claim is not a novel finding. Essential References Not Discussed: `FP8 Quantization: The Power of the Exponent` published in Neurips 2022 already addressed the importance of the exponent bit in FP quantization. Other Strengths And Weaknesses: As the authors already mentioned in the paper, performance prediction of LLMs according to quantization bit settings can help design hardware architecture well. Also, from a practical perspective, if we can predict the size of the model and the amount of data for achieving the desired performance, it will help reduce costs. Therefore, this research topic has good scalability. However, the paper’s claims demonstrate limited novelty. 
The importance of exponent bits for LLMs is already known. Even without the paper mentioned above, BF16, which has more exponent bits than FP16, is widely used for LLMs. Also, the support for the claims is insufficient. The authors propose the scaling law, but they do not explain how it is derived or how its constants are determined. Moreover, the authors verify their claims based on predicted loss, after showing that the predicted loss calculated with the proposed scaling law approximates the actual loss well. However, this is not persuasive, because the paper does not confirm the claims with actual loss or validation performance on real datasets. Other Comments Or Suggestions: The paper refers to figures as `Figure N`, but Figure 6 is referred to as `Fig 6`. It would be better to unify the reference style. Questions For Authors: - Figure 5 shows optimized bit settings between exponent and mantissa according to total bit-width. However, the authors do not use those settings in Figure 6. Does a similar pattern appear with the settings obtained from Figure 5? - In the experiment settings in the Appendix, the target models are significantly smaller than those of current interest. Does the proposed scaling law hold for larger models, and for various models with marginal differences? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive suggestions and valuable comments! We hope our rebuttal could help address your concerns. If so, we would be grateful if you could consider increasing the overall recommendation of our work. ## Q1: Novelty. Several works have already emphasized the importance of exponent bits. A1: (1) The central contribution of this work is to build a **scaling law for floating-point quantization training, which quantitatively models the relationships between the loss L and several essential factors**, e.g., data size D, model size N, exponent E, mantissa M, and block size of scaling factors B, by providing a joint scaling-law formulation in Eq. 1. We are the **first** to give this scaling law. We highlight that the discussions on the importance of exponent bits and mantissa bits are just one part of our multiple contributions, and **the novelty of our work still holds**. (2) Although some efforts have explored the importance of exponent bits in FP quantization (e.g., FP8 Quantization: The Power of the Exponent), our efforts and discussions on exponent and mantissa within the scaling law are still novel and valuable: (a) Different from existing works, our work first provides a scaling law that could **quantitatively predict the model loss** via D, N, E, M, B, rather than simply focusing on the importance of E and M. Our work enables accurate loss prediction of LMs in practice, verifying the importance from a quantitative aspect. (b) Besides the quantitative marginal effects of exponent and mantissa, we further explore the quantitative **joint effects of D, N, E, M, B** on model loss. (3) Besides the novel findings on E and M, we also discover several **valuable observations and insights with extensive experiments** that could facilitate future low-precision LLM training in the last paragraph of Sec. 1. We will add this discussion and related works in our revision. ## Q2: Evaluation setting. 
The authors don’t provide experiments with benchmark datasets. A2: (1) Thanks for the suggestion. This work concentrates on providing a scaling law for FP quantization training. Following previous works on scaling laws [1,2] (including [1] for INT quantization), we conduct extensive experiments with different precision settings (366 models) to build the basic scaling-law formulation over various D/N/E/M/B, and validate its effectiveness on larger models. In Fig. 4, we have clearly shown the good fitting results of our scaling law. The validation points (1.2B models, noted as yellow stars) in the bottom left corner fit our scaling law well, implying that the scaling law could extend to larger models (the fitting results of 7B/70B models, given in this rebuttal, are also satisfactory). We DO NOT draw conclusions based solely on predicted loss values, but **rely on the fitting results between predicted losses and actual losses on real data**. (2) The goal of a scaling law is to predict the loss based on essential factors. Therefore, it is natural and best to evaluate its correctness based on the differences between actual losses and the predicted losses given by our scaling law. [1] is the most related work; it also adopts the same evaluation metric (fitting results) for validation and does not evaluate on downstream benchmarks. We also clarify that the loss is a practical and accurate indicator widely used in the real world that provides an overall evaluation of LMs' capabilities, and it is strongly correlated with overall performance on downstream tasks. We will add this discussion in our revision. ## Q3: How are scaling law equation constants determined? A3: We have introduced in Sec. 3 and 4.1 that we first conduct experiments to reveal the marginal effects, and then design the joint scaling law. The formulations are designed based on empirical insights and the distributions of losses in the 366 experiments. 
Next, we fit and validate the formulation via LM settings (D/N/E/M/B) and the corresponding losses to obtain the constants in our scaling laws. These constants are learned from practical losses, not theoretical derivation, similar to other works [1,2]. ## Q4: Do the optimized bit settings show a similar pattern in Fig. 6? A4: As stated in Sec. 4.3, Fig. 6 displays the implication that there is an optimal data size under a certain FP quantization setting, **derived theoretically from our scaling law**. The validation of our scaling law is in Sec. 4.1 and Fig. 4 (as in A2). Fig. 6 aims to show the intuitive phenomenon of the theoretical “optimal data size” in different settings. Hence, using the optimized bit settings in Fig. 5 also yields a similar pattern due to the formulation. A comprehensive derivation of $D_{crit}$ is in Appendix D. ## Q5: Extension to larger models. A5: Due to space limits, please refer to Reviewer UF7C's A1 for larger LMs' results (7B/70B). ## References [1] Kumar T et al. Scaling laws for precision. [2] Hoffmann J et al. Training compute-optimal large language models.
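The fitting procedure described in A3 (constants learned from practical losses, with the functional form fixed first) can be sketched with a toy stand-in. The Chinchilla-style form below and every constant in it are hypothetical, NOT the paper's actual Eq. 1 (which also involves E, M, and block size B); the point is only the mechanics of recovering constants by least squares over (model size, data size, loss) runs.

```python
# Toy illustration: fix a hypothetical Chinchilla-style form, then fit its
# constants by least squares from (N, D, loss) runs. All numbers here are
# made up for the sketch; this is not the paper's Eq. 1.
ALPHA, BETA = 0.34, 0.28               # hypothetical fixed exponents

def features(n, d):
    # With the exponents fixed, L = e0 + a*x1 + b*x2 is linear in (e0, a, b).
    # N and D are normalized so the design matrix is well conditioned.
    return [1.0, (n / 1e8) ** -ALPHA, (d / 1e10) ** -BETA]

def det3(m):
    # Determinant of a 3x3 matrix.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit(runs):
    # Ordinary least squares via the 3x3 normal equations, solved by Cramer's rule.
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for n, d, loss in runs:
        x = features(n, d)
        for i in range(3):
            xty[i] += x[i] * loss
            for j in range(3):
                xtx[i][j] += x[i] * x[j]
    d0 = det3(xtx)
    sol = []
    for k in range(3):
        mk = [row[:] for row in xtx]
        for i in range(3):
            mk[i][k] = xty[i]
        sol.append(det3(mk) / d0)
    return sol

# Synthetic "runs": losses generated from known constants, then recovered by the fit.
E0, A, B = 1.7, 1.0, 1.4
grid = [(n, d) for n in (41e6, 154e6, 679e6, 1.2e9) for d in (1e10, 5e10, 1e11)]
runs = []
for n, d in grid:
    _, fn, fd = features(n, d)
    runs.append((n, d, E0 + A * fn + B * fd))
fitted = fit(runs)
print([round(c, 6) for c in fitted])   # recovers [1.7, 1.0, 1.4]
```

With noiseless synthetic losses the fit recovers the generating constants exactly; on real runs the same procedure would minimize the squared residuals between predicted and measured losses.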
Summary: The paper proposes a new scaling law tailored specifically for floating-point quantization during training of large language models (LLMs). Authors extensively studied how quantization parameters—exponent bits, mantissa bits, and scaling block sizes—impact LLM performance. Through extensive empirical experiments (366 runs), they developed a unified scaling law for predicting LLM losses under floating-point quantization. The authors also identified optimal bit layouts, critical data sizes to prevent performance degradation under low precision, and determined that 4–8 bits offer the best trade-off between computational cost and model performance. ## update after rebuttal I maintain my original score. I am generally satisfied with the authors’ response. Claims And Evidence: - Authors claim that exponent bits matter a bit more than mantissa bits for performance. This was supported by many experiments showing lower loss with optimal exponent allocation. - They further claim that there's a critical data size, beyond which adding extra data actually hurts performance in low precision. Authors mathematically derived this critical point and validated empirically across different configurations. - Finally, authors claim that optimal quantization precision depends on computational resources available, best balance typically between 4 to 8 bits. Experiments over many model sizes, precision levels, and compute budgets repeatedly show this optimal precision window. Methods And Evaluation Criteria: The authors used Transformer-based LLMs trained on subsets of the Dolma dataset, with sizes from 41M to 1.2B parameters. Quantization was simulated via QPyTorch, carefully varying exponent, mantissa, block size, data, and model sizes. Empirical outcomes were systematically compared with existing scaling laws (Chinchilla, OpenAI and Kumar et al. 2024), highlighting improvements over prior work. 
Theoretical Claims: The scaling law is empirically derived by running a comprehensive series of experiments (366 runs) varying parameters like model size, data size, exponent bits, mantissa bits, and block sizes. Other existing works like Kaplan et al. (2020), Hoffmann et al. (Chinchilla law), and Kumar et al. (2024) are also predominantly empirically derived. Experimental Designs Or Analyses: Experiments were thorough and well-organized, systematically exploring multiple dimensions: exponent/mantissa combinations, data scales, block sizes, and model sizes. Yet, the largest model studied was only 1.2B parameters, somewhat limiting how confidently we can extrapolate findings to the extremely large models popular nowadays (e.g., tens or hundreds of billions of parameters). Supplementary Material: The supplementary materials were detailed and helpful, including hyperparameter configurations and numerical fitting results clearly laid out in tables and additional figures. The derivation steps provided were very helpful for reproducibility. Relation To Broader Scientific Literature: The authors cited foundational scaling law papers (e.g., Kaplan et al. 2020; Hoffmann et al. 2022; Kumar et al. 2024) adequately. Essential References Not Discussed: The authors ignored real-hardware studies like [1], [2], [3], which provide significant insights into floating-point performance on actual hardware. Including these studies would make the findings stronger and more relevant. [1] Kuzmin, Andrey et al. “FP8 Quantization: The Power of the Exponent.” ArXiv abs/2208.09225 (2022). [2] Baalen, Mart van et al. “FP8 versus INT8 for efficient deep learning inference.” ArXiv abs/2303.17951 (2023). [3] Aggarwal, Shivam et al. “Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs.” 2024 34th International Conference on Field-Programmable Logic and Applications (FPL) (2024): 297-303. 
Other Strengths And Weaknesses: **Strengths** - Clearly fills a gap by focusing specifically on floating-point quantization. - Robust empirical validation with extensive experiments. - Offers actionable guidance on optimal floating-point bit allocation. - Presents clear visualizations (especially Figure 5 on optimal bit layouts) that make insights easy to understand. **Weaknesses** - Experiments capped at relatively modest model scales (max ~1B). - Hardware validation is missing, which could impact the practical usability of the proposed methods. Other Comments Or Suggestions: Line 122 "Current Scaling Laws cannot Well Fit in Floating-point Quantization" could be rephrased as "Current Scaling Laws cannot Fit Well in Floating-point Quantization". Questions For Authors: - Do you think the proposed scaling law can extend beyond model sizes of 1.2B? - Can the observed relationships between exponent and mantissa bit precision in your paper be conceptually related or compared with the notion of an 'effective parameter count' introduced by Kumar et al. (2024)? Would integrating such a concept clarify or enhance your theoretical interpretation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive suggestions and valuable comments! We hope our rebuttal could help address your concerns, and we would be grateful if you could consider increasing the overall recommendation of our work. ## Q1: Experiments capped at relatively modest model scales (max ~1B). Do you think the proposed scaling law can extend beyond model sizes of 1.2B? A1: Thanks for the suggestion. We agree with the reviewer that experiments on larger models could further verify the effectiveness of our scaling law. We attempt to answer from the following aspects: (1) The scaling law exploration of LLMs is essential but extremely expensive in both GPUs and running time. To thoroughly explore the relationships between the loss and different floating-point quantization training factors (e.g., N, D, E, M, B), we have trained 366 models with different settings (model sizes from 41M to 679M) to draw our scaling laws, and successfully validated them on 1.2B models with 8 different settings in Fig. 4. **The adopted model sizes are comparable with those in other scaling law works** (e.g., Scaling laws for precision [1], one of the most related works, adopts model sizes up to 1.7B parameters for validation). (2) We conduct several additional experiments with model sizes larger than 1.2B. Precisely, **we successfully predict the loss of 7B and 70B LLMs (with different settings) based on our scaling law**. The detailed model settings, actual losses, and predicted losses are shown as follows:

| N | D | B | E | M | $L_{\text{actual}}$ | $L_{\text{predict}}$ | $\Delta L$ |
|-:|-:|-:|-:|-:|-:|-:|-:|
| 1.2B | 100B | 512 | 4 | 3 | 2.50 | 2.54 | -0.04 |
| 7B | 10B | 64 | 4 | 3 | 2.65 | 2.70 | -0.05 |
| 7B | 100B | 64 | 4 | 3 | 2.38 | 2.38 | 0.00 |
| 70B | 10B | 64 | 4 | 3 | 2.60 | 2.56 | 0.04 |
| 70B | 20B | 64 | 4 | 3 | 2.44 | 2.42 | 0.02 |

(The results of 1.2B models have already been included in Fig. 4 of our paper.) The actual losses of the 7B/70B models are very close to the theoretical values calculated by our scaling law. 
Therefore, we can confidently claim that our scaling law extends to larger model sizes. These additional results will also be given in the revision. ## Q2: Hardware validation is missing, which could impact the practical usability of proposed methods. A2: Thanks for your suggestion. We agree that these related works can provide valuable insights into floating-point quantization in practice. We will add these noted related works in our revision with detailed discussions. ## Q3: Can the observed relationships between exponent and mantissa bit precision in your paper be conceptually related or compared with the notion of an 'effective parameter count' introduced by [1]? Would integrating such a concept clarify or enhance your theoretical interpretation? A3: Thanks for the suggestion. As discussed in Appendix M of [1], the effective parameter count is formulated as a counterpart to the parameter count N in the Chinchilla scaling law. In Appendix C, Eq. 37 of our paper, we derive an analogous concept: our equivalent N demonstrates not only a positive correlation with the precision metrics $P^{\delta + \nu}$ or $(E + 0.5)^\delta(M + 0.5)^\nu$, aligning with their framework, but also a negative correlation with dataset size D. This reveals that increasing training data volume effectively reduces the equivalent parameter count, implying that larger datasets amplify the impact of numerical precision on model expressivity. ## References: [1] Kumar T et al. Scaling laws for precision.
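The validation table in A1 above can be sanity-checked in a few lines. The numbers below are copied verbatim from that table; no scaling-law formula is assumed here.

```python
# Sanity check of the rebuttal's validation table: delta_L = L_actual - L_predict.
rows = [
    # (N, D, L_actual, L_predict)
    ("1.2B", "100B", 2.50, 2.54),
    ("7B",   "10B",  2.65, 2.70),
    ("7B",   "100B", 2.38, 2.38),
    ("70B",  "10B",  2.60, 2.56),
    ("70B",  "20B",  2.44, 2.42),
]
deltas = [round(actual - pred, 2) for _, _, actual, pred in rows]
worst = max(abs(d) for d in deltas)
print(deltas)   # [-0.04, -0.05, 0.0, 0.04, 0.02], matching the table's last column
print(worst)    # 0.05, the largest prediction gap across all validation points
```

The worst-case gap of 0.05 across the 1.2B/7B/70B points is what the rebuttal leans on when arguing the law extrapolates to larger models.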
The Energy Loss Phenomenon in RLHF: A New Perspective on Mitigating Reward Hacking
Accept (poster)
Summary: This paper introduces the energy loss phenomenon in RLHF, where the L1 norm difference between input and output in the final layer of the LLM increases during fine-tuning, leading to reward hacking. To mitigate this, the authors propose EPPO (Energy loss-aware PPO), which penalizes energy loss during RL optimization. Experiments across various LLMs on summarization and instruction-following tasks show that EPPO improves response quality and reduces reward hacking, compared to conventional online alignment methods like PPO with KL or response-length penalties. Claims And Evidence: The proposed method is empirically well-validated. However, it is unclear whether the theoretical analysis properly supports the proposed regularization term, the energy loss-based penalty, in the reward function. While the authors define the energy loss as the reduction in the L1 norm of the embedding vector after passing through the final layer and claim that its excessive increase leads to reward hacking, the proposed regularization term does not directly penalize the proposed energy loss. Instead, it ensures that the norm-size variation of the final layer's input and output vectors remains similar between the SFT and fine-tuned models. This raises concerns about the consistency between the theoretical analysis (and the paper's overall story) and the actual role of the regularization term, making the motivation and justification for the method unconvincing. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable. The experiments use diverse LLMs and well-established tasks such as summarization and instruction-following. Theoretical Claims: I have reviewed the theoretical analysis, including the proofs, and found several concerns about their validity and significance. 
* Corollary 4: It is unclear what meaningful insight this result provides. As the authors briefly mention, when the energy loss $\Delta E$ is greater or smaller than the average ($-\alpha$), the correlation with the upper bound of the mutual information $I(X;Y)$ appears to be reversed. This raises doubts about the practical relevance of this corollary. Additionally, since $\alpha$ itself may change as the LLM parameters are updated, it is unclear whether Corollary 4 remains valid throughout fine-tuning. This result does not seem trivially true, so a formal proof is necessary. * Theorem 3: While I checked the proof, whether the upper bound is meaningful or sufficiently tight is unclear. For instance, a potentially tighter bound could be obtained by considering $||H^{out}|| - \mathbb{E}[||H^{out}||]$ instead of incorporating $||H^{in}||$. Moreover, it appears that the expectation term in the upper bound might cancel out with $\sigma$. (Minor) * The parameter $\sigma$ in Theorem 3 is not explained well. Is it the standard deviation of the energy loss? Additional clarification will be beneficial regarding its interpretation and role in the bound. * Theorem 3 states "any layer $l$," but if the model has residual connections, the Markov chain property used in the proof may not hold, making inequality (11) invalid. Does "any layer" correspond to any transformer block? For this theorem to be correct, conditions on the network architecture should be explicitly stated. * Corollary 5 appears to be correct, but its proof needs revision. Instead of stating "Y is uniquely determined by X" in line 709, it should be "$h^{out}$ is uniquely determined by $h^{in}$." Corresponding adjustments in the surroundings are also necessary. Experimental Designs Or Analyses: The experimental design is generally sound. However, one issue stands out in Section 6.2 (Figure 7), which aims to validate the theoretical analysis. 
For this result to properly support the theory, the plot should not only include the energy loss $\Delta E$ but also its mean value $-\alpha$ (or $\Delta E + \alpha$), as both terms contribute to the upper bound of the mutual information in Theorem 3. Without this, it is unclear whether the empirical results truly align with the theoretical predictions. Supplementary Material: I reviewed Appendix A. Other supplementary materials were briefly checked as needed based on references in the main text. Relation To Broader Scientific Literature: This paper studies reward hacking in RLHF, a crucial topic in LLM alignment. It particularly relates to prior works on online alignment methods, such as KL-regularized PPO. It explores an alternative approach by introducing a new regularization to constrain the norm-size variation between the SFT and fine-tuned models. Essential References Not Discussed: To the best of my knowledge, no essential references are missing from the paper. Other Strengths And Weaknesses: None Other Comments Or Suggestions: * The justification for calling the L1 norm difference between the final layer's input and output the "energy loss phenomenon" is unclear, as the authors seem not to establish a strong connection between this metric and physical energy concepts. The term activation decay or norm shrinkage might be more appropriate. * Table 1 does not appear to be referenced in the main text. Questions For Authors: * Instead of using the difference in L1 norm ($\Delta E$), a more straightforward approach could be to focus only on the L1 norm of the output. Have the authors considered comparing or discussing such an approach? * In Theorem 3, is introducing the variational distribution $q$ necessary? 
Since $H^{out}$ is deterministically determined by $H^{in}$, it seems possible to derive a similar result more simply as: $I(H^{in}; H^{out}) = \mathcal{H}(H^{out}) \leq \mathcal{H}(q)$, where $q$ is a normal distribution whose standard deviation is at least that of $H^{out}$. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments. We would like to highlight that our **main contribution** is the **empirical observation of the energy loss phenomenon** and **the corresponding RL regularization design**, as acknowledged by all other reviewers (wKXn, mgdS, and mmxN). **Theoretical analysis is included as an exploratory explanation**, with its limitations explicitly discussed in the manuscript (Lines 198-206, Page 4). We acknowledge that the current analysis only applies under certain conditions, and a **rigorous explanation is beyond the scope of this paper** but remains a promising direction for future work. --- > **Q1:** Why does the regularization term penalize energy loss variation relative to the SFT model, rather than the loss itself? **RQ1:** Our design **treats the SFT model's energy loss as a state-dependent baseline**—a common RL technique—**to reduce variance from the regularization term**. By penalizing deviations from this baseline, our approach **effectively suppresses excessive increases**, **aligning well with our theoretical analysis**. The effectiveness of this design is validated empirically in Figure [R10](https://ibb.co/N6Q9TJ8Z). > **Q2:** Practical Relevance of Corollary 4. **RQ2:** **We would like to clarify that our main contribution lies in the empirical findings and the corresponding RL regularization.** Corollary 4 **serves as an exploratory explanation for our findings** from the perspective of contextual mutual information. While it may not generalize to all settings, it **offers a plausible interpretation under specific conditions**. Corollary 4 shows that **if the energy loss increase stays below a threshold**, **the upper bound of contextual mutual information is reduced**, suppressing response-context relevance, offering a theoretical explanation for the emergence of reward hacking in such scenarios. 
**However, when increased energy loss exceeds the threshold**, **the upper bound may rise**, **but this does not necessarily improve contextual relevance**. Why excessively increased energy loss continues to be associated with reward hacking in such scenarios remains an open question. Thus, **Corollary 4 offers a theoretical insight into reward hacking as energy loss increases**, though it is limited to a specific regime. **A more rigorous theoretical explanation is beyond the scope** of this paper but is a promising area for future research. > **Q3:** Dynamic $\alpha$ in corollary 4 during fine-tuning. **RQ3:** We would like to clarify that Corollary 4 is **not a strict theoretical guarantee for the proposed RL regularization during fine-tuning**, but rather a lens for interpreting empirical observations. While it is true that $\alpha$ may shift as training progresses, the underlying relationship it describes still holds. > **Q4:** Questions about the upper bound of mutual information derived in Theorem 3. **RQ4:** We would like to clarify that the goal of Theorem 3 is to provide a theoretical perspective that **helps explain our empirical observations, rather than deriving the tightest possible upper bound on contextual mutual information.** While your suggested formulations, $||H^{\text{out}}||_1 - \mathbb{E}[||H^{\text{out}}||_1]$ and $H(q)$, may indeed yield a tighter upper bound, they are **significantly more difficult to estimate and optimize, particularly during dynamic fine-tuning**. In contrast, **our use of $||H^{\text{out}}||_1 - ||H^{\text{in}}||_1$ provides a more tractable and empirically estimable surrogate that still captures the core trend central to our analysis.** > **Q5:** More rigorous statement in Theorem 3’s condition and Corollary 5’s proof. & Suggestion for using ”activation decay” & Definition of $\sigma$. **RQ5:** Thank you for your valuable suggestion. 
We will revise the corresponding statements and definitions for greater clarity in the revised version. > **Q6:** Question about Figure 7. **RQ6:** We would like to clarify that Figure 7 is **not intended to validate the theoretical analysis**. Rather, it is inspired by the theoretical insights and aims to empirically investigate further, from the perspective of contextual relevance, why excessive increases in energy loss are accompanied by reward hacking. > **Q7:** Comparison with directly penalizing the L1 norm of the output. **RQ7:** Thank you for your comments. Following your suggestion, we **conducted additional experiments on Llama3-8b to compare our approach with directly penalizing the output L1 norm**. The results, shown in Figure [R11](https://ibb.co/RTScfjPx), **demonstrate the advantage of our approach**. We hypothesize that by penalizing the L1 norm difference between the input and output of the final layer, our approach **constrains only the final layer**, thereby **preserving optimization flexibility**. In contrast, directly penalizing the output norm **inevitably constrains the preceding layers, limiting the model’s optimization capacity**. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. While I’m not seeking full theoretical analysis, I still find a disconnect between the mathematical motivation and the actual design of the proposed method. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We understand your concerns. However, we would like to clarify that **our method’s design does not build on the theoretical analysis**. Rather, the theoretical analysis is provided solely as an exploratory explanation of our empirical observations, while our method's design is entirely based on these empirical observations. 
As we clearly indicated in Section 1 of introduction (Line 78–79: “To address this phenomenon”) and in Section 4 of our method design (Line 216: “Building on this observation,”), these empirical observations form the foundation of our proposed method. Moreover, to further address your concern, **we have refined our theoretical analysis, which can now explain our empirical observations across all scenarios**. The updated theorem and its corresponding proof are provided in Figure [R12](https://ibb.co/zhRq2Gdn), **with the specific revisions highlighted in blue**. In the revised theorem, we theoretically demonstrate that **the L1 norm of the final output hidden state from the LLM provides an upper bound on the contextual relevance of its responses**. Therefore, during the RL process, as the energy loss in the final layer of the LLM significantly increases, the L1 norm of its output hidden state correspondingly decreases. This reduction tends to compress the mutual information between the context and the response, which may potentially lead to hacking behavior. If any additional points require clarification or further adjustments, please do not hesitate to let us know.
Summary: This paper identifies the Energy Loss Phenomenon in RLHF, where increasing energy loss in the final layer of LLMs signals reward hacking, and provides a theoretical framework showing how this increase lowers response-context relevance, a key factor in reward hacking. To address this issue, The authors propose EPPO (Energy loss-aware PPO), a novel algorithm that penalizes the increase in energy loss during reward calculation to mitigate reward hacking. Extensive experiments show that EPPO effectively mitigates reward hacking, improves RLHF performance, and outperforms existing algorithms. The authors also demonstrate that EPPO can be viewed as an entropy-regularized RL algorithm, offering deeper insights into its effectiveness. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: Yes, I have checked the correctness of Theorem 3 and Corollary 5. Experimental Designs Or Analyses: Yes, I have checked the soundness and validity of the experiments in this paper. Supplementary Material: Yes, I have reviewed the main parts of the supplementary material. Relation To Broader Scientific Literature: This paper advances RLHF research by identifying energy loss in LLMs as a previously under-explored factor in reward hacking and introducing EPPO to effectively mitigate it. Essential References Not Discussed: No, all related works have been discussed in the paper. Other Strengths And Weaknesses: Strengths: 1. The paper is well-organized and is easy to understand. 2. The identification of the energy loss phenomenon in LLMs is a novel contribution, offering fresh insights into the problem of reward hacking in RLHF. 3. The theoretical explanations for the energy loss phenomenon and the proposed method are insightful. 4. 
The experimental results for the energy loss phenomenon and the proposed EPPO are thorough and compelling, highlighting their potential for practical applicability in real-world scenarios. Weakness: 1. Why does the comparison method PPO with length penalty still suffer from reward hacking in the later stages of reinforcement learning, as shown in Figure 5? This is confusing to me. 2. What specific advantages does the proposed EPPO algorithm have over the ensemble-based methods in [1], which are also designed to mitigate reward hacking? 3. The paper fails to discuss the potential computational overhead (time cost) introduced by the energy loss calculation during RL process. 4. Is there a simple and intuitive reason why the L1 norm is used in this work instead of the L2 norm? [1] Reward model ensembles help mitigate overoptimization. ICLR 2024 Other Comments Or Suggestions: There is a typo in the caption of Figure 14 in the appendix: AlpacaFarm, Anthropic-Helpful, Anthropic-Harmless, Reddit TL;DR datasets -> AlpacaFarm, Anthropic-Helpful, Anthropic-Harmless, and Reddit TL;DR datasets. Questions For Authors: See Weakness. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate your positive feedback on the clarity, novelty, theoretical insights, and compelling experimental results of our paper. We will address each of your comments and concerns below and also in our revised manuscript. --- > **Q1:** Why does the comparison method PPO with length penalty still suffer from reward hacking? > **RQ1:** Thank you for your valuable feedback. As discussed in Lines 367-369 on Page 7 of the manuscript, **PPO with length penalty can only capture hacking patterns characterized by longer response lengths**, such as excessive redundancy. However, **it is not effective in addressing hacking patterns that are unrelated to response length**, such as excessive caution. Therefore, **PPO with length penalty still suffers from reward hacking to some extent**. The related case study can be found in Appendix L of the manuscript. > **Q2:** What specific advantages does the proposed EPPO algorithm have over the ensemble-based methods? > **RQ2:** Thank you for your insightful question. As discussed in Lines 292-296 on Page 6 of the manuscript, although the ensemble-based RM method can significantly enhance the robustness of reward modeling, **it may still be susceptible to spurious features in reward modeling and distribution shifts during the RL stage, potentially leading to reward hacking**. In contrast, our proposed EPPO algorithm **directly focuses on the neuron behavior within the LLM that is related to reward hacking**, **enabling more effective mitigation**. Furthermore, **our EPPO is more efficient compared to ensemble-based RM methods**, as the latter often requires loading multiple reward models during the RL process, which significantly increases resource demands, especially for large-scale LLMs. > **Q3:** Computational overhead analysis. > **RQ3:** Thank you for your valuable suggestion. 
Following your advice, we report the training times for both PPO and EPPO on two tasks, using a single node equipped with 8 A100-SXM80GB GPUs, in the table below. As shown, **while our EPPO does incur a slight increase in training time, it significantly improves training performance**, as demonstrated in our paper.

| Tasks | PPO | EPPO |
| --- | --- | --- |
| General Dialogue | 7h 26min | 7h 45min |
| Summarization | 3h 53min | 4h 06min |

> **Q4:** Intuitive reason why the L1 norm is used in this work instead of the L2 norm.

**RQ4:** Thank you for your insightful question. The primary consideration for using the L1 norm instead of the L2 norm is that the **L1 norm is more robust to outliers**. This robustness to outliers allows the L1 norm to more reliably identify relevant features while minimizing the impact of noise, making it a more appropriate choice for analyzing the hidden state representations in LLMs.

---

Rebuttal Comment 1.1:

Comment: Thanks for the authors' responses, which address all of my concerns.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer wKXn,

We are very glad that our response addressed your concerns. We truly appreciate your valuable feedback and the positive support for our work.

Best regards,
The authors of Paper 3248
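The L1-vs-L2 robustness argument in RQ4 above has a standard numeric illustration (not from the paper): the minimizer of an L1 objective over a set of values is the median, while the L2 minimizer is the mean, so a single outlier shifts the L2 solution far more.

```python
# Minimal illustration (not from the paper): the L1-optimal summary of a set of
# values (the median) is far more robust to a single outlier than the
# L2-optimal summary (the mean).
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

clean = [1.0, 1.1, 0.9, 1.0, 1.05]
with_outlier = clean + [100.0]  # one corrupted value, e.g. a noisy activation

# The mean is dragged toward the outlier...
print(mean(clean), mean(with_outlier))      # ~1.01 vs ~17.5
# ...while the median barely moves.
print(median(clean), median(with_outlier))  # 1.0 vs ~1.025
```

The same intuition carries over to L1 vs. L2 penalties on high-dimensional hidden states: a few extreme coordinates dominate an L2 objective but contribute only linearly to an L1 one.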
Summary: This paper observes that the energy loss in the last layer of LLMs tends to get larger when using RL to train. Beyond this finding, the authors also give a theoretical analysis, which shows that under mild conditions, increased energy loss reduces the upper bound of contextual relevance in LLMs. This makes LLMs more easily overfit to reward model-favored patterns in RL, which is harmful to performance. To solve this problem, the authors provide an energy loss-aware PPO algorithm that penalizes the energy loss when it grows larger. Experiments show that the proposed approach achieves good results for multiple different LLMs and can also improve the performance of RLHF.

Claims And Evidence: The authors have well supported the claims made.

Methods And Evaluation Criteria:
- Overall, the observation of this paper is interesting. According to the analysis and experiments from the paper, it is really an interesting problem, which is faced by multiple LLMs.
- Though the proposed method is relatively simple, it can well address the issue posed by the authors. According to the experimental results, the proposed approach indeed works for various LLMs.
- In addition to the theoretical analysis, the authors also use in-distribution data and out-of-distribution data for evaluation. GPT-4 evaluation is adopted to assess the effectiveness of the proposed approach, which is similar to most previous works. The results look good.

Theoretical Claims:
- The authors have provided proofs in the appendix. It seems that there is no problem.

Experimental Designs Or Analyses:
- It is good to see the results on Llama3-8B, Mistral-7B, and DeepSeek-7B. Compared to OpenAI o1 and DeepSeek-V3, these models are not large enough to support the claims made by the authors. The question is: Have the authors attempted to observe whether a similar phenomenon happens to larger-sized models?
- Another question I am interested in: How would the model performance change when the chain-of-thought strategy is used? As the energy loss gets larger, it seems that the model performance with deep thinking would degrade accordingly. I am just curious about this.

Supplementary Material: The authors provide supplementary material containing the source code of the proposed approach.

Relation To Broader Scientific Literature: Matches well.

Essential References Not Discussed: This paper discusses the energy loss problem of LLMs. Relevant references have been cited.

Other Strengths And Weaknesses:

Strengths:
- The presentation of this paper is good. It is easy to understand the motivation and the method.
- The results are also good, which well reflect the effectiveness of the proposed approach.

Weaknesses:
- Despite the good performance, I think the authors should provide more explanations about the cases in which the proposed approach fails.
- It would be good if one of the qualitative examples in the appendix could be moved to the main paper to better show how the proposed approach works compared to the baselines.

Other Comments Or Suggestions: No further comments.

Questions For Authors: No further questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We appreciate your positive feedback on our empirical observation, the effectiveness of our approach for various LLMs, and your acknowledgment of both the theoretical analysis and comprehensive evaluations. We will address each of your comments and concerns below and also in our revised manuscript.

---

> **Q1:** Have the authors attempted to observe whether a similar phenomenon happens to larger-sized models?

**RQ1:** We sincerely appreciate your valuable feedback. In response to your suggestion, **we extended our experiments to larger models**, specifically Qwen2.5-14B, to further validate our findings. The distribution of energy loss in the final layer of the LLM across various datasets is presented in Figure [R7](https://ibb.co/Fqw8Q2vY). As shown, **the substantial increase in energy loss consistently correlates with reward hacking across all datasets**. This observation **demonstrates the existence of the energy loss phenomenon in larger-scale LLMs** and further solidifies the contribution of our work.

> **Q2:** What would the model performance change when the chain of thoughts strategy is used?

**RQ2:** Thank you for this insightful suggestion. To investigate the impact of the chain-of-thought (CoT) strategy, we construct a CoT RL dataset, where we append the phrase "Let's think step by step." to each prompt. The comparison between PPO with CoT and PPO without CoT is shown in Figure [R8](https://ibb.co/21D6CCKd). As demonstrated, **incorporating CoT leads to a significant improvement in RLHF performance.** We hypothesize that this enhancement is due to the fact that the CoT strategy effectively constrains the output space, guiding the model to generate responses that better align with the user's input.

> **Q3:** Failure case of the proposed approach.

**RQ3:** Thank you for this comment. Based on our theoretical analysis, the proposed RL regularization term can also be interpreted as a contextual relevance constraint.
This means that **our method may not perform as well in cases where the model's responses are required to be highly divergent, with less emphasis on contextual relevance**, as shown in the case presented in Figure [R9](https://ibb.co/sp7S2wwx). In the revised version of our paper, we will explore and summarize additional failure cases to provide a more comprehensive and objective analysis.

> **Q4:** It would be helpful to move one of the qualitative examples from the appendix to the main paper to better illustrate how the proposed approach compares to the baselines.

**RQ4:** Thank you for your suggestion. In the revised version, we will **move one of the qualitative examples from the appendix to the main paper** to better illustrate the advantage of our proposed approach.

---

Rebuttal Comment 1.1:

Comment: Thanks for the responses. My concerns have been addressed. In addition, I also read the reviews by the other reviewers. It seems that all the reviewers recognize the novelty and the good motivation. Though the explanatory theory does not fully support the argument, this is not a big issue. I agree with Reviewer mmxN about this. So, I lean towards accepting this paper.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer mgdS,

Thank you for your positive feedback and for raising your score. We truly appreciate the time you dedicated to reviewing our paper, as well as your valuable suggestions for our work.

Best regards,
The authors of Paper 3248
Summary: This work identifies the Energy Loss Phenomenon in RLHF, where increasing energy loss in the final layer of an LLM is linked to reward hacking. To address this issue, the paper proposes EPPO, which penalizes energy loss growth in the RL reward function.

Claims And Evidence: The authors support their claims with both theoretical proofs and experimental validation.

Methods And Evaluation Criteria:
- This paper introduces an interesting perspective on the reward hacking problem by uncovering a relationship between the energy loss in the final layer of LLMs and reward hacking. Specifically, the authors connect energy loss with contextual mutual information, which influences reward hacking. I find this to be a compelling research topic.
- While the paper presents experimental evidence (e.g., Figure 4) to support this finding, I find the theoretical explanation somewhat unconvincing. If I understand correctly, Theorem 3 applies to any layer within the LLM, yet the authors emphasize the final layer's role in reward hacking. Theoretically, wouldn't this phenomenon also occur in other layers? Additionally, when energy loss surpasses the threshold $-\alpha$, it becomes positively correlated with the upper bound of contextual mutual information, making it difficult to fully reconcile the theoretical framework with the experimental findings. If I have misunderstood any aspects, I would appreciate clarification, as I am open to revising my score.
- EPPO directly incorporates an energy loss growth penalty into the reward function. However, even if a connection between energy loss in the final layer and reward hacking exists, it may only be a symptom rather than the root cause. Reward hacking ultimately stems from flaws in reward design (or, in RLHF, the quality of preference data). If a well-designed reward function prevents PPO-trained LLMs from exhibiting reward hacking, what happens to the energy loss in their final layers?
Moreover, what is the fundamental reason behind energy loss growth in PPO training when an ill-structured reward function is used? I believe the authors should further explore these underlying causes.

Theoretical Claims: Based on my assessment, there are no obvious flaws in the theoretical proofs.

Experimental Designs Or Analyses:
- The paper conducts comprehensive experiments, demonstrating the relationship between energy loss in the final layer and reward hacking, as well as the effectiveness of EPPO in mitigating both.
- In Figure 5, PPO initially performs well but significantly degrades in later RL stages. This suggests that early stopping in PPO training might serve as a potential solution to reward hacking—i.e., halting training at peak PPO performance. Have the authors compared the performance of an early-stopped PPO model with that of the final EPPO model?

Supplementary Material: I have reviewed the code in the supplementary material.

Relation To Broader Scientific Literature: In RL research, mitigating reward hacking is a well-studied topic. The authors should broaden their discussion to include approaches such as reward shaping, which can be used to address this issue.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses: N/A.

Other Comments Or Suggestions: Based on the weaknesses and concerns outlined above, I am currently giving this paper a score of 2. However, if the authors can address my concerns and clarify these points, I would be happy to raise my rating.

Questions For Authors: See the comments above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for recognizing our research perspective. We will address your comments below.

---

> **Q1:** Does the energy loss phenomenon occur in other layers?

**RQ1:** Thanks for your thoughtful comment. We agree that, in theory, excessive energy loss at any layer could reduce contextual relevance and contribute to reward hacking. However, **extensive experiments show that only the final layer consistently exhibits a strong trend** across different LLMs and datasets. Evidence is provided in Figures [R1](https://ibb.co/67nmsyF0), [R2](https://ibb.co/50yt1WY), and [R3](https://ibb.co/x8wMFTcS). We speculate that the final layer, being closest to the output, encapsulates richer semantic information, making it a reliable indicator of overfitting or reward hacking. While earlier layers may show similar trends, their behaviors are influenced by model architecture, training strategies, and data, making patterns less stable and harder to generalize.

> **Q2:** How to reconcile the theoretical framework with the experimental findings?

**RQ2:** Great question! Our theoretical explanation holds when energy loss stays below the threshold, where **increased loss compresses contextual mutual information, providing a theoretical rationale for the energy loss phenomenon**. Conversely, while increased energy loss exceeding the threshold may raise the upper bound, **it does not necessarily correlate with enhanced contextual relevance.** Why increased energy loss is linked to reward hacking in such scenarios remains an open question. We would like to clarify that **our main contribution is the empirical findings of the energy loss phenomenon and the corresponding RL regularization design**, while **the theoretical analysis is an exploratory attempt to explain the empirical findings.** In the revised version, we will tone down the theoretical component and highlight the empirical contributions to better reflect the core value of this work.
> **Q3:** The root cause of reward hacking and the role of EPPO in addressing it.

**RQ3:** We agree that reward hacking stems from flaws in reward design. However, in practice—especially in RLHF—it is challenging to obtain a perfect reward model due to overfitting, misspecification, and misgeneralization. In such cases, RL regularization techniques are necessary to guide the model's optimization toward human-desired behavior, even with imperfect reward models. **EPPO is designed to address the limitations of imperfect reward models by incorporating RL regularization.**

> **Q4:** What would happen to the energy loss using a well-designed reward function?

**RQ4:** As discussed in RQ3, obtaining a well-designed (perfect) reward model in RLHF is challenging. To simulate this, we **use a stronger reward model based on LLaMA3-70B as the well-designed function**. The corresponding energy loss and hacking sample distributions after RLHF are shown in Figure [R4](https://ibb.co/7fgH3bc). We observe that **with a well-designed reward model, final-layer energy loss increases only moderately—unlike the sharp rise seen with an ill-structured reward model**.

> **Q5:** The fundamental reason behind energy loss growth in PPO training when an ill-structured reward function is used.

**RQ5:** Thanks for your comments. The growth in energy loss is primarily an **empirical observation**, consistently found across various LLMs, datasets, and tasks. To explore its causes, we first provide a theoretical proof that, in certain scenarios, **increased energy loss tends to suppress contextual relevance** (i.e., Theorem 3)—a key aspect of reward hacking. We then empirically demonstrate that **this effect is widely observed across all scenarios** (i.e., Figure 7 in the manuscript). We suspect that **the fundamental reason behind energy loss growth lies in overfitting to reward model–favored patterns**—a phenomenon we associate with **reward hacking**.
This overfitting prioritizes alignment with the reward model's biases over accurately capturing user intent, leading to **deterioration in contextual relevance** and a significant reduction in neural activation, manifested as excessive energy loss.

> **Q6:** Comparison with early-stopped PPO.

**RQ6:** Following your suggestion, we **compared EPPO with early-stopped PPO** based on GPT-4 evaluations. As shown in Figure [R5](https://ibb.co/67yVdqj3), **EPPO consistently outperforms early-stopped PPO**, demonstrating its stronger resistance to reward hacking.

> **Q7:** Discussion about the reward shaping approach in RL research.

**RQ7:** Thanks for your comment. As far as we know, **reward shaping** in RL is typically used to address the **sparse-reward challenge** by providing more informative reward signals, which differs from our focus on **mitigating reward hacking in RLHF** scenarios. Notably, **EPPO can be seen as a form of reward shaping**, as it uses the internal neural behaviors of the LLM as auxiliary signals to adjust the reward during RL.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for the detailed response.

> We suspect that the fundamental reason behind energy loss growth lies in overfitting to reward model–favored patterns—a phenomenon we associate with reward hacking. This overfitting prioritizes alignment with the reward model's biases over accurately capturing user intent, leading to deterioration in contextual relevance and a significant reduction in neural activation, manifested as excessive energy loss.

I believe this insight may be valid. I encourage the authors to expand on this point in their discussion, while substantially trimming the explanatory theory that does not directly support the proposed method. As the authors' detailed response has addressed several of my concerns, I am willing to raise my score.
---

Reply to Comment 1.1.1:

Comment: Dear Reviewer mmxN,

We sincerely thank you for the updated score and your insightful suggestions. We will carefully revise the paper based on your valuable feedback in the new version.

Best regards,
The authors of Paper 3248
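The reward-shaping view of EPPO discussed in RQ7 above can be sketched as follows. This is a hedged, illustrative sketch only: the function name `shaped_reward`, the `max(0, ·)`-style growth penalty, and the coefficient `beta` are assumptions for exposition, not the paper's exact formulation.

```python
# Hypothetical sketch of an energy-loss-aware shaped reward, in the spirit of
# EPPO as described in the rebuttal. The exact penalty form is assumed, not
# taken from the paper.
def shaped_reward(rm_score: float,
                  energy_loss: float,
                  ref_energy_loss: float,
                  beta: float = 0.1) -> float:
    """Penalize growth of final-layer energy loss relative to a reference
    value (e.g., measured on the pre-RL model); leave the reward-model score
    untouched when the loss does not grow."""
    growth = max(0.0, energy_loss - ref_energy_loss)
    return rm_score - beta * growth

# No growth in energy loss: the reward-model score passes through unchanged.
print(shaped_reward(2.0, energy_loss=1.0, ref_energy_loss=1.0))  # 2.0
# Large growth (a reward-hacking signature): the shaped reward is pulled down.
print(shaped_reward(2.0, energy_loss=6.0, ref_energy_loss=1.0))  # 1.5
```

The one-sided penalty matches the rebuttal's framing that only *growth* in energy loss is associated with reward hacking; shrinking energy loss earns no extra reward.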
Test-Time Graph Neural Dataset Search With Generative Projection
Accept (poster)
Summary: This paper addresses the test-time adaptation challenge in graph neural networks (GNNs). The main focus is on generalization for graph data and test-time GNN inference. The main challenge is that GNN models trained on training graphs may not work well on a new, unseen test graph. The authors propose a new learning problem, test-time graph neural dataset search, along with a generative Projection based test-time Graph Neural Dataset Search method, named PGNDS, to adjust the model at test time so it can better handle new test graphs. PGNDS uses a graph diffusion generator to map newly generated test graphs to a distribution close to the training distribution, and this process is guided by well-trained GNNs with different constraints on the reverse diffusion process. The main results of the proposed PGNDS on real-world datasets show that PGNDS can improve GNN test-time inference results.

Claims And Evidence: Yes, the overall paper is relatively clear, providing detailed descriptions of the proposed method and theoretical justifications.

Methods And Evaluation Criteria: Yes, the proposed PGNDS method and evaluation criteria (six molecular and protein datasets) are reasonable for the problem of test-time GNN adaptation and its practical application.

Theoretical Claims: Yes, the theoretical claims, specifically in Section 2.4 (Theoretical Justification), are checked.

Experimental Designs Or Analyses: Yes, the experimental design is appropriate overall, and the experimental results verify its effectiveness.

Supplementary Material: Yes, all parts of the supplementary material.

Relation To Broader Scientific Literature: The key contribution of this paper is to formulate the test-time graph adaptation problem as a new test-time graph dataset generation problem. Different from existing test-time adaptation methods on graphs, it moves beyond direct model tuning and graph node/structure modification.

Essential References Not Discussed: No.
Other Strengths And Weaknesses:

The paper demonstrates several notable strengths. First, it innovatively proposes a new test-time adaptation problem, "graph neural dataset search". This creative formulation leverages generative projection and dual conditional diffusion to remap unseen test graph distributions toward the training distribution, a fresh perspective that could inspire further work in test-time adaptation for graphs. Second, the modular design (dual conditional diffusion, dynamic search, ensemble inference) is well-structured, and the inclusion of theoretical justifications adds depth. Last, the extensive experimental evaluation—with thorough ablation studies and hyperparameter sensitivity analyses—also validates the performance of the proposed PGNDS method on molecular and protein graph tasks.

On the other hand, some potential weaknesses deserve mention. First, the method integrates multiple modules (dual conditional diffusion, dynamic search, and ensemble inference), which may impose high computational overhead and could be challenging to reproduce. Simplifying some aspects or providing additional intuitive explanations might be helpful. Second, the approach assumes access to well-trained GNN and diffusion models, which may not always be feasible.

Other Comments Or Suggestions: No.

Questions For Authors: First, given the complexity of the overall framework, will code be released to ensure the reproducibility of the results? Second, the theoretical justification involving dual conditional diffusion and multiple guidance terms is quite dense. Could you offer a more intuitive explanation of how these components interact in practice to effectively reduce test-time distribution shifts? Last, how would the proposed PGNDS perform against adversarial perturbations or noisy test-time graphs? Understanding its resilience to noisy data would be valuable for real-world applications.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

**[Re-Weakness(1)] Computation and Explanations of Multiple Module Components in PGNDS**

> For the three modular components in the proposed PGNDS—dual conditional diffusion, dynamic search, and ensemble inference—we provide a simple one-sentence explanation for each module to help ease understanding:
> - Dual conditional diffusion: a generative process that models test-time distribution shifts via reverse-time diffusion.
> - Dynamic search: selecting or generating a refined set of test graphs by optimizing over the entire test distribution.
> - Ensemble inference: aggregating information from the original test graphs and refined adapted graphs to obtain optimal test-time inference performance.
>
> We have made efforts to clearly structure our paper (e.g., by providing more preliminary background in Section 2.1 and intuitive motivations for each modular design in Section 2.3). We would be happy to clarify any specific concepts that may affect your understanding of our proposed PGNDS.
>
> Even with these three collaborative modules, we have carefully designed the framework to remain efficient at inference time without introducing substantial overhead. As shown in Table 4, PGNDS achieves comparable or even better runtime than prior baseline methods (e.g., EERM 2.784s vs. **PGNDS 2.244s** on the OGBG-BBBP dataset). Moreover, our method does not require GNN fine-tuning, and the diffusion steps are truncated ($T \ll T_{\text{tr}}$, refer to Line 128) with early stopping based on the dynamic search criterion in Eq. (12), which significantly reduces unnecessary computation.

---

**[Re-Weakness(2)] Feasibility of Well-Trained GNN and Diffusion Models**

> PGNDS is designed to operate under the common and practical test-time adaptation setting, where reasonably well-trained models are available when deployed in practice. This is consistent with prior TTA methods such as TENT and GTRANS, which also rely on pretrained models for test-time refinement.
We argue that **this is feasible and reflects a realistic and standard deployment scenario** where well-trained models are commonly used. In this context, PGNDS can serve as a test-time plug-in to further enhance generalization without requiring any model retraining.

---

**[Re-Questions(1) & (2)] Code and Intuitive Explanation of Dual Conditional Diffusion and Guidance Terms**

> We will release the code upon the acceptance of this work, considering that it is the first to address the research question of test-time graph neural dataset search. The implementation is not complicated, as it operates entirely at test time without requiring model retraining.
>
> For an intuitive explanation of the dual conditional diffusion and the multiple guidance terms:
>
> At a high level, PGNDS uses a reverse-time diffusion process with GNN-provided information as control variables to gradually refine test graphs—i.e., following three conditional guidance terms. Intuitively, they work as follows:
> - First, graph structure preservation (**$r_{\text{struc}}$**) ensures that the refined graph remains close to the original input in structural space, which is important as structural information heavily affects graph properties in test-time graph-level tasks.
> - Second, task-specific guidance (**$r_{\text{gtask}}$**) encourages the refined graph to align with the pretrained GNN's predictions (using pseudo labels), thereby improving task relevance.
> - Third, graph diversity (**$r_{\text{gdist}}$**) prevents the process from collapsing into trivial solutions, maintaining variety in the generated test-time graph candidates.
>
> During each diffusion step, these constraints jointly influence the latent sampling space, guiding it toward a region in the space that balances test-time graph properties with the joint space defined by [training data \& the well-trained GNNs].
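A highly simplified sketch of how multiple guidance terms could combine in one reverse-diffusion update follows. Everything here is an illustrative assumption — the function name `guided_step`, the weights, and the additive linear form — and is not PGNDS's actual Eq. (10); the point is only that each guidance signal contributes a weighted direction to a single update.

```python
# Illustrative-only sketch: folding several guidance signals into one drift
# vector for a single reverse-diffusion step. Weights, names, and the additive
# form are assumptions for exposition, not the paper's formulation.
def guided_step(x, score, guidance_terms, step_size=0.1):
    """x, score, and each guidance gradient are plain lists of floats.
    guidance_terms is a list of (weight, gradient) pairs."""
    drift = list(score)
    for weight, grad in guidance_terms:
        drift = [d + weight * g for d, g in zip(drift, grad)]
    return [xi + step_size * di for xi, di in zip(x, drift)]

x = [0.0, 0.0]
score = [1.0, -1.0]           # unconditional denoising direction
r_struc = (0.5, [0.2, 0.2])   # stay close to the original graph structure
r_gtask = (1.0, [0.0, 0.5])   # follow the pretrained GNN's pseudo-label signal
r_gdist = (0.1, [-1.0, 0.0])  # keep the candidate pool diverse

x_next = guided_step(x, score, [r_struc, r_gtask, r_gdist])
print(x_next)
```

The relative weights are what the three bullet points above trade off: a large structure weight keeps refined graphs conservative, while a larger diversity weight spreads the candidate pool that the dynamic search later filters.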
---

**[Re-Questions(3)] Robustness to Adversarial Perturbations**

> Thank you for highlighting this interesting and important research direction. To the best of our knowledge, there is currently **no prior work specifically studying adversarial robustness for test-time graph adaptation methods**, as these methods are inherently unsupervised and operate under the constraint of not modifying deployed models. The field of test-time graph learning is still at an early stage, where the primary focus remains on improving test-time performance under distribution shift.
>
> Although PGNDS is not explicitly designed for adversarial defense, some components naturally contribute to robustness. For example, during the dynamic search phase, PGNDS generates multiple candidate graphs from the dual conditional diffusion and selects those with better task alignment, which serves as a **filtering mechanism against suboptimal test graphs**. Exploring the robustness of PGNDS under adversarial or noisy settings is indeed a promising direction for future work, and we appreciate the suggestion.
Summary: This paper tackles a really interesting and new problem called "test-time graph neural dataset search". It enables GNN models to handle data they've never seen before at test time by creating new graphs similar to the training set. To tackle this problem, the authors propose PGNDS, a method that reconstructs an unseen test graph distribution by leveraging a well-trained GNN as a guiding mechanism. The proposed PGNDS framework contains a graph diffusion model as a generator, for generating new test distributions through test-back-to-training distribution mapping; it uses dynamic search to select new test graphs, and uses the newly generated graphs together with the original test graphs to provide final predictions. This work reports the main results and other auxiliary experiments on real-world test graphs.

Claims And Evidence: The authors clearly state their claims, and personally, I find the evidence convincing overall, though some practical implications could be explained more clearly.

Methods And Evaluation Criteria: I find the proposed three-stage PGNDS framework interesting and quite practical. Instead of tuning the model itself, it cleverly changes the graph data at test time. The design of PGNDS generally aligns well with the proposed test-time graph neural dataset search problem. The evaluation criteria (ROC-AUC, RMSE), benchmark datasets (molecular and protein), and baselines used for the graph learning tasks are appropriate.

Theoretical Claims: I quickly checked the theoretical parts, and they seem logical. However, it would help if the authors could simplify or clarify the intuition behind these theories.

Experimental Designs Or Analyses: I briefly checked the soundness/validity of the experimental designs and analyses, as well as the ablation study, hyperparameter analysis, and running time comparison. The experiments look good overall; no issues need to be discussed here.

Supplementary Material: I briefly reviewed the supplementary material.
No issues need to be discussed here.

Relation To Broader Scientific Literature: This work broadly discusses its connection to the graph distribution shift problem in the supplementary material, particularly in "Table A1, Comparison of different settings for different distribution shift related methods". The relationship between this work and other research methods is relatively clear and helps distinguish it from existing approaches.

Essential References Not Discussed: Not sure; maybe other graph diffusion/generation methods.

Other Strengths And Weaknesses:

### Key strengths:
- [Originality] Introduces test-time graph neural dataset search, a novel extension of TTA using generative projection. Shifts focus from graph-level adaptation to distribution-level optimization.
- [Significance] Addresses a critical GNN deployment challenge: performance degradation from graph distribution shifts. A practical solution through training-distribution projection enhances real-world robustness.
- [Technical Clarity] Well-structured modular design (dual diffusion, dynamic search, ensemble inference) with rigorous mathematical formulation and theoretical grounding.
- [Validation] Comprehensive experiments across datasets show PGNDS outperforms SOTA baselines. Ablation studies validate component contributions and parameter sensitivity.

### Weaknesses:
- [Complexity] While the proposed generative projection method is innovative, the dual conditional diffusion and dynamic search procedures may introduce significant computational overhead during inference, potentially limiting applicability in resource-constrained environments.
- [Dependence on Well-trained GNN Models] The effectiveness of PGNDS heavily depends on well-trained pre-existing GNN and diffusion models. This assumes extensive initial training data and reliable model training; are these assumptions reasonable?
Other Comments Or Suggestions: It would be beneficial to include a more detailed discussion of the limitations of the proposed PGNDS and the potential future directions of the proposed test-time graph neural dataset search problem.

Questions For Authors: Here are my questions:
1. The generative projection process seems to involve randomness through the graph diffusion model. Under what circumstances might the generative projection process fail or lead to degraded performance? Are there any potential theoretical or practical limitations?
2. The paper introduces ensemble inference as a final step. How is the optimal ratio between original test graphs and newly generated graphs determined for achieving the best inference results?
3. The paper claims that the test-time graph distribution is mapped back to the training distribution using a diffusion model. However, is there a formal guarantee that the generated graphs truly align with the training distribution in a way that improves generalization, rather than simply generating more training-like graphs that may not be representative of the test distribution?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

**[Re-Weakness(1)] Computation of Dual Conditional Diffusion and Dynamic Search**

> While PGNDS introduces dual conditional diffusion and dynamic search, we have carefully designed the framework to remain efficient at inference time **without introducing substantial overhead**. As shown in Table 4, PGNDS achieves comparable or even better runtime than prior baseline methods (e.g., EERM: 2.784s vs. our PGNDS: 2.244s on the Ogbg-BBBP dataset). Moreover, our method does not require fine-tuning of the well-trained GNN, and the diffusion steps are truncated ($T \ll T_{tr}$, refer to Line 128) with early stopping based on the dynamic search criterion in Eq. (12), which significantly reduces unnecessary computation.

---

**[Re-Weakness(2)] Well-trained GNNs and Diffusion Model**

> PGNDS is designed to operate under the common and practical test-time adaptation setting, where reasonably well-trained models are available when deployed in practice. This is consistent with prior TTA methods such as TENT and GTRANS, which also rely on pretrained models for test-time refinement. We argue that this is **not an assumption, but rather a realistic and standard deployment scenario** where well-trained models are commonly used. In this context, PGNDS can serve as a test-time plug-in to further enhance generalization without requiring any model retraining during test time.

---

**[Re-Other Comments Or Suggestions] Limitations and Future Directions**

> Thank you for the constructive suggestion. One potential limitation of PGNDS is that it currently focuses on graph-level tasks and relies on task-specific guidance derived from model predictions. For instance, extending the current design to node-level settings requires adapting the task-specific guidance term $r_\text{gtask}$ with careful design, especially to ensure compatibility with the structure and graph diversity constraints in Eq. (10).
>Looking forward, the proposed test-time graph neural dataset search paradigm opens up several promising directions, such as (1) adapting to **more diverse graph types and GNN types (e.g., dynamic graphs and dynamic GNNs)** and (2) integrating PGNDS with **continual or online learning frameworks under open-world scenarios**. We will include a more detailed discussion on these aspects in the final version. --- **[Re-Questions(1)] Under What Circumstances the Generative Projection Process Might Fail** >While PGNDS leverages a stochastic generative diffusion process to refine test-time graphs, this randomness is **an advantage of our proposed PGNDS, as it provides a broader search space for learning meaningful test-time graph candidates** and does not degrade model performance in practice. One potential limitation that might affect the generative projection process arises under severe distribution shifts, where the test-time graph distribution is highly dissimilar from the training distribution. In such cases, the reverse-time projection may produce low-quality or less informative graphs due to insufficient alignment. Nonetheless, our empirical results (Table 1 and Table 2) demonstrate that PGNDS remains robust across a wide range of realistic test-time scenarios. --- **[Re-Questions(2)] Ensemble Inference Ratio** >In our current implementation, we adopt a 1:1 ratio between the original test graphs and the adapted graphs during ensemble inference, as defined in Eq. (13). This fixed ratio is both simple and effective, and consistently yields improved performance across datasets, as shown in Table 2 and Table 3. We are open to solutions like learning or tuning adaptive weights; it is not hard to implement but would introduce extra learning parameters and further complicate the method. 
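The 1:1 ensemble inference described in the answer above amounts to averaging the model's predictions on the original and adapted test graphs. A minimal illustrative sketch (the function name and probability vectors are hypothetical stand-ins, not the authors' implementation of Eq. (13)):

```python
import numpy as np

def ensemble_predict(probs_original, probs_adapted, weight=0.5):
    """Combine class probabilities from the original and adapted test graphs.

    A 1:1 ratio corresponds to weight=0.5, i.e. a plain average of the
    two probability vectors (the fixed setting described in the rebuttal).
    """
    probs_original = np.asarray(probs_original, dtype=float)
    probs_adapted = np.asarray(probs_adapted, dtype=float)
    return weight * probs_original + (1.0 - weight) * probs_adapted

# Toy example: two 3-class probability vectors for one test graph.
p_orig = [0.6, 0.3, 0.1]
p_adapt = [0.2, 0.7, 0.1]
p_ens = ensemble_predict(p_orig, p_adapt)  # -> [0.4, 0.5, 0.1]
```

A learned `weight` would realize the adaptive-weight variant the authors mention, at the cost of extra parameters.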
--- **[Re-Questions(3)] Distribution Alignment and Generalization Guarantee** >While PGNDS uses a diffusion model to project the test-time graph distribution toward the training distribution, **the goal is not to transform the test graphs into training graphs, but rather to map test graphs into a joint space defined by the training distribution and the pretrained GNN.** This process preserves the intrinsic properties of test graphs while enhancing their compatibility with the well-trained GNN’s learned decision boundary on the training graphs. >In other words, PGNDS performs a model-aware distribution alignment, guided by the three constraints in Eq. (10), to improve generalization. This is also supported theoretically in Proposition 2.3 with Eq. (18)-(20), where we show a bounded deviation in the refined distribution, and empirically in Table 2 and Table 3, where the refined graphs consistently yield better predictions.
Summary: This paper introduces test-time graph neural dataset search with generative projection to improve test-time adaptation for Graph Neural Networks (GNNs) facing distribution shifts. The proposed method, PGNDS, uses a generative projection approach to refine test graphs without modifying the trained GNN. PGNDS consists of three key steps: dual conditional diffusion for graph generation, dynamic search for selecting the best test graphs, and ensemble inference for improved predictions. Experiments on real-world graph datasets show better performance than baselines. Claims And Evidence: Yes, no problematic claims are found. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: Did not go into much detail, but generally checked. Experimental Designs Or Analyses: Generally checked. Supplementary Material: Not all; only reviewed Appendix A, C, and D. Relation To Broader Scientific Literature: Relatively related to existing graph learning problems and GNNs. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: For strengths: (1) The paper introduces a new learning problem for handling test-time adaptation in graph data, which appears to be a novel challenge for GNN generalization at inference time. The idea of adapting test graphs without modifying model parameters seems practical for real-world applications. (2) The overall writing is well organized; the background on test-time adaptation for graphs and its challenges is described relatively clearly. Each module of the PGNDS method, i.e., leveraging dual conditional diffusion, dynamic search, and ensemble inference, is well designed and well described, although it is a little complex to understand. (3) This paper includes several experiments that show the proposed method performs better than existing approaches, showing the proposed PGNDS can capture unseen test graph distributions through a generative method guided by well-trained GNNs.
For weaknesses: (1) This method involves many concepts, like graph diffusion, conditional guidance, and dataset search, making it quite difficult to fully understand. (2) How does this work relate to graph neural architecture search? Should this be discussed? (3) Some parts, like how the "dynamic search" process selects useful graphs, are not fully explained and need further detail. How is the search stopped, and how is it guaranteed that the selected test graphs are optimal? Other Comments Or Suggestions: See the previous question. Questions For Authors: See the weaknesses. Another question is about the generalization ability of the model. The evaluation primarily focuses on molecular graphs for graph classification and regression tasks. Would the proposed PGNDS be equally effective on other types of graphs, such as social networks or citation graphs, or on different graph learning tasks, like node classification? If so, could you explain how PGNDS would adapt to these different graph types and tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for highlighting **the novelty and practicality of our proposed test-time graph neural dataset search (PGNDS)**, as well as the **clear organization and strong empirical results**. Detailed responses regarding further explanation of the key concepts, the dynamic search process and stopping criterion, and the method’s generalizability across diverse tasks and datasets are provided below. --- **[Re-Weakness-(1)] On Method Built on Multiple Concepts** >Thank you for the feedback. We understand that the proposed PGNDS introduces multiple concepts—such as graph diffusion, conditional guidance, and dataset-level search—which may be challenging to follow at first. We provide a simple one-sentence explanation for each concept to help ease understanding: > - Graph diffusion: a generative process that models test-time distribution shifts via reverse-time diffusion; > - Conditional guidance: a mechanism to control the reverse-time diffusion for generating refined test graphs using the pretrained GNN's knowledge; > - Dataset-level search: selecting or generating a refined set of test graphs by optimizing over the entire test distribution. >We have made efforts to clearly structure our paper (e.g., providing more preliminary background in Section 2.1 and intuitive motivations for each modular design in Section 2.3). We would be happy to clarify any specific concepts that may affect your understanding of our proposed PGNDS. --- **[Re-Weakness-(2)] On Relation to Graph Neural Architecture Search (GNAS)** >Thank you for the question. Our test-time graph dataset search (GNDS) and GNAS have fundamentally different objectives. GNAS aims to **search for optimal GNN architectures** (e.g., layers, aggregators), typically during the training stage. In contrast, our method PGNDS focuses on test-time dataset-level graph refinement—**searching for optimal test graphs** to improve prediction under distribution shift, without modifying the model architecture.
Therefore, our work serves a completely different purpose from GNAS. --- **[Re-Weakness-(3)] On Dynamic Search and Stopping Criterion** >The dynamic search process in PGNDS iteratively samples refined graphs from the conditional diffusion-based candidate set in Eq. (11) and evaluates them using the test-time rectification objective $\epsilon$ in Eq. (12). For each test graph, we select the adapted version that minimizes this objective across the reverse diffusion steps. To ensure search efficiency and stability, we adopt a **dynamic stopping strategy**: the process halts early if the objective does not improve over a certain number of patience steps (refer to Lines 267–270 in Section 2.4), thus avoiding exhaustive sampling. The effectiveness of this guided selection is also supported by our empirical study—Table 3 (Idx04 vs. Idx05), along with Fig. 3 and Fig. 4, shows that it consistently identifies high-quality test graphs. --- **[Re-Questions For Authors] On Generalization to Other Graph Types and Tasks** >Thank you for the thoughtful question. In this work, we focus on molecular and protein graphs with graph-level tasks, **covering six benchmark datasets and two types of graph learning tasks (classification and regression, with nine detailed tasks)**. This demonstrates that PGNDS is effective across a wide range of real-world graph-level scenarios. We would like to emphasize that **the core framework of PGNDS is task-agnostic and generalizable**. For example, adapting PGNDS to node classification tasks (e.g., on other graph types such as citation or social networks) would involve redefining the task-specific guidance term $r_{\text{gtask}}$ to reflect node-level objectives, while the conditional diffusion and structural constraints remain applicable. We believe the framework’s modular design makes such extensions straightforward. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response.
I have also reviewed the comments from the other reviewers and the corresponding replies from the author. I will maintain my score.
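For concreteness, the patience-based stopping rule described in the rebuttal above (halting the reverse-diffusion search once the rectification objective of Eq. (12) stops improving) can be sketched generically; the candidate stream and objective below are hypothetical stand-ins, not the authors' code:

```python
def dynamic_search(candidates, objective, patience=5):
    """Scan reverse-diffusion candidates, keep the one minimizing the
    rectification objective, and stop early after `patience` steps
    without improvement."""
    best_graph, best_score, stale = None, float("inf"), 0
    for graph in candidates:
        score = objective(graph)
        if score < best_score:
            best_graph, best_score, stale = graph, score, 0
        else:
            stale += 1
            if stale >= patience:
                break  # early stop: no improvement for `patience` steps
    return best_graph, best_score

# Toy run: scores dip at step 2 and then plateau, so with patience=3 the
# search halts at step 5 and never evaluates the remaining candidates.
scores = [0.9, 0.5, 0.3, 0.4, 0.4, 0.4, 0.4, 0.4, 0.2]
best, val = dynamic_search(range(len(scores)), lambda g: scores[g], patience=3)
```

Note the trade-off the toy run exposes: a small patience can miss a later improvement (the 0.2 at the end), which is the price of avoiding exhaustive sampling.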
Summary: The authors introduce a new problem, test-time graph neural dataset search, to learn the optimal distribution of unknown test graph datasets. For this purpose, they propose PGNDS, a generative projection driven by a diffusion model. By projecting test graphs back to the training distribution, PGNDS learns test-time adaptation by generating refined test graphs. Experimental results demonstrate the effectiveness of PGNDS. Claims And Evidence: What is the specific difference between the problem “test-time graph neural dataset search” and “test-time graph adaptation”? Would projecting the entire test graph set distribution back to the training set change the properties of refined test graphs? Methods And Evaluation Criteria: This method is applicable to small molecular and protein graphs. Is it applicable to larger graphs? Theoretical Claims: The theoretical analyses seem to be correct. Experimental Designs Or Analyses: The “ensemble inference” phase aggregates the representative information from both the original and adapted test graphs. Experiments should show their individual results and explain why the method does not use only the adapted graphs, since the adapted graph seems to be the best structure. Supplementary Material: Supplementary materials are related work and dataset details. Relation To Broader Scientific Literature: Compared with test-time model adaptation methods, this paper avoids modifying GNN parameters and is more applicable. This paper also proposes an innovative method. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: The proposed method is novel for test-time graph-level tasks. Weaknesses: Recent methods are not compared in the experiments. Other Comments Or Suggestions: This paper should compare more recent methods, including the test-time model adaptation methods and graph adaptation methods it refers to. Questions For Authors: In Figure A2, I cannot understand the effect of graph structure adaptation.
Why are (b) and (d) better than (a) and (c)? Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **[Re-Claims and Evidence (1): Difference between “Test-Time Graph Neural Dataset Search” and “Test-Time Graph Adaptation”]** >We thank the reviewer for recognizing our contribution in proposing the **novel problem of test-time graph neural dataset search (test-time GNDS)**. In brief, **test-time GNDS can be viewed as a distribution-level extension of traditional test-time graph adaptation**, where instead of modifying each test graph individually, we **learn a parameterized graph distribution at the dataset level**. From a broader perspective, both approaches are data-centric test-time graph manipulation methods. We summarize the key differences in the following table, and we will further clarify this conceptual distinction in the revised version. >| Aspect | Test-Time Graph Adaptation | Test-Time Graph Neural Dataset Search | >|-------------------------------|----------------------------------------------------|------------------------------------------------------| >| Granularity | Per-graph (adapts each test graph individually) | Dataset-level (searches a distribution over graphs) | >| Adaptation Objective | Modify a single test graph to improve prediction locally | Generate task-aligned graph distributions toward the training domain | >| Method Type | Direct feature and structure manipulation | Generative modeling + sampling via diffusion | >| Optimization Target | One graph at a time | Entire test-time graph set as a distribution | --- **[Re-Claims And Evidence (2)] Does the test-time distribution projection change the properties of refined test graphs?** >PGNDS projects the test-time graph distribution toward the training distribution, but is designed **not to overly change** the intrinsic properties of test graphs. This is achieved via the **dual conditional diffusion process** with three key constraints in Eq. (10): (1) **r_struc**: structure preservation; (2) **r_gdist**: graph diversity control; and (3) **r_gtask**: task-specific preservation.
These constraints ensure that the refined test graphs remain closely related to the original inputs while gaining improved compatibility with the training distribution. >Besides, as shown in Proposition 2.3 and Eq. (20), the deviation from the original distribution is bounded by a controllable term $|\xi|$, guided by GNN-specific knowledge. Moreover, empirical results (Table 3, Fig. 3–4) confirm that removing these constraints significantly degrades performance, validating the role of dual conditional diffusion in preserving test graph properties while improving inference. --- **[Re-Methods and Evaluation Criteria] Applicability to Larger Graphs** >While our experiments focus on small molecular and protein graphs, PGNDS is not inherently limited to such settings. We evaluate PGNDS on the large-scale QM9 dataset, which contains 133,885 molecular graphs, demonstrating its applicability to large test sets. As a graph-level method, PGNDS can be applied to graphs with more nodes as well, with scalability primarily depending on the efficiency of the diffusion model. --- **[Re-Experimental Designs or Analyses] Ensemble Inference** >As confirmed in our ablation study (Table 3, Idx05: only adapted graphs vs. Idx06: ensemble of original and adapted graphs), ensemble inference consistently outperforms using adapted graphs alone. While the adapted test set is aligned with the training distribution, the test-time distribution learning remains inherently complex. Due to the large search space and potential approximation errors in the generation process [refer to Lines 265-272], relying solely on adapted graphs may not capture all informative patterns. Therefore, we propose an ensemble inference scheme to aggregate complementary information from both original and adapted graphs. --- **[Re-Weakness and Suggestions] Recent methods.** >Our work is **the first to formulate the problem of test-time graph-level dataset search** with generative projection. 
To the best of our knowledge, no existing method directly addresses this setting. While we include representative baselines from both test-time model adaptation (e.g., TENT) and graph adaptation (e.g., GTRANS), there are currently no more recent or directly comparable methods for our proposed task. We will continue to monitor future developments and further explore this new research direction. --- **[Re-Questions For Authors] On Figure A2** >Compared to the originals in (a) and (c), the adapted graphs in (b) and (d) exhibit more chemically meaningful substructures (e.g., double bonds). **This does not imply that the adapted graph is a “better” version in a ground-truth sense**, but rather that it is **the most aligned with the training distribution under a well-trained model**, and such alignment is beneficial for improving test-time inference performance.
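Conceptually, the dual conditional guidance defended across this rebuttal (structure preservation r_struc, diversity control r_gdist, and task alignment r_gtask from Eq. (10)) adds weighted guidance gradients to each reverse-diffusion update. The following is only a schematic sketch under that reading; every function, weight, and tensor shape here is a hypothetical placeholder, not the paper's actual sampler:

```python
import numpy as np

def guided_reverse_step(x_t, score_fn, guidance_terms, step_size=0.01, noise_scale=0.0):
    """One schematic reverse-diffusion update on a relaxed graph tensor x_t.

    `guidance_terms` is a list of (weight, grad_fn) pairs standing in for
    the gradients of the three constraints (r_struc, r_gdist, r_gtask).
    """
    drift = score_fn(x_t)                      # unconditional denoising direction
    for weight, grad_fn in guidance_terms:
        drift = drift + weight * grad_fn(x_t)  # conditional guidance term
    noise = noise_scale * np.random.randn(*np.shape(x_t))
    return x_t + step_size * drift + noise

# Toy run with noise disabled: the score and one guidance term both pull
# x_t toward zero, so a half step from 1.0 lands exactly at 0.0.
x_next = guided_reverse_step(np.ones(2), lambda v: -v, [(1.0, lambda v: -v)], step_size=0.5)
```

Setting `noise_scale > 0` recovers the stochasticity the rebuttal describes as widening the search space of candidate graphs.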
One Arrow, Two Hawks: Sharpness-aware Minimization for Federated Learning via Global Model Trajectory
Accept (poster)
Summary: The paper proposes FedGMT, a federated learning framework that leverages sharpness-aware minimization (SAM) to enhance generalization, especially in highly skewed non-IID settings. To achieve this, the framework employs an exponential moving average (EMA) of the global model as a proxy for the global loss surface. An additional regularization term is introduced, defined by the KL divergence between the output distribution of a client’s local model and that of the EMA-based global model, which guides SAM updates to consider the global context. The paper also presents FedGMT-v2, a variant that approximates FedGMT to reduce communication overhead, albeit with a slight performance degradation. The framework's properties are thoroughly explored through both theoretical analysis and empirical experiments. Claims And Evidence: The paper makes the following claims: 1. FedGMT reduces the computational cost of SAM-aware federated learning (e.g., FedSAM) by nearly half. 2. FedGMT achieves a faster convergence rate of $\mathcal{O}(1/T)$. 3. The empirical results demonstrate the effectiveness of FedGMT. The paper provides evidence for each of these claims, although the evidence might be limited in certain aspects: 1. The paper argues that by avoiding the additional backward pass required for calculating perturbations in standard SAM, FedGMT reduces the computation cost. Table 1 provides numerical comparisons, though some details could be clarified further. 2. Theorem 3.5 presents theoretical evidence that, under standard assumptions (e.g., L-smoothness and bounded gradients), FedGMT achieves a convergence rate of $\mathcal{O}(1/T)$. While these assumptions may not hold perfectly in all deep learning scenarios, they are commonly used in the literature. 3. The experimental section demonstrates the empirical performance of FedGMT across multiple datasets and model architectures. 
Although the experiments are conducted using a relatively small number of clients and smaller model architectures compared to some real-world scenarios, they still provide credible evidence supporting the claims. Methods And Evaluation Criteria: The method is well-explained and easy to follow. However, some clarifications are needed regarding the communication and computational cost figures presented in Table 1. - **Communication Cost:** The paper reports the communication cost of the proposed framework to be $1.5\times$ of FedSAM. To my understanding, this value represents the average-case scenario. In the worst case, if the EMA changes significantly, the communication cost could approach $2\times$, whereas in the best case, if the EMA remains nearly unchanged, the cost would be close to $1\times$. Explicitly stating these scenarios would help readers better understand the variability in communication overhead. - **Computational Cost:** The computational cost column requires explanation (For example, why is it $1.2\times$?) and a more formal analysis. An example of such an analysis could be that if we denote the cost of a forward pass through the network as $f$ floating point operations per second (FLOPS) and assume that a backward pass costs approximately $2f$ (as commonly observed in the literature), then methods like FedSAM, which require two forward passes and two backward passes, have a computational cost of roughly $6f$. In contrast, FedGMT only requires two forward passes, one backward pass, the computation of the KL divergence (which is negligible relative to $f$), and some lightweight ADMM computations, which might be approximated as $2p$ FLOPS (please verify this number), where $p$ is the number of parameters in the model. This results in an overall overhead of approximately $4f+2p$. 
The overall complexity is lower because $f$ is typically much larger than $p$ (for example, in ResNet-18, many research works estimated that $p$ = 11,689,128, while $f$ =1,818,228,160 FLOPS). A more formal analysis with clear notations would strengthen the argument regarding the computational cost. Theoretical Claims: Overall, the theoretical claims in the paper are well supported with proof. However, the derivation in the Appendix concerning the ADMM updates and the related discussion remains largely intuitive. A more rigorous, step-by-step treatment would be beneficial. In particular, the authors could: - Include additional bounding steps or formalize the transition from local expansions to global updates. - Reference standard ADMM-based lemmas to reinforce the derivation. - Provide explicit error bounds quantifying the discrepancy between local and global updates. Such improvements would help readers verify each step of the argument and appreciate the theoretical underpinnings of the proposed method more fully. Experimental Designs Or Analyses: The paper studies various experimental setups to validate the framework, including different data skew conditions, sensitivity analyses, and varying client participation rates. However, there are two minor limitations in the experimental design: 1. *Model Architectures:* The experiments are conducted using relatively small models: a CNN with 150K–200K parameters, ResNet-8 with around 110K parameters, and a ViT-CIFAR variant with fewer than 6 million parameters. In contrast, related works like FedSAM and FedGAMMA often use larger models, such as ResNet-18. Including experiments with larger architectures would improve the generalizability of the results. 2. *Number of Clients:* The experiments are performed with only 10 clients, which is small compared to typical federated learning studies that involve around 100 clients. 
While the current experiments do support the claims made, expanding the study to include a larger number of clients would strengthen the evidence for scalability and robustness. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: This work is positioned within the expanding field of federated learning, addressing key challenges such as data heterogeneity, client drift, and communication/computation efficiency. It builds directly on recent advances in sharpness-aware minimization (SAM) applied to federated settings, which aim to achieve better generalization by optimizing for flat minima. The use of an exponential moving average (EMA) to capture the global model trajectory and guide local updates is a novel twist that resonates with prior work in model averaging and momentum methods in distributed optimization. Essential References Not Discussed: By and large, the paper provides a comprehensive view of the area, and to the best of my knowledge, no essential references are missing. Other Strengths And Weaknesses: **Strengths:** - The paper is well-written and easy to follow. - The visualizations used, specifically figures 1 and 6, are helpful to the reader. **Weaknesses:** - See other sections. Other Comments Or Suggestions: The paper is well written, barring the few clarifications needed and limitations in the experimental design. If the authors could provide sufficient clarification, I am open to increasing my score. Questions For Authors: See other sections in the review. Code Of Conduct: Affirmed. Overall Recommendation: 4
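As an aside, the FLOP accounting sketched in this review can be checked with a quick back-of-the-envelope calculation; the numbers below are the ones quoted in the review (forward pass f FLOPs, backward pass ≈ 2f, ADMM overhead ≈ 2p), not measured values:

```python
# Per-iteration cost comparison under the review's assumptions (illustrative only).
f = 1_818_228_160   # forward-pass FLOPs for ResNet-18 (figure quoted in the review)
p = 11_689_128      # ResNet-18 parameter count (figure quoted in the review)

fedsam_cost = 2 * f + 2 * (2 * f)    # two forward + two backward passes = 6f
fedgmt_cost = 2 * f + 2 * f + 2 * p  # two forward + one backward + ADMM ~= 4f + 2p

ratio = fedgmt_cost / fedsam_cost    # roughly 0.67: the 2p ADMM term is negligible
```

Because p is three orders of magnitude smaller than f here, the ADMM overhead barely moves the ratio, which supports the review's argument that the overall complexity is lower.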
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows. --- **W1. Clarification of communication and computational cost in Table 1**: We apologize for not making this clear; we will modify the caption of Table 1 and add a discussion section with the following content in the revised version. **Communication cost:** The communication cost is defined as the parameters transmitted per round. For example, if we denote the number of model parameters as Ω, FedSAM requires 2Ω due to bidirectional model parameter exchange. In contrast, FedGMT incurs a communication cost of 3Ω because the server transmits the EMA model to clients, whereas FedGMTv2 achieves 2Ω by omitting this transmission. Thus, FedGMT's communication cost is $1.5×$ that of FedSAM, while FedGMTv2's is $1×$. **Computational cost:** The computational cost is defined as the per-iteration training cost. Minor computational overhead (e.g., model aggregation, ADMM computations) is ignored compared to the primary training computation. Taking FedAvg as the baseline ($1×$ cost), FedSAM doubles the forward and backward processes, resulting in a $2×$ cost. FedGMT involves an extra forward pass. We refer to the official PyTorch paper [A], which shows that for ResNet50, the backward pass is $3.7×$ the forward pass. Based on this data, FedGMT's cost is estimated to be about $1.2×$ by $(\frac{1+1+3.7}{1+3.7})$. If we instead assume a forward cost of *f* and a backward cost of 2*f*, FedAvg requires 3*f* per iteration, while our FedGMT needs 4*f*, leading to a $1.33×$ cost. We will describe this assumption and adjust the estimate from $1.2×$ to $1.33×$ for better understanding by readers. While FedGMT incurs marginal computational overhead, it achieves superior accuracy and efficiency.
As shown in the following table in **W2**, FedGMT requires only 47% of the time cost of FedAvg to reach the target accuracy, demonstrating its practical advantage in resource-constrained environments. For example, if one trains an FL model to reach the target accuracy with 8 Nvidia V-100 GPUs, considering Google Cloud Platform charges \\$2.48 per GPU per hour, the total cost of FedAvg for 1 day will be \\$476, FedSAM will be \\$724 (476\*152%), while FedGMT will be \\$224 (476\*47%). There are significant cost savings from implementing FedGMT. [A] Li S, Zhao Y, Varma R, et al. Pytorch distributed: Experiences on accelerating data parallel training[J]. Proc. VLDB Endow. 13(12): 3005-3018 (2020) --- **W2. Experiments with a larger model:** We conducted experiments on CIFAR100-Dir(0.1) with ResNet18 on one NVIDIA 4090 GPU. We follow the same parameter settings as the FedSMOO paper and give a comprehensive comparison. The results are stated below.
| | Acc | Images/s | Per round time | Round (acc=44%) | Communication cost (acc=44%) | Time cost (acc=44%) |
| ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| FedAvg | 43.95±0.22 | 816 (100%) | 33.85s (1x) | 753 (100%) | 1506 Ω (100%) | 425 min (100%) |
| FedSAM | 44.56±0.20 | 429 (53%) | 61.31s (1.81x) | 633 (84%) | 1266 Ω (84%) | 647 min (152%) |
| FedSMOO | 47.94±0.15 | 385 (47%) | 69.85s (2.06x) | 412 (55%) | 1648 Ω (109%) | 480 min (113%) |
| FedLESAM-D | 46.42±0.23 | 657 (81%) | 42.20s (1.25x) | 309 (41%) | 618 Ω (41%) | 217 min (51%) |
| FedGMT | **50.67±0.16** | 583 (71%) | 46.13s (1.36x) | 257 (34%) | 771 Ω (51%) | **198 min (47%)** |
| FedGMTv2 | 50.24±0.22 | 566 (69%) | 46.32s (1.37x) | **262 (35%)** | **524 Ω (35%)** | 202 min (48%) |
FedGMT achieves the highest accuracy (50.67%) and fastest convergence (257 rounds / 198 min) to reach 44% accuracy, outperforming all baselines. While FedSMOO (47.94%) and FedLESAM-D (46.42%) show better accuracy than FedAvg (43.95%) and FedSAM (44.56%), they lag behind FedGMT in efficiency.
FedGMT balances accuracy and efficiency, surpassing competitors in both metrics. --- **W3. Experiments with a larger number of clients:** We conducted experiments on CIFAR10-Dir(0.1) with LeNet. We set 500 clients with a 20% active ratio and 1000 clients with a 10% active ratio to involve 100 clients per round. The results are stated below.
| | 20%-500 | 10%-1000 |
| ----- | ----- | ----- |
| FedAvg | 61.13±0.47 | 54.39±0.35 |
| FedSAM | 60.82±0.41 | 53.96±0.31 |
| FedSMOO | 79.95±0.17 | 69.57±0.29 |
| FedLESAM-D | 79.71±0.17 | 74.06±0.13 |
| FedGMT | **80.78±0.13** | **74.34±0.11** |
The results demonstrate FedGMT's superior scalability and robustness in large-scale FL scenarios. In addition, please refer to our experiments in the response to Reviewer 5WUM, which also verify the robustness of FedGMT in an extreme non-IID FL setup. --- **W4. Theoretical improvements.** Thank you for your insightful feedback and suggestions to enhance the theoretical rigor of our paper. We wholeheartedly agree with your comments and will reorganize the theoretical proof following your suggestions for each step in our updated version. --- It is a pleasure to discuss this with you, and the discussion will help us further improve this work. Thank you again for reading this rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses and clarifications. I appreciate that you have addressed many of the concerns raised by me and the other reviewers, especially regarding the role of the dual variable and the communication/computational cost analysis. After reviewing all the comments and your rebuttals, my concerns have been answered, and I will advocate for a 4: Accept. Please ensure that these clarifications are incorporated into the final version of the paper, should it be accepted. I wish you the best of luck with your final submission! --- Reply to Comment 1.1.1: Comment: Thank you very much for your comment and affirmation of our work.
We have tried to address most if not all concerns raised by the reviewers. We would be delighted to integrate these clarifications, analysis and theoretical improvements into the final manuscript, should our submission be accepted. --- Once again, thank you for your invaluable insights and suggestions. Your professional expertise and constructive suggestions will significantly improve the quality of our work. We also wish you the best of luck with your life and future!
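The EMA of the global model that anchors this whole discussion can be illustrated with a minimal numpy sketch on flattened parameter vectors (the decay value and variable names are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def ema_update(ema_params, global_params, decay=0.9):
    """Exponential moving average of the global model across rounds,
    used as a smooth proxy for the global model trajectory."""
    return decay * ema_params + (1.0 - decay) * global_params

# Three rounds of global models drifting from 0 toward 1: the EMA lags
# behind, smoothing out round-to-round fluctuations.
ema = np.zeros(3)
for w_round in [np.full(3, 0.5), np.full(3, 0.8), np.full(3, 1.0)]:
    ema = ema_update(ema, w_round)
```

In FedGMT this smoothed trajectory is what clients regularize against (via a KL term on output distributions), which is why no second backward pass per SAM step is needed.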
Summary: This paper proposes a new Federated Learning algorithm, named FedGMT, to effectively cope with data heterogeneity by reducing the sharpness of the global model through a global model trajectory. The paper provides a convergence analysis of FedGMT in the non-convex and smooth case. Experimental results show the strong performance of the proposed method. Claims And Evidence: The original FedSAM formulates the optimization target as a minimax problem, to minimize the maximum loss. However, this work considers a minimization target, which is not identical to that of SAM-based methods. How do the authors explain this optimization target? Methods And Evaluation Criteria: In Algorithm 1, line 16, the server aggregates to update u but doesn't send it to the clients. How does each client obtain it, or is this aggregation redundant? Theoretical Claims: This work lacks a generalization error bound analysis of the proposed method. Experimental Designs Or Analyses: In the experiments, the authors don't compare against federated ADMM-based works. It is suggested to add such baselines for comparison. Also, the authors “report the final averaged test accuracy and standard deviation over the last 50 rounds for increased robustness and reliability”, but this comparison criterion is not solid, especially when some methods have not converged within the given communication rounds. Supplementary Material: I checked all the supplementary materials. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to several existing lines of research in Federated Learning and optimization, particularly in addressing the challenges posed by data heterogeneity and improving model generalization. Essential References Not Discussed: No. Other Strengths And Weaknesses: Weaknesses 1. The original FedSAM formulates the optimization target as a minimax problem, to minimize the maximum loss.
However, this work considers a pure minimization target, which is not identical to that of the SAM-based methods. How do the authors explain this optimization target? 2. In Algorithm 1, line 16, the server aggregates and updates $u$ but does not send it to the clients. How does each client obtain it, or is this aggregation redundant? 3. In Table 1, the authors claim that FedGMT achieves 1\times or 1.5\times communication cost and 1.2\times computation cost. How do the authors obtain this result? Please give further analysis. 4. This work lacks a generalization error bound analysis of the proposed method. 5. In the experiments, the authors do not compare against federated ADMM-based work; it is suggested to add such baselines for comparison. Moreover, the authors "report the final averaged test accuracy and standard deviation over the last 50 rounds for increased robustness and reliability"; this comparison criterion is not solid, especially since some methods have not converged within the given communication rounds. 6. Figure 6 cannot prove that the global loss is flatter; it only shows the consensus of the local and global update trajectories. If the authors can provide a visualization of the loss landscape, it would be more direct. 7. The paper discusses hyperparameter sensitivity but could provide more detailed insights into how different hyperparameters affect the performance of FedGMT. 8. The paper could enhance its originality by more clearly distinguishing its contributions from existing methods, and its significance by including real-world deployment examples. Other Comments Or Suggestions: 1. While the paper includes a hyperparameter sensitivity analysis, it could provide more detailed guidance on how to select the right hyperparameters for different datasets or levels of data heterogeneity. A discussion on the robustness of FedGMT to suboptimal hyperparameter choices would also be valuable. 2.
The method needs to compute and store additional parameters both locally and on the server, which increases the amount of computation. How can model performance and resource usage be balanced? Questions For Authors: See the Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the comments and suggestions! We answer your questions below. --- **1. Optimization target explanation.** Referring to Theorem 1 of the original SAM paper (Foret et al., 2020), the objective loss function of FedSAM at each client $m$ can be rewritten as the sum of the vanilla loss and the loss associated with the sharpness measure, which is the maximal change of the training loss within the $\rho$-constrained neighborhood, i.e., $\arg\min_{w_m}[L_m(w_m)+S_m(w_m)]$, where $S_m(w_m) = \max_{\epsilon: \|\epsilon\|<\rho} L_m(w_m+\epsilon)-L_m(w_m)$, which is a minimax problem as you mentioned. In FedSAM, the sharpness measure is approximated as $S_m(w_m) = L_m(w_m+\epsilon_m)-L_m(w_m)$, where $\epsilon_m = \rho\frac{\nabla\mathcal{L}_m(w_m)}{\|\nabla\mathcal{L}_m(w_m)\|}$ is the solution to a first-order approximation of the inner maximization problem. Combining this approximated sharpness measure with the vanilla loss transforms the minimax problem into a standard minimization task. Our work builds on this formulation to develop a better sharpness measure that minimizes global sharpness in FL, as formally established in Eq. (8) of the manuscript. --- **2. Role of the dual variable $u$.** Note that in line 13 of Algorithm 1, each client has its own dual variable $u_m^{t+1} = u_m^{t}-\frac{1}{\beta}(w_m^t-w^t)$. This enables clients to ensure that the solutions of their respective sub-problems are consistent under the global constraint. In line 16 of Algorithm 1, we define the global dual variable $u^t$ because, according to Eq. (13), the global model needs to be updated as $w^{t+1}=\frac{1}{N}\sum_{m \in \mathbb{N}}(w^t_{m,K}-\beta u^{t+1}_m)$. To avoid having each client send $u_m^{t+1}$ to the server in every round, we define $u^t$ to assist in minimizing Eq. (13). We also conducted experiments on CIFAR10-Dir(0.1): FedGMT's accuracy drops from 79.17 to 65.59 without $u$.
Therefore, updating $u$ is necessary, not redundant. --- **3. Analysis in Table 1.** To save space for the other questions, we have addressed this concern in our rebuttal to Reviewer Zauy (Section **W1**). Please refer to that section for our detailed response. --- **5.1 Comparison with federated ADMM-based work.** The baselines used in our experiments, such as FedDyn, FedSpeed, and FedSMOO, are all state-of-the-art ADMM-based FL methods. **5.2 Comparison criterion.** The comparison criterion we used follows the original FedSAM (Caldarola et al., 2022). We ensure sufficient communication rounds to guarantee convergence (see Fig. 5), while using the final averaged accuracy to avoid fluctuations and provide a more reliable performance estimate. We also provide the historical best accuracy over random seeds {1,2,3} at the **anonymous link https://anonymous.4open.science/r/RG5**; the ranking of each method is consistent with our criterion. --- **6. Loss landscape.** We provide the loss landscape, top Hessian eigenvalue, and Hessian trace of the SAM-based methods at the anonymous link in 5.2, demonstrating that our FedGMT reaches a flatter minimum than the others. --- **7. Hyperparameter sensitivity and selection.** Thanks for the suggestion. Our experiments across 4 CV/NLP datasets, 4 model architectures, and different scenarios (heterogeneity and participation) show that FedGMT's hyperparameters are consistent and robust. We detail the meanings and selection of these hyperparameters below. - The strength $\gamma$ and temperature $\tau$ are basic hyperparameters of the KL loss. We follow the common settings used in previous works and set $\gamma=1$ and $\tau = 3$ for all settings. - The penalty coefficient $\beta$ has been studied in many previous ADMM-based works. We find that selecting from {10, 100} works for all settings. - $\alpha$ controls the weight of the recent update trajectory in the EMA.
Since $\alpha$ needs to be close to 1 to retain past trajectories, we mainly chose from {0.5, 0.9, 0.95, 0.99}. We find that $\alpha=0.95$ works well in all settings. In summary, the hyperparameters in FedGMT can be selected with simple heuristics as described above. In practice, FedGMT achieves higher performance than the other baselines without complex tuning. --- **8.1 The originality** is that we develop an efficient (in both communication and computation) global sharpness measure for SAM-based FL. We do increase the amount of computation in lines 13, 16, and 18 of Algorithm 1, but it is negligible compared to the main training cost. Notably, FedGMT avoids the extra backward pass required for calculating perturbations in standard SAM. **8.2 Real-world deployment.** To validate practical deployment feasibility, we implemented FedGMT on 2×RPi4B, 2×Jetson Nano, and 1×PC. FedGMT achieves about 47% energy savings vs. FedAvg across three datasets, as shown at the anonymous link in 5.2. --- It is a pleasure to discuss this with you, which will help us to further improve this work. Thank you again for reading this rebuttal.
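As a numerical illustration of the SAM-style sharpness measure discussed in point 1 of this rebuttal (the toy quadratic losses and the finite-difference gradient below are our own assumptions for this sketch, not the authors' implementation):

```python
import numpy as np

def sam_perturbation(grad, rho=0.05):
    """First-order approximate solution to SAM's inner maximization:
    epsilon = rho * grad / ||grad||."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return rho * grad / norm

def sam_sharpness(loss_fn, w, rho=0.05, eps_fd=1e-6):
    """Sharpness measure S(w) ~= L(w + epsilon) - L(w), with the gradient
    estimated by central finite differences for this toy example."""
    grad = np.array([(loss_fn(w + eps_fd * e) - loss_fn(w - eps_fd * e)) / (2 * eps_fd)
                     for e in np.eye(len(w))])
    eps = sam_perturbation(grad, rho)
    return loss_fn(w + eps) - loss_fn(w)

# Toy quadratic losses: higher curvature should give a larger sharpness measure.
flat = lambda w: 0.5 * np.dot(w, w)   # Hessian = I
sharp = lambda w: 5.0 * np.dot(w, w)  # Hessian = 10 I
w = np.array([1.0, -2.0])
print(sam_sharpness(sharp, w) > sam_sharpness(flat, w))  # True: sharper loss, larger S
```

This only illustrates the local (per-client) measure; FedGMT's point is precisely to replace this per-client quantity with a global sharpness estimate built from the model trajectory.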
Summary: The article proposes a novel solution to deal with the client drift problem in federated learning. It is based on sharpness-aware minimization, addressing two problems: how to make it efficient, and how to guarantee that the global rather than the client objectives are targeted. It proposes a novel algorithm based on a clever integration of a trajectory loss. The method is introduced and substantiated both empirically and with a convergence analysis. #after discussion: I am still convinced that the paper contributes valuable tools. I appreciate the discussion about drift, very interesting. Claims And Evidence: The main claim is that the proposed method allows federated learning to deal with client drift better and more efficiently. This is sufficiently demonstrated. Methods And Evaluation Criteria: The method is reasonable and aligns very well with the current state of the art in the domain. Datasets are standard ones, distorted client-wise using a Dirichlet distribution. This is sufficient to document the results. It would be good to have more distortion/drift models for the clients, including disjoint data domains. Also, it might be good to have sensor data or any data that is not text or image. But it is sufficient as is to make the point. Theoretical Claims: Yes, the proofs and formalizations follow standard methodology and are correct as far as I can tell. Experimental Designs Or Analyses: Very good design and solid evaluation. Code is provided, so the work is reproducible. Supplementary Material: I briefly read through it; relevant auxiliary information. Relation To Broader Scientific Literature: Yes, the authors are well aware of the field. Essential References Not Discussed: In my opinion this is good. Other Strengths And Weaknesses: Very relevant topic and apparently a good solution. Can you comment on the challenge of integrating privacy? To what extent is this compatible?
Other Comments Or Suggestions: none Questions For Authors: Please make a comment on realistic client drifts and propose and evaluate more than Dirichlet. Please comment on the challenge of privacy. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows. --- **1. Comment on realistic client drifts.** Client drift in federated learning refers to the phenomenon where data distributions across devices (clients) change over time or space, leading to performance degradation. After a comprehensive review of the existing federated learning literature on the client drift problem, we observe that numerous studies simulate non-IID federated settings by partitioning public datasets. This approach is preferred because real-world federated datasets are scarce due to data regulations and privacy constraints, while synthetic partitioning offers flexibility in controlling key FL parameters (e.g., client count, data size) and leverages existing centralized training knowledge. Thus, synthetic client drift is more controllable and flexible, enabling methods developed under these conditions to generalize more readily to realistic client drift scenarios. In our work, as mentioned in Section 4.1, we adopt two widely used data partition strategies: - Pathological: Only selected categories can be sampled with a non-zero probability. The local dataset obeys a uniform distribution over the active categories. - Dirichlet: Each category can be sampled with a non-zero probability. The local dataset obeys a Dirichlet distribution. Notably, we extend this by transforming the original balanced dataset into a long-tail distribution to further enhance data heterogeneity and simulate real-world scenarios. --- **2. Propose and evaluate more than Dirichlet.** We introduce a non-IID data partitioning method where client datasets are strictly non-overlapping (disjoint) across classes, simulating an extreme data heterogeneity scenario.
Specifically: - CIFAR10 (10 total classes): 10 clients, each assigned a unique class - CIFAR100 (100 total classes): 100 clients, each assigned a unique class This single-class assignment ensures maximal label distribution discrepancy between clients. Experiments were conducted with a participation rate of 40% and 500 communication rounds. The results under this extreme setting are presented below.

| Method | Venue | CIFAR10 | CIFAR100 |
| ----- | ----- | ----- | ----- |
| FedAvg | AISTATS 2017 | 31.57±1.04 | 6.42±0.24 |
| FedSAM | ICML 2022 | 29.39±0.82 | 5.56±0.18 |
| FedSMOO | ICML 2023 | 10.00 (failed) | 1.00 (failed) |
| FedLESAM-D | ICML 2024 | 45.35±0.60 | 13.07±0.70 |
| FedGMT | ours | **58.23±0.68** | **22.13±0.34** |
| FedGMTv2 | ours | 54.76±0.40 | 17.19±0.14 |

The results show that in an extreme non-IID federated learning setup where clients hold disjoint class partitions, FedGMT achieves state-of-the-art results: 58.23% on CIFAR10 and 22.13% on CIFAR100, outperforming FedLESAM-D (45.35%/13.07%) and the other baselines. FedAvg (31.57%/6.42%) and FedSAM (29.39%/5.56%) struggle with label heterogeneity, while FedSMOO fails entirely. These results validate FedGMT's superior robustness to extreme label skewness. --- **3. Comment on the challenge of privacy.** Our work builds upon the well-established FedAvg framework, inheriting its parameter-based communication architecture between server and clients. This design ensures full compatibility with existing privacy-preserving techniques developed for FedAvg, including mainstream defenses against privacy attacks such as differential privacy [A], homomorphic encryption [B], and secure multi-party computation [C]. The core contributions of our paper focus on enhancing practical generalization accuracy on real-world datasets while minimizing communication and computation costs under data heterogeneity.
Importantly, our methods impose no restrictions on the integration of privacy-preserving techniques, ensuring seamless compatibility with existing or emerging privacy frameworks. [A] Chen, H., Vikalo, H., et al. The best of both worlds: Accurate global and personalized models through federated learning with data-free hyper-knowledge distillation (ICLR, 2023). [B] Cai, Y., Ding, W., Xiao, Y., et al. SecFed: A secure and efficient federated learning based on multi-key homomorphic encryption. IEEE Transactions on Dependable and Secure Computing, 2023, 21(4): 3817-3833. [C] Chen, L., Xiao, D., Yu, Z., et al. Secure and efficient federated learning via novel multi-party computation and compressed sensing. Information Sciences, 2024, 667: 120481. --- It is a pleasure to discuss this with you, which will help us to further improve this work. Thank you again for reading this rebuttal. --- Rebuttal Comment 1.1: Comment: I keep my (positive) rating :-) Just one comment on the non-IID aspect: I am well aware of the standard procedure in federated learning, but there is a body of literature on learning in the context of drift or covariate shift outside FL (an older, highly cited one with references to datasets: https://www.sciencedirect.com/science/article/pii/S0925231217315928); I would appreciate seeing broader challenges considered here as FL starts to overfit on the data --- not necessarily for this ICML paper, but in the future.... --- Reply to Comment 1.1.1: Comment: Thank you very much for your comment and affirmation of our work. We greatly appreciate your emphasis on the broader context of drift/covariate shift research, which aligns with our vision for advancing robust FL frameworks. We selected two drift datasets from the literature [A] you provided and one real-world drift dataset to assess the performance of FL methods. We detail the datasets and implementation below. - **Interchanging RBF**. This dataset consists of fifteen Gaussians with random covariance matrices.
Every 3000 samples, these Gaussians replace each other. With each replacement, the number of Gaussians changing their positions increases by one until all of them change their locations simultaneously. This setup enables us to evaluate an algorithm's performance in the face of abrupt drifts with increasing intensity. In total, there are 66 abrupt drifts within this dataset. - **Forest Cover Type**. This dataset assigns cartographic variables such as elevation, slope, and soil type of 30×30-meter cells to different forest cover types. Only forests with minimal human-caused disturbances were considered, so the resulting forest cover types are mainly determined by ecological processes. - **HAR (Human Activity Recognition Using Smartphones Dataset).** [B] This dataset is constructed from the recordings of 30 subjects performing activities of daily living while carrying a waist-mounted smartphone with embedded inertial sensors. The data is naturally associated with each subject (client). For these three tabular datasets, we use an MLP with three hidden layers of 32, 16, and 8 hidden units, respectively. The remaining settings are consistent with those in our manuscript. The experimental results are presented below.

| Method | Inter. RBF | Cover Type | HAR |
| - | - | - | - |
| FedAvg | 15.85±1.95 | 70.75±7.05 | 86.92±0.99 |
| FedSAM | 16.14±1.81 | 71.40±6.66 | 87.04±0.86 |
| FedSMOO | 17.57±1.46 | 75.63±3.60 | 91.32±0.75 |
| FedLESAM-D | **19.16±1.47** | 75.67±3.68 | 91.15±0.93 |
| FedGMT | 18.56±1.05 | **78.27±0.97** | **92.60±0.13** |

FedGMT achieves the highest accuracy on the Forest Cover Type dataset (78.27±0.97) and the HAR dataset (92.60±0.13). This indicates that FedGMT is well-suited to handle the data characteristics and drifts present in these real-world datasets. The Interchanging RBF dataset is designed to simulate abrupt drifts with increasing intensity.
FedLESAM-D performs the best here with an accuracy of 19.16±1.47. FedGMT follows closely with 18.56±1.05. The relatively low accuracy values for all methods on this dataset can be attributed to the complex and abrupt nature of the drifts. This shows that new FL methods will be needed to handle the drift in such datasets in the future. [A] Losing, Viktor, Barbara Hammer, and Heiko Wersing. "Incremental on-line learning: A review and comparison of state of the art algorithms." Neurocomputing 275 (2018): 1261-1274. [B] Anguita, Davide, et al. "A public domain dataset for human activity recognition using smartphones." ESANN, 2013. --- Once again, thank you for your invaluable insights and suggestions. Your professional expertise and constructive suggestions will significantly improve the quality of our work.
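As a complement to the partitioning discussion in this thread, the Dirichlet and disjoint (single-class) splits can be sketched as follows; the function names and label setup are our own toy illustration, not the authors' code:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients so that each class's samples
    follow Dirichlet(alpha) proportions. Smaller alpha -> more non-IID."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

def disjoint_partition(labels, num_clients):
    """Extreme non-IID split: client m receives only class m."""
    return [np.flatnonzero(labels == c).tolist() for c in range(num_clients)]

# Toy labels: 1000 samples, 10 balanced classes.
labels = np.arange(1000) % 10
iid_like = dirichlet_partition(labels, 10, alpha=100.0)  # near-uniform clients
skewed = dirichlet_partition(labels, 10, alpha=0.1)      # highly skewed clients
```

With a large `alpha` each client receives a near-uniform class mixture, while `alpha=0.1` concentrates each class on few clients, and `disjoint_partition` reproduces the single-class-per-client extreme used in the rebuttal's CIFAR experiments.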
Summary: This paper studies sharpness-aware minimization (SAM) in federated learning (FL) in the presence of data heterogeneity. The major problem for SAM in FL is that the clients cannot get an accurate estimate of the global objective/gradient due to heterogeneous data distribution. Existing literature has proposed utilizing the model updates in the previous communication round to approximate the global gradient. This paper proposes FedGMT, which leverages the global model trajectory which includes the pseudo-gradients from all the previous rounds through exponential moving averaging. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Extensive experiments on both CV and NLP problems, with various datasets and neural network models are conducted. Theoretical Claims: Yes. I quickly went through the proofs (which seem correct), but have not had the time to check them step-by-step. Experimental Designs Or Analyses: Yes. In the experiments, it is mentioned that SGD with a learning rate of 0.01 is adopted. While the momentum of 0.9 is quite common and standard in the literature, I expect that the learning rate should be tuned from a reasonable set for the baselines for the sake of a fair comparison. Supplementary Material: Yes. I mainly checked the proofs for the theoretical results. Relation To Broader Scientific Literature: There are many existing works endeavoring to address the data heterogeneity and computational overhead issues in SAM for federated learning, such as FedSMOO, FedSpeed, and FedLESAM. This paper advances the literature by incorporating the global model trajectory. Essential References Not Discussed: The literature review looks comprehensive. Other Strengths And Weaknesses: **Strengths:** 1. The topic and problem that this paper aims to tackle are interesting and important. 2. The paper is well-written and not difficult to follow. 3. 
Extensive experiments on both CV and NLP tasks are conducted to validate the effectiveness of the proposed algorithms. **Weakness:** 1. Incorporating KL divergence in the training loss is not new. For example, the derived algorithm looks similar to the cited paper FedGKD [A], which also adopts an EMA at the server and adds a KL term to the local loss. Therefore, in my understanding, the major difference lies in the adaptation of ADMM, which was proposed in FedSMOO. 2. The improvement of the algorithm seems to arise mainly from ADMM, since the performance of only using $L^{global}$ in the ablation study in Table 3 looks similar to FedSAM in Table 2 (78.18 vs. 78.31 and 61.28 vs. 61.05 in the Dir(1.0) and Dir(0.01) conditions, respectively). More experimental results without ADMM would help demonstrate the effectiveness of the derived global sharpness measure via the global model trajectory, which is the major contribution of this paper based on my understanding. 3. The learning rates seem not to have been carefully tuned for the baselines. [A] Yao, D., Pan, W., Dai, Y., Wan, Y., Ding, X., Yu, C., ... & Sun, L. (2023). FedGKD: Toward heterogeneous federated learning via global knowledge distillation. IEEE Transactions on Computers, 73(1), 3-17. Other Comments Or Suggestions: The paper is well-written. Questions For Authors: Please see my comments about weaknesses. In addition, could the authors help me understand how incorporating EMA and ADMM improves the convergence rate for non-convex optimization from $O(1/\sqrt{T})$ to $O(1/T)$? Code Of Conduct: Affirmed. Overall Recommendation: 3
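For concreteness, the exponential moving average of the global model trajectory that this review summarizes can be sketched as follows (a toy illustration with our own variable names, not the paper's implementation):

```python
import numpy as np

def ema_trajectory(global_models, alpha=0.95):
    """Exponential moving average over the sequence of global models:
    e_t = alpha * e_{t-1} + (1 - alpha) * w_t.
    With alpha close to 1, e_t retains the whole past trajectory."""
    e = np.asarray(global_models[0], dtype=float).copy()
    for w in global_models[1:]:
        e = alpha * e + (1 - alpha) * np.asarray(w, dtype=float)
    return e

# Toy trajectory: the global model moves along a fixed direction d each round.
d = np.array([1.0, 0.5])
trajectory = [t * d for t in range(10)]
e = ema_trajectory(trajectory, alpha=0.9)
# The EMA lags behind the latest model, so the gap w_T - e_T points along
# the recent update direction, i.e. the trajectory signal the method exploits.
gap = trajectory[-1] - e
```

On this straight-line toy trajectory the gap is exactly aligned with the update direction; for real training the EMA gives a smoothed stand-in for the accumulated pseudo-gradients of previous rounds.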
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows. --- **W1. FedGMT vs. FedGKD.** While FedGKD appears similar to our method, there are fundamental differences. Practically, FedGKD takes an element-wise average over the latest 5 rounds (as its authors suggest) of global models and sends it to clients. This averaging method is not EMA; it demands extra storage space and doubles the communication overhead. We use the EMA with a KL divergence term in the training loss because, as proven in Eq. (8), minimizing the loss difference $L(e)-L(w)$ between the EMA model $e$ and the global model $w$ is equivalent to minimizing SAM's sharpness measure for the global optimization. However, directly combining this loss difference with the FL objective would cancel out $L(w)$. We therefore replace the cross-entropy (CE) loss with the KL divergence loss to decouple the vanilla loss, since minimizing the CE loss is equivalent to minimizing the KL loss. Thus, minimizing the KL loss in FedGMT minimizes global sharpness while reducing computational cost compared to other SAM-based methods, a goal that FedGKD cannot achieve. The comparative experiments in the table under **W2** demonstrate that FedGMT outperforms FedGKD by 2.1% on Dir(0.1) and 2.79% on Dir(0.01) when both methods employ ADMM. --- **W2. Methods comparison without ADMM.** We note that the performance of $L^{glotra}$ degrades without ADMM, which is due to non-vanishing biases between the local and global updates in heterogeneous scenarios. This aligns with methods such as FedSpeed, FedSMOO, and FedLESAM, which also incorporate ADMM to address such issues. Thus, approaches that omit ADMM may fail to fully leverage the proposed sharpness measure's potential. We evaluate performance with/without ADMM on CIFAR10 to demonstrate ADMM's necessity for maintaining optimization stability under data heterogeneity. The results are stated below.
| Method | W/O ADMM Dir(0.1) | W/ ADMM Dir(0.1) | W/O ADMM Dir(0.01) | W/ ADMM Dir(0.01) |
| ----- | ----- | ----- | ----- | ----- |
| FedAvg | 70.61±3.51 | 75.71±0.95 | 61.94±4.93 | 70.52±2.32 |
| FedSAM | 70.96±3.97 | 77.51±0.97 | 61.05±4.85 | 71.96±1.67 |
| FedSMOO | 71.28±3.90 | 77.08±0.97 | 61.60±4.48 | 72.11±1.79 |
| FedLESAM | 70.89±3.59 | 76.11±0.83 | **62.95±4.60** | 71.10±2.82 |
| FedGKD | 72.54±2.68 | 77.07±0.83 | 61.50±3.96 | 71.88±2.77 |
| FedGMT | **72.68±2.19** | **79.17±0.49** | 61.28±3.11 | **74.67±0.77** |
| FedGMTv2 | 72.62±2.27 | 78.73±0.47 | 61.90±3.46 | 74.11±1.32 |

The above experimental results show that FedGMT with ADMM achieves the best performance because our sharpness measure estimates global sharpness more accurately than the others (Figure 2 also demonstrates this). We will add these experimental results to our updated version. --- **W3. Learning rate tuning.** Thanks for the suggestion. The learning rate is a basic hyperparameter in deep learning, and it is usually not fine-tuned per method so as to ensure a fair comparison. After a comprehensive review of the existing federated learning literature, we observe that prior works consistently use a fixed learning rate across all compared methods. In our study, we select learning rates from the set {0.001, 0.01, 0.1} for different datasets and model architectures to maximize SGD performance, then fix this optimal value for all methods. Specifically, a learning rate of 0.01 was employed for CIFAR10, CIFAR100, and CINIC10, while 0.1 was used for AG News. --- **Q1. Understanding the convergence rate.** In the federated stochastic non-convex setting, under standard assumptions of smoothness, bounded stochastic gradients, heterogeneity, and others, prior works [A, B] have established $O(1/T)$ convergence rates for their algorithms. Specifically, our theoretical contributions lie in proving convergence without relying on the restrictive assumptions of bounded heterogeneity or requiring local minima in each communication round.
This is enabled by our ADMM-based framework, which derives properties (e.g., Lemmas D.5 and D.6) to bound the global gradient norm in Eq. (34). Prior work [C] establishes faster convergence under the assumption of client similarity in non-convex settings. Our Eq. (10) incorporates $L^{glotra}$, a non-negative convex function that can be seen as injecting an exponential moving average of historical global gradients into the local updates of all clients. This shared EMA across clients reduces the variance between local and global updates, thereby accelerating convergence. [A] FedPD: A federated learning framework with adaptivity to non-IID data. IEEE Transactions on Signal Processing, 69:6055–6070, 2021. [B] FedADMM: A federated primal-dual algorithm allowing partial participation. arXiv preprint arXiv:2203.15104, 2022. [C] SCAFFOLD: Stochastic controlled averaging for federated learning (ICML 2020). --- It is a pleasure to discuss this with you, which will help us to further improve this work. Thank you again for reading this rebuttal. --- Rebuttal Comment 1.1: Comment: Thanks for the efforts addressing my comments. I have some additional comments as follows. 1. Since this paper is concerned with a new optimization method, careful tuning may be preferred for a fair performance comparison (considering that SGD is sensitive to the learning rate). 2. In general, the best rate a first-order method can attain is $O(1/\sqrt{T})$ for non-convex optimization. I was curious which part of the proposed algorithm boosts the convergence rate to $O(1/T)$. --- Reply to Comment 1.1.1: Comment: Thanks for your valuable comments on our work. We are happy to continue the discussion and honored to share some of our understanding with you. --- **1. Experiments with learning rate tuning.** We conducted experiments on CIFAR10-Dir(0.1) for each method with local learning rates from {0.001, 0.01, 0.1}. For our FedGMT, we fix the learning rate decay at 0.998.
The remaining hyperparameters are the same as those described in Section 4.1 (FedGMT Setting). For the other methods, we refer to their official papers to search for the best hyperparameters. These mainly include the SAM perturbation radius {0.001, 0.01, 0.1}, the penalty coefficient {1, 10, 100}, and the learning rate decay {0.998, 0.9998, 0.99998}. The results are stated below.

| Method | Venue | lr = 0.001 | lr = 0.01 | lr = 0.1 |
| - | - | - | - | - |
| FedAvg | AISTATS 2017 | 54.61±5.00 | 71.70±3.30 | 71.76±1.88 |
| FedSAM | ICML 2022 | 54.63±4.99 | 71.73±3.22 | 72.50±1.95 |
| FedSMOO | ICML 2023 | 70.94±1.22 | 77.49±0.57 | 77.48±1.05 |
| FedLESAM-D | ICML 2024 | 70.80±1.20 | 75.99±0.96 | 75.71±0.79 |
| FedGMT | Ours | **74.85±0.63** | **79.84±0.27** | **78.21±0.55** |

The results show that FedGMT still achieves higher accuracy than the other baselines after careful tuning. In addition, we find that the SAM perturbation radius needs to be carefully tuned for each local learning rate. In practice, FedGMT achieves higher performance than the other baselines without complex tuning. --- **2. Convergence rate explanation.** The essence of this question is to understand the role of local steps (denoted $K$) in FL algorithms. To better understand this issue, we first review some foundational theory. On the one hand, non-ADMM FL algorithms aim to solve $\min_w L(w) = \sum_m L_m(w)$. For this problem, prior studies typically achieve a convergence rate of $O(1/\sqrt{T})$ in the non-convex setting. For example, [A] demonstrates that both FedAvg and SCAFFOLD attain $O(1/\sqrt{TK})$. Here $K$ is regarded as a constant, and in such cases these algorithms have an $O(1/\sqrt{T})$ rate. On the other hand, ADMM-based FL algorithms aim to solve $\min \sum_m L_m(w_m)\ \ \text{s.t.}\ w_m = w$. Note that when this problem finally converges to satisfy the constraint, the two problems are equivalent. By constructing the Lagrangian relaxation (Eq.
(11)), the constrained problem is transformed into an unconstrained form. Each client solves a local subproblem per round (Eq. (12)), and the convergence analysis assumes that clients approximate stationary points of these subproblems. Under this assumption, our proposed algorithm achieves an $O(1/T)$ convergence rate independent of the number of local steps $K$. However, in practice we use $K$ local steps to solve each subproblem. This means $K$ needs to be large enough to approximate the solution of Eq. (12). In fact, [B] proves that $K$ needs to satisfy $K = O(T)$ in ADMM-based FL algorithms to achieve $O(1/T)$. Substituting $K = O(T)$ into non-ADMM methods' $O(1/\sqrt{TK})$ rates (e.g., FedAvg/SCAFFOLD [A]) also yields $O(1/T)$. **So their convergence rates are actually equivalent. The difference is that ADMM-based algorithms need fewer communication rounds.** For example, if we need $T$ gradient computations to reach a target accuracy or error, theoretically FedAvg and SCAFFOLD need to communicate $T/K$ times, but ADMM-based algorithms need $\sqrt{T}$ communications with $K = O(T)$. This theoretical advantage aligns with the empirical results (Figure 5), where ADMM-based algorithms converge faster. However, enforcing $K = O(T)$ in FedAvg or SCAFFOLD introduces polynomial divergence in their proofs, so $K$ is usually regarded as a constant in these non-ADMM algorithms. In summary, the claim that our algorithm boosts the convergence rate to $O(1/T)$ holds under the condition that each local client approaches a stationary point, i.e., $K = O(T)$. This does not violate the fact that the best rate a first-order method can attain is $O(1/\sqrt{T})$ for non-convex optimization, since the $O(1/T)$ rate relies on additional assumptions beyond the standard non-convex setting. [A] SCAFFOLD: Stochastic controlled averaging for federated learning (ICML 2020). [B] FedSpeed: Larger local interval, less communication round, and higher generalization accuracy (ICLR 2023).
--- Thank you again for reading our rebuttal. We have tried to address most, if not all, concerns raised by the reviewers. If our responses have sufficiently addressed your questions and concerns, we would be grateful if you would kindly consider raising your score. Thank you.
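The round-count comparison in the convergence discussion above can be checked with a few lines of arithmetic; the budget `T` and local interval `K` below are illustrative numbers of our own, not values from the paper:

```python
import math

def rounds_non_admm(T, K):
    """FedAvg/SCAFFOLD style: T total gradient steps split into
    rounds of K local steps -> about T / K communication rounds."""
    return math.ceil(T / K)

def rounds_admm(T):
    """ADMM-based methods with K = O(T) local steps per round:
    sqrt(T) rounds x sqrt(T) local steps = T total gradient steps,
    matching the O(1/T) per-round rate discussed above."""
    return math.ceil(math.sqrt(T))

T = 1_000_000  # illustrative total gradient-computation budget
K = 50         # illustrative fixed local interval for non-ADMM methods
print(rounds_non_admm(T, K))  # 20000 rounds
print(rounds_admm(T))         # 1000 rounds
```

Under this accounting both families spend the same T gradient computations, but the ADMM-style schedule communicates far less often, which is the point of the rebuttal's argument.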
SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference
Accept (poster)
Summary: SparseVLM is a training-free framework that optimizes VLMs by reducing computational load through selective pruning of visual tokens based on their relevance to text tokens. Using self-attention matrices, it introduces a rank-based adaptive sparsification strategy and a token recycling mechanism to retain essential information. Compatible with models like LLaVA, Mini-Gemini, and Qwen2-VL, it reduces latency and costs while preserving accuracy. ## update after rebuttal Thank the authors for the rebuttal. I will keep my original rating, which was already positive. Claims And Evidence: SparseVLM can reduce computational overhead in VLMs without sacrificing performance. - Experimental results demonstrate that SparseVLM can reduce FLOPs by up to 77.8%, decrease CUDA latency by up to 37%, and maintain 97% of the original accuracy in image tasks like VQA. SparseVLM improves the efficiency of the video question answering task. - In video understanding tasks, SparseVLM outperforms FastV by 34.4% in accuracy and achieves a smaller decrease in performance (0.17 GPT score drop vs. 1.02 for FastV). SparseVLM provides a trade-off between efficiency and performance. - The method shows significant improvement in efficiency (e.g., 62.8% reduction in FLOPs) with minimal accuracy loss, validated through extensive experiments on multiple benchmarks. Methods And Evaluation Criteria: - Methods: SparseVLM uses a two-step process for pruning and refining visual tokens. First, visual tokens are rated based on their correlation with relevant text tokens (text raters). Then, the tokens are sparsified using an adaptive rank-based strategy. For tokens that are pruned, a recycling mechanism is employed to aggregate and compress them into more compact forms. The method integrates well with Flash Attention for efficient computation.
- Evaluation Criteria: The evaluation focuses on accuracy, FLOPs, and latency reduction across multiple vision-language benchmarks, such as GQA, VQAv2, and MSRVTT, as well as video benchmarks like TGIF-QA and MSVD-QA. Trade-offs between computational savings and performance are carefully analyzed. Theoretical Claims: - Visual tokens can be sparsified adaptively based on text tokens to optimize VLM performance: The authors argue that not all visual tokens are equally relevant to a given task and that adaptive sparsification based on textual context can improve both efficiency and accuracy. This claim is supported by the results showing the effectiveness of the rank-based strategy and text-aware pruning. - Token recycling helps preserve important visual information during sparsification: The authors suggest that recycling pruned tokens minimizes information loss, which is crucial for maintaining performance in downstream tasks. This is backed by experimental improvements in performance when token recycling is used. Experimental Designs Or Analyses: - Design: SparseVLM was tested on multiple VLMs (LLaVA, Mini-Gemini, Qwen2-VL) across image and video understanding tasks. Three token configurations (576, 192, 64 tokens) were used to assess the impact of sparsification. Comparative experiments with other methods (e.g., FastV, ToMe) were conducted to evaluate improvements in efficiency and accuracy. - Analysis: The primary analysis focuses on comparing the reduction in FLOPs and latency with minimal loss in accuracy. The method was also analyzed on its ability to handle video tasks, where temporal and spatial dependencies are critical. Supplementary Material: - Several figures illustrate how SparseVLM selectively prunes visual tokens, demonstrating how the model retains critical information for tasks like VQA while discarding redundant tokens. - Additional supplementary material includes detailed analyses of the latency vs. accuracy and FLOPs vs. 
accuracy trade-offs, showing how SparseVLM achieves an efficient balance between computation and performance. Relation To Broader Scientific Literature: - SparseVLM is an important contribution to the ongoing research on making vision-language models more efficient. It addresses the challenge of visual token redundancy that is prevalent in high-resolution image and video tasks, complementing other recent works that focus on reducing the computational burden in VLMs. - The method is compared with other pruning and sparsification strategies (e.g., FastV, ToMe, PDrop) and outperforms them in both computational efficiency and accuracy. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: The method might struggle with extremely complex tasks where minimal information loss is critical, as the adaptive sparsification might lead to dropping crucial visual tokens in certain edge cases. Other Comments Or Suggestions: It would be useful to see a deeper exploration of the impact of token recycling in more complex, multi-step reasoning tasks where preserving information is more crucial. Questions For Authors: Can SparseVLM be generalized to work with other transformer-based architectures beyond the ones tested in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
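As a concrete illustration of the two-step pipeline this review describes (text raters scoring visual tokens via self-attention, then a rank-guided pruning budget), here is a minimal NumPy sketch. All function names, shapes, and the use of plain matrix rank as the retention budget are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rate_visual_tokens(attn, text_idx, vis_idx):
    # attention flowing from text queries to visual keys,
    # averaged over the text tokens -> one score per visual token
    return attn[np.ix_(text_idx, vis_idx)].mean(axis=0)

def adaptive_budget(attn_vis):
    # matrix rank as a proxy for the number of non-redundant visual tokens
    return int(np.linalg.matrix_rank(attn_vis))

rng = np.random.default_rng(0)
attn = rng.random((8, 8))            # toy self-attention matrix (seq len 8)
text_idx, vis_idx = [0, 1, 2], [3, 4, 5, 6, 7]
scores = rate_visual_tokens(attn, text_idx, vis_idx)
k = min(adaptive_budget(attn[np.ix_(vis_idx, vis_idx)]), len(vis_idx))
keep = np.array(vis_idx)[np.argsort(scores)[-k:]]  # retain top-k visual tokens
```

Using matrix rank as the budget mirrors the intuition that only linearly independent attention patterns carry non-redundant information; the paper's actual criterion and scaling may differ.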
Rebuttal 1: Rebuttal: We sincerely thank the **reviewer SmbH** for the effort in reviewing our paper. Our responses to the reviewer's comments are summarized as follows.

---

> **1. The discussion of how effectively SparseVLM performs on highly complex tasks.**

Thank you for your attention to our performance on more complicated tasks. Firstly, due to adaptive sparsification, our method can determine whether the vision tokens in the current layer are redundant for the prompt to avoid losing critical information. Besides, our token recycling mechanism can retain as much information as possible in a cost-effective manner. To validate the effectiveness, we perform experiments on MMMU Pro, MMBench (Attribute Reasoning, AR), and MMBench (Logical Reasoning, LR) with LLaVA-7B. Our performance on MMMU Pro consistently approaches the average accuracy reported in Table 1 across all token settings, demonstrating our method's robustness in highly complex tasks. Furthermore, even with a severe reduction in the number of tokens from 576 to 192, the MMBench (LR) performance surpasses that of the baseline. The reason is that SparseVLM progressively sparsifies tokens, enabling the model to gradually focus, transitioning from the image background to the objects within the image and then to the details of those objects, and achieve outstanding performance. ***In summary, our SparseVLM can well handle highly complex tasks while maintaining vital information.***

| Settings | MMMU Pro | MMBench (AR) | MMBench (LR) | Avg. (Loss) |
| ------------- | ---------------- | --------------- | -------------- | -------------- |
| *Upper bound* | 30.3 | 73.3 | 30.5 | 44.7 (0) |
| **192 tokens** | 30.0 | 71.9 | 33.9 | **45.3 (+0.6)** |
| **128 tokens** | 29.4 | 70.4 | 30.5 | **43.4 (-1.3)** |
| **64 tokens** | 26.4 | 67.2 | 26.7 | **40.1 (-4.6)** |

---

> **2. The analysis of the impact of token recycling in more complex, multi-step reasoning tasks.**

We appreciate the insightful suggestion.
To further explore the effectiveness of our token recycling strategy in more complex and multi-step reasoning tasks, we conduct ablation experiments on MMBench (LR) and MMMU Pro. Across multiple sparsity ratios (64, 128, 192), our algorithm achieves a significant average performance improvement of $0.8$% and $1.0$% on MMBench (LR) and MMMU Pro, respectively. Notably, as the number of pruned vision tokens increases, the benefit brought by our recycling method increases. For instance, when pruning from 192 to 64 tokens, pruned token recycling significantly boosts accuracy from $0.2$% to $1.6$% on MMMU Pro. ***In short, compared to common tasks (e.g., GQA in Table 4), our token recycling mechanism proves its higher value in complex, multi-step reasoning tasks.***

| Benchmark | 64 tokens | 128 tokens | 192 tokens | Avg. |
| --- | :---: | :---: | :---: | --- |
| **MMBench (LR)** | 25.2 | 30.0 | 33.6 | 29.6 |
| **+ TR** | 26.7 | 30.5 | 33.9 | 30.4 |
| **MMMU Pro** | 24.8 | 29.2 | 28.8 | 27.6 |
| **+ TR** | 26.4 | 29.4 | 30.0 | 28.6 |

---

> **3. Further experiments on other transformer-based VLM architectures.**

We sincerely thank you for your advice. We have conducted experiments across various transformer-based architectures, including different vision encoders (e.g., CLIP in LLaVA and CLIP+ConvNeXt in Mini-Gemini) and different LLM decoders (e.g., LLaMA in LLaVA and Qwen in Qwen2-VL). To further validate compatibility, we also tested our approach on Cambrian-1 13B (576 tokens setting), which is another transformer-based VLM architecture. Shown in the table below, under the 192-token setting, our method achieves a $2.3$% lower performance drop than PDrop; meanwhile, under the 64-token setting, our approach maintains an accuracy loss below $8.0$%. ***Therefore, our method is fully compatible with other transformer-based architectures.***

| Method | GQA | MMB | SQA | SEED | TextVQA | MMMU | Avg. (Loss) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Cambrian-1-13B** | 64.0 | 75.5 | 79.3 | 74.4 | 72.8 | 40.0 | 67.7 (0) |
| **Retain 192 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 59.5 | 72.5 | 75.3 | 68.4 | 70.0 | 38.0 | 63.9 (-3.8) |
| SparseVLM | 61.6 | 74.2 | 79.0 | 72.0 | 71.4 | 38.8 | **66.2 (-1.5)** |
| **Retain 128 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 56.8 | 70.9 | 74.0 | 65.5 | 68.8 | 35.4 | 61.9 (-5.8) |
| SparseVLM | 60.4 | 73.0 | 78.2 | 70.6 | 70.2 | 38.0 | **65.1 (-2.6)** |
| **Retain 64 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 46.2 | 61.6 | 71.7 | 56.4 | 58.0 | 26.1 | 53.3 (-14.4) |
| SparseVLM | 54.4 | 68.7 | 78.0 | 64.0 | 62.2 | 31.2 | **59.8 (-7.9)** |

---

*We sincerely appreciate your thorough review and valuable suggestions to improve our work. If you find our responses satisfactory, we would be grateful for your reconsideration of the rating.*
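The token recycling mechanism discussed in this rebuttal (aggregating pruned tokens into a few compact forms) could be sketched roughly as follows; the grouping strategy and score-weighted mean are hypothetical simplifications of ours, not the paper's exact rule:

```python
import numpy as np

def recycle_pruned(tokens, scores, keep_idx, n_recycled=2):
    # Split the pruned tokens into n_recycled groups (by relevance order)
    # and merge each group into one compact token via a score-weighted mean.
    pruned = np.setdiff1d(np.arange(len(tokens)), keep_idx)
    order = pruned[np.argsort(scores[pruned])[::-1]]
    recycled = []
    for g in np.array_split(order, n_recycled):
        w = scores[g] / (scores[g].sum() + 1e-8)  # weight by relevance
        recycled.append((w[:, None] * tokens[g]).sum(axis=0))
    # kept tokens pass through unchanged; recycled tokens are appended
    return np.vstack([tokens[keep_idx], np.stack(recycled)])

tokens = np.arange(24, dtype=float).reshape(6, 4)  # 6 toy visual tokens
scores = np.arange(1.0, 7.0)                       # toy relevance scores
out = recycle_pruned(tokens, scores, keep_idx=np.array([0, 1]))
```

Merging by weighted mean is one plausible way to "retain as much information as possible in a cost-effective manner"; the number of recycled tokens and the weighting are tunable choices.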
Summary: SparseVLM introduces a text-guided visual token sparsification framework for efficient VLM inference without significant performance loss. The key idea is to use the textual input to identify which image regions (visual tokens) are most relevant and prune away the rest. Claims And Evidence: Based on the motivation, "visual tokens should be sparsified adaptively based on the question prompt", the authors demonstrate the performance of SparseVLM through experiments. The motivation is clear, and the experiments are well done, although the overall paper is based on empirical results. Methods And Evaluation Criteria: See above. Theoretical Claims: - Experimental Designs Or Analyses: See above. Supplementary Material: The authors provide supplementary material containing additional analyses and technical details that complement the main paper. Relation To Broader Scientific Literature: The paper proposed SparseVLM, an efficient inference method for VLMs without additional finetuning. Essential References Not Discussed: - Other Strengths And Weaknesses: The method’s applicability might be somewhat limited to scenarios where a guiding text is present. For tasks such as captioning (without a specific query), SparseVLM in its current form might not know which tokens to prune because there is no external query focusing it. The authors do not explicitly address how their method might extend to, for example, caption generation or more complex tasks. Since the proposed method aims at efficient VLM inference, it should show generalizability to other general VLM tasks, rather than just simple QA with classification. Other Comments Or Suggestions: - Questions For Authors: - As noted, SparseVLM currently assumes a guiding text query. It would be interesting to see how it could be applied to tasks like image captioning or general visual dialogue, where the prompt might be generic (e.g., “Describe the image”).
I'm curious about the experiment results on more general tasks, such as Tab. 7 or Appendix Fig. 1/2 from the original LLaVA paper [1], with the LLaVA-Bench (In-the-Wild) and LLaVA-Bench (COCO) datasets. [1] https://arxiv.org/pdf/2304.08485 Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the **reviewer bpD8** for the effort in reviewing our paper. Our responses to the reviewer's comments are summarized as follows.

---

> **1. The analysis of the generality of SparseVLM to other general multimodal tasks.**

We sincerely appreciate your valuable suggestion. To investigate how SparseVLM sparsifies visual tokens when dealing with questions similar to "*Describe this photo in detail.*", we further visualized the token sparsification results from different layers of SparseVLM when handling these questions. It was surprising that when confronted with such questions, SparseVLM first shifts the model's attention to the background information of the image. Subsequently, it sparsifies the tokens corresponding to this part and focuses on the information of the main objects in the image (*e.g.*, the people in the image). Finally, it pays attention to the detailed information of the objects (*e.g.*, the nose, mouth, and eyes of a person), enabling the model to describe the image smoothly and progressively. ***Therefore, our method follows a global-to-local understanding process, yet still effectively captures the key information in captioning tasks.*** For a visualization example, please refer to the anonymous link: https://imgur.com/a/lbp1qkI.

---

> **2. The performance of SparseVLM on other general multimodal tasks.**

***(1) Qualitative analysis*** Here we take the captioning task as an example and conduct a qualitative analysis. Specifically, we ask the model the question with question_id 1 in LLaVA-Bench (In-the-Wild), where the prompt is "*Describe this photo in detail*". **The GPT-4 answer is**: *This photo features an aerial view of Diamond Head, located in the Hawaiian Islands. The volcanic tuff cone appears prominent and impressive, with its ridges and slopes covered in lush green vegetation. The crater of Diamond Head is clearly visible, surrounded by the steep walls of the structure.
In the background, you can see the vast expanse of the ocean with varying shades of blue, showcasing the pristine waters surrounding the island.* **The original LLaVA answer is**: *The image showcases a breathtaking view of a large, rocky island with a mountainous terrain. The island is surrounded by a beautiful blue ocean, and it appears to be a tropical paradise. The island is covered in lush greenery, and there are several houses scattered across the landscape. In the distance, a city can be seen, adding to the picturesque scene. The combination of the island's natural beauty and the city's presence creates a unique and captivating view.* **Our method (192 tokens) answer is**: *The image showcases a breathtaking aerial view of a large, rocky island with mountainous terrain. The island is surrounded by a beautiful blue ocean, and the shoreline is dotted with houses scattered across the landscape. The houses vary in size and are positioned at different distances from the shore, creating a lively and picturesque scene.* ***From the answers, it can be seen that our method successfully preserves the general style, atmosphere, and scene composition of images.*** The observed minor omissions in specific details (such as '*lush greenery*') represent a conscious trade-off, as we intentionally maintain only $33.3$% (192/576) of visual tokens to optimize efficiency. ***(2) Quantitative analysis*** We further performed experiments on LLaVA-Bench (In-the Wild) and LLaVA-Bench (COCO), and the results are shown in the table below. It can be observed that SparseVLM achieves significant performance in the task of general visual dialogue. When the number of tokens is sparsified to 192 tokens, it may even lead to a slight performance improvement, which demonstrates the effectiveness of our algorithm. 
***The reason is that SparseVLM is capable of progressively sparsifying tokens, enabling the model's attention to gradually focus, transitioning from the image background to the objects within the image and then to the details of those objects.*** This allows the model to understand images in a step-by-step manner and achieve outstanding performance in tasks such as general visual dialogue.

| Settings | In-the-Wild (Loss) | COCO (Loss) | Avg. (Loss) |
| ---- | ---- | ---- | ---- |
| LLaVA-7B | 62.7 (0) | 72.3 (0) | 67.5 (0) |
| **192 tokens** | 63.7 (+1.0) | 71.9 (-0.4) | 67.8 (+0.3) |
| **128 tokens** | 61.7 (-1.0) | 70.4 (-1.9) | 66.1 (-1.4) |
| **64 tokens** | 60.4 (-2.3) | 66.2 (-6.1) | 63.3 (-4.2) |

---

*We sincerely appreciate your thorough review and valuable suggestions to improve our work. If you find our responses satisfactory, we would be grateful for your reconsideration of the rating.*
Summary: This paper presents SparseVLM, a text-guided, training-free token optimization mechanism that improves the efficiency of vision-language models (VLMs). SparseVLM selects relevant text tokens to evaluate the significance of visual tokens using self-attention matrices and then progressively prunes irrelevant tokens. To enhance sparsity, it employs a rank-based strategy and a token recycling method. Experimental results demonstrate that SparseVLM improves VLM efficiency across various image and video understanding tasks. For example, integrating SparseVLM with LLaVA reduces FLOPs by 54% and CUDA latency by 37%, while retaining 97% of the original accuracy. Claims And Evidence: The claims in this paper—"visual tokens should be sparsified adaptively based on the question prompt" and "not all prompt tokens should be considered"—are intuitively reasonable and supported by experimental results. Methods And Evaluation Criteria: **Method** - L202-203: When first encountering the term "rank," it is unclear how it is computed. Although the supplementary material provides details, please add a reference here for clarity. - Rank(P) Selection: Is Rank(P) too tricky? How is the "appropriate threshold" selected? Additionally, how can you ensure that this threshold remains valid across all LLM layers? - Deletions: Where is the ablation study on determining the number of deletions? Please compare this approach with a simpler min-k method. - RoPE Position IDs: What are the position IDs of RoPE for the recycled token and the maintained vision tokens? **Evaluation** Overall, the evaluation setup is well-designed. If possible, could you provide results on more challenging benchmarks, such as VideoMME, MMMU Pro, or LVBench? Theoretical Claims: I have checked Section 3.4, and it looks fine. Experimental Designs Or Analyses: The experimental results are indeed impressive, significantly surpassing previous methods. 
However, as mentioned earlier, please provide results on more challenging benchmarks if possible. Supplementary Material: - I have checked the appendix. - The authors provided a code repository; however, due to the lack of instructions, I did not thoroughly review it. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: This paper is not the first to propose that "visual tokens should be sparsified adaptively based on the question prompt." Please include a discussion of related works, such as [a] VideoLLM-MoD: Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation (NeurIPS 2024) and [b] Free Video-LLM: Prompt-guided Visual Perception for Efficient Training-free Video LLMs (arXiv 2410.10441). Other Strengths And Weaknesses: Could you also provide some failure cases for Figure 6? This would help in understanding the scenarios where the method does not work. Other Comments Or Suggestions: Please see above. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

---

> **1. References on the computation of the rank of the matrix.**

The rank of a matrix is the maximum number of linearly independent rows, determined by singular value decomposition. We will add a reference to the corresponding supplementary part.

---

> **2. Explanation of the selection via the rank method and its threshold.**

Mathematically, matrix rank quantifies non-redundant information via its linearly independent vectors. The motivation of Rank(P) is to adaptively determine dynamic thresholds to identify redundant visual tokens per layer for sparsification, eliminating the heuristic sparsity ratios that require manual tuning in other methods. Experiments scale Rank(P) solely for fair comparisons under matching sparsity to demonstrate algorithm superiority. Practical deployment requires no scaling adjustments or tuning.

---

> **3. Ablation study on determining the number of deletions.**

We conducted comparative experiments with the min-$k$ (top-$k$) selection strategy. Compared to the min-$k$ method, our rank method outperforms across all token settings. The results suggest that the rank-based method is more advantageous in information-constrained scenarios.

| Method | GQA | MMB | SQA | SEED | TextVQA | Avg. |
|---|---|---|---|---|---|---|
| **Retain 192 Tokens** | | | | | | |
| min-k | 57.2 | 62.7 | 70.1 | 56.6 | 56.7 | 60.7 |
| rank | 59.5 | 64.1 | 68.7 | 58.7 | 57.8 | **61.8 (+0.9)** |
| **Retain 128 Tokens** | | | | | | |
| min-k | 55.2 | 60.9 | 69.7 | 54.0 | 55.6 | 59.1 |
| rank | 58.4 | 64.5 | 68.6 | 58.2 | 56.7 | **61.3 (+2.2)** |
| **Retain 64 Tokens** | | | | | | |
| min-k | 48.5 | 52.5 | 70.1 | 45.4 | 50.1 | 53.3 |
| rank | 53.8 | 60.1 | 69.8 | 52.2 | 53.4 | **57.9 (+4.6)** |

---

> **4. Adjustment of the position IDs.**

Token pruning in SparseVLM creates layer-varying KV token counts, typically below the original sequence length. Using unpruned position IDs misaligns RoPE, as IDs mismatch the pruned KV cache. We solve this via dynamic position ID recalibration: (1) Track retained tokens from prior steps via LLaVA’s KV cache.
(2) Reassign position IDs to preserved/recycled tokens. (3) Apply RoPE with the updated IDs. Our method updates positional encodings during token additions/removals, ensuring alignment with pruned sequences and spatial consistency across encoding schemes.

---

> **5. Performance of SparseVLM on more challenging benchmarks.**

(1) **Image tasks.** We tested on MMBench (LR) and MMMU Pro. Our performance on MMMU Pro approaches the average accuracy reported in Table 1 across all token settings. Besides, SparseVLM's progressive token sparsification even improves MMBench (LR) performance at 192 tokens by shifting focus from global to local.

| Settings | MMMU Pro (Loss) | MMBench (LR) (Loss) |
|---|---|---|
| LLaVA-7B | 30.3 (0) | 30.5 (0) |
| **192 tokens** | 30.0 (-0.3) | 33.9 (+3.4) |
| **128 tokens** | 29.4 (-0.9) | 30.5 (0) |
| **64 tokens** | 26.4 (-3.9) | 26.7 (-3.8) |

(2) **Video tasks.** We tested our method on VideoMME and LVBench. Our method prunes $90$% of vision tokens, retaining only 227, while preserving around $99$% of the original accuracy. It outperforms FastV by $6.8$% on VideoMME and $2.2$% on LVBench.

| Method | VideoMME (Loss) | LVBench (Loss) |
|---|---|---|
| VideoLLaVA | 39.9 (0) | 26.3 (0) |
| FastV | 32.9 (-7.0) | 23.8 (-2.5) |
| SparseVLM | **39.7 (-0.2)** | **26.0 (-0.3)** |

---

> **6. Claim about "first attempt to explore text-aware guidance" in vision token sparsification.**

(1) VideoLLM-MoD uses a trained linear projection to predict the importance of vision tokens. The basis is from page 5 of their paper: "*The LayerExpert determines the importance score μ for a given vision token using a linear projection.*" Reference: arXiv:2408.16730. (2) FreeVideoLLM uses the prompt feature for temporal and spatial samplings. The evidence is from page 5 of their paper: "*We calculate the relation score (i.e., cosine similarity) between the frame features and the prompt feature.*" Reference: arXiv:2410.10441. We consider it concurrent work with ours since it has not yet been accepted.
Still, we will update it in our related work. (3) ***Our approach builds solely on text tokens and filters the visual-relevant text raters to improve performance, which has a fundamental difference***. --- > **7. Failure cases for sparsification.** **"n281241.jpg" in GQA**. Q: "*What is the picture hanging above?*" The label is "*Chair*," but the VLM predicts "*Wall.*" The question asks what is above the object, not where the picture is hung. Misguided attention to background tokens causes spatial misinterpretation, resulting in incorrect answers. **"n494677.jpg" in GQA**. Q: "*Is the weather cloudy?*" The label is "*Yes,*" but the VLM predicts "*No.*" Despite sparse vision tokens accurately capturing the sky, the model still errs, indicating reasoning limitations rather than sparsification flaws. For visualizations, please refer to the anonymous link: https://imgur.com/a/Hq0qYYa. --- *We appreciate your thorough suggestions. If you find our responses satisfactory, we would be grateful for your reconsideration of the rating.*
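The three-step position ID recalibration described in point 4 (track retained tokens, reassign IDs, apply RoPE) hinges on giving the surviving tokens contiguous position IDs. A minimal sketch of that reassignment step, with the mask-based interface being our illustrative assumption rather than the authors' code:

```python
import numpy as np

def recalibrate_position_ids(kept_mask):
    """Reassign contiguous position IDs to tokens that survive pruning.

    kept_mask: boolean array over the original sequence. Surviving tokens
    get IDs 0..k-1 so RoPE is applied consistently with the pruned KV cache;
    pruned slots are marked -1.
    """
    new_ids = np.full(len(kept_mask), -1)
    new_ids[kept_mask] = np.arange(kept_mask.sum())
    return new_ids

mask = np.array([True, False, True, True, False, True])
ids = recalibrate_position_ids(mask)  # -> [0, -1, 1, 2, -1, 3]
```

Recycled tokens appended to the sequence would simply receive the next IDs after the last retained token under this scheme.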
Summary: This paper presents a novel framework for sparsifying visual tokens to enhance the efficiency of Vision-Language Models (VLMs) in a training-free manner. It proposes a strategy to select relevant text tokens as evaluators of visual tokens, followed by pruning redundant visual tokens with a recycling mechanism to minimize the loss of information. LLaVA equipped with SparseVLM can achieve a 54% reduction in FLOPs and a 37% decrease in CUDA latency while maintaining 97% of its performance. ## update after rebuttal I thank the authors for the rebuttal. I will keep my original rating, which is weak accept. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: The paper contributes to the area of efficient multi-modal large language models. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow, allowing readers to readily grasp the inherent design principles behind SparseVLM. 2. Experiments in both image and video settings prove the effectiveness of SparseVLM. 3. The performance has been validated across three distinct MLLM architectures, demonstrating its strong generalization capability. Weakness: 1. Lack of discussion of similar works like LLaVA-PruMerge [1] and FastV. As far as I can tell, these are also methods that provide text-aware guidance for visual token sparsification; the claim of a 'first attempt' needs further support. 2. Questionable method for calculating visual redundancy: The approach uses Rank(P) based on attention maps to determine visual token redundancy. However, the correlation between attention vectors and visual redundancy is unclear, with no supporting evidence or references provided to justify why this approach is valid. [1] Shang, Y., Cai, M., Xu, B., Lee, Y. J., & Yan, Y. (2024).
Llava-prumerge: Adaptive token reduction for efficient large multimodal models. arXiv preprint arXiv:2403.15388. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the **reviewer TyrH** for the effort in reviewing our paper. Our responses to the reviewer's comments are summarized as follows.

---

> **1. The further explanation of our claim about "first attempt to explore text-aware guidance" in vision token sparsification.**

Our method is the first to ***explicitly*** utilize the guidance of text for the sparsification of VLMs. The detailed discussions are as follows. (1) ***LLaVA-PruMerge only leverages the class tokens*** to predict the importance of spatial visual tokens. The basis for this derives from the statement on page 5 of their paper: "*Note that we use the class attention value from the penultimate layer for this calculation.*" Reference: https://arxiv.org/pdf/2403.15388. (2) ***FastV directly utilizes all the tokens***, including text tokens, vision tokens themselves, and system tokens, to evaluate the significance of vision tokens. The evidence is derived from the following statement on page 8 of their paper: "*We simply compute the average attention-score one token received from all other tokens as the criteria in our experiment.*" Reference: https://arxiv.org/pdf/2403.06764. (3) In contrast, ***our approach builds solely on text tokens and further filters the visual-relevant text raters to improve performance***, which is a fundamental difference from LLaVA-PruMerge and FastV. Our effectiveness is also validated in Table 1, where our method shows superiority.

---

> **2. Further explanation on how the correlation among attention vectors relates to visual redundancy.**

We identified several relevant studies that also investigated the correlation between matrix rank, model training, and the informativeness of visual tokens.
The first study [1] reveals a positive correlation between the rank of attention matrices and the performance of Transformer models, suggesting that the rank of the attention matrix influences both the representational capacity and learning effectiveness of the attention mechanism. A higher rank generally indicates a greater ability to encode diverse and informative token interactions. On the other hand, lower ranks often indicate linearly dependent attention vectors, which correspond to overlapping or redundant visual information. Moreover, some methods leverage singular value decomposition (SVD) to prune the attention output matrix, illustrating that low-rank structures often correspond to redundant information. For example, in [2], the authors use SVD to quantify each token’s information contribution. Combined with attention scores, this allows them to prioritize and retain the most informative tokens. This approach closely aligns with our use of rank, as both rely on analyzing the redundancy of visual tokens in attention matrices. For the mathematical definition of rank, the rank captures the number of linearly independent directions in the attention space—lower rank implies that multiple tokens share similar attention patterns, justifying the removal of redundant ones. So, based on these findings and the mathematical definition of rank, we use Rank(P) to quantify redundancy among visual tokens. We also validate our approach experimentally by analyzing attention maps in various vision tasks. Results confirm that attention matrices often contain redundant structure, and pruning based on Rank(P) effectively reduces computational overhead with minimal impact on performance. [1] *Min, Zeping, and Zhong Li. "On the Limitation and Redundancy of Transformers: A Rank Perspective.*" [2] *Tan, Xudong, et al. 
"TokenCarve: Information-Preserving Visual Token Compression in Multimodal Large Language Models.*" --- *We sincerely appreciate your thorough review and valuable suggestions to improve our work. If you find our responses satisfactory, we would be grateful for your reconsideration of the rating.*
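The rank argument in this rebuttal (linearly dependent attention vectors signal overlapping, redundant information) can be made concrete with an SVD-based effective rank. The tolerance and this exact criterion are our illustrative assumptions, not the paper's implementation of Rank(P):

```python
import numpy as np

def effective_rank(P, tol=1e-2):
    # Count singular values above tol * largest: near-dependent rows
    # (redundant attention patterns) contribute ~zero singular values.
    s = np.linalg.svd(P, compute_uv=False)
    return int((s > tol * s[0]).sum())

# Two identical attention rows -> redundancy lowers the effective rank.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.2, 0.8]])
```

Here `effective_rank(P)` is 2 rather than 3 because the first two rows coincide, which matches the intuition that one of those tokens can be pruned with little information loss.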
Summary: The paper introduces SparseVLM, which is a training-free technique to optimize number of visual tokens in a Vision-Language Model. The method consists of two main components, i.e. to identify relevant text tokens to rate visual tokens and method to prune and recycle visual tokens based on its significance. Furthermore, the authors explain how the method can be integrated with FlashAttention for efficient inference and present latency benchmarks. Claims And Evidence: The authors claim it is the first training-free approach that explores text guidance to prune visual tokens. After reviewing relevant literature, it might be accurate to claim that the proposed method is the first training-free approach to rely on explicit text-guidance, as opposed to implicit guidance used in methods like FastV. Claims on efficiency is backed by actual latency estimates, which adds credibility to the proposed method. Methods And Evaluation Criteria: The proposed method evaluates the popular LLaVA-1.5 model on typical VLM benchmarks like GQA, SQA, MMB, POPE, SEEDBench, etc., which is common for most works in this domain. They also, show performance of SparseVLM applied to Qwen2VL, a more recent VLM which is trained to support dynamic image resolution. Theoretical Claims: The theoretical claims seem accurate, I do not find any major issue with the proposed method and its complexity analysis. Experimental Designs Or Analyses: 1. Recent VLMs, like Qwen2-VL support dynamic image resolutions, an accurate baseline for Table 2 would be to simply resize the input image to support the same visual tokens as listed in rows 2, 3 and 4. 2. The method also focuses primarily on vision encoders that are ViT-based, for completeness, analysis of the method applied to visual tokens from hierarchical backbones like ConvNeXt and FastViTHD used in ConvLLaVA[1] and FastVLM[2] respectively would be good to have. 3. 
The method does not discuss any cases when multiple vision encoders with different pre-trainings are used in a VLM, for example Cambrian-1[3]. [1] - ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models [2] - FastVLM: Efficient Vision Encoding for Vision Language Models [3] - Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs Supplementary Material: The supplementary materials contains code for the proposed method, and extensive discussions on support for FlashAttention. Relation To Broader Scientific Literature: The introduced method is highly relevant to the broader community that optimizes VLM inference. More recent VLMs like Qwen2-VL or InternVL2 support resolutions as high as 4-7MP, which can lead to significant increase in visual tokens. Training-free techniques like SparseVLM (introduced in paper), enables cost-effective optimization of these models for inference. Essential References Not Discussed: I believe the paper discusses most of the relevant works that are published and not concurrent works. Other Strengths And Weaknesses: One potential weakness of the method could arise in multi-turn QA setup. From an initial assessment, it seems like the prefilled KV cache may not be the most accurate to answer a follow-up question that may pertain to a different region of the image. Token recycling may not be sufficient to alleviate this issue. Other Comments Or Suggestions: It would be good to discuss potential implications of the method in a multi-turn QA setup. Questions For Authors: Based on the recent trends in the VLM literature, it would be good to have the following comparisons in the paper. 1. SparseVLM applied to visual tokens from hierarchical backbones like ConvNeXt/FastViTHD used in ConvLLaVA[1]/FastVLM[2] respectively. 2. SparseVLM applied to VLMs with multiple vision encoders with different pre-trainings, for example Cambrian-1[3]. 3. 
For VLMs trained to support dynamic image resolutions, like InternVL2 and Qwen2-VL, comparisons with appropriate input sizes to match SparseVLM token lengths. [1] - ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models [2] - FastVLM: Efficient Vision Encoding for Vision Language Models [3] - Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs Code Of Conduct: Affirmed. Overall Recommendation: 3
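To make the text-guided pruning idea discussed in this review concrete, here is a minimal sketch of top-k visual token selection driven by text-to-visual attention. This is not the authors' implementation: the function name, the averaged-attention significance score, and the `keep_ratio` parameter are illustrative assumptions (SparseVLM additionally recycles a portion of the pruned tokens rather than discarding them outright).

```python
import numpy as np

def prune_visual_tokens(attn, keep_ratio=0.5):
    """Rate each visual token by the attention it receives from relevant
    text tokens and keep the top-k; the rest become pruning candidates.

    attn: (num_text_tokens, num_visual_tokens) attention weights.
    Returns sorted index arrays (kept, pruned).
    """
    significance = attn.mean(axis=0)            # per-visual-token relevance
    k = max(1, int(keep_ratio * attn.shape[1]))
    order = np.argsort(-significance)           # most significant first
    keep, prune = np.sort(order[:k]), np.sort(order[k:])
    return keep, prune

# toy example: 4 text tokens attending over 8 visual tokens
rng = np.random.default_rng(0)
attn = rng.random((4, 8))
keep, prune = prune_visual_tokens(attn, keep_ratio=0.5)
```

With `keep_ratio=0.5`, half of the visual tokens survive; varying this ratio corresponds to the "Retain N tokens" settings compared in the paper's tables.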
Rebuttal 1: Rebuttal: We sincerely thank the **reviewer PtfS** for the effort in reviewing our paper. Our responses to the reviewer's comments are summarized as follows.

---

> **1. The analysis of the efficiency of input image resizing versus vision token pruning in VLMs that support dynamic image resolutions.**

Firstly, while dynamic resolution capability in VLMs requires additional training compared to fixed-resolution ones, our method is entirely training-free. We still conducted comparative experiments on Qwen2-VL to evaluate our effectiveness. The table below shows that SparseVLM incurs minimal performance degradation compared with the resizing approach. Secondly, our method is orthogonal to dynamic resolution and can further reduce the number of tokens on resized images while maintaining nearly lossless accuracy.

| Method | MMB | POPE | TextVQA | Avg. (Loss) |
|---|---|---|---|---|
| **Qwen2-VL** | 80.5 (1323) | 86.4 (1311) | 84.3 (1326) | 83.7 (0) |
| **Retain 600 tokens** | | | | |
| Resize | 78.1 | 85.3 | 78.0 | 80.5 (-3.2) |
| SparseVLM | 79.6 | 86.5 | 80.3 | **82.1 (-1.6)** |
| **Retain 500 tokens** | | | | |
| Resize | 78.1 | 85.3 | 75.3 | 79.6 (-4.1) |
| SparseVLM | 78.8 | 86.3 | 79.0 | **81.4 (-2.3)** |
| **Retain 400 tokens** | | | | |
| Resize | 77.0 | 85.1 | 73.4 | 78.5 (-5.2) |
| SparseVLM | 79.0 | 85.8 | 77.1 | **80.7 (-3.0)** |

---

> **2. Extending SparseVLM to VLM architectures with hierarchical vision encoders.**

Since FastVLM has not yet open-sourced its weights and model architecture, we conducted experiments on ConvLLaVA-7B (256-token setting). The table below shows that our method retains near-lossless accuracy ($0.6$% drop) on the ConvNeXt vision encoder under the 192-token setting. Moreover, under extreme compression (64-token setting), the performance of PDrop drops by $19.0$%, while SparseVLM drops by only $8.8$%.

| Method | MMB | POPE | SEED | TextVQA | MMVet | MMMU | Avg. (Loss) |
|---|---|---|---|---|---|---|---|
| **ConvLLaVA-7B** | 68.8 | 87.6 | 69.3 | 62.5 | 44.4 | 35.1 | 61.3 (0) |
| **Retain 192 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 67.3 | 84.0 | 62.5 | 60.0 | 42.9 | 33.7 | 58.4 (-2.9) |
| SparseVLM | 68.0 | 87.0 | 68.7 | 62.0 | 44.5 | 34.5 | **60.7 (-0.6)** |
| **Retain 128 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 65.6 | 83.9 | 61.0 | 59.2 | 42.5 | 30.2 | 57.1 (-4.2) |
| SparseVLM | 67.0 | 86.3 | 66.2 | 60.8 | 43.2 | 32.7 | **59.3 (-2.0)** |
| **Retain 64 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 35.8 | 57.0 | 46.0 | 49.4 | 42.0 | 24.8 | 42.3 (-19.0) |
| SparseVLM | 63.2 | 76.1 | 59.7 | 54.4 | 35.0 | 26.6 | **52.5 (-8.8)** |

---

> **3. Extending SparseVLM to VLM architectures with multiple vision encoders.**

Our existing experiments (Figure 4) cover MiniGemini, a VLM architecture employing multiple vision encoders (CLIP and ConvNeXt). Following the reviewer's advice, we further conducted experiments on Cambrian-1-13B. As shown in the table below, our method is well-suited to multiple vision encoders. Under the 192-token setting, our method achieves a $2.3$% smaller performance drop than PDrop; meanwhile, under the 64-token setting, our approach keeps the accuracy loss below $8.0$%.

| Method | GQA | MMB | SQA | SEED | TextVQA | MMMU | Avg. (Loss) |
|---|---|---|---|---|---|---|---|
| **Cambrian-1-13B** | 64.0 | 75.5 | 79.3 | 74.4 | 72.8 | 40.0 | 67.7 (0) |
| **Retain 192 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 59.5 | 72.5 | 75.3 | 68.4 | 70.0 | 38.0 | 63.9 (-3.8) |
| SparseVLM | 61.6 | 74.2 | 79.0 | 72.0 | 71.4 | 38.8 | **66.2 (-1.5)** |
| **Retain 128 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 56.8 | 70.9 | 74.0 | 65.5 | 68.8 | 35.4 | 61.9 (-5.8) |
| SparseVLM | 60.4 | 73.0 | 78.2 | 70.6 | 70.2 | 38.0 | **65.1 (-2.6)** |
| **Retain 64 Tokens** | | | | | | | |
| PDrop [*CVPR 2025*] | 46.2 | 61.6 | 71.7 | 56.4 | 58.0 | 26.1 | 53.3 (-14.4) |
| SparseVLM | 54.4 | 68.7 | 78.0 | 64.0 | 62.2 | 31.2 | **59.8 (-7.9)** |

In summary, the experiments on MiniGemini in the paper, along with the Cambrian-1 and ConvLLaVA experiments above, demonstrate that ***our method matches well with various visual encoder architectures***. The reason is that ***our method is applied to the decoder of the VLM, where the visual features are pre-aligned by the projection layers, ensuring the robustness of selecting appropriate text raters***.

---

> **4. The potential implications of SparseVLM in multi-turn conversations.**

Currently, most vision token pruning methods (*e.g.*, FastV [*ECCV 2024*] and PDrop [*CVPR 2025*]) fail to effectively maintain compatibility with multi-turn conversations. Here, we propose a potentially viable approach that integrates DyCoke-like [*CVPR 2025*] dynamic pruning: ***(1) KV Cache Preservation***: the KV cache of pruned tokens is stored via dynamic pruning (DP-KV Cache). ***(2) On-demand Updates***: new prompts trigger DP-KV Cache retrieval for retained tokens and updates for pruned ones. This ensures multi-turn consistency by maintaining historical context while adapting to evolving queries.

---

*We sincerely appreciate your valuable review and suggestions.
If you find our responses satisfactory, we would be grateful for your reconsideration of the rating.* --- Rebuttal Comment 1.1: Comment: I thank the authors for the extended comparisons and the discussion of support for multi-turn conversations. I am still on the fence about realizable efficiency gains, especially in multi-turn scenarios. In view of the latest results, I will be upgrading my rating to weak accept. --- Reply to Comment 1.1.1: Comment: Dear **Reviewer PtfS**, Thank you for your thoughtful feedback and for recognizing our efforts in the revised comparisons and discussion. We think supporting multi-turn scenarios is an important future direction for existing vision token pruning methods; it is currently an open problem. Still, we think the dynamic KV Cache retrieval method discussed in #4 of our rebuttal is a promising solution and will be our next step. We appreciate your helpful discussions and valuable insights. We are grateful for your updated rating and glad the additional results helped address some of your concerns. We are happy to answer any further questions you may have.
The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence
Accept (poster)
Summary: This paper explores the refusal behavior of LLMs and introduces a gradient-based algorithm called RDO to identify refusal directions $r$ in the activation space. These directions can shift the model’s behavior towards either refusal or acceptance. The authors design three distinct loss terms to help identify directions that can trigger refusal behavior while ensuring that other unrelated behaviors remain unaffected when adjusting refusal directions. Additionally, they extend their analysis to an N-dimensional polyhedral cone of refusal directions rather than a single vector $r$. The paper further examines whether the orthogonal refusal directions identified by RDO are independent and can be manipulated by modifying the input to the model. Experimental results reveal several key insights into the refusal behavior of LLMs, enhancing the understanding of this phenomenon. Claims And Evidence: The paper presents three main claims, each supported by a combination of theoretical arguments and empirical evidence. Overall, the evidence is generally convincing. **1. Gradient-based representation engineering can identify refusal directions.** - The authors introduce the Refusal Direction Optimization (RDO) method, which uses a gradient-based approach to identify vectors in the activation space that control refusal behavior. - They validate this claim by demonstrating that applying these vectors can reliably increase the probability of refusal (scaling property) and that projecting out these directions allows the model to respond to harmful prompts while retaining behavior on harmless prompts (projection property). - The use of cross-entropy loss for training these directions and the retain loss based on KL divergence to prevent interference with other model behaviors provides a robust framework for evaluating this claim. **2. 
Refusal behavior is governed by multi-dimensional cones rather than a single direction.** - The authors conducted experiments measuring ASR across cones of increasing dimensionality. The observation that higher-dimensional cones consistently mediate refusal behavior across different models suggests that refusal is indeed encoded in a multi-dimensional structure. - The use of orthonormal basis vectors ensures that the identified directions are non-overlapping, which further strengthens the evidence for multi-dimensional control of refusal. **3. Orthogonal directions in the cone are not just geometrically independent but also causally independent** - Experiments that use cosine similarity to test the interaction between orthogonal directions show that ablating one direction does not influence the other, demonstrating the causal independence of different directions. Methods And Evaluation Criteria: The methods proposed in the paper, particularly RDO and its extension to multi-dimensional cones, are well-designed for probing and controlling refusal behavior in LLMs. The use of a gradient-based approach to identify refusal directions and the reliance on orthonormal bases to define multi-dimensional cones are reasonable choices given the complexity of activation spaces in LLMs. The evaluation strategy, which primarily hinges on the Attack Success Rate (ASR) and cosine similarity measures, provides an empirical basis for the claims made. These choices align well with the paper’s objectives of understanding and isolating the mechanisms of refusal behavior. However, there are several limitations that could impact the validity and generalizability of the findings. Firstly, the authors rely on ASR as the main metric to evaluate the quality of the cones, which might be limiting. Could there be alternative or complementary metrics that better capture, e.g., the robustness of the cones?
Additionally, the use of cosine similarity to infer causal independence between orthogonal directions might be insufficient. Cosine similarity captures only linear relationships, potentially overlooking complex, non-linear interactions between directions in the activation space. Furthermore, the experiments on orthogonality focus on a small subset of harmful prompts and a single LLM architecture (Gemma 2 2B). This raises concerns about the robustness of the results across different model architectures and prompt types. It would be beneficial to investigate larger models and datasets. Addressing these limitations through a broader set of metrics—such as mutual information or causal inference techniques—and expanding the evaluation to include a wider range of prompts and models could significantly strengthen the paper’s contributions. Theoretical Claims: The proof for the scaling property is straightforward and relies on the assumption that scaling a direction within the activation space should proportionally increase refusal behavior. This proof appears correct, as it logically follows from the gradient-based optimization objective designed to maximize refusal probability. The mathematical argument, which demonstrates that a higher scaling factor leads to an increased likelihood of refusal, is both clear and valid within the context of the assumptions made. The projection property proof is also reasonable, showing that projecting out the identified directions should enable the model to respond to harmful prompts while maintaining behavior on harmless ones. The proof leverages the orthogonality of the directions to argue that this ablation selectively removes the influence of refusal-related factors. While the proof is correct, given the orthogonality assumption, one potential issue is that it does not fully address the possibility of non-linear interactions between directions that might influence the model’s behavior indirectly. 
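The scaling and projection properties discussed above reduce to simple vector operations on activations. The following is a hedged numpy sketch, not the paper's implementation: `steer` and `ablate` are hypothetical names, and the random vectors stand in for a real activation and a learned refusal direction.

```python
import numpy as np

def steer(x, r, alpha):
    """Scaling property: adding the unit refusal direction with strength
    alpha should monotonically increase refusal behavior."""
    r_hat = r / np.linalg.norm(r)
    return x + alpha * r_hat

def ablate(x, r):
    """Projection property: removing the component of x along r should
    suppress refusal while leaving orthogonal behavior untouched."""
    r_hat = r / np.linalg.norm(r)
    return x - np.dot(x, r_hat) * r_hat

rng = np.random.default_rng(0)
x, r = rng.normal(size=64), rng.normal(size=64)
r_hat = r / np.linalg.norm(r)

# the refusal component grows linearly with alpha ...
comps = [np.dot(steer(x, r, a), r_hat) for a in (0.0, 1.0, 2.0)]
assert comps[0] < comps[1] < comps[2]
# ... and ablation removes it entirely (and is idempotent)
x_abl = ablate(x, r)
assert abs(np.dot(x_abl, r_hat)) < 1e-8
assert np.allclose(ablate(x_abl, r), x_abl)
```

The idempotence check also illustrates the orthogonality assumption the proofs rely on: once the refusal component is projected out, further ablation along the same direction has no effect, but this says nothing about non-linear interactions introduced by subsequent layers.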
Experimental Designs Or Analyses: I examined the methodology behind the refusal direction optimization experiments, focusing primarily on the Attack Success Rate (ASR) metric and the directional ablation approach used to measure changes in refusal behavior. This setup is sound because it isolates the influence of specific directions in the activation space. However, relying on a single metric can overlook nuances in how robust or consistent those directions are across diverse prompts. Additionally, while the authors’ sampling strategy provides a reasonable set of prompts, the limited number of harmful examples may restrict the generalizability of the findings. The layer-wise analysis of orthogonal directions is a valuable step toward understanding deeper interactions in the model, but it is relatively narrow in scope, concentrating on one model architecture (Gemma 2 2B) and a small set of prompts. Broadening the experiments to include multiple models and larger, more varied prompt sets could improve the robustness of the conclusions. Supplementary Material: No. Relation To Broader Scientific Literature: The paper’s core contribution—using gradient-based representation engineering to identify and manipulate “refusal directions” in LLMs—builds upon several lines of existing research in interpretability and controllability. Recent works have investigated how specific concepts, such as sentiment or toxicity, can be localized in a model’s latent space and how targeted interventions (e.g., projecting out specific directions) can alter generation behaviors. The authors’ approach extends these ideas by focusing on refusal behaviors, a safety-critical function in many conversational and policy-driven systems. By introducing a gradient-based approach rather than relying solely on paired prompts or heuristic methods, the paper also connects to the broader endeavor of fine-grained model steering. 
It strengthens a growing body of evidence that language models’ internal representations can be both discovered and selectively edited to meet specific policy or safety goals. Essential References Not Discussed: No. Other Strengths And Weaknesses: No. Please see the previous sections. Other Comments Or Suggestions: No. Questions For Authors: 1. Could the authors provide more insights or motivations regarding the importance of identifying refusal directions for LLMs? What specific objectives can be achieved using these refusal directions? A deeper understanding of these aspects could help clarify the scope of the paper. 2. In Table 1, higher values indicate better performance. Why does RDO perform significantly worse than the baseline? 3. In Figure 2, it appears that the performance of RDO is not substantially better than that of DIM. What might be the reasons for this? Ethical Review Concerns: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort spent reviewing our manuscript, which helps us to improve it. ### Cosine similarity captures linear relationships; RepInd also catches non-linear interactions. If the reviewer means that the orthogonality of the cone basis vectors from section 5 is not enough, we fully agree! This is exactly why we investigate representational independence in section 6. In case the reviewer questions our representational independence in mitigating non-linear effects, this might be due to a missing equation. We noticed that we forgot to specify the definition of the ablated $\hat{x}$: $$ \hat{x}\_{\text{abl}}^{(l)} = [\hat{x}\_{\text{abl}}^{(l-1)} + f_{[l]}(\hat{x}\_{\text{abl}}^{(l-1)})]\_\text{abl} $$ where $f_{[l]}$ is the model's transformation at layer $l$. This means that for representational independence we enforce the cosine similarity to be equal to the no-intervention baseline for every layer while propagating the ablated representation and ablating again. Intuitively, any non-linear effects would make it impossible to have equality of the cosine similarities in subsequent layers. We will provide a simple proof by induction for the camera-ready version of the manuscript if the reviewer thinks this would bring clarity. To argue for causal independence, we would need to show that the cosine similarity to a concept is what drives the model prediction. Since the readout of standard transformers is not linear, we unfortunately cannot prove this. However, there exists some empirical evidence in the mechanistic interpretability field (Elhage et al., Geva et al., Yousefi et al.) that directional information (captured by cosine similarity) greatly drives the model's readout. We will provide an extensive discussion of this topic in section 6 and thank the reviewer for the question, as discussing it will improve our manuscript!
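For concreteness, the per-layer recursion in that definition can be illustrated numerically. This is a toy sketch under stated assumptions: the layer maps are random `tanh` transformations standing in for transformer blocks, and `ablate` is assumed to mean projecting out the unit refusal direction; the dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 32, 4
r = rng.normal(size=d)
r_hat = r / np.linalg.norm(r)
Ws = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(L)]  # toy layer maps

def ablate(x):
    # project the refusal direction out of the activation
    return x - np.dot(x, r_hat) * r_hat

# propagate the ablated residual stream, re-ablating after each layer,
# mirroring x_abl^(l) = [x_abl^(l-1) + f_l(x_abl^(l-1))]_abl
x_abl = ablate(rng.normal(size=d))
for W in Ws:
    x_abl = ablate(x_abl + np.tanh(W @ x_abl))
    # at every layer the ablated stream has no component along r
    assert abs(np.dot(x_abl, r_hat)) < 1e-8
```

The sketch shows why re-ablating at every layer matters: each non-linear layer map can reintroduce a component along the refusal direction, which the per-layer ablation then removes again.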
### Alternative metrics apart from ASR We provide alternative metrics to evaluate our refusal cones beyond the ASR. While ASR offers the most comprehensive assessment of the cone directions' effectiveness for model manipulation, it is computationally expensive. A more efficient alternative is the refusal metric proposed by Arditi et al. (2024). Using this metric, we show in [Figure 14](https://figshare.com/s/4ab2ec422f6bd0262b30) that the cone directions fulfill the monotonic scaling property we required from our refusal directions. Besides this, we now include over-refusal as an additional way to examine the performance of RDO directions, see response Over-Refusal to reviewer 4YKU. If the reviewer has another idea on what we should evaluate or could give us any pointers, we would be happy to include more evidence in the next reply. ### More data and models While we used the same test dataset as related work (Arditi et al., 2024), we agree with the reviewer that we could add more to convincingly validate our claims. We now include the datasets of StrongREJECT and SORRY-Bench, where our results behave similarly to JailbreakBench, strengthening our findings, see [Figures 21-23](https://figshare.com/s/4ab2ec422f6bd0262b30). Further, we added more models and data for the orthogonality / representational independence experiments, see [Figure 26, 28](https://figshare.com/s/4ab2ec422f6bd0262b30). ### Motivation for Refusal Directions (Q1) Due to writing constraints, we kindly ask the reviewer to read our **Applications** response to reviewer 5cSG. Apart from safety-relevant topics, this work further shows that RDO & RCO are suitable techniques to identify directions for precise representation steering which is currently a promising technique for LLM understanding. ### Questions 2 & 3 In Table 1, the term 'Baseline' was initially unclear—we now clarify it refers to 'No intervention'. 
While RDO leads to significantly fewer side effects than DIM, it still introduces some compared to no intervention. Since LLMs are highly optimized, any intervention can reduce coherence. Additionally, benchmarks include sensitive questions, where safety interventions can yield differing outputs. The relative performance has several explanations: (1) DIM was used to generate labels, creating a performance ceiling; (2) The StrongREJECT judge assesses harmfulness, which also reflects model capability. Outperforming DIM was not our goal—RDO serves as a foundation for RCO and the study of independent refusal directions. Figure 2 demonstrates that RDO achieves comparable or slightly better mediation performance with greater precision (fewer side effects). With further tuning, RDO could surpass DIM, but our focus is on proposing an alternative approach and analyzing its properties. ### Conclusion We again want to thank the reviewer for their insightful questions and remarks. The changes we applied to the paper improved our manuscript. We hope the reviewer agrees with us and would be happy to answer any further questions or concerns.
Summary: This paper investigates the refusal mechanism of LLMs with a gradient-based representation engineering method. They extract refusal directions with gradient optimization by transferring the two refusal direction requirements (addition and ablation) into two loss functions. Further, they give several novel findings: 1) there are multi-dimensional concept cones mediating refusal, and 2) we can find multiple refusal directions that are independent from each other. The research provides a foundation for future studies to understand and improve the robustness and reliability of LLMs in handling adversarially crafted inputs. ## update after rebuttal This paper presents a thorough investigation and offers interesting findings on the refusal mechanism of LLMs, which is a highly relevant topic. I continue to support acceptance. Claims And Evidence: The claims in the paper are well-supported by experiments. Methods And Evaluation Criteria: The evaluation benchmark used is JailbreakBench, which, while generally suitable, has shown biases in specific areas [2]. Including additional datasets such as AdvBench [1] or SorryBench [2] could provide a more comprehensive evaluation. Have you considered using other datasets? [1] Zou, A., Wang, Z., Carlini, N., Nasr, M., Kolter, J. Z., & Fredrikson, M. (2023). Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043. [2] Xie, T., Qi, X., Zeng, Y., Huang, Y., Sehwag, U. M., Huang, K., ... & Mittal, P. (2024). Sorry-bench: Systematically evaluating large language model safety refusal behaviors. arXiv preprint arXiv:2406.14598. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental design effectively addresses the research questions, and the analysis presents very novel findings. Supplementary Material: The supplementary material complements the paper by providing detailed experimental settings and additional results, lending more concreteness to the research. 
Relation To Broader Scientific Literature: The paper presents novel insights into the refusal mechanisms of LLMs, enhancing our understanding of AI safety and encouraging further research into the robustness and reliability of LLMs. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: In the direction addition and ablation experiments, do you focus on a specific layer or do you apply modifications across all layers? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time spent on the review, the kind words, and the positive assessment. ### Additional Data We agree that we should increase the number of datasets we use for our evaluations. We thank the reviewer for the pointer towards SORRY-Bench and decided to evaluate our method on its base set of 440 harmful instructions. AdvBench is in our training data, since it is part of SALAD-Bench, and we therefore cannot use it for evaluation. As an alternative, we further evaluate on StrongREJECT. We show the results in [Figures 21-23](https://figshare.com/s/4ab2ec422f6bd0262b30) for our three datasets. We observe that the results behave very similarly to JailbreakBench, which shows the generalizability of our findings and improves our manuscript. For the camera-ready version we changed all figures to include the evaluation of all datasets together, instead of having separate plots for each dataset. If the reviewer suggests a different way to display the results, though, we are happy to adjust. ### Intervention positions We apply the ablation operation across all layers and the activation addition on a single layer. This corresponds to the previous work by Arditi et al. in *Refusal is Mediated by a Single Direction*. We further always train the RDO directions and cones in the same layer as the DIM direction for a fair comparison. Future work could look at studying refusal directions across all layers. However, if the reviewer wants to see results for a different approach, we are happy to investigate. ### Conclusion We again thank the reviewer for their time and effort. The additional data improves the significance of our findings and strengthens this work. We further added more experiments, other metrics, and evidence for causal independence during this rebuttal phase, and detailed the changes in the responses to the other respective reviewers.
We believe that the rebuttal further improved our work and we hope that the reviewer agrees with us. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The additional experiments address all my concerns. One additional question: Is it possible that independent refusal directions could represent different types of harmfulness, e.g., violence, privacy violation, etc... --- Reply to Comment 1.1.1: Comment: Great question! We were thinking along the same lines: since, for example, privacy violations and harmful content typically trigger the standard "As a large language model..." refusal during RLHF training, the model may form different internal concepts that lead to those refusals. However, answering this question empirically might be difficult since, e.g., interpreting these directions with SAE features is likely unsuccessful because it is reported in related work that they don’t perform particularly well in slightly out-of-distribution settings. A follow-up work, however, that thoroughly interprets these directions would be very interesting and we add this as an outlook at the end of our manuscript. We again want to thank the reviewer for their time and are grateful for the feedback and pointers to provide more evidence for our method!
Summary: This work challenges the notion that refusal behavior in language models is mediated by a single direction. Previous research suggested that by ablating the activation strength along a specific direction, LMs could be made to refuse more or less often. The authors introduce a novel approach called Refusal Direction Optimization (RDO), which identifies optimal refusal directions while minimizing unintended effects on unrelated inputs. Their experiments demonstrate that RDO outperforms traditional methods that rely on contrasting prompts. Importantly, the authors show that multiple independent refusal directions can be found within the same model, suggesting that refusal mechanisms are more complex than previously understood. Claims And Evidence: The primary claims in this paper appear supported by the provided evidence, with the exception of the overall performance section; I discuss this concern later in my review. Methods And Evaluation Criteria: They do make sense. The ability to steer model activations during inference is a plausibly effective approach for reducing jailbreak attack success rates (ASR). The theoretical foundation behind this approach is sound, as manipulating the model's internal representations could intercept harmful outputs before they manifest. The authors' techniques build logically on previous work in this domain while addressing some of its limitations. Theoretical Claims: I did not verify the proofs. My interpretation is that the empirical evidence is strong. Experimental Designs Or Analyses: I find the evidence supporting the claim that multiple refusal vectors can be identified to be convincing. However, the experimental design omits important measurements, making it difficult to assess how effectively their proposed techniques improve upon the baseline.
**Overall Performance**: Studying the effects on overall performance is an important consideration when introducing a new steering method—an aspect often neglected in previous works. However, measuring solely on TruthfulQA (TQA) provides a relatively narrow assessment of overall performance. My understanding is that the authors selected TQA primarily because this is a benchmark where the DIM baseline struggled. Beyond TQA being an insufficient evaluation of overall performance, selecting this benchmark solely based on the baseline's poor performance could lead skeptical readers to suspect cherry picking. I recommend that the authors include additional commonly studied benchmarks that measure broader capabilities, such as MMLU and GSM8K (though these specific benchmarks aren't strictly necessary). Another important metric, especially if the paper aims to demonstrate the practicality of the technique, is over-refusal. Benchmarks like XStest and WildGuard are commonly used for this purpose. It would be valuable to investigate whether different refusal directions exhibit varying degrees of over-refusal. **Baseline Improvements**: I was unable to find the baseline ASR without any steering applied. How significantly does steering improve ASR compared to not steering at all? This information seems particularly relevant given that my interpretation of Figure 2 suggests ASR remains above 50% even with steering applied. Supplementary Material: I skimmed the codebase and did not observe any obvious issues. Relation To Broader Scientific Literature: This work challenges the notion that concepts of relevance to AI safety, such as refusal, are mediated by a single direction. The authors make their case via introducing a new direction identification technique. Essential References Not Discussed: I am unaware of any obvious omissions that aren't concurrent work. 
Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in examining our work. Below, we address the reviewer's points. ### Overall performance We agree with the reviewer that our previous side-effect evaluation should be extended. We now provide results for the benchmarks Arc Challenge, GSM8K, MMLU, and TruthfulQA (MC2), see [Table 5](https://figshare.com/s/4ab2ec422f6bd0262b30). While the improvement is not as pronounced as for TruthfulQA, we observe that RDO performs better in the majority of cases and provides on average a 24% reduction in error. We added these to the camera-ready appendix, linked to it from the main body, and adapted the framing in this paragraph to account for the overall improvement. ### Baseline Improvements We now provide the no-intervention baseline for the plot in Figure 2, see [Figures 21-23](https://figshare.com/s/4ab2ec422f6bd0262b30) for the respective datasets. It is important to note that in this experiment we show the attack success rate while trying to jailbreak the model, thus higher is better. We adjusted the text in the paper and the caption to make this more explicit. This, together with our ablation study [Figures 18 & 19](https://figshare.com/s/4ab2ec422f6bd0262b30), shows that we strictly improve the tradeoff between utility and ASR (higher is better) for our RDO direction compared to DIM, as our direction Pareto-dominates DIM for many configurations. Due to space constraints, we kindly refer the reviewer to the 'Ablation studies ..' response to reviewer 5cSG for more details. ### Over-Refusal We acknowledge the important trade-off between safety and utility highlighted by the reviewer, which is generally present for every safety method *compared* to no intervention. Our work primarily focuses on training refusal directions that are effective for model manipulation, specifically jailbreaking scenarios.
Regarding over-refusal concerns, adding refusal directions generally increases the probability of refusal, including over-refusal, as shown in [Figure 25](https://figshare.com/s/4ab2ec422f6bd0262b30). Here, we add the refusal vector onto the representation with varying strength and plot the SORRY-Bench refusal score (higher is better) over the XSTest safe-prompt refusal score (lower is better). We observe that RDO provides a better tradeoff, as it refuses more harmful queries at the same rate of benign query refusal. Instead of increasing refusal, we can also *reduce* over-refusal by ablating the refusal direction. In [Figure 27](https://figshare.com/s/4ab2ec422f6bd0262b30) we show results on safe instructions from the XSTest dataset. Here we observe that both approaches significantly reduce the already low over-refusal. While our current methodology focuses on identifying and understanding refusal mechanisms, rather than defending against adversarial attacks (which would have more direct implications for over-refusal), it could be extended to selectively reduce over-refusal without compromising refusal on harmful instructions by adjusting the loss functions. This could be achieved by using falsely refused instructions instead of harmful instructions in the ablation loss, and adding a retain loss for preserving refusal on harmful instructions. Though this specific application wasn't the focus of our current work, our framework can flexibly be repurposed for such interventions in future research. ### Conclusion We again thank the reviewer for the time, effort, and valuable feedback on our work. We believe that we have addressed all of the reviewer's concerns and hope the reviewer agrees with us. Further, we are happy to address any remaining questions! --- Rebuttal Comment 1.1: Comment: I have read the authors' response and find the new experiments broadly compelling.
I would also prefer to see overall performance metrics for activation subtraction, but this is a lower-priority result. That said, the additional experiments have largely mitigated my concerns regarding experimental exhaustiveness. I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: We want to again thank the reviewer for their time and feedback. The changes regarding over-refusal and overall performance greatly improved the significance of our results. We will further provide the overall performance metrics for activation subtraction in the camera-ready version of our manuscript.
Summary: This paper investigates the mechanisms behind refusal behaviors in large language models (LLMs) and discovers that refusals are controlled by multiple dimensions in the model's activation space. The authors introduce RDO to enhance the refusal capability of LLMs by training a learnable refusal subtraction feature. The experiments somewhat illustrate the effectiveness of the proposed methods. Claims And Evidence: While the method improves refusal effectiveness, it also impacts performance on safe instructions. The results in Figure 2 (ASR) and Table 1 (utility) suggest a trade-off between safety and general utility. To better illustrate the advantages and limitations of different methods, it would be beneficial to include an overall performance metric that integrates both ASR and utility. Methods And Evaluation Criteria: The design of gradient-based optimization for discovering refusal directions is overall reasonable and effective. Theoretical Claims: The claim regarding the independence of refusal directions lacks formal theoretical validation. Further theoretical validation could strengthen the claims. But it is understandable that proving representational independence is a challenging problem due to the difficulty of modeling neural networks. Experimental Designs Or Analyses: Figure 7 shows that multiple independent directions exist for safe refusal. However, a key question remains unexplored: what happens if these independent directions are combined as subtraction directions? Would this improve or hinder performance? Investigating this could provide further insights into the structure and impact of these refusal directions. Supplementary Material: I have reviewed the supplementary material, including Figure 13. However, one confusion remains: the results mainly present ASR, but it is unclear how the retain loss weight affects the utility on safe instructions. Additionally, when the retain loss weight is set to 0 vs.
non-zero (especially between 0 and 1), there is little change in ASR for directional ablation and only minor changes for activation subtraction. This raises the question of whether the retain loss effectively mitigates excessive refusals for safe instructions. Moreover, as the retain loss weight increases, the model should generally be less likely to refuse. However, in harmful instruction settings, a higher retain loss weight reduces ASR instead of increasing it. This result appears counterintuitive and requires further clarification. Relation To Broader Scientific Literature: The paper contributes to a broader understanding of LLM refusal mechanisms. It provides evidence that refusal is not governed by a single direction, as suggested in prior work, but rather by a combination of multiple directions. This insight expands the existing literature on LLM safety mechanisms and adversarial attacks. Essential References Not Discussed: 1. I think over-refusal is an important point for this task, due to the trade-off between safety and utility. Namely, when enhancing a model's refusal capability against unsafe samples, over-refusal inevitably emerges on safe samples. There is a body of literature [1, 2, 3, 4, 5] about over-refusal; discussing it and providing some insights for defending against over-refusal in this work is necessary. 2. There is also some work [6] about sparse autoencoders in safety, but it could be regarded as concurrent work, so it is not necessary. **Refs:** [1] Paul Röttger, et al, “XSTEST: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models”, NAACL, 2024. [2] Chenyu Shi, et al, “Navigating the OverKill in Large Language Models”, ACL, 2024. [3] Justin Cui, et al, “Or-bench: An over-refusal benchmark for large language models”, arXiv, 2024. [4] Hao Li, et al, “InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection Guardrail Models”, arXiv, 2024.
[5] Bang An, et al, “Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models”, arXiv, 2024. [6] Kyle O'Brien et al, “Steering Language Model Refusal with Sparse Autoencoders”, arXiv, 2024. Other Strengths And Weaknesses: Strengths: 1. The paper provides an interesting insight into the refusal mechanism in large language models. Weaknesses: 1. Figure 2 lacks baseline results, making it difficult to compare performance improvements. 2. The paper does not include an ablation study for the three loss components, which would provide insights into their individual contributions. 3. Although the paper suggests that refusal mechanisms are governed by multiple directions, it does not offer deeper applications or guidance on how this knowledge can be leveraged in LLM safety or adversarial robustness. Other Comments Or Suggestions: 1. Minor typos exist, such as in the caption of Figure 6, where there should be a space before “on the right.” Based on the previous issues and concerns, I would like to give a weak reject first, but if the authors can clarify these concerns, I am willing to raise my rating. Questions For Authors: Please refer to prior comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort spent reviewing our manuscript! ### Ablation studies & Safety and Utility Tradeoff We added a detailed ablation study investigating the three loss components. We show the results in [Figures 18 & 19](https://figshare.com/s/4ab2ec422f6bd0262b30) for Llama-3-8B-Instruct. In Fig. 18, we measure how the ASR evolves as we shift the balance between addition and ablation loss weights, requiring $\lambda_{\text{abl}} + \lambda_{\text{add}} = 1$, and observe that performance is quite robust for balanced values (0.2 to 0.8), with both losses being necessary for good generalized performance. In Figure 19, we ablate the retain loss weight. We use our default ablation and addition weights with different retain weights to train directions, and plot the ASRs against the difference in average benchmark performance on MMLU, ARC, GSM8K, and TruthfulQA (here we use the smaller Qwen2.5-3B-Instruct for computational reasons). The ideal direction would have the highest possible ASR for this model while having no effect on benchmarks, or a positive effect only insofar as it prevents the model from refusing benchmark questions. We observe that many combinations of loss weights Pareto-dominate the DIM direction for this model, which shows that our approach robustly outperforms the current SOTA. Further, this figure is, in our view, the best way to investigate the tradeoff between effectiveness for model manipulation and utility; a combined score will always omit important details. We propose to highlight this figure in the paper to investigate this tradeoff if the reviewer agrees. Here we can see that our approach strictly improves this tradeoff compared to DIM, as we Pareto-dominate it on both metrics. However, if the reviewer would still like to see a combined score, we will provide one in a new response. ### Over-Refusal We added over-refusal experiments to our manuscript.
Due to space constraints, we kindly ask the reviewer to see the rebuttal answer to reviewer 4YKU. We further added a paragraph to the related work section giving the reader an overview of the topic based on the provided references. ### (Formal) theoretical validation We agree that a formal validation of the independence would be valuable. We employ an argument that proves true causal independence with respect to cosine similarity up to the final read-out of the model. We can then use the empirical arguments of related work that directionality toward a concept primarily drives the model prediction. For a more in-depth answer, please refer to the answer to reviewer NvcZ. ### Applications We agree with the reviewer that outlining practical uses of refusal directions is valuable. We added a paragraph to the manuscript and provide a brief overview here: - Most importantly, we aim to emphasize that top-down interpretability not only advances our understanding of AI safety but also deepens our general comprehension of how LLMs represent concepts. Numerous studies have explored the latter, suggesting that any method that enhances this approach could be highly impactful. - Offensive Applications: Refusal directions can inform (white-box) jailbreak methods; for example, Huang et al. (2024) in *Stronger Universal and Transferable Attacks by Suppressing Refusals* show that aiming to suppress DIM refusal directions with input attacks increases their universality and transferability. - Defensive Applications: Refusal directions can be used to improve safety via adversarial training (Yu et al., 2024) or inference-time monitoring. ### Other **Added No-Intervention to Figure 2:** We show the result in [Figures 21-23](https://figshare.com/s/4ab2ec422f6bd0262b30), where we observe that without intervention, the model rarely responds to harmful instructions.
**Combination of Refusal Directions:** We thank the reviewer for the great idea and show the result in [Figure 24](https://figshare.com/s/4ab2ec422f6bd0262b30) for the RepInd directions shown in Figure 7. We observe that the composition improves the ASR and that using three RepInd directions outperforms DIM. We added this to the main body and believe that this experiment significantly strengthens our manuscript. **Related Work:** We added a paragraph on over-refusal starting from the provided source papers, and further list [6] as related work that uses SAEs; we thank the reviewer for providing these references. **Typos:** We iterated over the manuscript to remove typos. ### Conclusion We again want to thank the reviewer for the insightful comments and questions. We believe that addressing the reviewer's concerns greatly improves our manuscript and hope the reviewer agrees with us. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed rebuttals from the authors. After carefully reading the authors' response, most of my concerns have been addressed, especially on over-refusal. Although I still have some confusion about the theoretical validation of refusal direction independence, I think this paper provides valuable insights to the community. I am happy to raise my rating. --- Reply to Comment 1.1.1: Comment: We want to again express our gratitude towards the reviewer for this helpful review process. We will make this very clear in the camera-ready version of our manuscript and thank the reviewer for the questions and feedback that helped us to do that.
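As background for the two interventions discussed in this thread (adding a refusal vector with varying strength, and ablating a direction from the representation), here is a minimal numpy sketch of the standard projection-based operations; the function names and toy shapes are ours, not the authors' implementation:

```python
import numpy as np

def ablate_direction(h, r):
    """Project out the component of activations h (n, d) along direction r (d,)."""
    r_hat = r / np.linalg.norm(r)
    return h - np.outer(h @ r_hat, r_hat)

def add_direction(h, r, alpha=1.0):
    """Steer activations by adding the unit-normalized direction with strength alpha."""
    r_hat = r / np.linalg.norm(r)
    return h + alpha * r_hat

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))      # toy residual-stream activations
r = rng.normal(size=8)           # toy refusal direction
h_abl = ablate_direction(h, r)   # no component left along r
```

Varying `alpha` corresponds to the varying-strength steering described for Figure 25, while ablation corresponds to the directional-ablation setting.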
Confounder-Free Continual Learning via Recursive Feature Normalization
Accept (poster)
Summary: This paper introduces a Recursive metadata normalization (R-MDN) layer to remove confounders, i.e., extraneous variables that affect both the input and the target. The paper extends confounder removal to the continual learning setting. R-MDN performs statistical regression via the recursive least squares algorithm to maintain and continually update an internal model state with respect to changing distributions of data and confounding variables. Claims And Evidence: The claims in this paper are well supported in both methodology and experiments. Methods And Evaluation Criteria: In the main text, the paper only reports results on medical datasets. It seems that the method should cover various datasets (at least judging from the title and abstract). The reviewer is concerned that the proposed method cannot handle large-scale problems, e.g., ImageNet-1000. Theoretical Claims: The proofs for the theoretical claims seem valid. Experimental Designs Or Analyses: The experimental settings are overall quite good. Supplementary Material: I have read the supplementary materials. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: This paper discusses a recursive learning method in continual learning. There is a new branch of continual learning, named analytic continual learning (starting from ACIL [1]), which also adopts recursive methods, such as techniques related to recursive least squares. The papers in this branch share several similarities in methodology (from derivation to results). [1] ACIL: analytic class-incremental learning with absolute memorization and privacy protection Other Strengths And Weaknesses: See the comments marked above. Other Comments Or Suggestions: N.A Questions For Authors: The use of RLS in updating the normalization layer seems interesting. Please state the difference between your method and analytic continual learning techniques. Could it be a trivial implementation?
If not, please do make a thorough comparison. Please note that my rating is not a final decision. ## After rebuttal ## According to the response, I am maintaining the rating. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and thoughtful feedback—we respond below to the concerns you have raised. --- > The paper only reports results in medical datasets. It seems that the method should cover various datasets (at least from the title and abstract). The reviewer concerns that the proposed method cannot handle large-scale problems, e.g., ImageNet-1000. Limitations of the method with respect to dataset size are definitely an important consideration. In this paper, we work within the framework of medical datasets for real-world setups (wherein the issue of confounded learning is very prominent), and non-medical datasets for synthetic setups. We build on prior works from Zhao et al. (2020), Adeli et al. (2020a), Lu et al. (2021), and Vento et al. (2022), who have proposed methods within this framework that we can directly compare to. Several of these datasets are medium-sized, wherein our method successfully learns confounder-free representations, as demonstrated empirically. Larger problems manifest through either larger dataset sizes, longer pre-training, or both. We evaluate the effect of the number of training epochs on confounder-free learning in Suppl. K, where R-MDN’s quick-convergence property becomes an advantage. For larger dataset sizes, catastrophic forgetting becomes an issue with task learning. We make preliminary advances in this direction in Exp. 4.2 by integrating R-MDN with existing continual learning frameworks such as LwF and EWC that deal with catastrophic forgetting. Memory replay is another technique that our method could be integrated with. Nevertheless, we acknowledge this issue, and implementing and analyzing plausible solutions is an exciting direction for future work. --- > Please state the difference between your method and the analytic continual learning techniques. Could it be a trivial implementation? If not, please do make a thorough comparison. Thank you for referencing the ACIL paper.
ACIL works within the domain of class-incremental continual learning by breaking training down into two stages: (a) base training via backpropagation, which allows learning of model parameters for the task; and (b) analytic re-alignment, which maps intermediate model features onto the label matrix recursively. This recursive construction permits absolute memorization and effectively mitigates catastrophic forgetting. Our approach is different, since we map intermediate features not onto the label space but onto the confounder space, so that the residual component of the features can be extracted and passed to downstream layers. This effectively removes the influence of such confounding variables from the learned feature representations. Additionally, everything happens in a single-stage end-to-end framework for R-MDN: no network parameters are frozen for the recursive updates. Thus, the network has to learn task-relevant features that are also confounder-free during base training. Overall, the implementation nuance (single-stage learning), the target for the recursive least squares estimator (confounding variables), as well as the input to downstream layers (the residual) differ between our method and ACIL. We will include these details in the final paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response. There are still a few concerns: 1) If you deal with medical data only, the title and abstract are too general. It is normal that it is treated as a regular continual learning paper and asked to be compared in SOTA fashion. 2) This method resembles the analytic continual learning series closely. If not ACIL, you may try F-OAL for dealing with online tasks, which are closer to your task. F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning I am lowering the rating at the moment, and it would be subject to increase again. --- Reply to Comment 1.1.1: Comment: Thank you for the comment.
--- > This method resemble analytic continual learning series closely. If not for ACIL, you may try F-OAL in dealing with online tasks which are more close to your task. While we understand the reviewer's concern, and F-OAL has some similarities to our method in its use of recursive least squares, it addresses an inherently different problem. With R-MDN, we are working within the **framework** of continual learning to learn feature representations that are not just task-specific (as extensive past research with works such as LwF, EWC, PackNet, ACIL, F-OAL, etc. has done) but also invariant to bias in the form of confounding variables present in the dataset. Thus, the aim of the work is not to come up with a new technique to avoid catastrophic forgetting, which the past works mentioned before have extensively explored, but to build **on top of** them to **additionally** remove the influence of confounders from the features. R-MDN is a normalization layer and not a learning algorithm for model training. It computes the residual of the features that is confounder-invariant and passes it to the downstream layer. On the other hand, methods such as ACIL and F-OAL are proposed to **train** models in the setting of class-incremental learning. Their aim is to address catastrophic forgetting and improve **task accuracy**. We care about **how biased** the learned features are during training and how we might overcome that. High accuracy does not mean unbiased learning, as we have shown throughout the paper. R-MDN and works such as LwF, EWC, ACIL, and F-OAL are therefore **orthogonal** research directions. This means that R-MDN can also be used **on top of** those methods to minimize the effects of bias and confounders. Accordingly, we have shown through Exp. 4.2 and Table 4.2 that we can add R-MDN to LwF and EWC to promote confounder-free learning.
Similarly, R-MDN can also be complemented with ACIL and F-OAL, since, reiterating, R-MDN is a normalization layer that is added to the model architecture, whereas F-OAL is a training framework for models. While both F-OAL and R-MDN build some variant of a recursive least squares regressor, F-OAL **does not** learn confounder-free representations, as shown in this table. To address the reviewer's concerns, we conducted additional experiments on the non-medical synthetic continual dataset presented in the paper (Section 4.1), where both the confounders and the main effects change (over 3 runs of random model initialization seeds):

| Method | ACCd (↓) | BWTd (↓) | FWTd (↓) |
| ----- | ----- | ----- | ----- |
| ACIL | 0.29 $\pm$ 0.00 | -0.00 $\pm$ 0.00 | 0.30 $\pm$ 0.00 |
| F-OAL | 0.28 $\pm$ 0.00 | 0.01 $\pm$ 0.00 | 0.28 $\pm$ 0.00 |
| R-MDN | 0.02 $\pm$ 0.01 | -0.00 $\pm$ 0.01 | 0.02 $\pm$ 0.00 |

As can be seen, both ACIL and F-OAL have excellent BWTd, which means that they effectively mitigate catastrophic forgetting, as their papers propose. However, they result in significantly worse ACCd and FWTd, which means that they make use of confounder information to make predictions (exhibiting large deviations from the theoretical maximum accuracy). Here, R-MDN has better BWTd, ACCd, and FWTd, meaning that it learns confounder-free features for making predictions, thus approaching the theoretical accuracy. --- > If you deal with medical data only, the title and abstract are too general. It is normal that it is treated as a regular continual learning paper and asked to compared in SOTA fashion. The term "medical data" here is quite broad. Yes, we test our models on MRI neuroimaging data from the ABCD and ADNI datasets, but we also test on RGB images from the HAM10K dataset, an entirely different type of imaging. Additionally, none of our synthetic datasets are medical.
When formulating our method, we never impose any constraints that prohibit using R-MDN with models trained on natural images. R-MDN is purely a normalization layer, so its usage is independent of the dataset. Please note that we selected some real-world continual datasets to show how important removing confounders is for many applications, while also showing results on more classic image and synthetic datasets. If the reviewer thinks it appropriate to add the phrase “in medical studies” to the title of this paper, we would be happy to do so. Moreover, since our paper focuses not on **task-specific** learning within continual learning but on advancing **confounder-free feature learning**, we compare our method to every previously proposed state-of-the-art method, such as MDN, P-MDN, and BR-Net. All of these works use similar datasets, thus allowing us to make meaningful comparisons. --- We hope that this additional discussion and these results clarify the reviewer's concerns around the novelty of the method proposed for confounder-free learning, and especially its difference from purely continual learning algorithms like F-OAL.
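The residualization idea described in this thread (recursively regressing intermediate features on confounders and passing the residual downstream) can be sketched as follows. This is a minimal sketch with our own variable names, not the authors' implementation; in particular, the initialization constant `delta` is an assumption:

```python
import numpy as np

class RLSResidualizer:
    """Sketch of the residualization idea behind an R-MDN-style layer.

    Recursively fit a linear map B from confounders c to features f with
    recursive least squares, then pass the residual f - c @ B downstream.
    """

    def __init__(self, n_confounders, n_features, delta=1e3):
        self.B = np.zeros((n_confounders, n_features))  # confounder -> feature map
        self.P = np.eye(n_confounders) * delta          # inverse-covariance estimate

    def step(self, c, f):
        c = c.reshape(-1, 1)                             # (d, 1)
        gain = self.P @ c / (1.0 + c.T @ self.P @ c)     # RLS gain, (d, 1)
        err = f - (c.T @ self.B).ravel()                 # prediction error, (k,)
        self.B += gain @ err[None, :]                    # rank-1 update of B
        self.P -= gain @ (c.T @ self.P)                  # recursive update of P
        return f - (c.T @ self.B).ravel()                # confounder-free residual

# toy stream: features are a pure linear function of the confounders,
# so the residual should shrink toward zero as B converges
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
layer = RLSResidualizer(n_confounders=3, n_features=2)
for _ in range(500):
    c = rng.normal(size=3)
    residual = layer.step(c, c @ W)
```

Because the state (`B`, `P`) is updated one sample at a time, no batch statistics or future confounder values are needed, which is the property the rebuttal emphasizes for continual settings.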
Summary: This paper studies how to remove confounders in the continual learning process and proposes the Recursive-MDN (R-MDN) layer. R-MDN adopts statistical regression via recursive least squares to maintain an internal state. By removing the confounding factors, the model will be less fitted to the irrelevant information in each task, and therefore the forgetting issue can be mitigated. The proposed method is integrated with multiple baselines and evaluated on both synthetic and real datasets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There is no theoretical analysis in this work. Experimental Designs Or Analyses: Yes. Supplementary Material: I checked the entire Appendix. The Appendix provides rich additional information to the main paper, including detailed technical details and additional experimental results. Relation To Broader Scientific Literature: This paper studies how to boost the performance of continual learning models by removing confounding factors. From the continual learning perspective, this research is relevant to different areas; e.g., current LLMs also require continual pre-training techniques to keep an up-to-date knowledge base. From the confounder removal perspective, the contribution is also likely to be beneficial to other machine learning areas. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The studied problem is relevant to different areas of machine learning research. 2. The experimental results are comprehensive. Weaknesses: 1. No theoretical analysis is provided to justify the proposed method. 2. The performance improvement of the proposed method is not always significant. 3. The adopted baselines are pretty old, and recent baselines are not included. Other Comments Or Suggestions: Discuss why baselines proposed in recent years (2023, 2024) are not included in the experiments.
Questions For Authors: The proposed method is claimed to be applicable with any deep learning architecture. However, the studied models mostly learn over Euclidean data, e.g., images. Therefore, a natural question is whether the proposed method can be applied to structured data and integrated with models like graph neural networks. Then the setting would be continual graph learning, e.g., 'Online Continual Graph Learning', 'CGLB: Benchmark Tasks for Continual Graph Learning'. I would recommend that the authors discuss this to justify the applicability of the proposed technique. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and thoughtful feedback—we respond below to the concerns you have raised. --- > No theoretical analysis is provided to justify the proposed method. A more extensive exploration of the theoretical framework would surely be valuable. However, we would like to highlight that our work follows the definitions of MDN, which provides a closed-form solution to the generalized linear model removing the effect of the confounders (a common practice in statistics for controlling for confounders). Since the closed-form solution cannot be applied to newer architectures or continual learning settings (see the section on Related Works) because of its dependence on batch statistics and on knowing all confounding values (even for future time points), we proposed a recursive reformulation of MDN. Theoretically, these two formulations are equivalent under specific conditions that we mention in Section 3.1 and Suppl. A. Additionally, we explore synthetic setups based on carefully controlled environments for various variables to identify the theoretical maximum accuracies achievable, against which methods can be validated. In addition, we are now constructing a second, slightly more complex synthetic setup to further justify the effectiveness of the proposed method; specifically, we vary both the position and the intensity of the confounding variable over various stages of continual learning. Please refer to our response to Reviewer Shhj above (first paragraph) for more details about the setup and results. Results are presented in Figure 1 and Table 2 at https://docs.google.com/document/d/1VPSuH5B_XvGlGoSlg_jCikB8OPLfyI65oSgACuWxzco/edit?usp=sharing, which we will add to the final paper. We hope that this addition can further support our findings. --- > The performance improvement of the proposed method is not always significant. Please see our response to Reviewer s7kX (first paragraph) about this point.
Across multiple experiments, we observe that our method achieves substantial improvements over prior works when results are analyzed over multiple different metrics, since baseline models might perform well on specific individual metrics. --- > The adopted baselines are pretty old, and recent baselines are not included. Discuss why baselines proposed in recent years (2023, 2024) are not included in experiments. The baselines for our work are of two kinds: those introducing methods for continual learning and those introducing methods for confounder-free learning. Supervised continual learning is broadly partitioned into three categories: replay-based methods, regularization-based methods, and architecture-based methods. We include a baseline from each of these categories. While this might be non-exhaustive, these methods do not enforce any constraints on learning confounder-free features, so we think that any other recent baseline will perform poorly on our chosen metrics, specifically dcor$^2$. On the other hand, we test our method against the current state-of-the-art confounder-free learning methods such as BR-Net, MDN, and P-MDN, which encompass adversarial, statistical regression-based, and gradient-based learning. We hope that this provides a comprehensive picture of the landscape. --- > …whether the proposed method can be applied to structured data and integrated with models like graph neural networks? While the focus of our work in this paper has been on learning over Euclidean data (e.g., images), there is no reason why the proposed method could not be applied to structured data. We used images because the prior work on confounder-free learning that we discussed in the paper also deals with image input, so this allowed for effective comparison.
If the input data is in tabular or structured graph form, as opposed to images, similar neural network layers (e.g., fully connected for tabular data or graph convolution for graphs) could be applied to the input. This results in mid-level feature representations. Hence, R-MDN (and all other prior work) could simply be applied on top of the feature representations for de-confounding. It is noteworthy that online continual graph learning is an emerging direction (Donghi et al., 2025). This is particularly interesting because we can construct causal graphs that capture the interactions of various observed variables (confounders, mediators, etc.) and use them for learning bias-free representations. Exploring the application of our proposed method, as well as other methods for confounder-free learning, in static or continual graph learning frameworks is an exciting direction for future work. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses from the authors on theoretical analysis, performance improvement, baselines, and potential extension to graph data.
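For readers unfamiliar with the dcor$^2$ metric referenced in this thread, squared distance correlation measures arbitrary (not only linear) statistical dependence between learned features and confounders. A minimal sketch of the standard biased sample estimator (our own implementation, not the authors' code):

```python
import numpy as np

def dcor2(X, Y):
    """Squared distance correlation between paired samples X and Y (n rows each).

    Near 0 for independent samples in the large-n limit; exactly 1 when the
    pairwise distances of Y are a positive scaling of those of X.
    """
    X = X.reshape(len(X), -1)
    Y = Y.reshape(len(Y), -1)

    def centered(Z):
        # pairwise Euclidean distances, double-centered
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()

    A, B = centered(X), centered(Y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return dcov2 / denom if denom > 0 else 0.0

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 3))
conf = rng.normal(size=(200, 2))   # drawn independently of feats
```

A low dcor$^2$ between intermediate features and confounders is the sense in which representations are called "confounder-free" in this discussion.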
Summary: The paper develops a method to debias intermediate representations in a continual learning setting. The idea is to regress the representation on the biased feature and the label. Then, only the residuals left after removing the role of the biased feature are used as the representations. The experiments show the method improves over existing methods. Claims And Evidence: The claims made are not proved, which is okay because theory with neural networks is difficult. Experimentally, the method seems to do well across a variety of tasks. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. It would have been useful to see the type of biases that the method would get rid of, but analyzing internal representations of neural networks is not a solved problem, so this can't be held as a big problem of the paper. Of course, this does not mean no analysis is possible. It would have been very useful to see the type of shifts over the continuum that the method can handle, maybe in a linear model? Does such an analysis exist? Experimental Designs Or Analyses: The setup of the experiments varies the distributions over time, and this does represent a few scenarios in the problem setup in the paper. Without theory, the experiments should have explored more synthetic examples, with varying features beyond position, to study the interaction of changing relationships between features and confounders. For example, the positions of the features affected by the confounder could switch with the positions of the features affected by the non-confounder features. Supplementary Material: Dataset setup and details. Relation To Broader Scientific Literature: Continual learning is important, and shifts such as those induced by confounding (or spurious features) occur in real life (such as age being correlated with certain diseases). The findings in the paper combine tools for debiasing with tools from continual learning, and that improves accuracy on some real-world data.
Essential References Not Discussed: The work does not discuss recent works like https://arxiv.org/pdf/2404.19132 Other Strengths And Weaknesses: - The method is simple and the choices can be validated, as is done in the paper. - The experiments done on real datasets are encouraging. The main weakness seems to be understanding what is missing with the method. See Questions. Other Comments Or Suggestions: It would be useful to understand why P-MDN works better in some settings. Questions For Authors: 1. I am not sure that I fully appreciate the reasons behind the linear de-confounding: "The assumption of an underlying linear relationship between confounders and learned features arises from two key considerations: (1) decisions made by nonlinear models are often challenging to interpret, and (2) sufficiently powerful nonlinear models can extract almost any arbitrary variable from the information present in the features, even if those variables are not explicitly represented." You build nonlinear models, so point 1 seems moot? Am I missing something? Second, linear regression cannot handle interactions between two coordinates in the representations, so any confounder that affects these interactions will not be removed with the proposed method. Is there reason to believe this is unlikely? I'm asking because it's not a priori clear which kinds of confounders can be debiased. 2. Can the authors intuitively explain the type of relationships between confounder and input (as a thinking tool for representations) that the proposed method handles? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and thoughtful feedback - we respond below to the concerns you have raised. --- > Without the theory, the experiments should have explored more synthetic examples… We agree that additional synthetic setups would strengthen the empirical results we observe. We selected the synthetic setup in the paper, which varies the distribution of the intensity of the main effects and the confounder across different training stages, as it has been used extensively in prior work such as Adeli et al. (2020a), Lu et al. (2021), and Vento et al. (2022). But more complex setups are indeed important. We are now constructing a second synthetic setup which explores the influence of both the position of the confounder as well as the intensities of the main effects and the confounder. More specifically, we generate 1024 32$\times$32 images that are implicitly broken down into 16 8$\times$8 grids. The top left and bottom right grids contain Gaussian kernels of intensity $\sigma_A$, denoting the main effects. The confounder is represented by a Gaussian kernel of intensity $\sigma_B$, whose position varies from the bottom left to top right of the image over 4 different training stages. Both $\sigma_A$ and $\sigma_B$ are sampled from the distribution $\mathcal{U}(3, 5)$ for group 1, and $\mathcal{U}(4, 6)$ for group 2. A fair classifier should remain unaffected by confounder information, irrespective of its position within the image. Such a setup also allows us to compute the theoretical maximum accuracy achievable for each training stage, which we can validate methods against. Results are presented in Figure 1 and Table 2 at https://docs.google.com/document/d/1VPSuH5B_XvGlGoSlg_jCikB8OPLfyI65oSgACuWxzco/edit?usp=sharing. We will include these in the final paper. --- > The work does not discuss recent works like https://arxiv.org/pdf/2404.19132 Unsupervised continual learning is an interesting framework to work in. 
We apologize for not discussing related works in the paper, but rest assured we will cite them in the final paper. In our work, we focus on supervised continual learning, where we train classification networks on labeled data. In UCL, the primary challenge is the lack of labeled confounders. Such a scenario is similar to when certain factors that bias learning are unobserved or hidden. Effectively ensuring plasticity, stability, and cross-task consolidation through the objective function, while making sure that intermediate features can be normalized by removing the influence of confounders (after identification), is an exciting opportunity for future work. In medical contexts, labeled information is often present as metadata with collected samples. Thus, techniques from UCL can be coupled with R-MDN to learn confounder-free representations that additionally exhibit minimized forgetting. --- > It would be useful to understand why P-MDN works better in some settings. P-MDN minimizes a proxy to the bi-level optimization problem that MDN tries to minimize with a closed-form solution. P-MDN replaces MDN’s closed-form solution, which requires a batch-level operation (not friendly for many architectures and deep learning settings), with a proxy objective. Successful minimization of the P-MDN proxy objective depends on the careful tuning of the hyperparameter that controls the loss function computed over the features and the metadata. The success of P-MDN is then affected by this choice of hyperparameter as well as by how closely the proxy objective represents the true objective to be minimized. In our experiments, we extensively tuned this hyperparameter, analyzed the influence of the learning rate and optimizer, and found that gradient updates due to this proxy objective led to large variance in performance across runs (different model seeds), showing improvements in some settings and weaker performance in others. 
In contrast, our approach is better since it is a recursive reformulation of MDN that theoretically approaches the solution from MDN under specific conditions we analyze in Section 3.1. --- > I am not sure that I fully appreciate the reasons behind the linear de-confounding… Can the authors intuitively explain the type of relationships between confounder and input (as a thinking tool for representations) that the proposed method handles? Similar to the original MDN formulation, R-MDN does not assume that confounders have to linearly influence the input image, because the method does not directly operate on images, but on the feature embeddings, which are non-linear abstractions of the input. Moreover, R-MDN can be applied to feature embeddings at multiple layers, so overall it can effectively remove non-linear confounding between the input and the confounders. However, we admit that neither R-MDN nor MDN has a theoretical guarantee to remove all the non-linear confounding relationships. We will make this discussion more explicit in the final paper.
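As a concrete illustration of the linear de-confounding discussed above, here is a toy sketch (our own made-up data and sizes, not the authors' implementation): features are regressed jointly on the confounder and the label, and only the confounder component is subtracted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 8

# Toy data: labels y, a confounder c correlated with y, and features z
# that mix label signal, confounder signal, and noise.
y = rng.integers(0, 2, size=n).astype(float)
c = y + rng.normal(0.0, 1.0, size=n)
a, b = rng.normal(size=d), rng.normal(size=d)
z = np.outer(y, a) + np.outer(c, b) + 0.1 * rng.normal(size=(n, d))

# Jointly regress z on [c, y], then subtract only the confounder term,
# i.e. r = z - c * beta_c, keeping the label-explained component.
X = np.column_stack([c, y])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)   # rows: [beta_c; beta_y]
r = z - np.outer(c, beta[0])

# The confounder's direct contribution is removed: correlation with c drops.
corr = lambda u, v: abs(np.corrcoef(u, v)[0, 1])
corr_before = max(corr(c, z[:, j]) for j in range(d))
corr_after = max(corr(c, r[:, j]) for j in range(d))
```

Because c and y are correlated, r still carries some correlation with c through the label signal; this matches the reviewer's point that linear residualization removes only the linearly explained component.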
Summary: To remove the influence of confounding variables from intermediate feature representations, the authors introduce the Recursive MDN (R-MDN) layer. They note that such a layer can be integrated into any deep learning architecture--including vision transformers--and at any model stage. R-MDN performs statistical regression via the recursive least squares algorithm to maintain and continually update an internal model state with respect to changing distributions of data and confounding variables. In the empirical evaluation the authors claim that R-MDN promotes equitable predictions across population groups, for static learning and for different stages of continual learning, by reducing catastrophic forgetting caused by confounder effects changing over time. Claims And Evidence: Claims - The authors note that the introduced R-MDN layer can be integrated into any deep learning architecture--including vision transformers--and at any model stage. - The authors claim that R-MDN promotes equitable predictions across population groups, for static learning and for different stages of continual learning, by reducing catastrophic forgetting caused by confounder effects changing over time. Evidence - The authors claim that the R-MDN layer can be integrated into any deep learning architecture, but this claim is evaluated only on a small subset of architectures. - The claim that R-MDN promotes equitable predictions across population groups, for static learning and for different stages of continual learning, by reducing catastrophic forgetting caused by confounder effects changing over time is evaluated on synthetic and real-world datasets. Methods And Evaluation Criteria: Methods - The methodology seems valid and clearly explained Evaluation Criteria - They define the Average Accuracy distance (ACCd), Backward Transfer distance (BWTd), and Forward Transfer distance (FWTd) metrics. 
As noted, these metrics are adapted from ACC, BWT, and FWT defined by (Lopez-Paz & Ranzato, 2017) to work with the setting where a model is expected to achieve certain theoretical accuracies on data from both previous and future stages of training. Theoretical Claims: I have not checked the theoretical claims; the derivations seem right (related to equations 2, 3, and 4) but I have not checked the details in the Appendix. Experimental Designs Or Analyses: The experimental design looks good. In the experimental design they follow past works and also use existing datasets in this domain. They evaluate on synthetic datasets and publicly available ones. Supplementary Material: I have not done a detailed review of the supplementary material; I just did a quick pass through it. Relation To Broader Scientific Literature: The authors consider the residual $r$ from the expression $z = \tilde{x}\tilde{\beta}_x + y\tilde{\beta}_y + r = X\beta + r$, where $X = [\tilde{x} \; y]$ and $\beta = [\tilde{\beta}_x; \tilde{\beta}_y]$ is a set of learnable parameters. They take learned features $z$ that are first projected onto the subspace spanned by the confounding variable and the labels, with the term $\tilde{x}\tilde{\beta}_x$ corresponding to the component in $z$ explained by the confounder, and $y\tilde{\beta}_y$ to that explained by the labels. Practically they compute the residual $r = z - \tilde{x}\tilde{\beta}_x$; i.e., only with respect to $\tilde{\beta}_x$. They focus on this residual, which retains the components in the intermediate features that are irrelevant to the confounder and thus relevant to the labels and the classification task. Essential References Not Discussed: I was not able to notice any, although I'm not an expert in the field. Other Strengths And Weaknesses: Strengths - Very well introduced, written and nicely structured paper. - The methodology seems novel and it is clearly explained. - Empirical evaluations and results on synthetic and publicly available datasets, and comparison with existing work under different continual learning regimes. 
Weaknesses - The claims are somewhat under-supported by the evaluation: improvements are marginal, or I was not able to see a clear separation in improvement. - In the evaluation on the synthetic dataset they use a CNN architecture, while in the evaluation on the publicly available dataset they use a ViT architecture. I would have expected use of the methodology on two or three different NN architectures (to support the claim in the abstract and add more value to the paper). - On the metrics that they introduce, it looks like the proposed approach does not have favorable scores. Other Comments Or Suggestions: / Questions For Authors: I have seen that the proposed layer was applied and placed at different layers. Have the authors checked the performance over different NN architectures under one setup to check what kind of improvement we can get? Code Of Conduct: Affirmed. Overall Recommendation: 3
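The recursive least squares algorithm that the summary says R-MDN builds on can be sketched in a few lines. This is the textbook RLS update (our own illustration with made-up dimensions, not the authors' R-MDN code), which maintains regression coefficients as data arrives in a stream:

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS: incrementally solves min_beta ||X beta - z||^2
    as (x, z) pairs stream in, without storing past data."""

    def __init__(self, dim, delta=100.0):
        self.beta = np.zeros(dim)
        self.P = delta * np.eye(dim)   # running inverse-covariance estimate

    def update(self, x, z):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)        # gain vector
        self.beta += k * (z - x @ self.beta)
        self.P -= np.outer(k, Px)      # Sherman-Morrison rank-one update

rng = np.random.default_rng(0)
true_beta = np.array([2.0, -1.0, 0.5])
rls = RecursiveLeastSquares(dim=3)
for _ in range(2000):
    x = rng.normal(size=3)
    z = x @ true_beta + 0.01 * rng.normal()
    rls.update(x, z)
```

The internal state (beta, P) is what gets carried across batches and training stages, which is what makes the layer suited to distributions that change over time.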
Rebuttal 1: Rebuttal: Thank you for your thorough review and thoughtful feedback - we respond below to the concerns you have raised. --- > … marginal improvements or I was not able to see clear improvement separation In this work, we introduce a method that allows DNNs to learn confounder-free features during continual learning. A consequence is that the model promotes equitable predictions across class categories. Thus, we must analyze results across multiple metrics, as a baseline model might perform well on specific individual metrics. For example, in Exp. 4.1, a baseline CNN scores the best on the BWTd metric, but comparably on ACCd and FWTd. Here, lower is better for all three metrics. Similarly, a stage-specific MDN model does poorly on ACCd. P-MDN, on the contrary, does well on ACCd and FWTd, but not on BWTd. With our approach, we observe that the method achieves competitive performance across all metrics, significantly outperforming other methods on metrics they perform poorly at (such as a 1% improvement on FWTd but 10% on ACCd when compared to stage-specific MDN). In Exp. 4.3, while our approach leads to a modest 2% improvement on the dcor^2 (CN) metric, it has a significant 15% improvement on dcor^2 (AD) when compared to P-MDN. P-MDN is biased towards the AD category, while our approach has similar ranges of dcor^2 across all categories. Furthermore, significant improvements can be seen in Figure 3 (as evidenced by the five straight lines) and Figure 6 (as evidenced by the identification of the cerebellum). We believe that these, along with other results from the paper, provide clear evidence of substantial improvement over prior works for the claims we make, as analyzed through a combination of qualitative and quantitative evaluations. 
--- > I would have expected use of the methodology at 2 or three different NN architectures … Please note that we tested our method on both CNN and ViT with a mix of datasets and different architectural setups; though, we agree that evaluating the method on more DNN architectures would add more value to the paper. We selected a CNN and ViT model backbone because they lend themselves well to the task of image categorization, which is what we focus on in this paper. Synthetic datasets (as presented in Exp. 4.1 and Suppl. C) are evaluated using a CNN backbone to enable direct comparison to prior de-biasing methods in this space such as MDN, P-MDN, and BR-Net, all of which adopt a similar backbone and model size. This helps us understand how well our method performs. Additionally, we include an experiment on the ABCD dataset (Exp. 4.4) using a 3D CNN model to extend its application to a real-world setup. Finally, we evaluate a 2D ViT (Exp. 4.2) and a 3D ViT (Exp. 4.3) on different real-world setups to analyze their performance. To the best of our knowledge, this is the first time transformers have been used for confounder-free feature learning in this context. We hope that the demonstration of both 2D and 3D CNNs and ViTs will add value to our claim. To complete the loop and further strengthen our claim, we are now running an experiment with a ViT as a model backbone on the synthetic dataset. Results are presented in Table 1 at https://docs.google.com/document/d/1VPSuH5B_XvGlGoSlg_jCikB8OPLfyI65oSgACuWxzco/edit?usp=sharing. --- > On the metrics that they introduce it looks like the proposed approach does not have favorable scores. Across the different metrics, our method favorably improves scores on dcor^2, which quantifies the influence of the confounder on learned features. Improvement is seen through a decrease in dcor^2 in Tables 2, 3, and 4, often across all class categories to reduce inequitable predictions. 
On the accuracy metric, scores sometimes modestly drop, reflecting that the model is not learning via shortcuts that exploit confounder information for prediction. Only for Exp. 4.2 do we see that our approach leads to a slightly weaker score on FWT, possibly influenced by the slightly higher accuracy obtained by utilizing confounder information, which we seek to avoid. Since we can design carefully controlled environments through synthetic datasets, we can analyze the true theoretical accuracy, BWT, and FWT a fair model should achieve, and we see favorable scores across each in Exp. 4.1. --- > Have the authors checked the performance over different NN architectures under one setup… Yes, the placement of the R-MDN layer is an important hyperparameter that must be tuned to the specific DNN architecture being used. We analyze various placements across different architectures through Exp. 4.2 and Suppl. H. Prior work (Lu et al., 2021) has shown that applying MDN after both convolutional and fully connected layers results in the best performance in CNNs. We see similar performance improvement in CNNs with R-MDN in Suppl. H. We construct a similar setup for ViTs in Exp. 4.2 and find that adding R-MDN only after the pre-logits layer provides the best performance.
Eigenspectrum Analysis of Neural Networks without Aspect Ratio Bias
Accept (poster)
Summary: This paper proposes FARMS, a method for reducing the bias in the estimation of heavy-tailedness (HT) metrics due to the aspect ratio of the weight matrix. FARMS samples submatrices with a fixed aspect ratio and averages the sampled empirical spectral densities (ESDs). Empirically, using HT metrics estimated by FARMS leads to improved downstream performance in pruning and choosing layerwise learning rates. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments in Section 4 are sound. Supplementary Material: I reviewed the full supplementary material. Relation To Broader Scientific Literature: The paper highlights the importance of recognizing that the numerical value of the same property measured in different layers can naturally have different scales and should not be directly compared without proper normalization. This is a basic but important idea that deserves more attention from the community. Similar considerations around the different shapes of the weight matrices in each layer have led to impactful theoretical and practical advancements in areas such as hyperparameter transfer and infinite-width limits of neural networks with feature learning [1, 2]. [1] Yang, Greg, and Edward J. Hu. "Tensor programs iv: Feature learning in infinite-width neural networks." International Conference on Machine Learning. PMLR, 2021. [2] Yang, Greg, et al. "Tensor programs v: Tuning large neural networks via zero-shot hyperparameter transfer." arXiv preprint arXiv:2203.03466 (2022). Essential References Not Discussed: None that I'm aware of. Other Strengths And Weaknesses: Strength: The paper is well-written, and experimental results are strong and well-presented. The core idea of mitigating the aspect ratio bias makes intuitive sense. 
Weakness: While the proposed method empirically improves the downstream performance of the estimated HT metrics, it does not provide a theoretical justification for the approach's soundness. For example, while subsampling the matrices reduces the bias due to their aspect ratio, does it introduce additional biases, since we no longer work with the full weight matrix, which is what we ultimately care about? Other Comments Or Suggestions: The abbreviation FARMS is used in the title without being defined. Questions For Authors: 1. Is there any theoretical motivation for the particular subsampling strategy in FARMS? Can you quantify its effect and why it should preserve important spectral properties of the original matrix? 2. If the subsampling strategy is not theoretically well-motivated, have you tried alternative approaches to mitigating the aspect ratio bias? Code Of Conduct: Affirmed. Overall Recommendation: 3
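The fixed-aspect-ratio subsampling at the heart of FARMS is easy to prototype. The sketch below (our reading of the method with made-up window/step values, not the authors' released code) averages the ESDs of randomly placed square submatrices so that every layer is measured at the same aspect ratio:

```python
import numpy as np

def farms_esd(w, window=64, steps=10, seed=0):
    """Pool squared singular values over randomly placed square
    submatrices, so the measured ESD has a fixed 1:1 aspect ratio."""
    rng = np.random.default_rng(seed)
    m, n = w.shape
    samples = []
    for _ in range(steps):
        i = rng.integers(0, m - window + 1)
        j = rng.integers(0, n - window + 1)
        sub = w[i:i + window, j:j + window]
        samples.append(np.linalg.svd(sub, compute_uv=False) ** 2)
    return np.sort(np.concatenate(samples))

# Two i.i.d. N(0, 1) layers with very different aspect ratios: their
# full-matrix ESDs have different Marchenko-Pastur shapes, but their
# FARMS ESDs come from the same 64x64 ensemble and therefore agree.
rng = np.random.default_rng(1)
esd_sq = farms_esd(rng.normal(size=(256, 256)))
esd_tall = farms_esd(rng.normal(size=(1024, 128)))
```

An HT metric computed on the pooled ESD is then comparable across layers of different shapes, which is the normalization the summary describes.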
Rebuttal 1: Rebuttal: Thank you for your insightful and constructive comments. We have addressed your comments as follows. ### Broader Literature Thank you for suggesting references [1, 2]. Tensor Programs IV [1] shows that standard parametrizations can collapse to the kernel regimes in the infinite-width limit. It proposes the Maximal Update Parametrization (µP) to ensure feature learning by carefully choosing parameter scaling rules based on layer dimensions. Tensor Programs V [2] further shows that µP enables zero-shot hyperparameter transfer: hyperparameters tuned on smaller models remain optimal for much larger ones. In this context, "shape" awareness relates to how fan-in/fan-out dimensions affect optimal scaling of initializations and learning rates to maintain stable training dynamics. Our paper, on the other hand, focuses on how the aspect ratio (number of rows vs. columns) of an individual weight matrix biases the measurement of empirical spectral density (ESD). We show that different aspect ratios can artificially stretch or compress the ESD, which confounds the interpretation of HT metrics like the Power Law (PL) exponent Alpha as indicators of training quality. FARMS targets this bias. By analyzing the average ESD of submatrices sampled at a fixed aspect ratio, FARMS provides a normalized HT metric for more reliable comparison across layers of diverse shapes within a given network. Thus, while [1][2] use shape-aware parametrization (like µP), our work uses a different shape-aware analysis technique (FARMS) to correct measurement bias in spectral diagnostics. We will discuss this in detail in the revised paper. References [1] Yang, Greg, and Edward J. Hu. "Tensor programs iv: Feature learning in infinite-width neural networks." International Conference on Machine Learning. PMLR, 2021. [2] Yang, Greg, et al. "Tensor programs v: Tuning large neural networks via zero-shot hyperparameter transfer." arXiv preprint, 2022. 
### Weakness 1 and Question 1 First, see our previous response to clarify what aspect ratio bias is and explain why FARMS addresses it. Then, we will answer your question in more detail. - Why does the FARMS subsampling approach preserve important spectral property about the original matrix? - Does it introduce additional bias? The goal of measuring heavy-tailness in HTSR is to evaluate the strength of correlations introduced by training, as established in previous work [1]. However, when we subsample a single submatrix and measure correlations only within that submatrix, some correlations between elements in the subsampled matrix and those outside it are inevitably lost. This motivates our approach of using multiple submatrices to capture a broader range of correlations. Does this approach introduce additional bias? We believe it does, but we view this "bias" more as a form of partial coverage of the entire matrix. At a high level, this is conceptually similar to bootstrap sampling in random forests—using multiple samples to mitigate the effects of limited coverage. Recent work, such as [2][3], aims to theoretically quantify heavy-tailedness by interpreting it as the accumulation and evolution of feature spikes in the ESD that align with the teacher model's features. These feature spikes are approximately rank-one updates to the original matrix, and with high probability, sampling a submatrix will capture that rank-one component because that component covers the whole matrix (so the subsampled matrix contains that rank-one component). Therefore, sampling a submatrix will not miss the feature spikes, which are believed by previous work [2][3] to cause the heavy-tail structure. We believe this provides further justification that FARMS can preserve important spectral information, measured by the feature spikes in the ESD. Finally, we direct the reviewer’s attention to the experiment in Appendix A.1. In this experiment, we demonstrate that for i.i.d. 
initialized weights of varying sizes, our method produces a constant measure of heavy-tailedness, whereas existing methods vary with matrix size, exhibiting an aspect ratio bias. While FARMS may introduce partial coverage of the entire matrix, it does so uniformly across matrices of all sizes, preventing any layer-shape-dependent bias. [1] Martin and Mahoney. "Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning." JMLR 2021. [2] Wang et al. "Spectral evolution and invariance in linear-width neural networks." NeurIPS 2023. [3] Kothapalli et al. "Crafting heavy-tails in weight matrix spectrum without gradient noise." arXiv preprint. ### Other Comments Or Suggestions Thank you for the suggestion. The abbreviation was used in the abstract without a definition. We will include the definition of FARMS in the abstract. ### Additional experiments Experiments to answer all reviewers' questions can be found [here](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal).
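The rank-one argument above can be checked numerically. In the sketch below (our own toy, with arbitrary sizes), a delocalized rank-one "feature spike" added to a Gaussian matrix still produces an outlier singular value in a submatrix, because the spike's singular vectors spread over the whole matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 512
window = 128

# Noise matrix plus a delocalized rank-one spike of strength theta.
u = np.ones(m) / np.sqrt(m)
v = np.ones(n) / np.sqrt(n)
theta = 12.0
w = rng.normal(size=(m, n)) / np.sqrt(n) + theta * np.outer(u, v)

# A window x window submatrix inherits a proportional share of the spike
# (here theta * (window/n) = 3), well above the submatrix noise edge
# (about 2 * sqrt(window/n) = 1), so the outlier survives subsampling.
sub = w[:window, :window]
s = np.linalg.svd(sub, compute_uv=False)
```

The top singular value of the submatrix sits clearly above the bulk, while the second one stays near the noise edge.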
Summary: The paper studies the problem of eigenspectrum analysis of DNN weight matrices and its relationship with model quality in terms of the training process. The paper proposes Fixed-Aspect-Ratio Matrix Subsampling (FARMS) to address the aspect ratio bias in existing Heavy-Tailed Self-Regularization (HT-SR) eigenspectrum analysis, which enables better computation of HT metrics. The idea is to compute the HT metrics based on the average of a series of subsampled weight submatrices with fixed aspect ratios. FARMS further induces improved algorithms for layer-wise hyperparameter tuning, with extensive experiments demonstrating their effectiveness and improvement. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem studied. Theoretical Claims: N/A Experimental Designs Or Analyses: The reviewer has checked the experimental designs and analyses. Supplementary Material: The reviewer has briefly checked the additional experiment results in the supplementary material. Relation To Broader Scientific Literature: The paper directly contributes to the line of works based on Heavy-Tailed Self-Regularization (HT-SR) [1, 2], which further induces improved algorithms for the task of layer-wise hyperparameter tuning. **References:** [1] Mahoney, M. and Martin, C. Traditional and heavy tailed self regularization in neural network models. In Proceedings of the 36th International Conference on Machine Learning. [2] Martin, C. H. and Mahoney, M. W. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. Journal of Machine Learning Research, 22(165):1–73, 2021. Essential References Not Discussed: To the best knowledge of the reviewer, there are no missing related works. Other Strengths And Weaknesses: **Strengths:** 1. 
The idea of matrix subsampling to debias the estimation of the heavy-tailedness metrics is new and interesting. 2. The layer-wise hyperparameter tuning algorithms based on the new algorithm for HT metric calculation show promising results on a range of tasks compared with previous SOTAs. 3. The experiment design is quite extensive and solid. **Weaknesses:** 1. To me the main drawback is that it is not clear why, in principle, the HT statistics of a subsampled matrix are a better estimator of the quality of the trained model, despite the fact that the aspect ratio of the subsampled matrix has been fixed. Please also see my question in the **Questions For Authors** part. Are there any theoretical justifications for such a method? 2. Also, as a minor drawback, the computational cost could be large compared with previous SOTA methods (Appendix C.5). Other Comments Or Suggestions: Please see the **Questions For Authors** part. Questions For Authors: 1. How to design experiments that can directly evaluate the correlation between the *HT metrics* (calculated by different methods, i.e., existing ones and the FARMS method) and the so-called *"training quality of each layer"*? How to define the training quality of *different layers*? To me, this seems like the foundation on which the newly proposed HT metric calculation algorithm can result in better performance in terms of **layer-wise** hyper-parameter tuning. Since there appears to be no theory for the design of the algorithm, such an experiment would make the overall methodology more convincing. The authors show in Section 4.3 that by using the FARMS-assisted layer-wise hyperparameter tuning, the final testing accuracy improves over using the existing HT metrics calculation method. But this does not directly showcase that the FARMS method gives a better indication of how each single layer is trained. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful and constructive comments. We have addressed your comments as follows. ### Weakness 1 and Question 1 - **Previous work on training quality** Previous work on HTSR has established that the heavy-tailedness of ESDs is strongly correlated with the test accuracy of models [1][2]. While this does not imply that "training quality" is identical to "test accuracy," the correlation between heavy-tailedness and test accuracy has been used to justify HTSR metrics. Therefore, improving test accuracy or similar performance metrics (e.g., perplexity) remains our primary goal. Although previous work on HTSR does not explicitly define "training quality," several related quantities have been mentioned: (1) strong correlation between weight elements [1][2] and (2) feature spikes and their alignment with the target teacher model [3][4]. The feature spike, analyzed in the context of a single-index teacher and a two-layer student model [3][4], is approximately a rank-one update to the model (in the limit of infinite matrix size with fixed aspect ratio) and also persists after matrix subsampling. This is because the specific form of the rank-one update makes it cover the whole matrix with probability one. - **A toy experiment to measure training quality** We designed a toy experiment to test the correlation between "training quality" and the new HTSR metric measured using FARMS. Following [3][4], we use a single-index teacher to generate signals to train a two-layer student. The first layer of the student model is a weight matrix, while the second layer is a weight vector. Following [3][4], we only update the weights of the first layer. To measure “training quality”, during training, we measure the alignment between the weight matrix and the ground truth target vector of the teacher model, similar to [3][4], and we define this alignment to be the "training quality" of the student model. 
Throughout the training process, we select the student network checkpoint with the highest alignment and report both the alignment value and the PL_Alpha value (the HTSR metric). We then vary the sizes of the student model with different weight matrix aspect ratios at a fixed input dimension of 500 to conduct multiple experiments. Each experiment provides one PL_Alpha value and one alignment value. The multiple experiments (conducted using varying sizes) produce one curve for PL_Alpha and one curve for the alignment value. We then plot the two curves using both existing methods for estimating PL_Alpha and our method FARMS. As shown in [Figure 2](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal), FARMS reveals a clear negative correlation between the two curves: the better the training quality, the larger the alignment, and the smaller the PL_Alpha. However, for the existing method, due to the aspect ratio bias, the correlation is incorrect. Furthermore, in Appendix A.1, we compared FARMS with existing ways of measuring heavy-tailedness. For i.i.d. initialized weights of varying sizes, FARMS produces a constant value of heavy-tailedness degree, whereas existing methods change with matrix size, showing an aspect ratio bias. In this setting, all matrices are equally undertrained since they are just initialized. - **Summary of this question** We understand that formally defining "training quality" requires substantial novel work. Our goal is not to reinvent the wheel or claim that our method measures training quality better than prior work. Instead, we aim to correct a specific oversight in prior work: the aspect ratio bias. In other words, our goal is to mitigate the aspect ratio bias without sacrificing the ability to measure heavy-tailedness. [1] Martin and Mahoney. "Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning." JMLR 2021. [2] Martin, Peng and Mahoney. 
"Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data." Nature Communications 2021. [3] Ba et al. "High-dimensional asymptotics of feature learning: How one gradient step improves the representation." NeurIPS 2022. [4] Wang et al. "Spectral evolution and invariance in linear-width neural networks." NeurIPS 2023. ### Weakness 2 The computational cost can indeed be higher than previous methods, and we have acknowledged that. There are certain ways to mitigate the issue, e.g., by using a larger window size (like 4096 for LLaMA 7B/5120 for LLaMA 13B) and a limited number of sampling steps. In [Table 2](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal), when we use a larger window size (such as 4096) and fewer sampling steps (such as 5), our method can still improve model performance and reduce the computational cost. ### Additional experiments Experiments to answer all reviewers' questions can be found [here](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal).
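For readers unfamiliar with the PL_Alpha metric referenced throughout this exchange: HTSR fits a power law $\rho(\lambda) \sim \lambda^{-\alpha}$ to the tail of the ESD. A minimal stand-in for such a fit is the classical Hill estimator (our own sketch, not the exact fitting procedure used in the paper):

```python
import numpy as np

def hill_alpha(eigs, k=500):
    """Estimate the density power-law exponent alpha from the largest k
    eigenvalues: Hill estimates the survival-tail index a from ratios to
    the (k+1)-th order statistic, and the density exponent is a + 1."""
    x = np.sort(np.asarray(eigs))[::-1]
    tail = x[: k + 1]
    a_hat = k / np.sum(np.log(tail[:k] / tail[k]))
    return 1.0 + a_hat

# Sanity check on Pareto samples whose density decays as x^(-2.5),
# i.e., survival index a = 1.5.
rng = np.random.default_rng(0)
samples = rng.uniform(size=50000) ** (-1.0 / 1.5)
alpha_est = hill_alpha(samples)
```

Under FARMS, an estimator like this would be applied to the pooled fixed-aspect-ratio ESD rather than to the full-matrix ESD.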
Summary: This paper introduces an approach called FARMS (Fixed Aspect Ratio Matrix Subsampling) that addresses a current deficiency of approaches that rely on quantifying the heavy-tail degree of an empirical spectral density (ESD) without accounting for matrix shape. In particular, the authors observe that random Gaussian matrices converge to distributions with differing levels of heavy-tailedness depending on the ratio $m/n$ for an $m\times n$ matrix, but this same effect is not captured in algorithms that rely on heavy-tail self regularization theory (HTSR) to capture how well trained certain layers are. FARMS accounts for this by computing a weight matrix's spectral density as the averaged spectral density of subsampled matrices of fixed dimension. By using a fixed dimension for the subsampling matrix used to compute the ESD for all matrices, the impact of the ratio of $m:n$ on ESD is controlled. The authors adopt FARMS for computing the ESD for three HTSR-based algorithms and show FARMS improves performance over the standard approach for computing ESD. Claims And Evidence: The authors demonstrate sufficient evidence that accounting for the matrix dimension when computing the ESD for subsequent heavy-tail analysis improves the performance and robustness of HTSR algorithms developed for adaptive learning rate optimization and pruning. Methods And Evaluation Criteria: The evaluation is relevant and sufficient for demonstrating the impact of accounting for the shape of a matrix. The authors evaluate FARMS applied to three different HTSR algorithms in three domains (computer vision, LLM pruning, and SciML) and show improvements for all three. Theoretical Claims: There were no substantial theoretical claims. Experimental Designs Or Analyses: I found the conclusions drawn from the experiment design and analyses valid in support of FARMS. Supplementary Material: I viewed the additional results presented in the supplementary material. 
The main thing that stood out was the zero-shot accuracy results for pruning Llama models, which showed FARMS to improve accuracy, but usually by < 1% relative to the baseline. This echoes a general theme in the results where FARMS does improve performance but generally only shifts the needle slightly. Relation To Broader Scientific Literature: The contribution of this paper is highly relevant to applications of HTSR and techniques that rely on accurately capturing the ESD. Essential References Not Discussed: I found the discussion of prior work sufficient. Other Strengths And Weaknesses: Strengths: - While the idea of accounting for the weight matrix ratio is simple and straightforward, the experiments demonstrate FARMS to offer consistent improvement over prior HTSR approaches across the board. - I found the paper clear and well written. - FARMS improves the robustness of TempBalance as shown in Figure 3, where FARMS does not need additional Layer Selection to improve performance. Weaknesses: - FARMS seems like it can be sensitive to the window size and sampling steps hyperparameters (Table 5), which could make it difficult to apply in practice. - Error bars are not provided for the LLM pruning experiments, which makes it difficult to gauge the statistical significance of the results in Table 1 and Table 5. - The improvement from FARMS is somewhat incremental, especially compared to the improvement from using HTSR adaptive learning rates relative to baseline. Other Comments Or Suggestions: It seems like there could be a more theoretically grounded adjustment for this ratio that would preclude the need for the window size, sliding window, and subsampling number hyperparameters of FARMS. Have you given this direction some thought? Questions For Authors: - Why do you think the LLM pruning results were more sensitive to the sliding window and submatrices number? - As seen in Table 4, it makes sense intuitively that a square matrix would give the most ratio-agnostic ESD.
Do you think there's a parametric adjustment that can be made instead of taking a subsampling approach? Code Of Conduct: Affirmed. Overall Recommendation: 4
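As context for the method under review, here is a minimal numpy sketch of the core FARMS idea: estimate a layer's ESD as the pooled spectrum of fixed-shape submatrices, so the m:n aspect ratio no longer shapes the tail. The random (rather than sliding) window placement, square windows, and all sizes are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def farms_esd(W, window=64, steps=8, seed=0):
    """Estimate the empirical spectral density (ESD) of W as the pooled
    eigenvalues of fixed-shape subsampled matrices, so that the m:n
    aspect ratio no longer influences the shape of the spectrum."""
    rng = np.random.default_rng(seed)
    m, n = W.shape
    eigs = []
    for _ in range(steps):
        i = rng.integers(0, m - window + 1)
        j = rng.integers(0, n - window + 1)
        S = W[i:i + window, j:j + window]
        # Eigenvalues of S^T S / window, i.e. normalized squared singular values.
        eigs.append(np.linalg.svd(S, compute_uv=False) ** 2 / window)
    return np.concatenate(eigs)

# A Gaussian 512x128 "layer": FARMS pools 8 spectra of 64x64 windows.
W = np.random.default_rng(1).normal(size=(512, 128)) / np.sqrt(128)
esd = farms_esd(W)
print(esd.shape)  # (512,)
```

Because every subsampled matrix is square here, any heavy-tail metric fitted to `esd` reflects the weights themselves rather than the layer's shape.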
Rebuttal 1: Rebuttal: Thank you for your insightful and constructive comments. We have addressed your comments as follows. ### Weakness 2 While we agree that error bars would help show statistical significance, we respectfully point out that several representative works [1-3] on LLM pruning did not provide error bars in their main results. Therefore, we did not provide error bars in the submitted paper. However, we also noticed that both SparseGPT [1] and Wanda [2] use calibration data to estimate input statistics, so we sample different calibration sets with 128 samples using six different seeds [0, 1, 2, 3, 4, 5]. We evaluate FARMS (our method) and AlphaPruning and provide the mean value and standard deviation (STD) of perplexity in [Table 1](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal). We find that FARMS achieves lower perplexity consistently. For example, the perplexity of LLaMA-7B reduces from **96.02 ± 1.59** to **79.42 ± 3.86** using the SparseGPT pruning method at a 0.8 sparsity ratio. We also found that FARMS achieves a lower STD in most settings. [1] Frantar, Elias, and Dan Alistarh. "SparseGPT: Massive language models can be accurately pruned in one-shot." In International Conference on Machine Learning, pp. 10323-10337. PMLR, 2023. [2] Sun, Mingjie, Zhuang Liu, Anna Bair, and J. Zico Kolter. "A simple and effective pruning approach for large language models." arXiv preprint arXiv:2306.11695 (2023). [3] Cheng, Hongrong, Miao Zhang, and Javen Qinfeng Shi. "A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations." IEEE Transactions on Pattern Analysis and Machine Intelligence (2024). ### Weakness 3 Firstly, compared to existing HTSR adaptive methods, our approach demonstrates stronger robustness. Figure 3 in the paper shows that FARMS can achieve better performance without Layer Selection, a heuristic tuning method in TempBalance (TB) to avoid the aspect ratio bias.
Secondly, HTSR methods, such as TB and AlphaPruning, are strong baselines. For example, in layer-wise learning rate assignment, TB beats state-of-the-art optimizers and schedulers like SGDR, SGDP, LARS, and Lookahead. ### Other Comments or suggestions Please refer to our response to Weakness 1 and Question 1 from Reviewer C2rY for more details. We cannot use a very small window size or very few sampling steps because doing so may not cover the entire matrix. Conversely, selecting a very large size would result in too much overlap between sampled matrices. A precise theoretical characterization may be beyond the scope of this paper, but we will include a detailed discussion in the revised version. ### Question 1 & Weakness 1 First, layer-wise pruning of LLMs at high sparsity is a challenging optimization problem. If sparsity ratios are not properly allocated, the performance of the model can become very unstable [4], and the sensitivity shown in [Table 1](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal) mainly stems from that. Second, we provide additional results validating the stability of our method. In [Table 2](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal), we repeat our experiments with six random seeds [0, 1, 2, 3, 4, 5] and provide the mean and STD of perplexity. We find that, except for cases where a small window size like 500 is used, the results of all other settings are better than the baseline, and the differences in perplexity across different parameters are not particularly significant. Third, we show that the FARMS evaluation of the ESD is stable, which demonstrates that most of the sensitivity comes from pruning itself. In [Figure 1](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal), we plot the layer-wise sparsity ratios assigned by FARMS across 4 × 4 different window sizes and sampling steps.
We observe that the differences in sparsity ratio assignments across these settings are not significant. However, the assignment from the best FARMS setting is still distinct from the worst setting, which explains the performance difference. [4] Yin, Lu, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li et al. "Outlier weighed layerwise sparsity (OWL): A missing secret sauce for pruning LLMs to high sparsity." arXiv preprint arXiv:2310.05175 (2023). ### Question 2 We are unsure what "parametric adjustment" means and kindly request the reviewer to clarify. If it refers to the "change of variable" method in the Marchenko–Pastur pdf function, this is challenging because the ESD transitions smoothly from Marchenko–Pastur to heavy-tailed in a way that is difficult to quantify precisely. If "parametric adjustment" instead refers to other hyperparameter assignment approaches, those are orthogonal to FARMS and can be combined with it. ### Additional experiments Experiments to answer all reviewers' questions can be found [here](https://github.com/Anonymous-For-Submission-code/FARMs-ICML-Rebuttal). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and find most of my points addressed. I will increase my score to a 4.
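As background for the aspect-ratio point that motivates FARMS (and the Marchenko–Pastur discussion in Question 2 above): for a random matrix with i.i.d. unit-variance entries, the limiting ESD is the Marchenko–Pastur law, whose support edges depend only on the aspect ratio q. A quick numeric sketch (the variance normalization is the standard one, not anything specific to the paper):

```python
import numpy as np

def mp_edges(q, sigma2=1.0):
    """Support edges of the Marchenko-Pastur law for aspect ratio
    q = n/m (0 < q <= 1) and entry variance sigma2."""
    lam_minus = sigma2 * (1 - np.sqrt(q)) ** 2
    lam_plus = sigma2 * (1 + np.sqrt(q)) ** 2
    return lam_minus, lam_plus

# The bulk widens as the matrix becomes more square (q -> 1), so the
# same tail metric reads differently for different matrix shapes.
for q in (0.1, 0.5, 1.0):
    lo, hi = mp_edges(q)
    print(f"q={q}: support [{lo:.3f}, {hi:.3f}]")
```

This shape dependence of the bulk is exactly what a fixed subsampling aspect ratio holds constant.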
Mechanistic PDE Networks for Discovery of Governing Equations
Accept (poster)
Summary: This work extends mechanistic neural networks to discover PDEs. The approach relies on a numerical solver for PDEs, specialized for linear PDEs; however, the approach may still be applied to nonlinear PDEs by using nonlinear basis functions. Solutions are constructed as simple trees, which are restricted to be sparse and concise by Lasso regularization. The method is as follows. Given data from some (perhaps set of) PDE(s) and a skeleton of the PDEs, an encoder produces a prediction of the solution $\tilde{u}$. Next, coefficients of a polynomial combination of $\tilde{u}$ are predicted and incorporated into the PDE structure. The constructed PDE is solved using the NeuRLP-PDE solver to compute $u$. Finally, a loss is calculated between the data and $u$, as well as between $u$ and $\tilde{u}$, and backpropagated through the solver and encoder. The authors introduce a sparse solver with an efficient implementation on GPUs, which serves as a backbone for the proposed approach. Claims And Evidence: The authors claim that the PDE solver used in this work is fast, efficient, and scalable. However, there are no ablation studies which show the effectiveness of this solver in the proposed approach as compared to other differentiable solvers. Furthermore, the performance of the approach is not clear. The authors provide plots of TPR, but do not provide any details on the obtained solutions, for example, a table which shows the obtained parameters and solutions for the different test cases. This makes it difficult to judge the results. The authors also claim that the combination of the machine learning approach with the numerical solver improves robustness, while the plots in Figures 4 and 5 seem to show similar performance to strictly machine learning based approaches. This claim seems difficult to accept given the limited evidence. Methods And Evaluation Criteria: The experiments make sense, but the baseline methods seem limited.
A recent approach, PDE-Learn, is referenced in this work, but not selected as a baseline. Is there a reason for this? The evaluation of the approach for the experiments is also relatively limited. TPR and a maximum relative error over the coefficients are provided, but the relative performance of the solution to the discovered PDEs as compared with a numerical solution to the true PDEs is not given. Overall, the evaluation metrics make it difficult to determine exactly how well each approach is doing. Additional figures and organization of the results would be helpful to illustrate the ability to discover the PDEs. Since efficiency is also a central claim, the computation time and required computational resources of all approaches (including baselines) are also a critical point for discussion. Theoretical Claims: No theoretical claims are made here. Experimental Designs Or Analyses: The Adam optimizer is used with a small learning rate and no scheduler. Is this the case across all baseline models as well? If so, this is a bit questionable, as learning rate scheduling and tuning can be critical in many applications. Supplementary Material: I reviewed all of the supplementary material. Why does MechNN-PDE perform worse on Burgers' with less noise? Relation To Broader Scientific Literature: Differentiable solvers are useful for a variety of tasks, particularly in solving inverse problems and engineering design. Fast, scalable solvers of this type are crucial, and this paper aims to address that, as well as show an interesting application in inverse problems (PDE discovery from data). Additionally, many areas of science rely heavily on modeling by PDEs, for example weather dynamics. In weather prediction, these PDEs are often not known exactly, and come from heuristics and experimental testing. Likewise, weather data is often noisy.
An approach such as this one will aid understanding in these fields, and allow scientists to construct more accurate models of physical phenomena. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths 1. The systems considered in this work are relatively challenging 2. The authors discuss limitations of their approach 3. The proposed numerical solver seems to be robust, with the ability to solve many systems of PDEs Weaknesses 1. As previously mentioned, the baseline methods seem to be relatively weak 2. It is relatively difficult to draw comparisons between the performance of all approaches explored in this work and the evaluations are limited 3. The contribution of this work to the literature seems minimal in the paper's current form, and the claims made by the authors are not strongly supported by experimental evidence 4. In general, I find the structure of the paper to be hard to follow Other Comments Or Suggestions: I would suggest the follow major changes for the work to improve the clarity: - The paper focuses on PDE discovery, when the sparse solver is ultimately the main contribution in this work. I believe by first restructuring the paper to discuss the numerical solver and its contributions and advantages over existing works, the impact of this work would be more clear. - Following this, MechNNs can be introduced. Currently, both section 2 and 4 discuss MechNNs, and so this information is split. It should be unified so that the reader can more easily follow the goals of the experiments section. - This will naturally lead into the results section. Results for both the NeuRLP solver and MechNN-PDE can be shown here, and some details on the baseline methods can also be included. - Additionally, the results section should be expanded to include more details. I think the previous sections could be shortened to cover the fundamental aspects of the approach, but technical details can be moved to the appendix. 
I would like to ask the authors to review the work for typos as well. Also, the NeuRLP acronym is not defined. Line 342, right column, references Figure 6, when it should reference Figure 3. Questions For Authors: 1. Are all the models retrained for each different noise setting? If so, how do the results compare with a model trained on a noiseless setting and evaluated on the noisy cases? 2. How is the threshold chosen? What if a PDE has many small parameters? 3. Can the approach be extended to non-Cartesian domains? 4. The structure of the PDE is defined a priori, as this problem is usually very ill-posed. Therefore, it makes sense to only consider solving for the parameters of the PDE. Recently, several papers on finding analytical solutions to PDEs from data have been proposed. Would it be possible to extend this approach to find analytical solutions and their parameters? Code Of Conduct: Affirmed. Overall Recommendation: 3
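For concreteness on the two metrics this review discusses, here is a small sketch of how TPR and max relative coefficient error are typically computed for PDE discovery. The term dictionaries, zero tolerance, and the single-term example are illustrative assumptions, not the paper's exact protocol.

```python
def discovery_metrics(true_coefs, found_coefs, zero_tol=1e-8):
    """TPR: fraction of true PDE terms recovered with a nonzero
    coefficient. Max relative error: worst-case |c_hat - c| / |c|
    over the true terms (only meaningful once terms are recovered)."""
    true_terms = {t for t, c in true_coefs.items() if abs(c) > zero_tol}
    found_terms = {t for t, c in found_coefs.items() if abs(c) > zero_tol}
    tpr = len(true_terms & found_terms) / len(true_terms)
    rel_errs = [abs(found_coefs.get(t, 0.0) - c) / abs(c)
                for t, c in true_coefs.items() if t in true_terms]
    return tpr, max(rel_errs)

# e.g. a single true term with coefficient 1.0, discovered as 0.957:
tpr, err = discovery_metrics({"u*u_x": 1.0}, {"u*u_x": 0.957})
print(tpr, round(err, 3))  # -> 1.0 0.043
```

A missed term drags TPR below 1 and pushes the relative error for that term to 1, which is why the two metrics need to be read together.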
Rebuttal 1: Rebuttal: We appreciate the reviewer's comments. **Main Contribution and Paper Structure** We would like to strongly emphasize that we do not claim to replace or improve existing PDE solvers. Our contribution is to enhance the MechNN model (Pervez et al., ICML 2024) that works with ODEs to support PDE representations, together with an efficient way of solving the representations and applying the model to PDE discovery. The PDE solver we develop is specialized for this architecture, where the PDE terms are produced by a (non-smooth) neural network output, and solves a relaxation. The sparse solver is intended *to make MechNN feasible for PDEs* with multiple dimensions. Furthermore, we make no claims of either speed or memory improvement over the baseline sparse regression methods, which are not neural networks and require far fewer resources. We do claim that our model is more expressive (handles complex expressions) and can handle more complex data, as shown in the discovery experiments. We are open to restructuring the paper for clarity as long as it correctly emphasizes our main contributions and will certainly consider the reviewer's suggestions. **Plots look similar** We disagree with the reviewer's assessment of the plots. We claim significant improvement on the more complex reaction diffusion system (Figures 4, 5 right) over the baselines, which entirely fail for this dataset. We note that the TPR and max coefficient error plots should be considered together, since the correct terms have to be discovered for the coefficient error to be meaningful. The baselines fail in all cases to discover the correct terms (with low error), including the case with no noise. For our model both TPR and coefficient error show gradual degradation with noise, with near-perfect recovery in low-noise settings even though we do not fine-tune after thresholding.
On the easier reaction diffusion system the WeakSINDy baseline performs well (although PDEFIND fails with noise) and our method matches its performance, with only some decrease in coefficient accuracy at the 80% noise level. **Evaluation/Simulation** The metrics of TPR and max coefficient error are chosen from the literature. In our view they provide a more quantitative and objective basis for comparison of discovery methods. Raw equations allow limited insight, especially when exact discovery is not possible in noisy settings. However, we will consider adding examples of discovered equations in the updated draft. Similarly, simulating a discovered equation does not serve as an accurate determinant of how well an equation's form has been learned, especially in noisy settings. Nevertheless we link a video example of simulation for the harder reaction diffusion example, simulating the ground truth and discovered equation on clean data (uploaded here: https://filebin.net/wqv4t3oki732nuq0). We will consider adding more examples in the next draft. **Burgers' equation** With the inviscid Burgers' equation there is only one term to be discovered, with a coefficient of 1. However, since we do not fine-tune after thresholding there can be occasional variation in the discovered coefficients. The learned coefficient in the noise-free case for this experiment is 0.957, which gives a relative error (|1-0.957|/|0.957|) of 0.04. For the noisy case the coefficient is 1.01 with a relative error of 0.001. Running the experiment with fine-tuning after thresholding gives a small relative error (~0.001) for the first case as well. **Recent Methods** Due to lack of space, please see our response to reviewer Ct17 for details of the comparison with PDE-LEARN. **Hyperparameters** We clarify that the baselines (PDEFIND, WeakSINDy) are sparse regression methods which do not employ any neural networks, and therefore learning rates, etc., do not apply to the baselines.
For our method, for simplicity, we fixed the learning rate across all our experiments and did not use any scheduling. **Retraining** Yes, the models have to be retrained for new data. In the discovery setting we consider, the PDE parameters to be discovered are global parameters and are not functions of the data. This is also true for the baselines. This setting does not allow direct evaluation of parameters on a new dataset. However, developing a setting where this is sensible is an interesting question for further work. **Threshold** We chose the threshold per PDE qualitatively, for simplicity. The choice of threshold can affect small parameters, which is also the reason why the TPR in the Navier-Stokes example is less than 1. **Analytical solutions** Most PDEs don't have analytical solutions, but perhaps the question is with regard to learning the expression structure? If so, then yes, we think that symbolic regression methods could be combined with this method to learn expressions together with parameters, and we are considering approaches for this. We would like to thank the reviewer again for the review.
Although perhaps a bit circular, I have no problem with this argument in principle; however, in this case, impressive results are required to make up for the limited new work in the approach. Although the authors claim that significant improvement is present, I only see this clearly in one case (hard reaction-diffusion) and some moderate improvement in the NS problem. As it stands now, I believe that the presentation of the work, both contributions and results, is still quite difficult to understand. For that reason, I will not change my overall evaluation, but I encourage the authors to further refine this work, as I believe there is potential in it. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response. We would like to reiterate the following. **Contribution** The PDE solver remains a main contribution. Since that is the component that makes the extension of the MechNN ODE architecture to PDEs possible. It is also not a trivial addition since significant computational challenges have to be met to make the extension from ODEs to PDEs with multiple dimensions feasible. We solve this challenge by developing a specialized sparse multigrid PDE solver that works with differentiable optimization. **Evaluation** We believe that we have given ample evidence for superior discovery performance for our model. In summary * We demonstrate significant discovery on a complex reaction-diffusion and Navier-Stokes equation, where the baselines completely fail in both cases. We show that we can perform robust discovery in the presence of noise in these equations. We present this with quantitative evaluation metrics. * We also show that our method can model PDEs with complex expressions such as the Porous Medium equation. This equation cannot be modeled by the baselines since they are limited to linear combinations of fixed basis functions. 
* In addition, we show that we match the performance of the baselines on simpler problems including simpler reaction-diffusion, diffusion, and Burgers' equations (both viscous and inviscid).
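The "differentiable solver" at the heart of this exchange can be illustrated in miniature: recovering a scalar diffusion coefficient by backpropagating through an implicit linear solve with the adjoint method (for A(θ)u = b, the gradient is dL/dθ = −λᵀ(∂A/∂θ)u with Aᵀλ = ∂L/∂u). This is a toy 1D heat-equation sketch with a dense solve, not the paper's batched multigrid saddle-point solver; all settings are illustrative.

```python
import numpy as np

def laplacian_1d(n, h):
    # 3-point Laplacian on interior points, Dirichlet boundaries.
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

n, h, dt = 50, 1.0 / 51, 1e-3
L = laplacian_1d(n, h)
x = np.linspace(h, 1.0 - h, n)
u0 = np.sin(np.pi * x)

theta_true = 0.7                              # "unknown" diffusion coefficient
u_obs = np.linalg.solve(np.eye(n) - dt * theta_true * L, u0)

theta = 0.1                                   # initial guess
for _ in range(200):
    A = np.eye(n) - dt * theta * L            # implicit Euler step: A u = u0
    u = np.linalg.solve(A, u0)
    grad_u = u - u_obs                        # dLoss/du for 0.5*||u - u_obs||^2
    lam = np.linalg.solve(A.T, grad_u)        # adjoint solve: A^T lam = dLoss/du
    grad_theta = dt * lam @ (L @ u)           # = -lam^T (dA/dtheta) u
    theta -= 50.0 * grad_theta                # plain gradient descent
print(round(theta, 4))  # recovers ~0.7
```

The same forward-solve/adjoint-solve pattern is what makes a solver "differentiable" regardless of whether the linear systems are handled densely (as here) or by a multigrid method.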
Summary: This paper presents a model that learns spatial-temporal PDEs from data samples. The model selects a set of basis functions with spatially and temporally varying coefficients over the problem domain. Next, it implements a multigrid solver to solve the proposed PDE. Finally, the model learns a PDE by backpropagating the gradients and optimizing a loss function on data terms. The paper demonstrates the method’s efficacy on classic 1D and 2D PDE instances. ## update after rebuttal Thank you for your rebuttal. I didn't have too many questions to ask in my initial review, and the rebuttal didn't greatly affect my overall evaluation of this work. I will maintain my current score. Claims And Evidence: Most of them look OK to me. I have one comment on the claim of “discovering” PDEs from data. I notice that the experiments generated data from known PDEs. I understand that having a ground-truth PDE is good for calibrating the efficacy of the “discovery” of PDEs, and many prior works used this setting as well. However, I feel it would be more proper to use the term “discovering” PDEs if the data are from a (real-life) experiment without a known PDE. Perhaps “recovering” or “reconstructing” a PDE is a more appropriate term for what is going on in this paper. I understand that this might be an unpopular opinion, and I won’t hold this against the paper. Methods And Evaluation Criteria: Most of them look good to me. In particular, I want to commend the effort to build a multigrid solver. The authors definitely deserve some credit for this. I don’t quite get the neural network mapping (line 267) and would like to suggest an ablation study on it. Theoretical Claims: N/A. Experimental Designs Or Analyses: I read the experiments, and I think the results look reasonable. I have less experience with the baselines in the paper, so I am not sure whether they are the SOTAs for comparisons and will let the other reviewers decide. 
I also wonder whether PINNs and their variants are baselines for this work and would like to hear the authors’ thoughts on them. Supplementary Material: Yes, all of them. Relation To Broader Scientific Literature: I think the proposed method overall is new and interesting. I can think of some related works that tackle similar problems or use similar techniques, none of which can challenge the novelty of this paper. Essential References Not Discussed: Looks good to me. I don’t have extra references to suggest. Other Strengths And Weaknesses: I want to second the multigrid aspect of this work. Quite a few previous works evaluate their methods on small-sized toy examples only. I highly appreciate that this paper is willing to implement a multigrid solver in this setting, which has the potential to scale up the problem size. On the negative side, the problem size (256 x 256?) in this work is still relatively small compared with what modern multigrid solvers can achieve (i.e., solving linear systems with > millions of variables). I feel the current method hasn’t unleashed the full power of multigrid solvers, and I am curious to know which part in the method is now the bottleneck that prevents it from being applied to large-scale problems. Other Comments Or Suggestions: A minor comment: Line 95: is the notation x1x1 a typo? Based on the context, x1x2 seems to make more sense. Questions For Authors: None for now. I look forward to a constructive reviewer-author discussion. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. The reviewer’s point regarding use of the term ‘discovery’ is well received. However, the term is now endemic in the literature and we chose to employ the same term. We would like to appreciate the reviewer’s recognition of the effort that went into building the solver. We will certainly release the source code for our models with the camera ready version of the paper. **Multigrid** It is quite correct that multigrid solvers are also used for higher resolutions. Our use case is different in a few respects from the usual use of multigrid. * One difference is that we solve for the saddle point systems in equations 10,11 and we have extra constraints for ensuring smoothness which are usually not present in solvers. * Secondly, we work in a learning context with a nested iterative loop. The inner loop corresponds to the multgrid iteration and the outer loop is the learning loop that can run for hundreds of epochs. * Similarly we work in a batch setting where in each inner iteration we solve a minibatch of PDEs, all with their separate multgrid steps. * Furthermore the backward pass also uses the batched multigrid solver which doubles the memory use. * It is known that multigrid performance deteriorates with discontinuous coefficients. The data produced by neural networks over the grid (especially early in training) can be non-smooth which can deteriorate multigrid performance. * Our current implementation is in Pytorch which is not ideal for iterative processing. A native CUDA implementation can potentially produce significant computational improvements, but requires further work. Very high multigrid resolutions would have an adverse impact on the walltime, making the method very slow. Nevertheless, different from standard multigrid approaches, since we also have a learning component which can infer the underlying PDE given only a (batched) portion of the full resolution data. 
This implies that the solver is not required to solve for the full-resolution data in each step. Rather, it only has to solve for the given smaller-resolution patch of data, making it more efficient. **Encoder NN** The neural network transform (line 267) adds parameter capacity to the model to make it flexible. Without the transformation the only learnable parameters would be the PDE parameters, parameterizing the expression, which are few in number. This can lead to a brittle model, making it harder to recover from bad initialization or noise. We will consider adding an ablation in the next draft. $x_1 x_1$ should be $x_1 x_2$. Yes, that is a typo in this case. We would like to thank the reviewer again for the review.
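To make the multigrid discussion above concrete, here is a textbook 1D geometric-multigrid V-cycle for the Poisson problem -u'' = f with damped Jacobi smoothing, full-weighting restriction, and linear prolongation. This is a generic sketch, not the paper's batched saddle-point solver (which adds smoothness constraints, batching, and a differentiable backward pass on top of this basic cycle).

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    # Damped Jacobi for -u'' = f on a uniform grid, Dirichlet boundaries.
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
        u = v
    return u

def v_cycle(u, f, h):
    n = u.size - 1
    u = smooth(u, f, h)                           # pre-smooth
    if n > 2:
        r = np.zeros_like(u)                      # residual of -u'' = f
        r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
        rc = r[::2].copy()                        # restrict (full weighting)
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-2:2] + 0.25 * r[3::2]
        ec = v_cycle(np.zeros(n // 2 + 1), rc, 2 * h)
        u = u + np.interp(np.arange(n + 1) / n,   # prolong the correction
                          np.arange(n // 2 + 1) / (n // 2), ec)
    return smooth(u, f, h)                        # post-smooth

n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution: sin(pi*x)
u = np.zeros(n + 1)
for _ in range(15):
    u = v_cycle(u, f, 1.0 / n)
```

Each V-cycle reduces the algebraic error by a roughly grid-independent factor, which is the property that makes a multigrid inner loop attractive inside an outer learning loop.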
Summary: The paper presents mechanistic PDE networks, a method for discovering PDEs from spatiotemporal data. The proposed method integrates a differentiable PDE solver into the neural network and uses the neural network predictions to model the PDE instead of the raw data. The method discovers PDEs using the spatiotemporal data by expressing the PDE as a learnable polynomial-style ansatz. The paper applies the proposed method to discover popular PDEs in the neural PDE solver domain and compares them with WeakSINDy and PDEFIND methods. ## Update after rebuttal The paper presents advancements in discovering equations from data. The paper also presents the limitations of baselines and mitigates the challenges therein for 2D problems. Hence, I have raised my score considering the author's responses. Claims And Evidence: The claims in the paper regarding the applicability of the proposed method to noisy data are validated through experiments. Methods And Evaluation Criteria: The experimental settings regarding datasets and noisy data are similar to those of conventional methods employed for PDE discovery. Theoretical Claims: Not applicable Experimental Designs Or Analyses: Yes, I checked all provided numerical experiments Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: Discovering governing equations from data is an important topic in many fields of science and engineering. The proposed AI method works in this direction and is motivated through canonical problems. Essential References Not Discussed: References are adequate. Other Strengths And Weaknesses: Strengths: The paper presents a novel method for discovering PDEs using spatiotemporal data, where the derivative computations do not need to be performed on the data but are instead performed on the neural network outputs. The experiments show that the proposed method is robust to noise and outperforms WeakSINDy and PDEFIND.
The proposed method is scalable, and the polynomial-style ansatz enables generalized PDE operators. Weaknesses: The comparison with baseline methods is restricted to specific methods, and the paper does not compare with recent methods. The memory requirements increase with an increase in grid size. This is problematic for high-dimensional PDE discovery. The computational cost of the method has a trade-off with accuracy, as also mentioned by the authors in the paper. Other Comments Or Suggestions: Typo: In line 95, should the product be taken for two different spatial coordinates? Suggestions: The authors might consider including a discussion on how the proposed method is advantageous over using rational neural networks for PDE discovery. Questions For Authors: The paper does not consider PDEs under external acting forces and only showcases applicability for autonomous systems. How would the proposed method be applied to discover the PDE in the case of unknown and known external acting forces? How does the method guarantee the uniqueness of the identified PDE? How will the method behave when multiple PDEs govern the data equally well? Extending the proposed method to discover stochastic PDEs or fractional differential equations seems complicated. How does one decide a priori the correct dynamical system framework modeling the dynamics? It would be informative if the authors compared the proposed method with the rational neural networks approach of discovering governing PDEs, for instance, as presented in PDE-LEARN. What will be the limitations and failure modes of the proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. We attempt to address the concerns raised below. **Non-autonomous systems** Yes, in this paper we focus on autonomous systems. However, non-autonomous systems can be similarly handled since the method allows spatiotemporally varying terms. Any known external forces can be represented as time-space dependent terms, similarly to what is achieved with basis functions. Any unknown forces that cannot be simply represented can be represented by a neural network. This is possible because the method allows arbitrary differentiable expressions in PDEs. **Identifiability** Whether the method finds the unique PDE requires further investigation into the identifiability of the PDE learning problem. Identifiability is something we do not deal with in this paper, and we assume that the problems are identifiable and that there is a unique governing equation. We have not considered stochastic or fractional differential equations thus far, and it would be interesting to consider whether such types of equations can be modeled in this way. **PINN style methods** The PDE-LEARN approach extends the PINN approach to PDE discovery. It reduces a PDE residual, constructed using basis functions, at random collocation points together with approximate L0 norm regularization. One difference of this approach from MechNN is that MechNN has a stronger physics-informed bias due to the explicit modeling of PDEs as constraints. Unfortunately, the PDE-LEARN release does not have support for coupled equations, so it cannot be used with our 2D experiments in its current form. The main equations that we focused on were the 2D reaction diffusion and Navier-Stokes PDEs, which are coupled equations. Nevertheless, we trained PDE-LEARN on our 1D Burgers' equation dataset with a 1/cosh(x) initial condition.
Despite repeated attempts with multiple hyperparameter settings, we found it to always over- or under-prune the equation (even on clean data), performing worse than the SINDy baselines from our paper. The maximum TPR we obtained for this experiment is 0.5, compared with a TPR of 1 for the SINDy baselines and MechNN. In particular, one advantage of our method is that the modeled PDEs can contain arbitrary differentiable expressions and are not limited to linear combinations of fixed basis functions; many other methods, including PDE-LEARN, model PDEs as linear combinations of fixed basis functions. A concrete example is the porous medium equation, which cannot be modeled in this way, whereas we show that we can recover its true parameters.

**Limitations** The following are some limitations of the method.

* The method is slower than the baseline sparse-regression methods.
* The method is currently limited to Cartesian grids.
* Although the method can represent arbitrary differentiable expressions, there remains a need for theory dictating the conditions under which complex expressions can be exactly identified.
* There is a tradeoff between accuracy and speed in the multigrid solver which needs to be tuned.
* Application to very high-dimensional data is not feasible in the current form.

We thank the reviewer again and hope that we were able to address the concerns.

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The authors have addressed my concerns, and I have raised the score to 4. In the camera-ready version, it would be helpful to include the limitations and the additional experiment discussed here to foster further research.
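The porous medium point from the rebuttal above can be made concrete with a minimal, hypothetical sketch: a term $u^m$ with an unknown real exponent $m$ lies outside any fixed polynomial dictionary, yet plain gradient descent on the differentiable expression recovers it. All data and values below are our own illustration, not the paper's experiment:

```python
import numpy as np

# A term u^m with unknown real exponent m cannot be written as a linear
# combination of a fixed dictionary {u, u^2, u^3, ...}, but it can be fit
# directly because the expression is differentiable in m.
rng = np.random.default_rng(0)
u = rng.uniform(0.5, 2.0, size=200)
m_true = 1.7                      # hypothetical non-integer exponent
y = u ** m_true                   # noiseless observations of the term

m = 1.0                           # initial guess
lr = 0.05
for _ in range(2000):
    resid = u ** m - y
    grad = np.mean(2.0 * resid * (u ** m) * np.log(u))  # d(loss)/dm
    m -= lr * grad
print(m)  # approaches m_true = 1.7
```

A fixed-basis sparse regression would have to commit to integer powers in advance; the gradient-based route avoids that choice entirely.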
Summary: The paper introduces a new methodology for learning PDEs from data. The key contribution is an optimization of how partial derivatives are handled in the network. First, the theoretical formulation includes a dual formulation that enables backpropagating through a linear solve effectively. Second, the authors introduce an optimized linear solver using a multigrid algorithm implemented on GPU. The paper then demonstrates that their method learns a few test PDEs more accurately than baseline PDE discovery methods under various levels of noise.

Claims And Evidence: There are three distinct dimensions along which the paper claims improvement:
- Flexibility, by allowing nonlinear backpropagation with PDE terms embedded.
- A very fast and efficient GPU implementation of a multigrid V-cycle algorithm for linear solving.
- Improved accuracy on the PDE learning task.

The paper only demonstrates evidence of the improved accuracy. A key improvement claimed by this paper is the scaling and optimization of the new solver, yet no performance metrics or comparisons are reported; there are only theoretical calculations of memory utilization. It would greatly strengthen the paper to show speed and memory-usage comparisons with the baseline methods as a function of domain size.

Methods And Evaluation Criteria: As mentioned, evaluation of the purported claims of a fast solver is missing. The results are also missing evidence of the discovered PDE forms, or of the failure to discover them; see line 418 on page 8.

Theoretical Claims: I did not see any issues in the theoretical formulations in the paper.

Experimental Designs Or Analyses: I found the comparison between PDE learners sound. As mentioned, there could be more elaboration on solver efficiency. There is a brief discussion of solver convergence in Section 6.1, but it has no supporting data or figure references.
It is great to discuss testing methodology up front, but this leaves some questions: What is meant by errors decreasing with increasing grid size? Shouldn't it go the other way around? What is the convergence rate? Is this the extent of the testing of the code? What is the convergence rate in terms of iterations of the V-cycle solver?

Supplementary Material: I reviewed the supplementary material, which contains one extra problem and a pseudocode listing of the V-cycle. If possible, an accompanying release of the code for the GPU V-cycle would also be a good contribution to the ML literature.

Relation To Broader Scientific Literature: As mentioned, the novel way of representing PDE solving in nonlinear networks, coupled with a very efficient solver, will be a contribution to the literature. Such an algorithm could be dropped into many other PDE learning methods beyond the proposed MechNN-PDE.

Essential References Not Discussed: Are there more papers about multigrid linear solvers implemented on GPU, even looking beyond the ML literature into scientific computing? Is this the first published GPU implementation of the multigrid method?

Other Strengths And Weaknesses: I am supportive of the paper, but it is missing a deeper discussion of the advantages of the solver technique. The paper does not sufficiently demonstrate the scaling and speed improvements: it mentions the theoretical scaling but does not show application to problems that the other PDE learning techniques cannot handle. As mentioned above, addressing this would strengthen the contribution; the technique has the potential to be applied to other PDE discovery methods as a drop-in module, and even to be adapted to other ML problems requiring a similar operation. I also think the method has fewer constraints than the baseline methods: whereas e.g. WeakSINDy does a sparse linear regression, MechNN-PDE backpropagates through the nonlinear expressions.
Other Comments Or Suggestions:
- Page 2: “Derivatives never have to be directly on data and are” First, is there a mistake in this sentence? Second, what does it mean? Which derivatives? Do you mean parameter tangents, or partial derivatives?
- Page 2: Is the discretization only done for 1D? The paper has 2D problems: could you write the math in a higher dimension instead?
- Page 8: “We note that m is a positive real value which precludes..” What is that supposed to mean?
- Define FGMRES and GMRES for the audience.
- I don’t understand the paragraph headings that are bold versus italic on pages 5, 7, and 8; it is not obvious which level is higher.

Questions For Authors:
- What aspects of the multigrid V-cycle section on page 4 are new in your paper? Did you propose any changes to the algorithm, or is this the standard V-cycle?
- Do you have to solve the backward pass in Equation 11 with the multigrid solver as well?
- Is the FGMRES solver also your implementation, or do you integrate with another library? Which library, or how did you implement it?
- A few more details on the GPU implementation would be good for the paper, too. E.g., did you hand-write CUDA kernels? For which library?
- Page 5: “Finally a concise PDE is generated by thresholding the parameters...” How is that done? Is the thresholding applied iteratively in a loop with parameter optimization, as in SINDy? What thresholding parameter is used?
- Page 7: “The data is parameterized by 10-layer 2D ResNets”: What does this mean? Which data, which parameterization? How does that fit in? What are the details of the architecture? Are they CNNs?
- Page 8, paragraph on line 411: Are the methods able to discover the equation forms here?
- The usage of the neural network for $\tilde{u}$ in the MechNN-PDE architecture is unclear. Would it be possible to include it in Figure 1?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the detailed review.

**Clarification of our contribution.** We wish to clarify the nature of our contribution. We do not claim to replace or improve existing PDE solvers. Our contribution is to extend the MechNN model (Pervez et al., ICML 2024), which works with ODEs, to support PDE representations, together with an efficient way of solving those representations and applying the model to PDE discovery. The PDE solver we develop is specialized for this architecture, where the PDE terms are produced by a (non-smooth) neural network, and solves a relaxation. The sparse solver is intended to make MechNNs *feasible* for PDEs in multiple dimensions. That said, we do not discount the possibility that the method could be useful for other applications. Furthermore, we make no claims of speed or memory improvement over the baseline sparse-regression methods, which require far fewer resources. We do claim that our model is more expressive (it handles complex expressions) and can handle more complex data, as shown in the discovery experiments.

**Missing Evidence and Failure.** We provide quantitative evidence of discovery performance using the TPR and Error metrics from the literature; a TPR of 1 means exact discovery of terms. This is shown in Figures 4, 5 and 6 for our method and the baselines. The plots also show partial failure of discovery when noise is added. For instance, with 80% noise (Figure 4) we only see a TPR of 0.6, which indicates extra or missing terms are present. From Figure 6 we see that the Navier-Stokes equation is also not exactly discovered (TPR ~0.8). We present quantitative metrics since they allow more objective comparison than qualitative comparison between equations. We will include examples of discovered equations.
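For readers unfamiliar with the metric, one definition consistent with the rebuttal's statement that "a TPR of 1 means exact discovery of terms" scores the overlap between the true and recovered term sets, penalizing both missing and extra terms. This is a hedged sketch with illustrative term names; the paper follows the exact definition from the cited literature:

```python
# Score = |true ∩ found| / |true ∪ found|: equals 1 only when the recovered
# equation has exactly the true terms; extra or missing terms lower it.
def term_tpr(true_terms, found_terms):
    true_terms, found_terms = set(true_terms), set(found_terms)
    return len(true_terms & found_terms) / len(true_terms | found_terms)

exact = term_tpr({"u_xx", "u*u_x"}, {"u_xx", "u*u_x"})          # 1.0
extra = term_tpr({"u_xx", "u*u_x"}, {"u_xx", "u*u_x", "u^2"})   # 2/3
print(exact, extra)
```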
**Solver Eval** We have performed further evaluation of the multigrid linear solver, including time and memory usage and the convergence of the relative error with the number of V-cycles, Gauss-Seidel (GS) smoothing iterations, and FGMRES iterations for the 2D Laplace equation. See https://filebin.net/wqv4t3oki732nuq0 for plots.

**GPU Multigrid** GPU multigrid methods are frequently used in PDE solvers. However, our solver is specialized to work with (possibly non-smooth) neural-network outputs over the grid that parameterize the PDEs. The non-smoothness is handled by adding extra smoothing constraints and solving relaxations as saddle-point problems. We are not aware of available solutions for the special case that we consider. We plan to release the code for our experiments.

**Comments** By grid size we mean increasing the grid resolution for the same problem and running until convergence; we will make the correction. The line about derivatives refers to the SINDy baselines, which estimate numerical derivatives directly on data and are therefore susceptible to noise; there is a typo and the word ‘computed’ is missing. The statement about the parameter $m$ was meant to indicate that it is not a fixed value, which implies that fixed basis functions cannot be used to represent the PDE. We will make the corrections in the next draft.

**Questions** The V-cycle algorithm itself is fairly standard; however, it is applied in a non-standard setting to solve the saddle-point problems in Equations 10 and 11. For these problems the standalone V-cycle has subpar performance in our setting, but we obtained better results using the V-cycle as a preconditioner for FGMRES. Yes, we also solve the backward pass with the V-cycle (with FGMRES). We will include more details about the solver implementation in the next draft. The code is in PyTorch together with CuPy for features not available in PyTorch (such as sparse triangular solves).
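The V-cycle-as-preconditioner arrangement mentioned above can be illustrated in miniature: below, a two-grid cycle (the two-level special case of a V-cycle) preconditions SciPy's GMRES on a 1D Poisson problem. This is a generic sketch under our own simplifying assumptions, not the paper's batched PyTorch/CuPy FGMRES solver; all names and sizes are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson1d(n):
    # 1D Laplacian on n interior points of the unit interval
    h = 1.0 / (n + 1)
    return sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr") / h**2

def two_grid_cycle(A, b, n_smooth=3, omega=2.0 / 3.0):
    """One two-grid cycle: pre-smooth, coarse correction, post-smooth."""
    n = A.shape[0]
    D = A.diagonal()
    x = np.zeros(n)
    for _ in range(n_smooth):                 # pre-smooth (weighted Jacobi)
        x += omega * (b - A @ x) / D
    r = b - A @ x                             # fine-grid residual
    rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full-weighting restriction
    ec = spla.spsolve(poisson1d(rc.size).tocsc(), rc)   # direct coarse solve
    e = np.zeros(n)                           # linear-interpolation prolongation
    e[1:-1:2] = ec
    e[0:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    x += e
    for _ in range(n_smooth):                 # post-smooth
        x += omega * (b - A @ x) / D
    return x

n = 63                                        # odd size -> 31-point coarse grid
A = poisson1d(n)
b = np.ones(n)
M = spla.LinearOperator((n, n), matvec=lambda v: two_grid_cycle(A, v),
                        dtype=np.float64)     # cycle acts as approximate inverse
x, info = spla.gmres(A, b, M=M)               # multigrid-preconditioned GMRES
print(info, np.linalg.norm(b - A @ x))        # info == 0 signals convergence
```

Because the cycle approximates the inverse of A, the preconditioned Krylov iteration converges in a handful of steps, which is the effect the rebuttal reports for FGMRES in the saddle-point setting.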
We did not hand-write CUDA kernels; however, that is an option for further improvement. For FGMRES we started from the CPU implementation of GMRES in SciPy, adapted it to FGMRES using the algorithm described in Saad (2003), and ported the code to PyTorch, adding batching, GPU support, and V-cycle preconditioning from scratch.

For simplicity we only threshold once, at the end of training (unlike SINDy). The threshold is a per-dataset hyperparameter (fixed across noise levels); we used 0.06, 0.3 and 0.0002 for the easy and hard reaction-diffusion and Navier-Stokes equations, respectively. Fine-tuning after thresholding can be used to obtain more accurate coefficients.

The parameterization with a ResNet refers to computing $\tilde{u}$ (line 267 or Figure 1). We parameterize the PDE terms (Figure 1) by a neural network (using $\tilde{u}$) instead of feeding the (possibly noisy) data u_data directly; this also helps with training convergence.

Line 411 discusses discovery in the presence of noise. Results are shown in Figure 4 (left): WeakSINDy and MechNN have similar performance with a TPR of at least 0.6, whereas PDEFIND fails entirely.

We thank the reviewer again.
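As an aside, the one-shot pruning described above amounts to zeroing learned coefficients below the per-dataset threshold after training. A hedged sketch with made-up term names and values, not the paper's code:

```python
# After training, coefficients below the threshold are dropped to yield a
# concise PDE; unlike SINDy, this happens once rather than in a fitting loop.
def prune_coefficients(coeffs, threshold):
    """Keep only terms whose coefficient magnitude reaches the threshold."""
    return {term: c for term, c in coeffs.items() if abs(c) >= threshold}

learned = {"u_xx": 0.098, "u*u_x": -0.995, "u_xxx": 0.004, "u^2": -0.011}
concise = prune_coefficients(learned, threshold=0.06)
print(concise)  # only u_xx and u*u_x survive
```

A fine-tuning pass over the surviving terms can then sharpen the coefficient values, as the rebuttal notes.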